• Pokémon Company and Niantic pledge $1m for LA fire relief
    www.gamesindustry.biz
    Both firms will donate $500,000 each to GlobalGiving's California Wildfire Relief Fund. Image credit: The Pokémon Company International. News by Sophie McEvoy, Staff Writer. Published on Feb. 17, 2025.

    The Pokémon Company International and Pokémon Go developer Niantic have pledged $1 million in donations to support Los Angeles wildfire relief. The Pokémon Company will make a $500,000 donation to GlobalGiving's California Wildfire Relief Fund, which will be matched by a $500,000 donation from Niantic.

    Nonprofit organisations, including the Pasadena Community Foundation and the Downtown Women's Center, have received grants from GlobalGiving's fund, which will continue to support local nonprofits following the recent wildfires.

    "Our thoughts are with those affected by the devastating wildfires in Southern California," said Pokémon Company chief diversity officer Raquel Daniels. "We are grateful for the opportunity to support efforts in the greater Los Angeles area alongside our partners at Niantic and GlobalGiving."

    GlobalGiving CEO Victoria Vrana added: "Thanks to The Pokémon Company group's donation, we've already provided emergency grants to local nonprofit partners working on the frontlines of relief and recovery. Their continued efforts will help ensure that the fire-impacted communities receive the vital support they need in the weeks and months ahead."

    The Pokémon Company is one of many video games companies to donate money to support ongoing wildfire recovery efforts in LA. Activision donated $1 million to the LAFD Foundation and Direct Relief. It raised a further $1.6 million from proceeds of its Call of Duty LA Fire Relief Pack. Sony donated $5 million to support first responders, community relief, rebuilding efforts, and assistance programs for those affected.
  • Obituary: Half-Life 2 art director Viktor Antonov has passed away
    www.gamedeveloper.com
    Celebrated artist and art director Viktor Antonov, known for shaping the worlds of Half-Life 2 and Dishonored, has passed away.

    Eschatology Entertainment, a studio co-founded by Antonov in 2022, confirmed the news on LinkedIn and praised his contributions to the game industry.

    "Today, our studio mourns a dear colleague, an inspiring friend, and a legendary visual director. One of the founders of Eschatology Entertainment, Viktor Antonov, passed away a week ago. We are still waiting for official papers, but unfortunately, these are not rumors," wrote the studio.

    "The journey we shared with Viktor, the inspiration he brought, and the world we built together hold deep meaning for us. We are grateful for the incredible experience, the remarkable history we created side by side, and for his talent and vision, without which the very foundation of our studio would not have been possible."

    According to his LinkedIn page, Antonov joined Valve in 1999 and spent seven years at the company as art director. During his tenure, he worked on notable projects including Counter-Strike: Source and Half-Life 2.

    After departing Valve in 2006, Antonov joined Arkane Studios to oversee the visual direction of the Dishonored franchise. He eventually moved to ZeniMax Media in 2011, taking on the role of visual design director and working on titles such as Wolfenstein: The New Order, Doom, and Fallout 4.

    Throughout his career, Antonov received multiple awards from organizations such as BAFTA and The Visual Effects Society.
  • Report: 10:10 Games making layoffs after branding Funko Fusion 'commercial and critical failure'
    www.gamedeveloper.com
    Chris Kerr, News Editor. February 17, 2025. 1 Min Read. Image via 10:10 Games.

    Funko Fusion developer 10:10 Games has reportedly laid off a number of employees. Anonymous sources speaking to Insider Gaming claim the UK studio placed 19 jobs at risk after branding Funko Fusion a "complete commercial and critical failure." It's also claimed the studio has struggled to secure funding for future projects.

    Sources explained the cuts were "entirely based on the needs of the next project," but added that no senior employees or managerial staff were impacted.

    10:10 Games is also accused of failing to effectively support workers throughout the process. The studio reportedly told some employees they were being laid off while they were on holiday and provided severance packages that meet "the bare legal minimum" requirements.

    "Management seemed very keen to wrap up the process quickly, and despite saying that they were fully open to suggestions and feedback, none of it has been taken on board," said one source.

    10:10 Games was established in 2021 by a group of veterans from TT Games. Last year, 10:10 Games head of publishing Arthur Parsons lambasted a perceived lack of support for new studios in the UK during an interview with Game Developer. "We as an industry are not being sustainable," he stated. "It would be great if there were more grants, more incentives to do stuff. We've had to do a lot of stuff off our own back, off our own money."

    Internally, Parsons purportedly told staff the studio, which claims to always put "our people and culture first" on its LinkedIn page, would take responsibility for the layoffs, but the company has yet to officially comment on the situation. Game Developer has reached out to 10:10 Games for more information.
  • The New York Times adopts AI tools in the newsroom
    www.theverge.com
    The New York Times has reportedly approved artificial intelligence tools that newsroom staff can use for editing copy, summarizing information, coding, and writing. The publication announced in an internal email that product and editorial staff will receive AI training, according to Semafor, and introduced a new internal AI tool called Echo for summarizing articles, briefings, and other company activity.

    Staff were reportedly sent new editorial guidelines detailing permitted uses for Echo and other AI tools, which encourage newsroom employees to use them to suggest edits and revisions for their work, and to generate summaries, promotional copy for social media, and SEO headlines.

    Other examples mentioned in a mandatory training video shared with staff include using AI to develop news quizzes, quote cards, and FAQs, or to suggest what questions reporters should ask a start-up's CEO during an interview. There are restrictions, however: the company told editorial staff that AI shouldn't be used to draft or significantly alter an article, circumvent paywalls, input third-party copyrighted materials, or publish AI-generated images or videos without explicit labeling.

    It isn't clear how much AI-edited copy The Times will allow in published articles. The outlet promised that "Times journalism will always be reported, written and edited by our expert journalists" in a memo it released last year, and it reaffirmed that commitment to human involvement a few months later.

    "Generative A.I. can sometimes help with parts of our process, but the work should always be managed by and accountable to journalists," read The Times' generative AI principles, adopted in May 2024. "We are always responsible for what we report, however the report is created. Any use of generative A.I. in the newsroom must begin with factual information vetted by our journalists and, as with everything else we produce, must be reviewed by editors."

    Alongside Echo, other AI tools apparently greenlit for use by The Times include GitHub Copilot as a programming assistant, Google Vertex AI for product development, NotebookLM, the NYT's ChatExplorer, OpenAI's non-ChatGPT API, and some of Amazon's AI products.

    These AI tools and training guidelines are rolling out as The Times remains embroiled in a legal battle with OpenAI and Microsoft, alleging that ChatGPT was trained on Times content without permission. Many other publications have also introduced AI into their newsrooms at varying scales, ranging from tools for spelling and grammar to generating entire articles.
  • Enhancing Reasoning Capabilities in Low-Resource Language Models through Efficient Model Merging
    www.marktechpost.com
    Large Language Models (LLMs) have shown exceptional capabilities in complex reasoning tasks through recent advancements in scaling and specialized training approaches. While models like OpenAI o1 and DeepSeek R1 have set new benchmarks in addressing reasoning problems, a significant disparity exists in their performance across different languages. The dominance of English and Chinese in the training data of foundation models like Llama and Qwen has created a substantial capability gap for low-resource languages. These models face challenges such as incorrect character usage and code-switching, and the issues become more pronounced during reasoning-focused fine-tuning and reinforcement learning.

    Regional LLM initiatives have emerged to address low-resource language limitations through specialized pretraining and post-training approaches. Projects like Typhoon, Sailor, EuroLLM, Aya, Sea-lion, and SeaLLM have focused on adapting models for specific target languages. However, the data-centric approach to adapting reasoning capabilities lacks transparency in reasoning-model data recipes. Moreover, scaling requires substantial computational resources, as evidenced by DeepSeek R1 70B's requirement of 800K examples for distillation and general SFT, far exceeding academic efforts like Sky-T1 and Bespoke-Stratos. Model merging has emerged as an alternative approach, showing promise in combining the weights of multiple specialized models to improve performance across tasks without additional training.

    Researchers from SCB 10X R&D and SCBX Group (Bangkok, Thailand) have proposed an approach to enhance reasoning capabilities in language-specific LLMs, focusing on Thai language models. The research combines data selection and model merging methods to incorporate advanced reasoning capabilities similar to DeepSeek R1 while maintaining target-language proficiency. The study addresses the challenge of improving reasoning abilities in low-resource language models using only publicly available datasets and a modest computational budget of $1,201, matching DeepSeek R1's reasoning capabilities without compromising performance on target-language tasks.

    The implemented methodology uses Typhoon2 70B Instruct and DeepSeek R1 70B Distill as base models. The approach applies Supervised Fine-Tuning (SFT) to Typhoon2 70B and merges it with DeepSeek R1 70B. The training configuration employs LoRA with a rank of 32 and an alpha of 16. The system uses sequence packing with a maximum length of 16,384, alongside Liger kernels, FlashAttention-2, and DeepSpeed ZeRO-3 to optimize computational efficiency. Training runs on 4x H100 GPUs for up to 15 hours using axolotl, with model merging performed via Mergekit. The evaluation focuses on two key aspects: reasoning capability and language-task performance, using benchmarks like AIME 2024, MATH-500, and LiveCodeBench, with Thai translations for assessment.
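    For readers who want to see what the reported adapter settings look like in code, here is a minimal sketch of an equivalent LoRA configuration using Hugging Face PEFT. This is an assumption-laden illustration, not the authors' axolotl setup: the model id and target_modules are placeholders, and only the rank (32) and alpha (16) come from the article.

```python
# Hedged sketch: LoRA adapter config mirroring the reported hyperparameters
# (rank 32, alpha 16). Model id and target_modules are illustrative placeholders.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("scb10x/typhoon2-70b-instruct")  # placeholder id
lora_cfg = LoraConfig(
    r=32,                                                     # LoRA rank from the article
    lora_alpha=16,                                            # LoRA alpha from the article
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)  # ready for SFT on distilled reasoning traces
```

    In the reported setup, training additionally relied on sequence packing, Liger kernels, FlashAttention-2, and DeepSpeed ZeRO-3, which are omitted here for brevity.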
    Experimental results reveal that DeepSeek R1 70B Distill excels in reasoning tasks like AIME and MATH-500 but shows reduced effectiveness in Thai-specific tasks such as MTBench-TH and language-accuracy evaluations. Typhoon2 70B Instruct shows strong performance on language-specific tasks but struggles with reasoning challenges, achieving only 10% accuracy on AIME and trailing DeepSeek R1 by over 20% on MATH-500. The final model, Typhoon2-R1-70B, combines DeepSeek R1's reasoning capabilities with Typhoon2's Thai language proficiency, achieving performance within 4% of Typhoon2 on language tasks while maintaining comparable reasoning abilities. This yields performance improvements of 41.6% over Typhoon2 and 12.8% over DeepSeek R1.

    In conclusion, the researchers present an approach to enhance reasoning capabilities in language-specific models through the combination of specialized models. While the study shows that SFT and model merging can effectively transfer reasoning capabilities with limited resources, several limitations remain. The scope was confined to DARE-based merging in a two-model setup within a single model family, without optimizing instruction tuning despite the availability of high-quality datasets like Tulu3. Significant challenges persist in multilingual reasoning and model merging, including the lack of culturally aware reasoning traces. Despite these challenges, the research marks a step toward advancing LLM capabilities in underrepresented languages.
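    Since the merge itself is the core of the method, a hedged sketch of the idea may help. The paper reports merging via Mergekit with DARE; the snippet below is not that configuration but a minimal PyTorch illustration of DARE-style merging (drop a random fraction of the fine-tuned model's parameter deltas, rescale the survivors, and add them to the base). Function and variable names are illustrative.

```python
# Hedged sketch of DARE-style merging: sparsify the donor's parameter deltas,
# rescale the survivors, and add them to the base model's weights.
# State-dict variable names are illustrative; this is not the paper's Mergekit config.
import torch

def dare_merge(base_sd, donor_sd, drop_rate=0.5, weight=1.0):
    """Merge donor into base via Drop-And-REscale on the delta (task vector)."""
    merged = {}
    for name, base_param in base_sd.items():
        delta = donor_sd[name].float() - base_param.float()   # task vector
        keep = (torch.rand_like(delta) >= drop_rate).float()  # random keep mask
        delta = delta * keep / (1.0 - drop_rate)              # rescale kept deltas
        merged[name] = base_param.float() + weight * delta
    return merged

# Hypothetical usage with two compatible checkpoints (same architecture and keys):
# merged_sd = dare_merge(base_sd=thai_model.state_dict(),
#                        donor_sd=reasoning_model.state_dict(),
#                        drop_rate=0.5, weight=1.0)
```

    In Mergekit a comparable merge would be configured declaratively; the drop rate and merge weight here play the same role as the density and weight knobs typically exposed by DARE-style merge methods.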
  • Higher-Order Guided Diffusion for Graph Generation: A Coarse-to-Fine Approach to Preserving Topological Structures
    www.marktechpost.com
    Graph generation is a complex problem that involves constructing structured, non-Euclidean representations while maintaining meaningful relationships between entities. Most current methods fail to capture higher-order interactions, like motifs and simplicial complexes, required for molecular modeling, social network analysis, and protein design applications. Diffusion-based methods, first developed for image synthesis, have been widely adopted in this domain but tend to lose important topological information. The rapid decay of structural dependencies throughout diffusion leads to unrealistic graph outputs. In addition, traditional methods add isotropic Gaussian noise to adjacency matrices, which destroys key properties like sparsity and connectivity. To overcome these issues, an approach with higher-order structural guidance throughout graph generation is needed to preserve topological integrity.

    Current models for graph generation are based on methods like recurrent neural networks, variational autoencoders, and generative adversarial networks. While such methods can learn structural properties, they are computationally expensive and lack scalability. More recently, diffusion-based frameworks have been proposed that refine graphs progressively, step by step. While such models provide some improvement, they are inherently designed for continuous image data and thus fail to capture graphs' discrete and hierarchical nature. One major weakness of current methods is the destruction of meaningful structure in adjacency matrices after a few diffusion steps, resulting in random, unrealistic graph representations. Further, the models are often not equivariant, because they fail to preserve consistency when permuting nodes, leading to inaccurate estimation of graph distributions.

    To address these challenges, HOG-Diff introduces a coarse-to-fine learning paradigm that progressively refines graphs while maintaining critical topological features. By decoupling the generation process into successive steps, the method first builds higher-order graph skeletons and then refines pairwise relations and intricate details. An intermediate-stage diffusion bridge mechanism keeps intermediate steps properly organized and intermediate representations realistic, without losing detailed topological features. In contrast to traditional approaches, which manipulate adjacency matrices directly, this paradigm leverages spectral diffusion, injecting noise in the eigenvalue space of the Laplacian matrix. This damps excessive modification of connectivity patterns, leading to more structurally coherent outputs. Additionally, the model architecture combines graph convolutional networks with graph transformer networks to learn both localized relationships and global dependencies, improving overall performance.

    The generative process uses a structured multi-stage architecture in which each stage refines the graph without eliminating its higher-order features. Controlled graph construction is enabled by filtering out unhelpful nodes and edges using cell complexes. The diffusion process is governed by a Generalized Ornstein-Uhlenbeck bridge, which mathematically ensures a smooth transition from one structural arrangement to another. Spectral diffusion replaces the traditional method of noise injection in the adjacency matrix by injecting perturbations into the eigenvalue space of the Laplacian matrix, preserving important connectivity and sparsity patterns. The model architecture balances the preservation of local and global structure by integrating graph convolutional and transformer networks to capture informative features across scales.

    Large-scale experimentation verifies that HOG-Diff attains better performance than state-of-the-art approaches on both molecular and generic graph generation tasks. In molecular applications, the model performs remarkably well on major similarity measures, such as lower Neighborhood Subgraph Pairwise Distance Kernel and Fréchet ChemNet Distance scores, reflecting higher consistency with realistic molecular distributions. Higher validity, uniqueness, and novelty scores further demonstrate its capability to generate chemically meaningful structures. Beyond molecular graphs, the model also captures complex topological dependencies in generic datasets, attaining lower error rates in degree distribution, clustering coefficient, and orbit structure accuracy. Maintaining higher-order features during generative transformation produces graphs that are not only realistic but also structurally stable, providing a more reliable solution than existing practices.

    By integrating higher-order structural information directly into the generative model, HOG-Diff offers an improved solution for graph synthesis that overcomes the limitations of traditional diffusion models. The combination of a coarse-to-fine generation strategy, diffusion bridge operations, and spectral diffusion ensures the generated graphs maintain topological fidelity and semantic correctness. Large-scale evaluation on diverse datasets confirms its capability to generate high-quality graphs with improved structural correctness. Systematic exploration of diverse topological guides improves explainability, making this framework a valuable tool in applications ranging from drug discovery and urban modeling to network science. By preserving advanced graph structures, the method marks an important advance in deep generative modeling over structured data.
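    To make the spectral-diffusion idea concrete, here is a minimal NumPy sketch, assuming an undirected graph given as a dense, symmetric adjacency matrix. The function name, noise scale, and clipping step are illustrative choices rather than HOG-Diff's actual implementation; the point is only that noise enters through the Laplacian's eigenvalues instead of the adjacency entries, which tends to perturb global structure while better preserving sparsity and connectivity.

```python
# Hedged sketch: noise in the Laplacian eigenvalue space instead of the adjacency.
# Assumes a dense, symmetric adjacency matrix; names and defaults are illustrative.
import numpy as np

def spectral_perturb(adjacency: np.ndarray, sigma: float = 0.1, rng=None) -> np.ndarray:
    """Perturb a graph by adding Gaussian noise to its Laplacian eigenvalues."""
    rng = np.random.default_rng() if rng is None else rng
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigvals, eigvecs = np.linalg.eigh(laplacian)           # symmetric eigendecomposition
    noisy_vals = eigvals + sigma * rng.standard_normal(eigvals.shape)
    noisy_laplacian = eigvecs @ np.diag(noisy_vals) @ eigvecs.T
    perturbed = -noisy_laplacian                           # off-diagonal of -L acts as edge weights
    np.fill_diagonal(perturbed, 0.0)
    return np.clip(perturbed, 0.0, None)                   # keep non-negative edge weights

# Example: perturb a 4-node cycle graph.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
A_noisy = spectral_perturb(A, sigma=0.05)
```

    A full generative model would pair such a forward perturbation with a learned reverse process; this sketch shows only the noising direction.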
  • Xbox Boss Phil Spencer Issues Update on Rare's Long-in-Development Everwild
    www.ign.com
    What happened to Rare's Everwild? It's been over five years since the game was announced back during Microsoft's X019 presentation. Repeated no-shows during Xbox showcases and rumors of reboots have caused some to wonder whether Everwild had fallen by the wayside. Not so, Xbox boss Phil Spencer has said.

    In an interview with XboxEra, Spencer listed Everwild as one of the games yet to come out that he was excited for, adding he'd recently visited UK studio Rare, which runs live-service pirate adventure game Sea of Thieves, to get a look at Everwild and the progress the developers were making.

    "Yeah, State of Decay is just one of the franchises I love back from the original one, so that one stays on the board. I do think the work that Double Fine's doing and how Tim [Schafer] kind of solicits feedback from the team. And the other one, I'll say because I was recently out at Rare. It's nice to see the team with Everwild and the progress that they're making."

    Spencer said Microsoft had been able to give the developers of those games (State of Decay, the next game from Double Fine, and Everwild) time while still having a packed schedule of releases (bolstered, obviously, by the acquisitions of Bethesda and Activision Blizzard).

    "We can give those teams time," Spencer said. "And next week I'm going to be up in Vancouver with the Coalition [Gears of War developer] and how fun is that?"

    As for Everwild, it's faced concern over the years after the aforementioned reboot rumor, which Microsoft has denied, and the exit of creative director Simon Woodroffe in 2020. Rare filled the director's chair with veteran designer Gregg Mayles, who previously worked on Donkey Kong Country, Banjo-Kazooie, Viva Piñata, and Sea of Thieves.

    But what is Everwild? Reports have indicated it's a third-person adventure game with god game elements, but given how long it's been in development, that may have changed. The last Everwild trailer, released in July 2020, carried the following description: "Everwild is a brand new IP from Rare. A unique and unforgettable experience await in a natural and magical world."

    Microsoft has a long list of in-development games, including the Perfect Dark reboot, the next Halo, and Playground's new Fable game. Meanwhile, Bethesda is working on The Elder Scrolls 6, and Activision is of course working on this year's Call of Duty. In the shorter term, id Software's Doom: The Dark Ages launches in May.

    Wesley is the UK News Editor for IGN. Find him on Twitter at @wyp100. You can reach Wesley at wesley_yinpoole@ign.com or confidentially at wyp100@proton.me.
  • 'Fallouts Like That Happen, It's Just Part of the Deal': Mass Effect 1 and 2 Composer Jack Wall Discusses Why He Failed to Return for Mass Effect 3
    www.ign.com
    Composer Jack Wall has discussed why he failed to return for Mass Effect 3, having created the much-loved music for the first two games in the series.

    Wall worked with developer BioWare to create the '80s sci-fi music-styled soundtracks for Mass Effect, released in 2007, and its sequel, 2010's Mass Effect 2. Mass Effect 2 in particular is often cited as one of the greatest action role-playing games ever made, and Wall's soundtrack, which includes the rousing "Suicide Mission," is considered a series high point by fans.

    But Wall failed to return for 2012's Mass Effect 3, which came as a shock to fans. Now, in a new interview with The Guardian, Wall discussed why, pointing to a falling out with then-Mass Effect development chief Casey Hudson.

    "Casey was not particularly happy with me at the end," Wall said. "But I'm so proud of that score. It got nominated for a Bafta, and it did really well [even if] it didn't go as well as Casey wanted."

    The Guardian suggested a creative tension between Wall and Hudson, but Wall remained vague. "Fallouts like that happen, it's just part of the deal," he added. "It's one of the few times in my career that's happened, and it was a tough time, but it is what it is."

    Wall did, however, go into a bit more detail on the challenges he and BioWare faced getting Mass Effect 2 out the door and "Suicide Mission" into the finished product, which may provide some insight into Wall and Hudson's relationship at the end of the project.

    "It was the biggest mind-f***ing thing I've ever done in my entire life," Wall said. "And there was no one available to walk me through it, because they were all freaking out trying to finish the game. I handed it in, and they had to do a lot of massaging on their end in order to get it to work, but they did it... and the result is still one of the best ending sequences to a game that I've ever played. It was worth all that effort."

    After Mass Effect 2, Wall went on to make music for Call of Duty games, most recently composing the soundtrack for Black Ops 6. BioWare, meanwhile, is currently working on the next Mass Effect game following the release of Dragon Age: The Veilguard. BioWare is yet to announce the composer.

    Wesley is the UK News Editor for IGN. Find him on Twitter at @wyp100. You can reach Wesley at wesley_yinpoole@ign.com or confidentially at wyp100@proton.me.
  • Captain America: Brave New World Box Office and Measuring a Glass Half Full
    www.denofgeek.com
    Captain America: Brave New World is crossing $100 million over the four-day weekend, according to Disney. That is a record for the latest entry in the Marvel Cinematic Universe, making it the fourth highest-earning debut of any film to open on Presidents Day weekend ever. That is even more respectable when you realize the other three films that earned more were all MCU or Marvel-associated joints, with the second-highest earner over Valentine's Day weekend being the 20th Century Fox-produced Deadpool and its $152 million haul in 2016.

    Considering the movie is suffering from apparently horrid word-of-mouth, as judged by its B- CinemaScore (the lowest for any MCU movie to date), for Brave New World to rally and prove skeptics in the industry wrong is a small victory, albeit of the PR variety. Industry prognosticators a few days ago were indeed predicting the movie would open at around $95 million or less, so through the prism of diminished expectations, $100 million is a win, especially as historically the WOM on movies that earn B- CinemaScores leads to severe dropoffs in the second weekend.

    But therein lies the conundrum of evaluating Brave New World grossing $100 million: is this a glass half full or half empty?

    As mentioned, the film was able to overperform against expectations and deliver a healthy box office return not too far off from the last MCU movie to open over Valentine's/Presidents Day weekend: Ant-Man and the Wasp: Quantumania. It's a face-saving win for Marvel Studios and proof that audiences still care about this mega-franchise.

    However, let's also consider that previous February holiday MCU win, Ant-Man 3. Released two years ago over the same weekend, the third Ant-Man flick managed to gross $120.4 million over its first four days. In other words, it earned 20 percent more than Disney's estimates for Brave New World. And Ant-Man 3 was deemed by many a financial disappointment, likely because its budget has been estimated to exceed $300 million due to copious amounts of reshoots and heavy post-production CGI work. That movie also earned what studios consider an anemic B CinemaScore, which presaged the film's eye-watering 70 percent drop in its second weekend.

    Its $120 million opening was also notably far behind Ryan Reynolds' plucky R-rated comedy from seven years earlier, never mind the admittedly hard-to-duplicate cultural phenomenon that was Black Panther over the same weekend in 2018 (where it grossed $242 million in just four days!).

    While Brave New World's opening is not the outright disaster for Marvel that The Marvels was in 2023 ($46.1 million) or Eternals in 2021 ($71.3 million), it signals diminished interest, especially when inflation is factored in. The last two movies bearing the title Captain America opened at $179.1 million and $95 million, and the latter was 11 years ago when expectations from studios and (perhaps more crucially) the superhero genre's biggest fans were different. The Winter Soldier earned an A CinemaScore and was largely celebrated by fans as a great addition to the Marvel canon. Brave New World's reception seems more divided and likely doomed to a dropoff similar to Quantumania's.

    With that said, expectations are a big part of the MCU's problem. While Brave New World's budget is reportedly $180 million, rumors persist it is much more after the film endured its own extensive reshoots. While we cannot confirm the veracity of the reshoot costs, they certainly would have caused the movie's price tag to creep upward, even though audience excitement is demonstrably on the wane. In other words, these movies are costing more than they did a decade ago despite being less popular.

    So one way to read the tea leaves of Brave New World is that audience loyalty to the MCU remains strong among the diehards, and Disney and Marvel Studios just need to figure out a way to make these things cost less. (We might suggest having the entire script, or at least the story, mapped out before production and subsequent quicksand traps in post.)

    However, there could be a bigger problem in these numbers if you look outside the MCU's own box office history. Traditionally, when franchises start dipping toward B- CinemaScores, it predicts bigger box office problems to come. The only MCU movies to earn a B CinemaScore are Eternals, The Marvels, and Ant-Man 3. And as you might have noticed, none of them have sequels forthcoming. But some superhero movies did get the dreaded B and arrived ahead of larger franchise continuations which could not be canceled.

    Batman v Superman: Dawn of Justice debuted at $166 million with terrible reviews and a B CinemaScore. The bottom fell out about 18 months later when the first and only Justice League movie with a wide theatrical release opened to $94 million despite supposedly decades of anticipation (and being another victim of extensive and expensive reshoots). The Flash, meanwhile, earned a B CinemaScore opposite its disastrous $55 million debut. That still was higher than Aquaman and the Lost Kingdom, which somehow only opened to $27.7 million despite being a sequel to a film that grossed $1 billion in the superhero movie glory days of the 2010s.

    There are of course extenuating circumstances, especially in the case of Aquaman 2, which was essentially sent out to die in December 2023 after Warner Bros. Discovery signaled they were rebooting the whole cinematic DC universe. Nonetheless, superhero movies that are received poorly by fans have usually acted as a prelude to awful second-weekend drops and then diminished interest in the larger franchise a few months later.

    All of which puts the MCU in a strange place. Right now, Brave New World's overperformance might suggest audiences enjoyed it better than industry pollsters foresaw on Friday night. Maybe. In which case, a better-than-expected opening weekend might lead to a less grim-than-anticipated second-weekend drop. Next week would be the bigger test, then, of Brave New World's long-term prospects.

    Beyond the fourth Captain America movie, however, remains the health of the MCU brand itself. Last year Marvel saw another movie cross the $1 billion threshold, but it was one rife with nostalgia for 2010s and even 2000s superhero movies: Deadpool & Wolverine. Brave New World, by contrast, had an eye on the future, as gleaned from its title and the fact it introduced a new Captain America (played with charisma and charm by Anthony Mackie). The film even hints Mackie's Cap will be instrumental in assembling a new Avengers roster in next year's Avengers: Doomsday.

    But the muted CinemaScore and diminishing ticket sales compared to other Captain America movies, or even the last Ant-Man flick, suggest audiences are in a nebulous place with the MCU, and as the fortunes of the DCEU proved, that is a dangerous position to be in for a long-running superhero franchise.

    We suppose the real tests, then, will be to see not only how the film does in its second weekend, but whether its troubles are an isolated incident related to a movie that critics and audiences were ambivalent toward, or a sign of a more systemic reception awaiting the next slate of Marvel movies, two of which open later this year.
  • How VCs are killing climate tech and how they can save it
    thenextweb.com
    Sustainability tech has been all the buzz in the last few years. Investors are hunting promising ESG businesses, governments are pushing ambitious legislation, and companies are getting on board to adopt new solutions. Sustainability funding is projected to reach unprecedented levels, with the BCG Henderson Institute estimating that accumulated global investment to achieve net zero will hit $75 trillion by 2050.

    And yet, behind the curtain, the picture isn't quite as rosy. According to Statista, VC investment in sustainability and climate tech has been steadily declining since 2021. While AI startups often manage to secure funding rounds within mere weeks, sustainability-focused companies can spend years in fundraising limbo. As a partner at VC consulting agency Waveup, I've seen dozens of exceptional startups forced to bootstrap despite having validated technology and clear market potential, from sustainable agriculture solutions to carbon capture technologies.

    Something just doesn't add up in venture capital. Why aren't investors backing the innovations needed to create a more sustainable future? The core issue lies in how they evaluate investment opportunities.

    When looking at sustainability tech companies, most VCs expect rapid adoption, hockey-stick growth, and massive total addressable markets (TAMs), understandably so, as otherwise the VC formula might simply not work. They apply the same metrics and expectations used for SaaS and AI startups, and while some sustainability companies might fit this mould, many are simply too early in market adoption to demonstrate these characteristics.

    Consider one of the clients we worked with developing revolutionary ocean-cleaning technology. The team managed to build a product with a clear and proven ability to drastically lower ocean pollution by reducing the amount of microplastics that enter the water. Despite recognition from the UN and an excellent client roster, the company has struggled with financing for years. For VCs, an absence of rapid growth overshadowed patented tech, past environmental impact, and excellent business economics. While recognising impressive results, most investors couldn't get comfortable with the adoption timeline and speed of growth, as, for many corporate clients, sustainability investments remain a nice-to-have category rather than a must-have.

    It doesn't help that many sustainability solutions require buy-in from multiple stakeholders within organisations, leading to longer and more unpredictable sales cycles. Worse, many companies also need significant upfront investment in physical assets or infrastructure, unlike purely software-based startups. The result? Gloomy statistics: while traditional tech companies typically take three years from Series A to Series B, sustainability technologies need an average of seven-plus years to achieve scale.

    The bottom line: impact investments aren't yet firmly matching traditional VC returns. While there's been a concerted push since 2015 to argue that impact returns are approaching venture returns, the data often tells a different story, and this performance gap creates a fundamental tension with the VC model. Venture funds operate under strict constraints: they have fiduciary duties to their limited partners, closed-end fund structures, and defined timelines for delivering returns. A fund's ability to raise Fund II or III depends entirely on the performance of its previous investments. In this context, backing good investments that haven't proven viable enough becomes paradoxically risky, even for an industry built on taking risks.

    Rethinking the climate tech model

    Financing the next generation of climate tech might require new solutions from everyone. The question is, are investors truly willing to find new models?

    With many VCs (without calling out names), we're seeing a troubling trend: rather than looking for new ways to adapt investment frameworks and funding mechanisms, or dedicating more time to sourcing high-potential nascent climate tech startups, they hire consultants to reposition their existing portfolio companies as ESG-friendly. Essentially, this involves finding an ESG angle in otherwise traditional software companies so they can report to LPs the strides made in financing sustainable tech solutions. Needless to say, this approach does little to drive meaningful environmental and social change.

    What's the alternative? We have a few ideas.

    1. Rethink traditional funding mechanisms

    VC investors need to work with other ecosystem players to offset financing risks while balancing risks and returns. Today, leading impact investors are working to combine traditional VC money with impact-first capital and structuring investments with different return tranches for various investors. Some use catalytic capital to de-risk early-stage investments or create revenue-based financing options for steady-growth sustainability companies. Others develop outcome-based funding models tied to impact metrics.

    For companies struggling with VCs altogether, evergreen funds that don't have fixed lifecycles and allow for extended holding periods can better match sustainability tech's development timelines. Corporate venture capital and large corporations facing pressure to transition to net zero can also become viable backers by providing both capital and pilot opportunities for sustainability startups.

    2. Provide actionable help to accelerate the road to scaling

    Monthly advice in board meetings is valuable, but the true contribution lies in hands-on help driving adoption. The best impact investors put their time where their money is by partnering with corporate venture arms to secure pilot opportunities and market validation for their portfolio companies, collaborating with government agencies on grants and subsidies, and working with industry consortiums to accelerate adoption.

    3. Adjust metrics and expectations

    Investors need to consider new frameworks for evaluating sustainability investments. Traditional SaaS metrics could be replaced with impact-adjusted indicators that consider both financial and sustainability outcomes, or allow for longer return lifecycles that align with the sector's development timeline and adoption curves. Important to note: this isn't about lowering standards; it's about adapting them to match the unique characteristics of sustainability technologies.

    For VCs, the question shouldn't be whether to invest in sustainability tech but how to adapt their approach to these critical innovations. Without this shift in perspective, we risk missing out on the next wave of transformative technologies that could help address our most pressing environmental and social challenges. After all, the biggest risk might not be backing sustainability tech too early but too late.
    Story by Olena Petrosyuk, partner at VC consulting agency Waveup.