  • VENTUREBEAT.COM
    UT Austin’s communication school adds Karch Gaming Institute
The University of Texas at Austin's Moody College of Communication will be home to the Karch Gaming Institute.
  • WWW.GAMESINDUSTRY.BIZ
    The Last of Us TV show renewed for season 3
HBO confirms the show will continue ahead of season 2. News by Samuel Roberts, Editorial Director. Published on April 10, 2025.

HBO's adaptation of The Last of Us has been renewed for season 3, ahead of its second season debut on Sunday, April 13. The renewal follows mostly strong reviews for The Last of Us season 2, which partially adapts 2020's PS4 game The Last of Us Part 2 and introduces the key character of Abby, played by Kaitlyn Dever.

The decision isn't a surprise – back in February, HBO's programming EVP Francesca Orsi told Deadline that a total of four seasons was likely for the show: "We don't have a complete or final plan, but I think it's looking like four seasons. I wouldn't want to confirm that, but it's looking like this season and then two more seasons after this and we're done."

The Last of Us – starring Pedro Pascal as Joel and Bella Ramsey as Ellie – debuted in January 2023 to critical acclaim and huge ratings, with 32 million viewers watching per episode in the US by its end. It was the highest-rated first season in HBO's history, surpassing a ratings record set by Game of Thrones prequel House of the Dragon a year earlier.

Season 2 comprises seven episodes versus the first season's nine, and picks up five years after the first season's finale. The show is co-created by Chernobyl writer Craig Mazin and Naughty Dog's Neil Druckmann.
  • WWW.GAMEDEVELOPER.COM
    Industry veterans establish Onibi to create the 'most accessible UGC open world' ever
A group of former World of Warcraft, Fortnite, and Baldur's Gate 3 devs have joined forces to create a new anime-style multiplayer RPG. Chris Kerr, News Editor. April 10, 2025. 1 Min Read.

A cohort of veterans with credits on titles like World of Warcraft, Fortnite, League of Legends, Fall Guys, Baldur's Gate 3, and more have established a new fully-remote studio called Onibi to create a procedurally generated anime-style RPG.

Onibi was quietly established in 2023 but has broken cover to announce its debut title, Tomo: Endless Blue. The studio said it wants Tomo, which looks like a cross between Pokemon and Minecraft, to become the "most accessible UGC open world" to date.

Onibi is led by former Twitch head of research and Gameloft head of data science Benjamin Devienne. The studio has already received undisclosed investments from Octopus Ventures, Sequoia Capital, SeaX, Financière Saint James, and others.

"With decades of expertise in shipping some incredible multiplayer experiences and platforms, we have previously contributed to some of the most renowned and successful game titles. We meticulously assembled our team to craft something truly extraordinary," reads a mission statement on the Onibi website. "Our work environment mirrors the ethos of old-school game developers, fostering a collaborative atmosphere where everyone can make meaningful contributions at various levels."

Tomo is currently targeting a 2026 release on Steam.
  • WWW.THEVERGE.COM
    How these guitar modeling companies are recreating rare vintage sounds for the digital age
Around 2009, Dweezil Zappa ran into a space problem. He was busy touring the US, performing some songs written by his father, Frank. Recreating those signature "peculiar sounds," as Zappa calls them, required lugging around a massive rig — roughly the size of two large refrigerators — held together by more than 200 connections and cables.

"The challenge for me on tour was how can I recreate some of these sounds and not use the actual equipment that [Frank] used because some of it didn't exist anymore," Zappa says. "It was a pretty extensive system."

Zappa began seriously exploring a still relatively new technology: guitar amp modelers. These briefcase-sized devices aimed to capture the essence of analog amplifier and pedal sounds, reinterpret them digitally, and deliver them with an audio fidelity comparable to the real thing. Zappa realized modelers were more than just a space-saver: they also opened up a new dimension of creativity. With the right tweaking, Zappa says he suddenly had almost any sound or effect he could imagine at his disposal.

"If I have to switch to another song [during a set] that is from 1981, I just step on a button," Zappa says. "It's like having a recording studio, the entire chain of the recording, at your feet."

The world of digital or "simulated" amplifiers can generally be divided into two categories: amp profilers and amp modelers. Profilers capture an audio snapshot of a guitar rig's sound and convert it into code, allowing the tone to be reproduced without the physical rig present. These profiles can be played through an actual amplifier or, more commonly, through a speaker system. Modelers, by contrast, analyze the tonal characteristics of an amp at a granular level and digitally replicate each of its individual components. This process simulates nearly every tube, preamp, and transformer, creating a fully digital "twin" made up of ones and zeros.

In both cases, the goal is to take an instrument's signal, convert it into a digital format, and process it through the digital amp, adding tonal complexity and richness. While they may not be exact copies of their analog counterparts, most people — aside from professional musicians and audio engineers — won't be able to tell the difference, especially in a live setting.

The hardware in these devices varies widely but mostly consists of digital signal processing (DSP) chips and integrated circuits. Devices employ specialized algorithms designed to replicate the sound and behavior of various amplifiers and effects. Audio processing tools, such as waveshapers, manipulate waveforms to recreate the breakup that occurs in analog amplifiers when vacuum tubes are overdriven. In traditional analog amps, this physical process generates distortion, a defining characteristic that shapes an amp's unique tonal qualities. Modelers replicate this effect by introducing audio clipping through digitally manipulated sound waves, effectively mimicking the distortion found in analog circuits (a minimal code sketch of this idea follows the article below).

Rapid innovation and competition in amp modeling technology over the past decade have made it a staple in modern recorded music. Modelers are also becoming increasingly common in live performances — a shift industry experts speaking with The Verge attribute to heavier touring schedules and growing acceptance among veteran guitarists. Generations of emerging musicians may never actually play through a "real" tube amplifier. With modelers, these artists can experiment with digital recreations of vintage or rare sounds they might otherwise never have access to. In some cases, amp modelers may even allow the most obscure pieces of music equipment to live on digitally, long after the original parts, and possibly the people who know how to maintain them, disappear.

The space is mostly dominated by products from Fractal Audio Systems, Line 6, Neural DSP, and Kemper. Neural DSP says it uses a robotic operator armed with a microphone to take audio recordings of incrementally adjusted gear, then processes that data through an audio interface and presents it to users as digital amp and effects presets. Neural DSP's Quad Cortex device also has a "capture" function that allows musicians to connect their own analog setup and create a convincing digital replica within minutes. Fractal Audio, whose modeler Zappa uses, works from schematics and blueprints of analog amps to create digital versions of individual components like transformers and tubes. The ultimate goal, Fractal Audio's director of corporate development Matt Picone says, is to build "virtual gear" that performs almost identically to its analog inspiration.

For most non-musicians, the difference in audio fidelity between an analog amp and a modeler is imperceptible — and has been for several years. Where modelers have fallen short, at least for some musicians, is in imitating the more difficult-to-pin-down "feel" of their tube predecessors. Feel broadly refers both to the physical sensation of air being pushed through amplifiers into a jam-packed room and to the precision and immediacy of a player hitting a note and getting instantaneous feedback. Modelers, like any digital technology, introduce latency. Even a millisecond of latency can be enough to impact an advanced player's connection with their instrument.

"The goal with any system is to get the latency as low as possible so that the perceived experience is that of a physical rig," Cooper Carter, a professional musician and production consultant who has helped lock in digital guitar sounds for major artists like Metallica, Journey, and Def Leppard, tells The Verge. "In an analog environment, it's literally operating at the speed of electrons moving through copper."

Some, who Carter and others refer to as "tone purists," argue that modelers, sophisticated though they may be, still lack a quintessential human quality. Dave Friedman, a veteran amp designer who has helped craft custom equipment for guitarists like Eddie Van Halen and Jerry Cantrell, summed up that tension during a 2020 interview with guitar YouTuber Rhett Shull. Friedman acknowledged modelers are a "great tool" and can obtain good tones, but said he worried they allowed for less interaction between the performer and the amplifier.

"There's an impact that the real thing has that the modeler doesn't have," Friedman said. "There's no danger left."

But Zappa and Carter both said the newest generation of advanced modelers deliver in terms of audio fidelity, realism, and feel. Zappa is currently using a modeler during live performances on a Jimi Hendrix tribute tour. Those improvements, Carter notes, are partly why many of those "tone purists" are finally coming around to the technology.

"We've reached the point now where even the best players in the world, when presented with their rig that they've toured with for 40 years and the modeler, many of them end up preferring the modeler, not only because of how it sounds, but also because of what it offers as far as creative freedom," Carter says.

Both Carter and Zappa, it's worth noting, still have a fondness for classic tube amps. Carter compares it to an old muscle car versus a new EV. The former is beautiful and nostalgic, but not necessarily the best daily tool.

"It's rock and roll, but [tube amps] are susceptible to damage," Carter says. "They are heavy as shit [and] expensive. They are kind of one-trick ponies or, at the most, like three-trick ponies."

But there are other, more practical reasons for the transition. Carter says the recent uptick in modelers, especially in live performance settings, has coincided with the music industry's growing emphasis on touring. Artists of all sizes are traveling more than ever, and one of the biggest expenses, Carter notes, is transporting gear. That gets real expensive, real fast.

"Every major tour that's switched to [modelers] has saved a lot of money on cartage," Carter said.

Modelers are also far less finicky and prone to breaking than cumbersome mounds of analog gear. Individual components in tube amps can easily get rattled and broken during transport, which can result in slightly varying sounds from day to day. They are also susceptible to the elements, something Metallica reportedly found out firsthand during the band's 2013 Freeze 'Em All concert in Antarctica. That performance, according to Guitar World, was conducted using solar power that did not provide enough energy to power the band's traditional full rig — but it was enough for a Fractal amp modeler. Now, more than a decade later, Metallica lead guitarist Kirk Hammett still uses a variation of that same modeler.

"I have a studio quality sound in my Fractal," Hammett told music YouTuber Rick Beato. "Album quality sound. That's a hard thing to do."

Veteran touring musicians aren't the only ones benefiting from the vintage tones captured in modern modelers. Newer artists — many of whom could never afford a rare vintage Fender amp from the 1970s — can now experience a close replica of that sound simply by plugging their guitar into a modeler. Carter says that's possible, in part, because amp companies can't patent or trademark a particular sound. Modeler companies have taken advantage of this, creating close digital recreations of classic amps with slightly altered names that clearly pay homage to their analog ancestors. The result: new artists can preserve and carry forward the iconic sounds of the past, even as the original gear that produced them fades into obscurity.
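To make the waveshaping and latency points above concrete, here is a minimal sketch. It is not how any commercial modeler actually works — real products simulate individual circuit components — and it assumes a textbook tanh soft-clipping curve, a 48 kHz sample rate, and illustrative function names of my own choosing.

```python
import numpy as np

SAMPLE_RATE = 48_000  # Hz; an assumed, common audio-interface rate


def waveshape(signal: np.ndarray, drive: float = 4.0) -> np.ndarray:
    """Soft-clip a signal with a tanh transfer curve.

    Raising `drive` squashes the waveform's peaks, loosely mimicking the
    compression and harmonic distortion of overdriven vacuum tubes.
    """
    return np.tanh(drive * signal) / np.tanh(drive)  # keep output in [-1, 1]


def buffer_latency_ms(buffer_samples: int, sample_rate: int = SAMPLE_RATE) -> float:
    """Latency contributed by one processing buffer, in milliseconds."""
    return 1_000 * buffer_samples / sample_rate


# A 440 Hz test tone, "overdriven" digitally:
t = np.linspace(0, 0.01, int(0.01 * SAMPLE_RATE), endpoint=False)
clean = 0.8 * np.sin(2 * np.pi * 440 * t)
driven = waveshape(clean, drive=6.0)

# Why players notice latency: buffering alone eats into the millisecond budget.
for n in (32, 64, 256):
    print(f"{n:>4}-sample buffer -> {buffer_latency_ms(n):.2f} ms")
```

A 64-sample buffer at 48 kHz already contributes about 1.3 ms before any processing or conversion overhead, which is why the article's "even a millisecond" framing matters to advanced players.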
  • GAMEFROMSCRATCH.COM
    ezEngine – The Easiest C++ Game Engine?
ezEngine is an open-source, cross-platform, C++-based game engine with a focus on ease of use. It provides a full Unity-like editing environment, along with both a visual scripting interface and AngelScript support. You may be asking yourself: why would I (or wouldn't I) choose to use ezEngine? Their website has that covered:

When to use ezEngine

ezEngine is designed to be a great basis for complicated projects. It provides you with lots of functionality that is tedious and difficult to build, such as efficient STL-like container classes, a high-performance scenegraph, resource streaming and much more. It can be used to build the tech for games, as well as for industry applications. In many code bases the lower-level functionality is messy and buggy, because it is hard (and boring) to build these parts, and game developers would rather spend time on making pretty pictures. In EZ the base functionality is clean, consistent, efficient and fully unit-tested. It builds on Windows, Mac, Linux and Android.

Out of the box EZ can be used to create games just with scripting. However, it is meant for people who need or want to build their own technology and are looking for a great foundation to build on top of. The ezEditor is a powerful and robust tool that enables quick iteration on ideas with fast startup times and WYSIWYG real-time editing. It is also completely optional, in case you need a different kind of workflow. EZ is also a very good fit for students interested in learning how modern game engines work. It is easy to set up, compiles fast, is well documented, and straightforward to extend. We also welcome contributions in the form of code or art.

When not to use ezEngine

ezEngine is mainly developed on Windows. The renderer currently uses DX11. A Vulkan port is in development and the tools are being ported to Linux as well; however, this is still in the early phase and not yet productively usable. It is also not comparable in feature completeness to commercial offerings such as Unreal or Unity. Although it does support scripting game logic both with AngelScript and Visual Scripting, it is not meant for low-code or no-code development. The scripting capabilities are limited; for many game ideas you will need to be comfortable writing C++ code.

Key links: the ezEngine homepage and the ezEngine GitHub repository. You can learn more about the C++-based ezEngine and see it in action in the video below.
  • WWW.MARKTECHPOST.COM
    This AI Paper Introduces a Machine Learning Framework to Estimate the Inference Budget for Self-Consistency and GenRMs (Generative Reward Models)
Large Language Models (LLMs) have demonstrated significant advancements in reasoning capabilities across diverse domains, including mathematics and science. However, improving these reasoning abilities at test time remains a challenge researchers are actively addressing. The primary focus lies in developing methods to scale test-time compute effectively while maximizing reasoning performance. Current methodologies include generating multiple chain-of-thought (CoT) solutions for problems and implementing voting or selection mechanisms to identify the best solutions. Although these approaches have shown promise, they often require considerable computational resources and may not consistently identify optimal solutions when incorrect reasoning pathways dominate. Finding efficient ways to enhance LLM reasoning while minimizing computational overhead represents a critical challenge for the field's advancement.

Previous research has explored various approaches to enhance LLM reasoning capabilities. Generative Reward Models (GenRM) have emerged as a promising technique, framing verification as a next-token prediction task. These models enable test-time scaling by generating multiple verification chains-of-thought and aggregating their verdicts to score solutions. Initial comparisons between GenRM with Best-of-N (BoN) selection and Self-Consistency (SC) showed that GenRM appeared more efficient, achieving comparable performance with fewer solution candidates. However, these evaluations were conducted with fixed numbers of solutions rather than fixed computational budgets. This methodology yields misleading conclusions in practical scenarios where inference compute is limited, as it fails to account for the substantial computational cost of generating multiple verifications for each candidate solution. The key limitation of existing approaches is their failure to consider true computational efficiency when comparing verification-based methods with simpler majority-voting techniques.

The proposed method introduces a comprehensive framework for accurately estimating the inference computational budget required by Self-Consistency and GenRMs. This framework enables a fair, compute-matched analysis that compares these test-time scaling strategies under fixed computational constraints. The approach assumes a single Large Language Model serves dual functions as both the solution generator and the generative verifier, with verification capabilities activated either through specialized prompting or task-specific fine-tuning. By establishing this unified framework, researchers can systematically analyze the performance trade-offs between generating more solution candidates for Self-Consistency versus allocating compute resources to verification processes in GenRMs. The comparative analysis measures effectiveness in terms of the total number of solutions and verifications generated by the LLM, providing clear metrics for computational efficiency across different reasoning approaches.

The methodology employs a compute-matched analysis framework with a detailed architectural design for comparing test-time scaling strategies. For an autoregressive LLM with P parameters performing 2P FLOPs per output token, the total inference compute is calculated using the formula C(S, V) = S(1 + λV), where S represents the number of solutions, V the number of verifications, and λ the ratio of tokens per verification to tokens per solution. This framework enables systematic evaluation of both Self-Consistency and Generative Reward Models under equivalent computational constraints. The setup scales SC across solution counts S ∈ {2^0, 2^1, …, 2^N} and evaluates GenRM across combinations of solution and verification counts (S, V) drawn from the same powers-of-two grid (a small code sketch of this budget arithmetic follows the article below). The research also introduces inference scaling laws for GenRM through a six-step methodology that determines the optimal allocation between solutions and verifications. This process involves computing success rates across increasing verification counts, plotting results against compute budgets, and fitting power laws to establish relationships between optimal solution counts (S_opt ∝ C^a) and verification counts (V_opt ∝ C^b).

The results demonstrate a clear pattern when comparing the performance of Generative Reward Models against Self-Consistency across different computational budgets. SC exhibits superior performance in low-compute scenarios, making it the more efficient choice when computational resources are limited. Conversely, GenRM begins to outperform SC only after reaching approximately 8× the computational budget, and requires an additional 128× inference compute to achieve a modest performance improvement of 3.8% over SC. These findings prove robust across diverse experimental conditions, including various model families such as Llama and Qwen, different model sizes ranging from 7B to 70B parameters, specialized thinking models like QwQ-32B, and different reasoning tasks, including mathematics. The performance patterns remain consistent regardless of the specific LLM architecture employed, indicating the broad applicability of these comparative insights across the spectrum of language models and reasoning tasks.

The study frames GenRMs as an innovative approach to scaling test-time compute through verification. Previous research demonstrated that scaling both solutions and verifications could outperform SC, but often neglected to account for the computational costs of verification. This investigation reveals a clear pattern: SC proves more effective at lower computational budgets, while GenRMs deliver superior performance when higher computational resources are available. These findings hold across multiple model families, including specialized thinking models, parameter sizes from 7B to 70B, and diverse reasoning tasks. In addition, the research establishes robust inference scaling laws that optimize budget allocation between solution generation and verification within GenRM frameworks. These insights provide valuable practical guidance for researchers and practitioners seeking compute-efficient scaling strategies to maximize reasoning performance in large language models.

Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.
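As a rough illustration of the compute-matched accounting described above, here is a minimal Python sketch of the paper's budget formula C(S, V) = S(1 + λV) and of enumerating (S, V) allocations on a powers-of-two grid. The λ value, budget, and grid size below are illustrative assumptions of mine; the paper's actual scaling-law fits (S_opt ∝ C^a, V_opt ∝ C^b) require measured success rates from real model runs.

```python
# Compute-matched comparison of Self-Consistency (SC) vs. GenRM,
# following the budget formula from the paper: C(S, V) = S * (1 + lambda * V).

def inference_compute(S: int, V: int, lam: float) -> float:
    """Relative inference cost of S solutions, each checked by V verifications.

    `lam` is the ratio of tokens per verification to tokens per solution.
    V = 0 recovers plain Self-Consistency (majority voting over S solutions).
    """
    return S * (1 + lam * V)


def configs_within_budget(budget: float, lam: float, max_exp: int = 8):
    """Enumerate (S, V) pairs on a powers-of-two grid that fit the budget."""
    sizes = [2 ** k for k in range(max_exp + 1)]  # 1, 2, 4, ..., 2**max_exp
    return [(S, V) for S in sizes for V in [0] + sizes
            if inference_compute(S, V, lam) <= budget]


# Example: with verifications half as long as solutions (lam = 0.5), a budget
# of 64 allows SC with 64 samples, or e.g. 16 solutions x 4 verifications.
for S, V in sorted(configs_within_budget(64, lam=0.5)):
    print(f"S={S:3d}  V={V:3d}  cost={inference_compute(S, V, 0.5):6.1f}")
```

The point of the enumeration is the trade-off the paper studies: at a fixed budget, every verification pass spent on GenRM is a solution sample SC could have drawn instead.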
  • WWW.IGN.COM
    PS5 Pro and Slim Disc Drive Is Back in Stock at PS Direct
Back on the digital shelves at PlayStation Direct, you can finally grab a PS5 disc drive for $79.99 in the US and £99.99 in the UK. While these aren't as rare as they were back when the PS5 Pro was announced, it's still surprising to find one at the standard list price from any retailer.

Since these are the only way PS5 Pro or digital edition PS5 Slim owners can play physical games, films, or series on Blu-ray with their console, this add-on can be an essential one depending on your needs. However, both in the UK and the US, PS5 disc drives at other stores are either out of stock or carry a costly price markup.

Amazon, for example, only has used disc drives, from $106.81, in the US. While Amazon UK does have new ones in stock, you'll have to hand over £117.00. There are still some exceptions, as PS5 disc drives are also still available at Best Buy, Walmart, and GameStop, thankfully at the base $79.99 price tag. Among other UK stores, though, stock at the lower price is limited—Very is the only retailer besides PS Direct selling the physical media players for £99.99. Meanwhile, PS5 disc drives are unavailable at Currys and out of stock at Argos, Smyths, and ShopTo—the latter of which was selling them at a higher £109.85 anyway.

While stock has stabilised since the Pro hit the market, it's highly likely we won't see the PS5 disc drive getting any cheaper for quite some time, since demand still outweighs supply.

If you're still waiting for the right time to buy or upgrade to a PS5 Pro—so you can enjoy enhanced versions of games like Assassin's Creed Shadows, Final Fantasy 7 Rebirth, and Clair Obscur: Expedition 33—we'd recommend pulling the trigger now: $79.99/£99.99 is the best price you're going to get, and we don't know how quickly stock will be replenished. That's especially true in the UK, where availability is most limited.

Ben Williams – IGN freelance contributor with over 10 years of experience covering gaming, tech, film, TV, and anime. Follow him on Twitter/X @BenLevelTen.
  • WWW.DENOFGEEK.COM
    Alex Garland and Ray Mendoza Want to Reinvent the War Movie with Something Radical: Truth
When Alex Garland first worked with Ray Mendoza, he was immediately struck by the latter's precision and intuitive storytelling instincts. At the time, the pair were collaborating on Civil War, Garland's blistering speculative fiction about, perhaps, the direction things could head in the U.S. Garland wrote and directed that movie, but as military advisor, Mendoza gave the violence of its titular conflict a ferocious believability. The way Garland tells it now, Mendoza even had a prominent hand in shaping the movie's spectacularly kinetic, and chilling, climax wherein rebel forces shoot their way from room to room in the West Wing, culminating in the execution of an unnamed, tyrannical POTUS on the Oval Office floor.

"In the editing of that sequence, I just tried to stick as closely as possible to what Ray had created with those soldiers in terms of the rhythms of it, and the strange silences or explosions of movement—the staccato quality of it," Garland says while chatting alongside Mendoza with Den of Geek ahead of the release of their next film together, this time as co-directors: Warfare.

"I actually sent that sequence to Ray, before it was locked, to say, 'Do you think this is correct? Have we got it right in the edit?'" Garland continues. "And I remember he said, 'No, you've left too long a gap between these two events. That gap should be shorter.'"

Garland, a perfectionist filmmaker on projects like Ex Machina and Annihilation, was impressed by Mendoza's attention to the minute details. He was also impressed by Mendoza himself, a veteran who served as an American Navy SEAL for more than 16 years. So the ever-inquisitive storyteller asked if Mendoza had a personal experience from his time on SEAL Team 5 that could make for a cinematic experience.

"Would you be interested in telling an account of real combat that lasted, let's say, 90 minutes or 100 minutes?" Garland inquired at the time. "We would not take any liberties with anything inside that window. We could have no time compressions, no conflated characters or omitted characters. We would just try to recreate reality as closely as possible."

The answer, of course, was that Mendoza had several, and one in particular he'd been hoping to make for about a decade. He had Ramadi, and one of the grisliest firefights of the Iraq War, which occurred on a bloody morning in November 2006.

"I have [long wanted to make this]," Mendoza says, "but I didn't think it was going to be this big."

Introduced to filmmaking on Act of Valor (2012), Mendoza has worked steadily in the movie industry for nearly 15 years, including as an advisor on films like The Outpost (2019), Jurassic World (2015), and Peter Berg's Lone Survivor (2013). It was also with Berg that Mendoza previously attempted to tell the story of Warfare, via the 2017 History Channel series they produced, The Warfighters.

"It was initially going to be maybe a 30-minute recreation," Mendoza explains. "If we were to get a second season, this story was going to be one of them that I told."

That didn't work out, but for a veteran who knows all too well what it is like to compartmentalize his experiences during a war—and after it—Mendoza muses it was probably for the best.

"I think it's a mechanism for being able to function in a combat zone," Mendoza says of his ability to close experiences, and memories, off. "You can't really dwell on that stuff because you'll become non-functional in an environment where you're doing that every day. So it's a survival mechanism, to push it down, compartmentalize it. But then when you get out, that's how you think you're supposed to do everything, because it works for you."

Mendoza speculates this common tool among veterans might be why relationships can struggle, or jobs can fall away. "You learn quickly that it doesn't function in regular society. You have to communicate; you can't just explode in anger because someone upsets you or disagrees with you."

But ever since he and Berg came close to telling Warfare's story, Mendoza knew he would have to break down the walls he'd built around some memories. He would have to return to Ramadi.

"Leading up to this, I was hyper-aware of what these feelings might do," says Mendoza. "I think had I tried to do this movie 10 years ago, it wouldn't have happened. I don't think I would have been able to emotionally, physically do this. It took years of being able to process and be able to talk, of finding the vocabulary to describe these things without going into depression or isolating myself."

It also allowed Mendoza to educate himself on the filmmaking craft, refining a skillset he believed necessary to bear what he called a "responsibility." And the most remarkable thing about Warfare is that he got not only himself but most of his surviving SEAL comrades who were in Ramadi that day to open up and recount their own memories to Garland and himself, providing a treasure trove of details that immerses viewers in a 95-minute war film wherein you are buried in the dirt and debris of an Iraqi house that SEALs have commandeered while surrounded by invisible enemy insurgents. And then the fire rages.

Intriguingly, despite the meticulous research Mendoza and Garland pursued, they were adamant about not saying their movie is "based on a true story." Even in the film's opening insert cards of text, viewers are told the film is derived from "the memories" of the young men who were there.

"We just felt it was the truest statement we could make," Garland says. "What are we really working with? We had a handful of photographs and aside from that, it was memories."

Garland obviously had a little more, too, with the filmmakers obsessively attempting to recreate every detail they could find about that Iraqi home and battle, right down to showing a photograph of the abode before the Warfare title card descends on it like a cloud. Yet in his quest for painstaking accuracy, Garland also wished to acknowledge the slippery nature of memory.

Says the co-director: "One has to understand and embrace that memory is a subjective state. It's imperfect, sometimes it's in conflict with other people's memories, and if we said this is a true story, it would actually have been disingenuous. In a film that was attempting to be as truthful as possible, that would have suddenly become an untruthful statement, strangely."

The filmmaker likens it to how two men can have memories of standing by someone else during a heated exchange of gunfire, but not remember who the other soldier was. That tunnel vision creates a riddle that Garland and Mendoza must solve by comparing interviews and notes. Sometimes, though, two men can truthfully assert they both did the same action at the same moment. Neither is lying, but the literal fog of war and memory points to the elusive nature of the greater, allegedly objective truth.

Be that as it may, bold choices made on Warfare intentionally cause it to stand apart from its genre. For instance, unlike almost any other film made about the Iraq War, audiences never see the insurgents during the pitched battle. The only point of view, and the only deaths that occur, belong to the Americans trapped in a chokehold.

"It was a COIN mission, which is a counterinsurgency," Mendoza explains, "so there's an insurgency and oftentimes, they're not a uniformed enemy. They dress just like civilians. So it's hard to differentiate. It just makes those decisions even more difficult, those shoot or no-shoot scenarios, especially during the day when you don't see muzzle flash."

For Garland, it was also about deprogramming himself from how war movies have taught audiences to understand the nature of violence.

"One of the things that Ray and I and everybody who worked on this film was trying to do was to move away from the lessons that cinema has created in approaching the genre of a war movie," Garland considers. "They create devices, and the device might be music, it might be soaring strings… but it's also often to do with the line of sight of the enemy. It is a convention in gunfights that both sides are clearly seeing each other, because that's helpful to cinema in some respects. But it's not necessarily what gunfights are like. If both sides could clearly see each other, someone would get shot a lot quicker. The way these guys are trained and the way they operate, you can't just stand there shooting at them. You will get shot."

The dedication to capturing the tension, and sometimes abject horror, of the day with an often clinical disposition makes the recent online backlash against the movie—virtually all of it from social media users who have not seen it—curious. Sight unseen, many have attempted to dismiss Mendoza and Garland's efforts by virtue of the film being an American perspective on the Iraq War. To which Garland has a blunt response.

"Look, if you haven't seen it, wait a beat. If you are minded to watch it, watch it and then have an opinion. This film is not propaganda. In some senses, it is exactly the opposite of propaganda. It is simply trying to say this happened. And you're an adult, so make your own inferences from that."

The filmmaker, who has been in the industry since seeing his first novel, The Beach, adapted into a 2000 film by Danny Boyle, even muses that the backlash reflects how media has changed in the 21st century.

"In the old days, studios were the self-appointed gatekeepers of what audiences could or could not understand," Garland recalls. "And the phrase that often was used in relation to that was 'dumbing down.' Now that gatekeeping role seems to have migrated away from the studios to other places, and other people are attempting to be the gatekeepers."

Likening modern social media discourse to the times when executives would argue that filmmakers put too much faith in audiences' sophistication, Garland adds, "Now that gatekeeper position has shifted elsewhere as a consequence of the nature of the way media has changed, I suspect, but the arguments and the issues remain the same. Ray and I were attempting to be truthful. There is a value in that. Somebody could say, you're not truthful. That would be a legitimate position to take, but I don't think it's a legitimate position to say there is a requirement on you that you make your position clear. That I just disagree with."

There are deep, trenchant lessons about the Iraq War to be found in Warfare, but Garland doesn't feel an obligation to spell them out.

"I have probably learned more over the process of making this film than any other film I've ever been involved in," Garland says. "Are those lessons signposted? No, because I don't want to be infantilized. I assume that other people don't want to be infantilized."

The hope, instead, is that audiences experience a war movie that eschews the bombast and adrenaline high of the fictionalized Civil War, one that throws you into the muck and leaves you, after the chaos is over, to draw your own conclusions about what happened.

Warfare opens in cinemas on April 11.
  • 9TO5MAC.COM
    Craig Federighi’s leadership has already resulted in this major Siri pivot, per report
Today, a revealing new look at Apple's recent Siri struggles was published at The Information. The report contains myriad details on internal drama and conflicts, but it also ends with a big piece of news: under Craig Federighi's leadership, Apple engineers can for the first time use third-party LLMs to build Siri features.

Federighi's engineers can now use third-party LLMs for Siri features

Wayne Ma writes at The Information:

Federighi has already shaken things up. In a departure from previous policy, he has instructed Siri's machine-learning engineers to do whatever it takes to build the best AI features, even if it means using open-source models from other companies in its software products as opposed to Apple's own models, according to a person familiar with the matter.

According to the report, until the recent Siri leadership changes, Apple engineers could only use third-party LLMs to benchmark them against their own in-house models during testing. For additional context, engineers apparently did a lot of experimentation with OpenAI's models but were restricted from actually shipping them in features. Ma writes: "Apple managers told their engineers in 2023 they couldn't include models from outside companies in final Apple products."

Building these models was mostly the responsibility of John Giannandrea's team. This frustrated members of the software group, who wanted to build AI-powered features but found that Apple's models "didn't perform nearly as well as OpenAI's technology."

Now, under Federighi's leadership, it seems that all open-source LLMs are on the table for Apple's engineers. This seems like a great move for users, who only care about getting the best Siri features possible, not the underlying technology powering those features.

There are many more details in the full report at The Information. What do you think about this Siri pivot for Apple? Let us know in the comments.