• 76% of US kids want consoles and games for Christmas
    www.gamesindustry.biz
76% of children in the US are planning to ask for game-related gifts this holiday season, with almost half of those surveyed requesting consoles. The ESA survey also found that 38% of children were planning to ask for in-game currency. News by Sophie McEvoy, Staff Writer. Published on Nov. 13, 2024.

This is according to the Entertainment Software Association, which surveyed 500-plus children (ages 10 to 17) on the gifts they wish to receive this Christmas. The percentage has increased slightly from last year, when 72% of those surveyed planned to ask for video game products.

Aside from consoles, subscription services came in second at 43%, followed by a tie between console games and game accessories at 41%. In-game currency was the least popular game-related item, with 38% of children surveyed expressing interest.

The ESA also surveyed 500 adults about how much they plan to spend on themselves and others over the holidays, which amounted to $312 on average.

Game-related products topped children's wish-lists, ahead of money and gift cards (requested by 67%) and clothes and electronics (66%).

"Video games and video game technology have evolved quite a bit since today's parents were kids themselves, but the fun and joy created by gameplay for families remains the same," said ESA president and CEO Stanley Pierre-Louis.

"Parents see video games as more than simply a gift. With 83% of US parents who say they play video games with their children, games serve as a powerful tool for families to connect with one another, especially during the holidays."
  • Reflections on my score for Tactical Breach Wizards
    www.gamedeveloper.com
Within this article, I will go over various aspects of my score for Tactical Breach Wizards, including my creative process, inspirations, experiences, challenges, and reflections post-release. I hope this gives hopeful composers some behind-the-scenes insight into the production of a full game score.

Creative process and vision

I feel like the biggest obstacle for any score, whether it's a film, game, or other type of media, is to land the vision correctly and stay true to it. When I sent my work to Tom over at Suspicious Developments after he announced the composer opening, the two tracks that helped me stand out were two trailer tracks I had written that heavily used percussion. With that in mind, I wanted to ensure that percussion was at the forefront when the game called for more serious or dramatic music, mixed with orchestral and sometimes electronic elements. It helped that I was a trailer composer, so I had access to a fair number of trailer samples to bring some punch and intensity.

In Tactical Breach Wizards, there are four main acts and a prologue to start the game, showing the last mission between Liv and Zan. Along with the references Tom had sent me, I also wanted to give each act a different direction while staying true to the vision of the overall score, sometimes with different instrumentation focuses: Act 1 (DSR) has a lot of synths and a lead electric guitar, a choir appears when dealing with Chapel in Act 3 (Kalan), and bells put the players in a historical setting in Act 4 (Medil). The prologue is more of an opening sequence made of two more orchestral tracks with some of my favorite trailer SFX.

The two outliers from the vision of the overall soundtrack were At Mas and The Necromedic, which were reggae and jazz, respectively.

Technical challenges

When it came to the challenges of writing the soundtrack, at times I was my own worst enemy, which I will talk more about later in this article. At times, my own tools imposed limits I had to overcome. Sometimes, it was my production level at the time. The challenges all led me to find creative ways past them, and eventually, they all went away fairly quickly. Don't be afraid to face them head-on, as you may be surprised by what you can do.

Influences

Overall, the score was meant to have different militaristic elements mixed with elements that worked well with the setting. I thought back to older game scores and developed ideas of what I liked from those scores. Some of it was the use of percussion, some of it was the use of synths, and some was the use of strings and brass, which I loved.

I remember once trying to take the whimsical approach when the players were selecting their perks; however, it didn't work. There were other moments I attempted it, but it never stuck. I know that when you think of wizards and magic, you think whimsical and fantasy, but that had no place here! We had a grittier theme, though there was a lot of humor around it.

Evolving as a composer

When working on the score, I went through an evolution over the three and a half years I was involved with Tactical Breach Wizards. Early on, my process was all about experimentation with sounds, textures, and themes, but as time went on, it evolved into a more methodical approach, mainly thanks to my studies with Penka Kouneva, who helped me clean up my methods, have a more detailed plan for each cue, and, most importantly, execute it.
I know that has been a common issue, so it was very important for me to get it right. This also showed in the evolution of the score, with sometimes more complex writing in the latter half of the soundtrack.

I had also really worked on my production skills so I could achieve great results with whatever orchestral libraries and synths I owned at any given time. I should thank developers such as Rocky Mountain Sounds (Complexity, Majestica 2, etc.) for making great libraries and expansions that contributed to the soundtrack in various ways.

With all that said, these changes led to an issue I hadn't accounted for: having to go back to a few older tracks and use what I had learned to improve their quality. Keep in mind that if you're working on a score over the course of several years and are still developing, you may find yourself in a similar position!

Feedback from players and reflections

Over the course of the beta rounds, the feedback was great, but there was one surprise. The Necromedic stood out to many players in a good way, and it showed when looking at the feedback for the score. It was the favorite, which honestly made me chuckle because I didn't consider my jazz chops to be much to write about. This process has taught me to be open to such nice surprises.

The overall reception of the soundtrack upon release was something I was nervous about, as this was one of the first times many people would hear my work AND know my name behind it; many other projects I worked on (trailers and films) didn't thrust me into the open like this one did, especially with the hype around it. I appreciate the great response to the game and the soundtrack, and I hope more people play Tactical Breach Wizards, as the game is awesome, with great writing, level design, art, and more.

When I look back at the soundtrack as a whole, I plan to keep using the knowledge gained during the experience and continue writing great scores, as this has so far been my best work. You never truly know how you will handle the challenges thrown at you on a work like this until you're in the thick of it, and I urge everyone to pursue these challenges. I really hope you enjoy the score as much as I enjoyed writing it.
  • Why Animal Well's home-brewed engine was key to its success
    www.gamedeveloper.com
For most of his development career, Billy Basso has attended gaming conventions in a work and promotional capacity. He'd stand near an incomplete game's demo kiosk, watching people play it, taking notes, figuring out if he was on the right track. Analyze, focus, guess, and hope.

In September, he finally got a taste of the other side of the booth.

"PAX West was the first show that happened after Animal Well's launch," Basso says weeks later from his home studio in Chicago. "I was looking forward to taking, in a way, a victory lap." He describes a crowd of people lining up at the Seattle expo, not to play a new game, but to greet Basso and share "all of these stories" about their positive experiences with his months-old title.

As Animal Well's sole coder, artist, and designer, Basso deserves the lap. Since its April 2024 launch on consoles and PCs, the indie "Metroidvania" game has amassed nearly universal acclaim. Sales estimator Gamalytic points to sales figures exceeding 300,000 copies on Steam alone. During its launch month, Animal Well at one point outsold every first-party Nintendo game on Switch's eShop.

In a chat with Game Developer, Basso and his business partner, Dan Adelman, reflect on Animal Well's development, launch, and success. What worked? How did surprising fare, like a Wisconsin tourist attraction and the 3D adventure Shadow of the Colossus, inspire and guide the game's unique spin on Metroidvanias? And what might come next?

Opting out of battles with pre-made engines

The Animal Well journey began in two phases, Basso says: a quick stab at a Metroidvania-styled prototype in 2012, and a bespoke engine project that he took more seriously, which he began coding alone in 2014 during his day job's off hours. That day job, to clarify, was programming work at the video game studio Phosphor Games.

"I was typically working on mobile games with live service elements that were heavily dependent on network infrastructure," he says. "We were using off-the-shelf game engines that had a lot of bloat and were laggy." Having his own bespoke, offline engine was a priority for whatever kind of game he would eventually make on his own, to avoid the "sour taste" of inefficient tools and workflows.

But Basso's work on a "general purpose, 3D game engine" dragged on during nights and weekends for three years before he came to a tough conclusion: the engine-focused project had been too focused on "preemptively solving problems," instead of "having enough direction to actually turn into a game."

It was 2017, and some of the design ideas from his 2012 Metroidvania lark had continued pulling on him, so Basso "wrote off" his 3D engine as "a learning experience" and started anew with 2D search action as a priority. Like before, off-the-shelf solutions weren't going to cut it, but at least this time, he had a genre and some design directives in mind.

For his new game, he wanted pixels to scale perfectly to a variety of common screen resolutions (an idea sketched in code below), along with visual effects that meshed well with his integer-scaled pixel art. He disliked "built-in lighting effects" that developers had to "fight against quite a bit" in engines like Unity: "they have point lights and stuff, they have a smooth gradient, and that clashes with pixel art."

Build your own technology, then let it set the tone

With that baseline in mind, Basso's early project took shape, but roughly one year in, he found it "looked like an NES game, where the art wasn't that striking."
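Basso hasn't shared his scaler's code, but the integer-scaling idea he describes is easy to sketch. The following Python snippet is a minimal illustration, not his implementation; the 320x180 internal resolution is an assumption, chosen because it divides evenly into common 16:9 display sizes.

```python
# Minimal sketch of integer pixel scaling (illustrative; not Basso's code).
# Given a fixed internal resolution, pick the largest whole-number factor
# that fits the display, so every game pixel maps to an exact square of
# screen pixels and the art stays crisp.

BASE_W, BASE_H = 320, 180  # assumed internal resolution (16:9)

def integer_scale(display_w: int, display_h: int) -> int:
    """Largest integer factor at which the base frame fits the display."""
    return max(1, min(display_w // BASE_W, display_h // BASE_H))

for res in [(1280, 720), (1920, 1080), (2560, 1440), (3840, 2160)]:
    s = integer_scale(*res)
    print(f"{res[0]}x{res[1]}: x{s} -> {BASE_W * s}x{BASE_H * s}")
```

At these settings, 720p, 1080p, 1440p, and 4K all land on exact multiples (x4, x6, x8, x12), which is what "scale perfectly" means here; any leftover screen area would be letterboxed rather than smeared by fractional scaling.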
Thanks to his full control of the engine, with rendering efficiency at the fore, Basso had more wiggle room at this point in development to add what he describes as a crucial technical upgrade: a rim light shader.

"Everything was now kind of in shadow, but the edges were highlighted, and your character sprite in the bushes got blended in with the foreground tiles in a way that tied it all together," Basso says. "Immediately, when I got that shader working, I'm like, 'This is a unique look.'"

The inspiration dominoes began falling. Basso noticed that his effects' shadow-and-light interplay "informed the stylistic direction" of the game's environments and animals. Newly inspired, he built a "full screen Navier-Stokes fluid simulation" to power the game's smoke and water effects for more haziness and dreaminess, along with more lighting and particle-effect systems, painting more highlights and posterization effects over backgrounds and sprites.

Image via Billy Basso/BigMode.

"This is interesting," Basso recalls thinking, "where I have this high-fidelity physics simulation working, but it's being paired with this, in some ways, primitive pixel art." As Basso's work bounced between design, art, and engine development, "the richness of the game got layered on, and I found the game's tone and identity gradually over that process." Despite his earlier hiccups with a 3D engine, he still recommends an engine-first mentality for new game development: "That opens up a lot of unique creative avenues. Once I get a cool system working, then I can play around with it and see what it allows for."

Digging the well with Adelman's aid

With his 2D game in full swing, Basso inadvertently learned about Adelman, a business- and marketing-focused industry veteran, through a pair of interviews in late 2017 and early 2018. Basso appreciated hearing how Adelman had transitioned from revolutionizing Nintendo's digital distribution channels to dedicated support for indie devs, particularly for the breakout Metroidvania hit Axiom Verge. "I'm naturally a pretty introverted person, so having someone like him to work with sounded very appealing to me," Basso says. He sent a cold-call email.

In hindsight, Adelman admits that his near-instant email reply turned out to be a bit disingenuous. "What was special enough about [Animal Well] that made me think, 'oh, I want to work on this,' versus what it eventually became, were actually very different games," he says on the PAX West 2024 show floor. "I really like both games." The game he originally envisioned landed somewhere between VVVVVV and You Have To Win The Game, since both eschewed combat in favor of exploration, puzzles, and hardcore platforming. For Adelman, the cherry on top was the early game's emphasis on nooks and crannies, which might otherwise be hidden in shadow or other Basso-developed effects: "I really loved finding secret passageways that I walked past, like, four or five times before I noticed them," Adelman says. "Like, oh, this opens up into a whole new area!"

With Adelman on board, Basso's more richly developed prototypes started getting in front of industry players, and Animal Well's first major reveal came in a small booth at the inaugural Summer Game Fest in June 2022, an in-person event in the wake of the defunct Electronic Entertainment Expo (E3).
That event's biggest Animal Well payoff came one week later, as a six-second blip in a wrap-up video made by popular YouTube creator Jason "Dunkey" Gastrow.

A big leap with BigMode

"Jason, in his normal style, basically shit on the whole event, but he did give us a positive shout-out," Adelman says. "Just from that one mention, our wishlist numbers had a big spike." Shortly after this, Gastrow and his wife Leah traded follows with the Animal Well team on Twitter, which led to a series of DMs between Adelman and Leah during that September's Tokyo Game Show. Leah requested a playable build of Animal Well; Adelman asked that it not be used as YouTube or Twitch content. Leah replied that she had something else in mind: "We're starting a publishing label, called BigMode," Adelman recalls her saying. "Would you have any interest in talking about that?"

Adelman had not planned to add a formal publisher to the Animal Well development period; after all, he didn't need one for Axiom Verge. And the Dunkey YouTube channel's reputation, full of curse words and overt trolling in online games, gave Adelman some pause: "is [Gastrow's] real-life persona the same as his online persona?" (The past few years have seen other YouTube and Twitch creators take huge missteps in their own game-publishing efforts.)

Image via Billy Basso/BigMode.

But Adelman and Gastrow's initial conversations were polite and promising. Game promotion and marketing had changed in the years since Axiom Verge. Jason's online reach and the couple's genuine enthusiasm filled an information gap for Animal Well at this point in development. Jason had made a career out of tapping into a modern gaming audience's attention span: how people engage with new games and pick favorites out of a deluge of indies. If Gastrow saw something special in Animal Well, he could convince a lot of people to see the same. In short, that's what a good publisher does.

"This could be a disaster, or it could be a really good fit," Adelman recalls thinking.

(When asked for comment on Animal Well's development and success at PAX West 2024, Jason Gastrow replies by repeating the same sentence twice: "I feel like one hundred penguins." It is unclear whether this statement was in line with Gastrow's humorous YouTube persona, or whether he was offering a hint about an undiscovered Animal Well secret, since the game prominently features penguins.)

"Layers" work, but they need to be distinct and appealing

Animal Well wastes no time hinting at its range of secrets. With no plot or narration, players begin running and jumping around a dark, eerie cave. In one direction, players can pick up an "egg" from a treasure chest with no indication of what it means or does. In the other direction, a squirrel runs away, and a rabbit is seen hiding in a seemingly unreachable corner of the screen. Neither of these hints is borne out with prompt clarification, and the mysteries and layers keep coming.

But that opening beat is nothing compared to Animal Well's "layers" of deeper puzzles. Some required dozens of players' collaboration to figure out, and others required connecting specific hardware to a PC while playing the game. Layers found their way into Animal Well in a few ways.

With a publishing deal in place and budget settled, BigMode, Adelman, and Basso agreed not to rush the game's launch, and to let Basso ultimately lead on the development, implementation, and difficulty of the game's crisscrossing puzzles.
Basso would often hand a new build to Adelman and the BigMode team without spoiling its four "layers" of puzzles, though the deepest layer was so obtuse that its puzzles simply couldn't be tested by BigMode's tiny team without cheat codes.

This part of the game's development never exceeded Adelman's "scope creep" bandwidth, as he found the deeper puzzle additions never added "bloat" to the development process. Basso would review feedback after each build was turned in and accept "maybe half" of the suggestions. "It wasn't so much like I was asking permission ever to do something," Basso says. "I kind of just did it."

For Basso, Animal Well's deeper layers would have no value unless players connected with its surface-level content. He drew inspiration from an unlikely 3D adventure game: 2005's Shadow of the Colossus, whose online community eventually began hunting for a "hidden" 17th "colossus" boss (which never existed).

"It was almost like a religious conviction that they wanted to find this missing part of the game," Basso says. "But then you realize that that's only true because Shadow of the Colossus is one of the most finely crafted games ever made, and people care that much because the base game is so compelling."

Early on, Basso committed to one principle in particular: none of the game's items could be considered "staples" of other Metroidvania games. Midway through development, Basso realized he'd inadvertently leaned on children's toys as the game's foundation, including a Slinky, a yo-yo, a bubble wand, and a frisbee, which gave him immediate reference points for their designs, plus the freedom to add clever surprises to each (particularly the bubble wand's unique "climbing" system). Basso likes the "safe bet" nature of the popular toys he picked out: "That's already a proven interaction."

Inspiration from The House on the Rock

The last step was to coalesce Basso's artistic and design leanings in a way that would make his mysterious world inviting and exciting to explore, not confusing. Basso's vision was catalyzed after visiting The House on the Rock in Wisconsin. "It's this ramshackle collection of weird objects that the original owner of the house collected, and he built all these expansions to store them all," Basso says. "You can go in any direction and find cool stuff to look at. It's always surprising, and you can't really predict what the through line is between all of them, but it still feels consistent."

Basso says he wanted to make "the video game version" of the tourist attraction, though his version would include an unspoken mythology defined by "all the weird, Midwestern lawn ornaments I was seeing in my neighborhood." This explains some of the game's atypical animals standing amongst overgrown, lawn-like weeds (hello, pink flamingos), but Basso also describes an effort to keep his game's animals as unique as its items.

Basso admits to leaning towards a cast of animals because he finds them "more enjoyable" to draw than humans, and because he has more creative license, since people on average are "more forgiving about the [visual] details, about what they look like." His initial impulse was to draw "more zoo animals," Basso says. "There's a zebra in the [early versions]. And there's just some, like, bears."

Image via Billy Basso/BigMode.

As the game's more mysterious tone took shape, Basso began watching nature documentaries on YouTube, then found himself compelled by the weirder, less popular animals he saw.
"I remember learning that the beluga whale can imitate the sound of children underwater," Basso says. "They trick a lot of divers into thinking there's a child drowning." He might have otherwise picked a better-known creature to fit whatever biome he needed (water, forest, cavern, etc.), but documentaries shaped his excitement: "Oh, I like this detail. That's scary and creepy and interesting." He thus filled out the Animal Well.Community matterswith a pre-release twistAdelman admits that before working on Animal Well, he'd never focused pre-release efforts on the concept of community, and he was particularly ambivalent about one popular service, which he'd seen other game developers adopt for their own games with mixed success. "I guess we've got to start up a Discord server and hope people show up," Adelman recalls initially thinking. But in the game's run-up, Basso hid a number of puzzles across the game's promotional channels, including videos and website postsall hinting to the caliber of puzzles that would eventually debut in the game's deeper layers. The resulting Discord engagement was immediately massive. "These ARGs brought a lot of people together, and friendships started to form," Adelman says. "So I was like, 'Oh, I get what [people] mean now, when they talk about building community.'"Before that full community could coalesce, Adelman elected to test one unique version of "community": a pre-release, critics-only Discord channel. In addition to giving critics a few weeks to play the final game, Adelman wanted to provide a meeting place for critics to "share information and collaborate on certain puzzles," admitting that Basso's design vision always included an expectation of community-driven puzzle solving. (In other words: it's not cheating to ask for Animal Well help!)This had two outcomes for the very small Animal Well development team: First, the developers could see in real time a miniature version of how Animal Well would be digested by a community at largesomething that couldn't necessarily be replicated by a dedicated QA team. Second, it arguably bumped the game's review scores. "I think if everyone had been playing in isolation, they might have missed out on some things," Adelman says. "Sometimes a reviewer would say, 'I just found this,' and everyone would be, like, 'What?! I gotta look for that." We might have gotten a lot of 7 out of 10s, 8 out of 10s, if people didn't seek out some of the more interesting content or didn't know where to find it."Not "niche," but "intense"The community's response to Animal Well's pre-release ARG puzzles gave Basso confidence to build complex puzzles for the final game, and that community responded even more fiercely to the final retail game. "People really appreciated the secrets in the game," he says. "I'd assumed there'd be a hardcore group of people that that appealed to. And most people would just enjoy the game. Maybe that is the case, but I think everybody that did experience those deeper, later parts of the game, they all seemed to really resonate with it.Still, when the game's deepest riddles were uncovered by the communityparticularly a cypher whose only hint is the pixel-long direction a bunny's ears are turnedAdelman and Basso admit they were shocked by how quickly it happened.Adelman suggests the internal estimate for full puzzle discovery was somewhere around ten years. 
"There were so many people who were so passionate about Animal Well, that there were probably, you know, ten man-years dedicated in that condensed amount of time," Adelman says. "It was just way more people working way harder on [the game's puzzles] than we ever expected." (Basso insists that "there are some things the community hasn't found" in Animal Well but has not elaborated further.)As Basso speaks to trusting his development instincts, he confirms that he's "thinking about another game" and is thus in the "very early stages of setting a new project and engine up." He intends to deliver something similar to Animal Well's scope, if not something "bigger.""I'm looking forward to releasing a game within the context of having already released Animal Well," Basso says. "I think I'll have more flexibility with how it is marketed, and it will be fun to play off, and subvert, existing fans' expectations." And he clarifies that the afterglow of Animal Well's launch, including a "victory lap" where feedback has moved from comment sections to critical accolades and real-life lines, made years of solo-development isolation worth it. He had ultimate control over its tools, engines, art, and mechanics. Now, and forever, he says that's out of his hands."Animal Well was this sort of pure side project where I could implement things in what I felt was the ideal way," Basso says. "Robust, not dependent on things. [The result was] a timeless quality that older cartridge games havethat still have, like, really intense speed running communities. I wanted to make a game that, you know, could still have a fandom around it 30 years from now."
  • OpenAI reportedly plans to launch an AI agent early next year
    www.theverge.com
OpenAI is preparing to release an autonomous AI agent that can control computers and perform tasks independently, code-named Operator. The company plans to debut it as a research preview and developer tool in January, according to Bloomberg.

This move intensifies the competition among tech giants developing AI agents: Anthropic recently introduced its computer use capability, while Google is reportedly preparing its own version for a December release. The timing of Operator's eventual consumer release remains under wraps, but its development signals a pivotal shift toward AI systems that can actively engage with computer interfaces rather than just process text and images.

Do you work at OpenAI? I'd love to chat. You can reach me securely on Signal @kylie.01 or via email at kylie@theverge.com.

All the leading AI companies have promised autonomous AI agents, and OpenAI has hyped up the possibility recently. In a Reddit Ask Me Anything forum a few weeks ago, OpenAI CEO Sam Altman said "we will have better and better models, but I think the thing that will feel like the next giant breakthrough will be agents." At an OpenAI press event ahead of the company's annual Dev Day last month, chief product officer Kevin Weil said: "I think 2025 is going to be the year that agentic systems finally hit the mainstream."

AI labs face mounting pressure to monetize their costly models, especially as incremental improvements may not justify higher prices for users. The hope is that autonomous agents are the next breakthrough product: a ChatGPT-scale innovation that validates the massive investment in AI development.
• Sonos revenue falls in the aftermath of company's messy app debacle
    www.theverge.com
Sonos is still trying to climb out from the hole it dug itself earlier this year by recklessly shipping an overhauled mobile app well before the software was actually ready. Today, just a couple weeks after the release of its latest hardware products, the Arc Ultra and Sub 4, Sonos reported its fiscal Q4 2024 earnings. And the damage done by the app debacle is clear. Revenue was down 8 percent year over year, which Sonos attributed to softer demand due to "challenging market conditions" and "challenges resulting from our recent app rollout." During the quarter, the company sank $4 million into unspecified app recovery investments. (Sonos previously estimated it could spend up to $30 million to resolve all of the trouble that has stemmed from the rebuilt app.)

"To date, we have released 16 updates and restored 90 percent of missing features," the company wrote in its earnings presentation. "Moving forward, we'll alternate between major and minor releases. This will allow us to maintain our momentum of making improvements while also ensuring adequate beta testing."

CEO Patrick Spence has taken accountability for the app situation, and last month, Sonos announced multiple commitments that it believes will prevent another colossal misstep like this from happening again. Some aspects of the plan are focused on more rigorous testing and greater transparency, both inside the company and out. But others, like executives potentially losing out on their annual bonuses, have been mocked by customers as meaningless, half-hearted measures.

Do you know more about what's ahead at Sonos? The company is rumored to be working on a video streaming box. As with headphones, I'm curious how Sonos plans to differentiate itself in this category. If you have anything to share on what's happening at the company, I can be reached securely (and confidentially) via Signal at chriswelch.01 or (845) 445-8455.

"The Sonos flywheel remains strong, as evidenced by the fact that the number of new products per home increased in fiscal 2024," Spence said in today's press release. The company also reported its all-time highest annual market share in home theater, another positive sign at a time when morale among Sonos employees has taken a serious hit.

The rebuilt app is in a better place now, which you'd hope would be the case after several months of bug fixes and performance enhancements. The mood within Sonos community spaces like the company's subreddit has also improved, with less of the vitriol that felt non-stop (understandably so) from late spring through the early fall. As far as hardware is concerned, Sonos seems to be getting back on track. Early reviews of the Arc Ultra have been largely positive. (Yes, I'll have one coming in the near future.) One early bug with the new soundbar affected Trueplay tuning and, for some customers, resulted in lackluster bass response from a paired subwoofer. Sonos just rectified this issue with a software update that went out earlier today.

But some of the company's most loyal customers are still feeling a sense of wariness and frayed trust towards the brand. Sonos' next major new product is rumored to be a video streaming box. I'm still flummoxed as to just how the company plans to stand out from competitors in that space. But hopefully there won't be another major controversy to derail the product, as was the case with the Sonos Ace headphones.
  • Fixie AI Introduces Ultravox v0.4.1: A Family of Open Speech Models Trained Specifically for Enabling Real-Time Conversation with LLMs and An Open-Weight Alternative to GPT-4o Realtime
    www.marktechpost.com
Interacting seamlessly with artificial intelligence in real time has always been a complex endeavor for developers and researchers. A significant challenge lies in integrating multi-modal information, such as text, images, and audio, into a cohesive conversational system. Despite advancements in large language models like GPT-4, many AI systems still encounter difficulties in achieving real-time conversational fluency, contextual awareness, and multi-modal understanding, which limits their effectiveness for practical applications. Additionally, the computational demands of these models make real-time deployment challenging without considerable infrastructure.

Introducing Fixie AI's Ultravox v0.4.1

Fixie AI introduces Ultravox v0.4.1, a family of multi-modal, open-source models trained specifically for enabling real-time conversations with AI. Designed to overcome some of the most pressing challenges in real-time AI interaction, Ultravox v0.4.1 can handle multiple input formats, such as text, images, and other sensory data. This latest release aims to provide an alternative to closed-source models like GPT-4, focusing not only on language proficiency but also on enabling fluid, context-aware dialogues across different types of media. By being open-source, Fixie AI also aims to democratize access to state-of-the-art conversation technologies, allowing developers and researchers worldwide to adapt and fine-tune Ultravox for diverse applications, from customer support to entertainment.

Technical Details and Key Benefits

The Ultravox v0.4.1 models are built on a transformer-based architecture optimized to process multiple types of data in parallel. Leveraging a technique called cross-modal attention, these models can integrate and interpret information from various sources simultaneously. This means users can present an image to the AI, type in a question about it, and receive an informed response in real time. The open-source models are hosted on Fixie AI's Hugging Face page, making it convenient for developers to access and experiment with them. Fixie AI has also provided a well-documented API to facilitate seamless integration into real-world applications. The models boast impressive latency reduction, allowing interactions to take place almost instantly, which makes them suitable for real-time scenarios like live customer interactions and educational assistance.

Ultravox v0.4.1 represents a notable advancement in conversational AI systems. Unlike proprietary models, which often operate as opaque black boxes, Ultravox offers an open-weight alternative with performance comparable to GPT-4 while also being highly adaptable. Analysis based on Figure 1 from recent evaluations shows that Ultravox v0.4.1 achieves significantly lower response latency, approximately 30% faster than leading commercial models, while maintaining equivalent accuracy and contextual understanding. The model's cross-modal capabilities make it effective for complex use cases, such as integrating images with text for comprehensive analysis in healthcare or delivering enriched interactive educational content. The open nature of Ultravox facilitates continuous community-driven development, enhancing flexibility and fostering transparency.
By mitigating the computational overhead associated with deploying such models, Ultravox makes advanced conversational AI more accessible to smaller entities and independent developers, bridging the gap previously imposed by resource constraints.

Conclusion

Ultravox v0.4.1 by Fixie AI marks a significant milestone for the AI community by addressing critical issues in real-time conversational AI. With its multi-modal capabilities, open-source model weights, and a focus on reducing response latency, Ultravox paves the way for more engaging and accessible AI experiences. As more developers and researchers start experimenting with Ultravox, it has the potential to foster innovative applications across industries that demand real-time, context-rich, and multi-modal conversations.
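The article itself includes no code, but since the weights are on Hugging Face, here is a hedged sketch of what loading and querying an Ultravox checkpoint with the transformers pipeline might look like. The repository name, the trust_remote_code requirement, and the audio/turns input format are assumptions based on common patterns for speech-enabled Hugging Face models; check the actual model card before relying on them.

```python
# Hypothetical usage sketch; the repo name and input format are assumptions.
import numpy as np
import transformers

pipe = transformers.pipeline(
    model="fixie-ai/ultravox-v0_4_1-llama-3_1-8b",  # assumed repo name
    trust_remote_code=True,  # assumed: custom model code ships with the weights
)

# One second of silent 16 kHz audio as a stand-in for a real speech turn.
audio = np.zeros(16000, dtype=np.float32)
turns = [{"role": "system", "content": "You are a helpful voice assistant."}]

result = pipe(
    {"audio": audio, "turns": turns, "sampling_rate": 16000},
    max_new_tokens=50,
)
print(result)
```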
  • Whisper Variants Comparison: What Are Their Features And How To Implement Them?
    towardsai.net
Author(s): Yuki Shizuya. Originally published on Towards AI.

Recently, I have been researching automatic speech recognition (ASR) to produce transcriptions from speech data. Among open-source ASR models, Whisper [1], developed by OpenAI, might be the best choice thanks to its highly accurate transcription. However, there are many variants of Whisper, so I want to compare their features. In this blog, I will quickly recap Whisper, introduce the variants, and show how to implement them in Python. I will cover vanilla Whisper, Faster-Whisper, WhisperX, Distil-Whisper, and Whisper-Medusa.

1. What is Whisper?

Whisper [1] is an automatic speech recognition (ASR) model developed by OpenAI. It is trained on 680,000 hours of multilingual and multi-task supervised data, covering transcription, translation, voice activity detection, alignment, and language identification. Before Whisper, no model had been trained on such a massive amount of data in a supervised way. Architecturally, Whisper adopts an encoder-decoder Transformer for scalability. The architecture illustration is shown below.

Whisper architecture illustration, adapted from [1]

Firstly, Whisper converts audio data into a log-mel spectrogram. A log-mel spectrogram is a visual representation of the spectrum of signal frequencies on the mel scale, commonly used in speech processing and machine learning tasks. For further information, you can check this blog [2]. After Whisper passes the log-mel spectrogram through some 1-D convolution layers and positional encoding, it processes the data much like a natural language processing Transformer. Whisper works in multilingual settings by leveraging the byte-level BPE tokenizer from GPT-2. Thanks to multi-task learning, Whisper can also perform transcription, timestamp detection, and translation.

Official Whisper comes in six model sizes, offering speed and accuracy tradeoffs; the four smaller sizes also come in English-only versions.

Whisper size variation table

Just recently (2024/10), OpenAI released a new version, turbo, which has almost the same capability as the large model but offers a significant speed-up (8 times!) achieved by fine-tuning a pruned large model. All Whisper models are compatible with the HuggingFace transformers library.

That is our quick recap of Whisper: it is based on the encoder-decoder Transformer architecture and performs outstandingly, even compared with commercial models. In the next section, we will discuss the Whisper variants.

2. Whisper variants: Faster-Whisper, WhisperX, Distil-Whisper, and Whisper-Medusa

In this section, we will go through the Whisper variants and their features. I focus on the Python and PyTorch implementations. Although Whisper.cpp and Whisper JAX are popular variants, I will not examine them. Whisper-streaming is also a popular variant for real-time inference, but it needs a high-end GPU, so I will not discuss it either. We will check Faster-Whisper, WhisperX, Distil-Whisper, and Whisper-Medusa.

Faster-Whisper

Faster-Whisper is a reimplementation of Whisper using CTranslate2, a C++ and Python library for efficient inference with Transformer models. Thus, there is no change in architecture. According to the official repository, Faster-Whisper runs roughly four times faster than the original implementation with the same accuracy, while using less memory.
Briefly, CTranslate2 applies many optimization techniques, such as weight quantization, layer fusion, batch reordering, etc. We can choose a compute type, such as float16 or int8, according to our hardware; for instance, when we select int8, we can run Whisper even on a CPU.

WhisperX (2023/03)

WhisperX [3] is an efficient speech transcription system that integrates Faster-Whisper. Although vanilla Whisper is trained on multiple tasks, including timestamp prediction, its word-level timestamps are prone to be inaccurate. Moreover, due to its sequential inference nature, vanilla Whisper generally needs considerable computation time for long-form audio inputs. To overcome these weak points, WhisperX introduces three additional stages: (1) voice activity detection (VAD), (2) cutting and merging the VAD results, and (3) forced alignment with an external phoneme model to provide accurate word-level timestamps. The architecture illustration is shown below.

WhisperX architecture illustration, adapted from [3]

Firstly, WhisperX processes the input audio through the VAD layer. As its name suggests, VAD detects voice segments. WhisperX utilizes the segmentation model from the pyannote-audio library for VAD. Next, WhisperX cuts and merges the detected voice segments. This process allows us to run batch inference on each cut result. Finally, WhisperX applies forced alignment to measure accurate word-level timestamps. Let's check a concrete example, shown below.

WhisperX algorithm, diagram created by the author

It leverages Whisper for the transcription and the phoneme model for phoneme-level transcription. The phoneme model can detect a timestamp for each phoneme; thus, if we assign each word the timestamp of the nearest phoneme in the Whisper transcript, we can get a more accurate timestamp for each word.

Even though WhisperX adds three processes compared to vanilla Whisper, it can transcribe longer audio effectively thanks to batch inference. The following table shows the performance comparison. You can see that WhisperX keeps WER low while increasing inference speed.

Performance comparison of WhisperX, adapted from [3]

Distil-Whisper (2023/11)

Distil-Whisper [4] was developed by HuggingFace in 2023. It compresses the Whisper large model using knowledge distillation, leveraging common distillation techniques such as pseudo-labeling from the Whisper large model and a Kullback-Leibler divergence loss. The architecture illustration is shown below.

Distil-Whisper illustration, adapted from [4]

The architecture mirrors vanilla Whisper, but the number of layers is decreased. For the dataset, the authors collected 21,170 hours of publicly available data from the Internet to train Distil-Whisper. Distil-Whisper runs 5.8 times faster than the Whisper large model, with 51% fewer parameters, while staying within a 1% word error rate (WER) of it on out-of-distribution data. The following table shows the performance comparison.

Performance comparison of Distil-Whisper, adapted from [4]

As you can see, Distil-Whisper keeps the word error rate as low as vanilla Whisper while decreasing the latency.

Whisper-Medusa (2024/09)

Whisper-Medusa [5] is a variant that utilizes Medusa to increase Whisper's inference speed. Medusa is an efficient LLM inference method that adds extra decoding heads to predict multiple subsequent tokens in parallel. The following illustration makes the idea clear.

Medusa and Whisper-Medusa architecture comparison, by the author.
Illustrations adapted from the original papers [5][6].

In the left part, Medusa has three additional heads to predict subsequent tokens. If the original model outputs the token y1, the three additional heads predict the y2, y3, and y4 tokens. Medusa can increase the number of tokens predicted per step by adding heads, reducing overall inference time. Note that the required amount of VRAM increases because of the additional heads.

Whisper-Medusa applies the Medusa idea to Whisper, as shown in the right part. Since Whisper is at a disadvantage in inference speed because of its sequential inference nature, Medusa's parallel heads help speed it up. The comparison results between Whisper-Medusa and vanilla Whisper are shown below.

Performance comparison of Whisper-Medusa, adapted from [5]

For several language datasets, Whisper-Medusa records a lower word error rate (WER) than vanilla Whisper. It also speeds up inference by 1.5 times on average.

In this section, we checked the Whisper variants and their features. The following section explores how to implement them in Python and checks their capability on real-world audio.

3. Python implementation of Whisper variants: comparing the results using real-world audio data

In this section, we will learn how to implement Whisper and the Whisper variants in Python. For real-world audio data, I will use audio from this YouTube video, downloaded manually. The video is around 14 minutes long. I will attach the code for converting an mp4 file into an mp3 file later (a hedged sketch also appears below).

Environment setup

Due to library incompatibilities, we create two environments: one for Whisper, Faster-Whisper, WhisperX, and Distil-Whisper, and the other for Whisper-Medusa.

For the former environment, I used a conda environment with Python 3.10. I experimented on Ubuntu 20.04 with CUDA 12.0 and 16 GB of VRAM.

```
conda create -n audioenv python=3.10 -y
conda activate audioenv
```

Next, we need to install the libraries below via pip and conda. After this installation, you need to downgrade numpy to 1.26.3.

```
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
pip install python-dotenv moviepy openai-whisper accelerate datasets[audio]
pip install numpy==1.26.3
```

Next, we need to install the WhisperX repository. However, WhisperX is no longer maintained frequently, so we use the forked repository called BetterWhisperX.

```
git clone https://github.com/federicotorrielli/BetterWhisperX.git
cd BetterWhisperX
pip install -e .
```

The first environment's preparation is done.

For the Whisper-Medusa environment, I used a conda environment with Python 3.11. I again experimented on Ubuntu 20.04 with CUDA 12.0, this time with 24 GB of VRAM.

```
conda create -n medusa python=3.11 -y
conda activate medusa
```

You need to install the following libraries via pip.

```
pip install torch==2.2.2 torchvision==0.17.2 torchaudio==2.2.2 --index-url https://download.pytorch.org/whl/cu118
pip install wandb
git clone https://github.com/aiola-lab/whisper-medusa.git
cd whisper-medusa
pip install -e .
```

All preparation is done. Now, let's check the Whisper variants' capabilities!

How to implement Whisper variants in Python

1. Whisper turbo

We use the latest version of Whisper, turbo. Thanks to the official repository, we can implement vanilla Whisper with only a few lines of code.

```python
import whisper

model = whisper.load_model("turbo")
result = model.transcribe("audio.mp3")
```

Whisper itself can only work on audio within 30 seconds, but the transcribe method reads the entire file and processes the audio with a sliding 30-second window, so we don't need to worry about how to feed the data.
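As promised above, a quick aside on the mp4-to-mp3 conversion: the article links its actual conversion code externally, so what follows is only a minimal sketch using the moviepy library installed in the setup above, with placeholder file names.

```python
# Hedged sketch: extract the audio track from a downloaded video with moviepy.
# "video.mp4" and "audio.mp3" are placeholders, not the author's actual paths.
from moviepy.editor import VideoFileClip

clip = VideoFileClip("video.mp4")        # the ~14-minute YouTube download
clip.audio.write_audiofile("audio.mp3")  # output format inferred from extension
clip.close()
```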
2. Faster-Whisper

We use the Whisper turbo backbone for Faster-Whisper. Faster-Whisper has its own repository, and we can implement it as follows.

```python
from faster_whisper import WhisperModel

model_size = "deepdml/faster-whisper-large-v3-turbo-ct2"

# Run on GPU with FP16
model = WhisperModel(model_size_or_path=model_size, device="cuda", compute_type="float16")
segments, info = model.transcribe('audio.mp3', beam_size=5)
```

beam_size is used for beam search during decoding. Since Faster-Whisper's capability matches vanilla Whisper's, we can process long-form audio using a sliding window.

3. WhisperX

We use the Whisper turbo backbone for WhisperX. Since WhisperX utilizes Faster-Whisper as a backbone, some parts of the code are shared.

```python
import whisperx

model_size = "deepdml/faster-whisper-large-v3-turbo-ct2"

# Transcribe with original whisper (batched)
model = whisperx.load_model(model_size, 'cuda', compute_type="float16")
model_a, metadata = whisperx.load_align_model(language_code='en', device='cuda')

# inference
audio = whisperx.load_audio('audio.mp3')
whisper_result = model.transcribe(audio, batch_size=16)
result = whisperx.align(whisper_result["segments"], model_a, metadata, audio, 'cuda', return_char_alignments=False)
```

WhisperX integrates Faster-Whisper and adds the layers that handle VAD and forced alignment. We can also process long-form audio of more than 30 seconds thanks to the cut & merge stage.

4. Distil-Whisper

We will use the distilled version of the large-v3 model, because a distilled turbo version has yet to be released. Distil-Whisper is compatible with the HuggingFace transformers library, so we can implement it easily.

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "distil-whisper/distil-large-v3"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)
processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    torch_dtype=torch_dtype,
    device=device,
    return_timestamps=True,
)
result = pipe('audio.mp3')
```

The pipeline class automatically processes long-form audio using a sliding window. Note that this method only outputs relative timestamps.

5. Whisper-Medusa

We use the large model as the Whisper backbone.
Following the official implementation, we can implement it as follows:

```python
import torch
import torchaudio
from whisper_medusa import WhisperMedusaModel
from transformers import WhisperProcessor

SAMPLING_RATE = 16000
language = "en"
regulation_factor = 1.01
regulation_start = 140
device = 'cuda'
audio_path = 'audio.mp3'  # path to the input audio (undefined in the original snippet)

model_name = "aiola/whisper-medusa-linear-libri"
model = WhisperMedusaModel.from_pretrained(model_name)
processor = WhisperProcessor.from_pretrained(model_name)
model = model.to(device)

input_speech, sr = torchaudio.load(audio_path)
if input_speech.shape[0] > 1:  # If stereo, average the channels
    input_speech = input_speech.mean(dim=0, keepdim=True)
if sr != SAMPLING_RATE:
    input_speech = torchaudio.transforms.Resample(sr, SAMPLING_RATE)(input_speech)

exponential_decay_length_penalty = (regulation_start, regulation_factor)
input_features = processor(input_speech.squeeze(), return_tensors="pt", sampling_rate=SAMPLING_RATE).input_features
input_features = input_features.to(device)

model_output = model.generate(
    input_features,
    language=language,
    exponential_decay_length_penalty=exponential_decay_length_penalty,
)
predict_ids = model_output[0]
pred = processor.decode(predict_ids, skip_special_tokens=True)
```

Unfortunately, Whisper-Medusa currently doesn't support long-form audio transcription, so we can only use it for audio of up to 30 seconds. When I checked the quality of a 30-second transcription, it was not as good as the other variants. Therefore, I omit its results from the comparison among the other Whisper variants.

Performance comparison among Whisper variants

As mentioned before, I used an audio file of around 14 minutes as input. The following table compares the results of each model.

The performance result table, compiled by the author

In summary:

- Whisper turbo sometimes tends to repeat the same sentences and hallucinate.
- Faster-Whisper's transcription is almost as good, and its calculation speed is the best.
- WhisperX's transcription is the best, and it records very accurate timestamps.
- Distil-Whisper's transcription is almost as good. However, it only records relative timestamps.

If you can tolerate subtle mistranscriptions and don't care about timestamps, you should use Faster-Whisper. Meanwhile, if you want accurate timestamps and transcriptions, you should use WhisperX.

WhisperX and Faster-Whisper probably get better results than vanilla Whisper because Faster-Whisper uses beam search for better inference results and WhisperX applies forced alignment; hence, they have a chance to fix mistranscriptions in postprocessing.

In this blog, we have learned about the Whisper variants' architectures and their implementation in Python. Many researchers use various optimization techniques to reduce inference time for real-world applications. Based on my investigation, Faster-Whisper and WhisperX keep Whisper's capability while succeeding in reducing inference time. Here is the code that I used in this experiment.

References

[1] Alec Radford, Jong Wook Kim, et al., Robust Speech Recognition via Large-Scale Weak Supervision, arXiv
[2] Leland Roberts, Understanding the Mel Spectrogram, Analytics Vidhya
[3] Max Bain, Jaesung Huh, Tengda Han, Andrew Zisserman, WhisperX: Time-Accurate Speech Transcription of Long-Form Audio, arXiv
[4] Sanchit Gandhi, Patrick von Platen, Alexander M. Rush, Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling, arXiv
[5] Yael Segal-Feldman, Aviv Shamsian, Aviv Navon, Gill Hetz, Joseph Keshet, Whisper in Medusa's Ear: Multi-head Efficient Decoding for Transformer-based ASR, arXiv
[6] Tianle Cai, Yuhong Li, et al., Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads, arXiv
• Let AI Instantly Parse Heavy Documents: The Magic of mPLUG-DocOwl2's Efficient Compression
    towardsai.net
Author(s): Florian June. Originally published on Towards AI.

Today, let's take a look at one of the latest developments in PDF parsing and document intelligence.

In our digital age, the ability to understand documents beyond mere text extraction is crucial. Multi-page documents, such as legal contracts, scientific papers, and technical manuals, present unique challenges.

Traditional document understanding methods rely heavily on Optical Character Recognition (OCR) techniques, which presents a significant challenge: the inefficiency and sluggish performance of current OCR-based solutions when processing high-resolution, multi-page documents. These methods generate thousands of visual tokens for just a single page, leading to high computational costs and prolonged inference times. For example, InternVL 2 requires an average of 3,000 visual tokens to understand a single page, resulting in slow processing speeds.

Figure 1: (a) mPLUG-DocOwl2 achieves state-of-the-art multi-page document understanding performance with faster inference speed and less GPU memory; (b-c) mPLUG-DocOwl2 is able to provide a detailed explanation containing the evidence page as well as the overall structure parsing of the document. Source: mPLUG-DocOwl2.

As shown in Figure 1, a new study called mPLUG-DocOwl2 (open-source code) aims to address this issue by drastically reducing the number of visual tokens while maintaining, or even enhancing, comprehension accuracy.

Read the full blog for free on Medium.
  • AU Deals: 90% off LEGO Titles, DRM-free RPG Discounts, and Upgraded GTA Trilogies for Dirt Cheap!
    www.ign.com
If you highly revere the first three Grand Theft Auto titles, as I did and still do, today's your day to revisit that iffy remaster bundle. Work has been done, and the games are going for mighty cheap prices now. How much better is the spit shine? Well, I'll let this article/video tell the tale on its own.

Today in Wow You're Aging news, I'm celebrating the 21st birthday of cult hit Beyond Good & Evil. Weirdly, most modern gamers probably only know this title through the talk of its ludicrously delayed prequel, a project 16 bloody years in the making. But the fact that Ubisoft can't seem to let that initiative die is a ringing testament to how gamers like me felt about that very special 2003 original. Buy the 20th Anniversary remaster, and you'll get a unique blend of photojournalism meets Wind Waker-esque action-adventuring, stealthing, and semi-open-water exploration. Happy Bday, Beyond Good & Evil!

This Day in Gaming
- Daytona USA: CCE (SAT) 1996. eBay
- Dragon Ball Z: Budokai (PS2) 2002. eBay
- Beyond Good & Evil (PS2) 2003. Redux
- Amped 2 (XB) 2003. eBay
- WoW: Wrath of the Lich King (PC) 2008. eBay
- DuckTales: Remastered (PC) 2013. Get
- This War of Mine (PC) 2014. Get

Nice Savings for Nintendo Switch

Hogwarts Legacy: In almost every way, Hogwarts Legacy is the Harry Potter RPG I've always wanted to play. It's downright magical. 9/10, amazing.
- Ring Fit Adventure (-37%) - A$37
- Oddworld Stranger's Wrath HD (-33%) - A$29.95
- Batman: Arkham Trilogy (-50%) - $44.97
- Scribblenauts Showdown (-91%) - $4.94
- LEGO Skywalker Saga Del. (-80%) - $19.99
- LEGO DC Super-Villains Del. (-91%) - $9.89
- LEGO Marvel Super Heroes 2 (-90%) - $8.99
- LEGO Worlds (-87%) - $6.49
- LEGO CITY Undercover (-92%) - $7.19

Expiring Recent Deals
- AC: Rebel Collection (-32%) - A$55
- Just Dance 2020 (-21%) - A$44.58
- Super Mario Bros. Wonder (-20%) - A$64
- Borderlands Legendary Col. (-80%) - A$17.99
- Borderlands: Handsome Col. (-75%) - A$17.48
Or gift a Nintendo eShop Card.

Switch Console Prices: How much to Switch it up?
Switch Zelda: $629 → $509 | Switch Original: $499 → $429 | Switch OLED Black: $539 → $469 | Switch OLED White: $539 → $489 | Switch Lite: $329 → $299 | Switch Lite Hyrule: $339 → $309

Purchase Cheap for PC

GOG RPGs Aplenty: I'm talkin' Fallout 4: GOTY ($21.99 / -60%) | Deus Ex GOTY ($1.49 / -86%) | Baldur's Gate II: Enhanced ($8.69 / -70%) | Disco Elysium Final Cut ($14.29 / -75%) | Icewind Dale: Enhanced ($8.69 / -70%) | Anachronox ($1.39 / -86%) and more. See it at GOG.
- Warhammer: Vermintide 2 (-95%) - A$2.09
- Prey + Rage 2 (-91%) - A$7.44
- Total War: WARHAMMER III (-65%) - A$35.48
- The Escapists (-78%) - A$5.93
- Overcooked 2 (-78%) - A$7.90

Expiring Recent Deals
- Shadow Tactics: BotS (-90%) - A$6.10
- Shin Megami Tensei V (-39%) - A$70.11
- The Invincible (-55%) - A$20.42
- Logitech G733 headset (-45%) - A$164
- Logitech G203 mouse (-37%) - A$43
Or just get a Steam Wallet Card.

PC Hardware Prices: Slay your pile of shame. Official launch in Nov.
Steam Deck 256GB LCD: $649 | Steam Deck 512GB OLED: $899 | Steam Deck 1TB OLED: $1,049. See it at Steam.

Exciting Bargains for Xbox

Turtle Beach VelocityOne: I can vouch for this, as it's brilliant for Microsoft Flight Sim and stuff like Ace Combat 7. Not exactly the most advanced flight peripheral, but certainly a robust entry point for anybody looking for a more authentic stick jockey experience.

Expiring Recent Deals
- Jedi Survivor (-50%) - A$55.29
- Turtle Beach Stealth Ultra cont. (-$57) - A$272
- 1TB Expansion Card (-$58) - A$242
- Mortal Kombat 1 (-53%) - A$36
- Lego Jurassic World (-65%) - A$22.44
Or just invest in an Xbox Card.

Xbox Console Prices: How many bucks for a 'Box?
Series S Black: $549 → $513 | Series S White: $499 → $449 | Series X: $799 → $776 | Series S Starter: N/A

Pure Scores for PlayStation

DualSense Silver: Quite a fetching DualSense is the silver fox. I don't believe I've ever seen it as low as this, so strike now.
- GTA Trilogy [Updated] (-59%) - A$40.95
- Dragon Quest III HD-2D Remake (-10%) - A$89
- Prince of Persia: TLC (-53%) - A$28
- FF VII: Rebirth (-46%) - A$65
- Sand Land (-53%) - A$47
- Lost Judgment (-60%) - A$39.80
- Judgment (-47%) - A$34.43

Expiring Recent Deals
Or purchase a PS Store Card.

PS5 Pro Enhanced Bargain: Need a cheap Pro showcase title?
Ratchet & Clank: Rift Apart - $124.95 → $79
Insomniac's Pro Enhancement approach on the Spidey titles is mirrored here, with two modes: "Fidelity Pro" and "Performance Pro." The former targets 30 fps and lets you individually tune the ray tracing features to achieve higher frame rates with the VRR and 120 Hz display options. My recommendation is Performance Pro, as it targets a silky 60 fps and uses PSSR to retain (almost) all of the ray tracing stuff. You will get RT Ambient Occlusion, but not the "High" settings of RT Key Light Shadows (better shadowing at mid-to-extreme distances) or RT Reflections (smoother animations within reflections). Again, I'm all about fluid action that still blows minds, like a RYNO 8 to the face, so this is my pick.

PlayStation Console Prices: What you'll pay to 'Station.
PS5 Pro: $1,199 | PS5 Slim Disc: $799 → $759 | PS5 Slim Digital: $679 | PS VR2: $879 | PS VR2 + Horizon: $959 | PS Portal: $329

Legit LEGO Deals

Harry Potter Advent Calendar: 'Tis soon to be the season for opening small cardboard hatches filled with delightful micro-builds and minifigs. Get this cheap. Make it magical.

Adam Mathew is our Aussie deals wrangler. He plays practically everything, often on YouTube.
  • New Rey Movie Rumors Are Very Worrying for the Future of Star Wars
    www.denofgeek.com
    It's no secret that the film side of Star Wars has really struggled to get off the ground again following the end of the Sequel Trilogy. Countless Star Wars movie projects have been announced in the last five years, but The Mandalorian & Grogu is the only film that's actually completed production since The Rise of Skywalker. A trilogy of films said to be helmed by The Last Jedi director Rian Johnson was announced and then put on hold indefinitely; new movies to be developed by Game of Thrones creators David Benioff and D.B. Weiss were shelved; Patty Jenkins's exciting Rogue Squadron movie was announced, then shelved, then put back in development; and no one knows what the heck is going on with Taika Waititi's movie. There's apparently a Jedi origin movie coming from Indiana Jones director James Mangold at some point, but nobody seems to know when. In short, things are a mess for Star Wars on the big screen.
    There was at least a modicum of hope that the upcoming Rey movie, which is rumored to follow the hero as she establishes a new Jedi Order after the events of the Sequel Trilogy, would be the next film to hit the big screen and usher Star Wars movies into a new era. But a series of creative shakeups on the project, as well as new reports about a potential continuation of the Skywalker saga developed by producer Simon Kinberg, have muddled Rey's return, too.
    The Rey movie has already lost several writers at the development stage, each shakeup delaying the project just a bit more, with Steven Knight (Peaky Blinders) stepping away from the film in October. Knight was already the second writer on the project, following Damon Lindelof and Justin Britt-Gibson. Director Sharmeen Obaid-Chinoy is still currently attached to direct the movie. But when she and star Daisy Ridley will actually get to make their film is a big question, as is how the movie will fit alongside Kinberg's trilogy plans, which, according to a report from THR, may also use Rey.
    There's been a lot of speculation as to whether or not Kinberg's planned Star Wars trilogy is, in fact, another expansion of the central Skywalker Saga or something else entirely, but the fact that we're even having this conversation doesn't bode well for the franchise's cinematic future or its central star. Let's face it: Lucasfilm doesn't have a great track record when it comes to carving out a clear path for Rey (or any of the Sequel Trilogy characters, for that matter). The Last Jedi and The Rise of Skywalker very overtly clashed with each other in terms of not only Rey's identity but also the overarching themes of her journey. Clashing visions hurt Rey on the big screen the last time around.
    We, of course, don't know for sure that Kinberg is working on his own Rey movies, and if he is, how closely he's coordinating his take with Obaid-Chinoy's movie. (There's always the possibility that Obaid-Chinoy's film has now become the first installment of that trilogy, but we're speculating here.) But the stops and starts regarding Rey's film future are already a little worrying.
    According to industry insiders via THR, Rey is "the most valuable cinematic asset, in some ways maybe the only one" that Star Wars has right now, as the Sequel Trilogy closed the book on all of the franchise's biggest characters, including Luke, Han, and Leia. That at least means we'll likely get to see Rey again at some point, as long as Daisy Ridley is willing to reprise her role. However, this language is indicative of so many problems with Star Wars and the industry at large right now.
    By looking at characters and their stories as little more than numbers and dollar signs and nostalgia grabs, we're losing a lot of the heart that drew many of us to this franchise in the first place. For many women and young girls, watching Rey's journey (at least until The Rise of Skywalker) was and is inspiring. She was a nobody from a desert planet who was able to discover this power within herself and channel it into saving herself and the galaxy from oppression. Before she was a Palpatine or a Skywalker, she was just Rey, and it was easy for us to see ourselves in her and her desire to prove herself.
    Rey deserves more time on the big screen. She deserves to carve her own path forward and make her own legacy rather than only being a vessel to carry the legacy of others. But it's hard to get excited about her potential future when she's being passed around by creatives and executives like the latest toy on Christmas.
    And what's worse, this just seems to be the way that Lucasfilm operates now, according to insiders. There are multiple projects in the works that may or may not overlap characters and timelines. Some writers and directors are privy to what their peers are working on, but others aren't. "It's a different way of development," according to an insider. "There's so much parallel work going on."
    All of this parallel work feels akin to throwing something at the wall and hoping it sticks. That might work for some people, but for a franchise as large as this one, it doesn't seem to be working thus far. What happens if Rey's solo movie takes her in one direction, but Kinberg sees her future differently? It's The Rise of Skywalker all over again. (Seriously, that film is absolutely dreadful.)
    Rebellions may be built on hope, but the current trajectory of Star Wars movies is far from hopeful. At this point, it's hard to even imagine a world where either of these Rey projects, let alone both of them, actually makes it to our screens. Star Wars needs to choose a direction and stick with it, for better or for worse. Otherwise we're probably not going to see anything outside of the Mandoverse for the foreseeable future.