• Casual and cute exploration in Revenge of the Savage Planet | hands-on preview
    venturebeat.com
    GamesBeat had a chance to play Revenge of the Savage Planet in a hands-on preview.
  • Jingle Jam raises £2.7 million after impressive 2024 fundraising event
    www.gamesindustry.biz
    Jingle Jam raises £2.7 million after impressive 2024 fundraising event
    Over 800 creators raised money for the likes of War Child, Campaign Against Living Miserably (CALM), Autistica, and Sarcoma UK
    News by Christopher Dring, Head of Games B2B
    Published on Dec. 17, 2024

    The Jingle Jam livestreaming event has raised an impressive £2.7 million over two weeks, the charity has announced.

    The money was raised by a series of fundraising livestreams hosted by more than 800 creators, including high-profile names such as TommyInnit, The Spiffing Brit and Smosh Games, alongside The Yogscast, which founded the Jingle Jam in 2011.

    Eight charities received money this year: £314,000 went to Autistica, £470,000 to Campaign Against Living Miserably (CALM), £280,000 to Cool Earth, £269,000 to Sarcoma UK, £305,000 to The Trevor Project, £396,000 to Wallace and Gromit's Grand Appeal, £300,000 to War Child and £339,000 to Whale and Dolphin Conservation.

    During the fundraising project, anyone who donated £35 or more received a collection of games, including Two Point Campus, For The King 2, Shadows of Doubt and Wildfrost.

    Events that took place between December 1 and December 14 included TommyInnit playing Minecraft alongside Technodad to raise money for Sarcoma UK; Technodad is the father of creator Technoblade, who died of sarcoma in 2022. The event also saw the Smosh Games cast singing karaoke to raise funds for The Trevor Project, and The Spiffing Brit playing Skyrim to support Wallace & Gromit's Grand Appeal.

    "A huge thank you to all the viewers who donated generously to such wonderful causes, the developers and publishers who gave their games to the Collection for free, and to all the creators who took part in this year's Jingle Jam, putting together some of the most entertaining streams we've seen to date," said Jingle Jam chair Rich Keith.
  • Alan Wake 2 - Night Springs & The Lake House | Games of the Year 2024
    www.gamesindustry.biz
    Alan Wake 2 - Night Springs & The Lake House | Games of the Year 2024
    Unsurprisingly, Sophie McEvoy found herself caught in another endless time loop
    Feature by Sophie McEvoy, Staff Writer
    Published on Dec. 17, 2024

    It's been over a year since I fell down the Remedy rabbit hole, and I'm happy to report there's no end in sight.

    Since the launch of Alan Wake 2 in October 2023, I've played nearly every game in Remedy's catalogue, in some cases more than once (I'm lovingly looking at you, Control and Max Payne 2). All those wonderful games left almost no space for anything else this year.

    That's not to say I haven't experienced any new releases in 2024. Little Kitty, Big City stole my heart, Thank Goodness You're Here had me in stitches, and Star Wars Outlaws gave me a much-needed dose of nostalgia. Baldur's Gate 3 was also a fresh experience, one I've managed to put over 120 hours into across 20 days.

    Alas, Remedy just had to continue releasing content for Alan Wake 2, which came in the form of two expansions: Night Springs and The Lake House.

    Night Springs arrived on June 8, consisting of three episodes centered around characters from Remedy's connected universe. Night Springs itself is a homage to The Twilight Zone, and appears as an actual TV show throughout 2010's Alan Wake. This was expanded in 2012 with Alan Wake's American Nightmare, which is framed as a Night Springs episode written by Alan as a means to escape The Dark Place, an alternate nightmare dimension he's been trapped in for 13 years.

    The Night Springs expansion follows the same premise, with Alan using characters from Remedy games as a means to escape. One episode is based on the overarching plot of 2019's Control, following its protagonist Jesse Faden, the director of a secret government agency called the Federal Bureau of Control.
In Night Springs, she is simply known as The Sibling and is looking for her brother at a theme park that appears in Alan Wake 2. The other is a head-spinning trip through parallel universes, where players take on the role of actual actor Shawn Ashmore. He portrays Sheriff Tim Breaker in Alan Wake 2 as well as protagonist Jack Joyce in 2016's Quantum Break. But in this episode, he plays an unnamed hero in a game called Time Breaker, developed by Poison Pill Entertainment (likely because Microsoft still owns the Quantum Break IP).

But what captivated me immediately was the first episode of Night Springs, Number One Fan. This instalment centers on Rose Marigold, a waitress from the game's fictional diner. In Alan Wake, she was an endearing but stereotypical fangirl. Thankfully, her character was fleshed out in Alan Wake 2, and even more so in Number One Fan.

Rose is tasked with saving her beloved writer Alan from the clutches of his jealous twin brother, Scratch (in the main games, Scratch is a manifestation of a supernatural entity called the Dark Presence, which takes on Alan's appearance). Number One Fan hinges on Rose's dedication to Alan, but not in a derogatory way. The episode doesn't make fun of how dedicated she is to the writer she loves; it embraces it.

Since falling for Alan Wake, I've become known as "the Alan Wake writer"; it doesn't take long to see why if you follow me on social media. But as a neurodivergent person who hyperfixates on things, I've often been made fun of for loving a game or TV show "too much." The Remedy community, though, has accepted me with open arms, and I'm often referred to as Rose by my close friends... although that was very much my own doing after I followed in her footsteps and managed to procure a life-size cutout of Alan.

Rose being the center of attention and the overall hero of Number One Fan made me feel seen.
It's a token of appreciation for Alan Wake fans, which game director Kyle Rowley emphasised when I spoke to Remedy's dev team about the expansion earlier this year.

Don't worry Alan, your cutout is safe with me

After pouring over 100 hours into replaying Alan Wake 2, I really wanted to see Remedy's kookiness take centre stage, and it does so in spades here. The absurdity is off the charts, from Alan communicating to Rose as a mounted bass ornament and a deer to a motorcycle turning into a werewolf (yes, really). On top of that, Jessica Preddy's fantastic portrayal of Rose and the unbeatable duo that is Matthew Porretta and Ilkka Villi (as both Alan and Scratch) cemented the first expansion as my game of the year right off the bat.

And then came The Lake House. Released on October 22, the game's second DLC cranked the survival horror up to 11, but in a truly Remedy way. Set up as a Control crossover event, it has you play as Kiran Estevez, an agent from the Federal Bureau of Control tasked with investigating the situation unfolding in the game's fictional town of Bright Falls. Estevez appears midway through Alan Wake 2, during which she refers to an incident that occurs during this expansion. This incident takes place at The Lake House: a research facility set up to monitor paranatural occurrences at Cauldron Lake near Bright Falls. Cauldron Lake just so happens to be a portal to The Dark Place.

Estevez arrives at an abandoned Lake House, only to discover that horrific supernatural entities have been unleashed through an unlikely source: abstract paintings. It's down to her to find out why, and more importantly, how to stop them.

Playing as Estevez reminded me of how I connected to Jesse in Control.
Not only was I experiencing the world as a strong female protagonist, but both characters are brilliantly sarcastic and unfazed by the weirdness transpiring around them.

Although, that's only what it seems like with Estevez. Once you start diving deeper into the horrors unfolding within The Lake House, her seemingly calm demeanor starts to falter. She uses a grounding technique of taking six deep breaths when things get overwhelming, which was refreshing to see as someone who suffers from anxiety. It gave me a new coping mechanism that I've gone on to use a few times since.

There's also a new song for the DLC centered around this theme, written by singer-songwriter Poe. She contributed the song 'This Road' for the main game, which appears in segments after you finish chapters in Alan's section. That song became an important mantra for me, as has this one. '6 Deep Breaths' reminds me to stop, take a step back, breathe, and face my fears.

With that in mind, I often found myself reacting to situations in the same way Estevez does, much like I did with Jesse in Control. There were countless times when I would say the exact same thing Estevez says in reaction to what was being uncovered.

For example, there's an entire floor of the facility housing rows upon rows of typewriters, eerily clacking and pinging away on their own. A system has been created to automate Alan's writing style, to mimic his ability to make fiction become reality. As soon as it dawned on me what the researchers were trying to do, I audibly groaned and said to myself, "this is such a stupid idea," as did Estevez. It's also a really interesting commentary on the future of AI, something I was not expecting to appear in this survival horror experience. Alan's writing is unique to him in its tone, use of metaphors, and the way he sets a scene.
The prose these typewriters pump out can't match that at all, which is further proven by the experiment being unable to replicate Alan's power without him there to fuel it.

As a massive fan of Remedy's games, I found the way The Lake House combines Control and Alan Wake joyous. From the questionable experiments to the weird and wonderful moments mixed with pure horror, this expansion was the perfect endnote to Alan Wake 2. Of course, it left me wanting more, not only from Alan Wake but also from Control 2, as a sneaky teaser was hidden towards the end. Hopefully it won't be that long a wait to see what Remedy has in store for its ever-expanding connected universe, and I can't wait to experience it.
  • How newspaper games like Wordle became behemoths
    www.gamedeveloper.com
    One of video games' biggest recent success stories involves Wordle, a once-per-day word-guessing game developed by software engineer Josh Wardle for him and his partner to play. It has a simple interface, is easy to understand, features no ads, and is free to play. Guess a five-letter word in six tries, come back the next day for another. So when it was released in October 2021 during the COVID-19 pandemic, it caught on very quickly.

    Just a few months later, The New York Times bought it for a price "in the low seven figures." A few years later, it's still a gigantic hit, having been played more than 4.8 billion times in 2023 alone. It's so big that when the organization's tech union went on strike in November 2024, workers made versions of its games, including Wordle, that users could play instead of crossing the picket line.

    Wordle's massive popularity is just one inflection point in the history of newspaper games: typically word or number puzzles, crosswords, Sudoku, and other games you play once per day. These kinds of games have been popular for over a century, and almost every mainstream subscription publication you can think of has its own.

    But there was something about Wordle that made publications and platforms take notice. LinkedIn launched three "thinking-oriented games" in May 2024, while Vulture unveiled Cinematrix, a grid-based movie trivia guessing game, in February. Subscription-based games platform Puzzmo launched in late 2023, offering standard fare like crosswords along with experimental endeavors like Pile-Up Poker, an oddly satisfying and challenging combination of poker and Sudoku. It was acquired by Hearst Newspapers a couple of months later, and can be played across multiple websites like the San Francisco Chronicle.

    Wardle says Wordle's success is tied to its simplicity. "I think people kind of appreciate that there's this thing online that's just fun," he told the New York Times.
"It's something that encourages you to spend three minutes a day. Like, it doesn't want any more of your time than that."

Experts interviewed for this article agree on this, to a point. Wordle is a simple yet effective game that appeals to almost everyone. However, it also benefited from great timing, releasing during a pandemic when people were aching for community in a world where building it felt impossible, and it showed what still needed to be done to push daily games to the next level. It was time for a change.

Is The New York Times a gaming company?

The leader in the newspaper games space is undoubtedly the New York Times. Its crossword is one of the most well known, and puzzle editor Will Shortz might be as close to a household name as puzzle editors get. It was already a huge draw before the publication bought Wordle, and has become even more important in the age of digital subscriptions.

As traditional publications struggle with subscriber counts and making a profit, the New York Times has increased its numbers almost every year since 2014. This is thanks in part to a bundle that costs $25 per month and packs in subscriptions for both its games app and the paper itself. Its investment in Wordle is just one part of its growth strategy. As of November 2023, it had around 100 team members on games (up from around a dozen over the past decade) and has since hired more in community and design. "The half joke that is repeated internally is that The New York Times is now a gaming company that also happens to offer news," one anonymous staffer told Vanity Fair.

You'd be forgiven for wanting to call the New York Times a gaming company. According to Semafor, the NYT Games app was downloaded more than 10 million times in 2023. But despite this success and growth, Times executive editor Joseph Kahn maintains it's not looking to create a games studio, telling Vanity Fair that the company is not "Activision, and I don't think we're looking to become that."
"These are brainteaser games for smart people who want a challenge in the course of the day. So I see them as very complementary, but not replacement, products for a news organization."

The New York Times is very much a media organization first, but since buying Wordle, it's launched Connections, which requires players to find four groups of words among 16 new ones each day. Semafor reports that it's been played around 2.3 billion times. It also released Strands, a game inspired by word searches, and is currently testing Zorse, a phrase-guessing game.

"It's undeniable that Wordle was a big tipping point for us," chief product officer Alex Hardiman told Vanity Fair. "But it's not Wordle only. It's Wordle driving more attention to other games, allowing us to invest more in games."

The pandemic's effect on daily games

Wordle's massive popularity isn't the only reason for the rise in daily games. It was also spurred on in part by the pandemic, when people stuck inside with little to do were looking for a bit of routine to fill their day. Video game popularity and sales surged during the pandemic, with even the World Health Organization encouraging people to play games during lockdown. Wordle capitalized on that by being one of those tiny daily tasks. You solve one puzzle per day, and you're locked out until the next.

Stella Zawistowski, a puzzle constructor for Vulture, the New Yorker, and other publications, says that the pandemic "accelerated" the daily games space because people were looking for something that "makes you feel a little smarter." Wordle also had a secret weapon: a feature that automatically created a colored grid of your results that you could copy onto Twitter/X or a group message.

"I don't think [Wordle] would have been successful if you could just play as many times as you want every day," Zawistowski said.
"I don't think it would have been nearly as successful if you couldn't post your score on Insta, on Twitter, on Facebook, because then it gets people talking about it."

That community building is one of video games' greatest strengths, and it's all the more relevant with daily games. The New York Times has been slowly building up community, adding stats to its games and allowing people to join forums (although they're mostly just links to comment sections), but its success is often in spite of its lack of features. So many people play New York Times games, and there are a lot of chances to go viral. For example, Connections has become a viral hit in certain circles thanks to the chaotic nature of some of the solutions (a long-running joke about how users see editor Wyna Liu as their nemesis has been the subject of many TikToks and memes). That's why Puzzmo co-creator Zach Gage believes it doesn't have a lot to offer for many online players.

"Their platform is terrible," Gage said. "It's terrible in the sense that it is non-existent. Their platform is a website with a bunch of links to games, and then you go and play the games, and the games don't really interact with that website where you started."

Gage noted that during the pandemic, his wife was playing Words with Friends, a mobile, multiplayer version of Scrabble, with family through multiple group chats. I see something similar among people who play Wordle. Even in 2024, I know people in group chats that exist specifically for sharing Wordle results.

"If this was any other game that was big, there would be a social space that was connected to this game that everybody would just be able to enjoy. Why isn't there a social space for players like that?"
Gage said.

Gage, known for daily games like Really Bad Chess and SpellTower, launched Puzzmo in 2023 with engineer Orta Therox not only as a place to house his games, but to fill a gap the New York Times left behind in terms of community building. There are leaderboards, social features like friend requests, easy access to a Discord server where players and constructors gather, and daily announcements discussing how well people did on puzzles.

Even LinkedIn cited games' potential to connect people as its reasoning for adding daily games to its platform. You play one of its games (three at launch; four were available at the time of this writing), such as Pinpoint, Queens, or Crossclimb, then see which of your connections have played. You can also then head over to leaderboards or immediately hop into the official post to talk with other people. It's barebones, but it works as a little push to socialize over its games. "You share your knowledge and get knowledge back, you share your experiences and hear about others' own roads. And with games, you finish a puzzle and then talk about it with colleagues, friends, and distant connections," LinkedIn's editor in chief and VP said in the games announcement.

The evolution of an old format

It makes sense that the New York Times would be the market leader in the daily games space just due to age and brand recognition: its crossword debuted in 1942, so it's had time to build up a name for itself. But it's been making changes to keep up to date for 2024. It's fallen behind in regards to community building, but it's been working on its reputation for being stodgy, traditional, highbrow, and almost completely inaccessible. The only way to get good at a New York Times crossword isn't to know random trivia; it's to do them over and over again until you start learning the answers to favorite clues and noticing patterns.

There are better ways to do this, if you want them. A priority for Puzzmo was to provide multiple difficulty experiences in one app.
If you want to solve a crossword without any hints, you can. If you want to go for some extra sidequests in Pile-Up Poker and engage further with a game's mechanics, you can do that as well.

"One thing I've noticed is The New York Times tends to target either super high-end players or super low-end players," Gage said. "Their crossword is not very approachable for people who've never played a crossword, and their game Tiles is not very interesting for people who are really deep into games and want a deep experience. In Puzzmo, our focus is on building games that work for anybody."

Puzzle constructor Brooke Husic leads the Puzzmo crossword, which is a great example of this balancing act. Puzzmo crosswords vary in terms of difficulty, but that's not defined by the obtuseness of the clues; the crossword allows you to use multiple hints before revealing the answer.

"We have some of the best speed solvers in the world solving [the crossword] every day. And I want them to be there. I really want them to be there," Husic said. "But at the same time, I want someone who's never solved a crossword before to show up on any day, have it be their first crossword, and have a good experience."

The New York Times, for what it's worth, has been making changes to increase the accessibility of its puzzles, specifically its crossword. Everdeen Mason became the newspaper's first editorial director of games in 2021, and she told Vanity Fair that while the Times had to maintain the difficulty it's known for, it wants to be more accessible for newer players. "If we're asking people to pay for a product that's primarily this thing that they can't access, then that's not very smart," she said.

Beyond the sheer amount of thematic variety out there that lets players choose which crosswords or games they might prefer, a more diverse array of constructors and editors has also been behind some of the most well-known puzzle sections.
The New York Times is no longer run purely by white men; Mason is a Black woman who dyes her hair, wears anime shirts in interviews, and immediately wanted to challenge the team and "get people out of their comfort zones" with a Black History Month theme and more freelance constructors from different backgrounds.

Zawistowski has written before about how crosswords are mostly constructed by men, and began her career struggling against that establishment. "The partner I was working with was an older, retired, white guy, and so he would put Boomer references in his puzzles, and then I would have to clue them. It's just... that's not who I am, whereas now I can put in the things that I love and then the puzzle feels more like me," Zawistowski said. Now she's been able to stop freelancing in advertising and make puzzles full-time, on her own terms.

Making puzzles feel more personal has been successful for Puzzmo. Husic ensures crossword players have the chance to learn about the constructor and the process behind making that particular puzzle in notes that pop up after you've completed it. "What was important to me was to make it very clear that humans made these, individuals made these, and they cared so much about every choice," Husic said. It's a chance to engage with the player beyond just presenting a daily puzzle. "I have always wanted to exalt individual voices. Yeah, there's so many people who don't know that humans write crosswords. People think that computers are generating them, or they think Will Shortz writes every New York Times crossword."

How long will this boom last?

Like with many fads and industries, there will be players who get tired and move on to something else. The New York Times is invested in its games because of how much they do for the company, and it's hired dozens of staffers to get that done. But daily games must continue innovating to stay relevant.
Puzzmo releases new games all the time (most in an experimental, early-access phase) and ropes in its audience for feedback. It also tries out twists on old formats. Crosswords typically fit inside a standard square grid thanks to old-school paper constraints, but Husic and the constructors literally think outside the box by playing around with the size and shape of the puzzles. "Puzzmo's approach to grid design exemplifies its goals in bringing a human touch to games," wrote one intern back in August.

"There are, like, over 100 million people who are ready for the next Wordle game and will jump in and play it when it happens... that's the sort of thing that all these businesses that are looking into the space are looking at right now. They want to be the people who have that game when it happens, because it's going to happen," Gage said.

That's all expected. Industries ebb and flow, and the same will happen to daily games. But in the meantime, the daily games surge is providing new outlets for constructors and new ways to play. It's unclear if we'll ever get another Wordle, but the daily games space and its millions of players will be there when it's ready.
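Zawistowski's point about Wordle's shareable colored grid is easy to see in code. Below is a minimal Python sketch of how Wordle-style feedback for a single guess could be computed (the familiar green/yellow/gray logic, including duplicate-letter handling); it is an illustration of the mechanic, not the New York Times' actual implementation.

```python
def wordle_feedback(guess: str, answer: str) -> str:
    """Return a share-style emoji row for a 5-letter guess:
    green = right letter in the right spot, yellow = right letter in
    the wrong spot, gray = letter not (further) present in the answer."""
    guess, answer = guess.lower(), answer.lower()
    result = ["⬜"] * 5
    remaining = []  # answer letters not matched exactly, used for yellows
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            result[i] = "🟩"
        else:
            remaining.append(a)
    for i, g in enumerate(guess):
        if result[i] == "⬜" and g in remaining:
            result[i] = "🟨"
            remaining.remove(g)  # each answer letter yields at most one yellow
    return "".join(result)

print(wordle_feedback("crane", "cocoa"))  # → 🟩⬜🟨⬜⬜
```

The two-pass structure matters: exact matches are claimed first so that a duplicated letter in the guess doesn't earn a yellow the answer can't actually supply.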
  • 'This is comedy': Balatro developer Localthunk baffled after PEGI hands title 18+ rating
    www.gamedeveloper.com
    Chris Kerr, News Editor
    December 17, 2024
    3 Min Read

    At a Glance: Localthunk said they're more disgruntled at what they perceive as inconsistency on PEGI's part than at the decision itself.

    European rating agency PEGI has handed Balatro an 18+ rating because it could teach players how to dabble in real-world poker. The decision looks to have surprised developer Localthunk, who questioned why Balatro has been deemed "adults only" when other titles that feature in-game spending and randomized item packs are considered suitable for children.

    "Since PEGI gave us an 18+ rating for having evil playing cards maybe I should add microtransactions/loot boxes/real gambling to lower that rating to 3+ like EA sports FC," reads an X post from the developer. "This is comedy."

    In a follow-up post, Localthunk said they're more disgruntled at what they perceive as inconsistency on PEGI's part than at the decision to give Balatro an 18+ rating. "Just to clear it up: I'm way more irked at the 3+ for these games with actual gambling mechanics for children than I am about Balatro having an 18+ rating," they added. "If these other games were rated properly I'd happily accept the weirdo 18+. The red logo looks kinda dope."

    The PEGI ratings explainer for Balatro states the 2D deck-builder is being restricted because it "features prominent gambling imagery" and essentially teaches players how to navigate a poker game. "As the game goes on, the player becomes increasingly familiar with which hands would earn more points. Because these are hands that exist in the real world, this knowledge and skill could be transferred to a real-life game of poker," adds PEGI.

    Localthunk doesn't want Balatro to become a 'true gambling game'

    By contrast, PEGI has handed EA Sports FC 25 a 3+ rating that indicates the title is "suitable for all ages."
That's despite the ratings agency acknowledging the soccer sim "offers players the opportunity to purchase in-game items, in the form of an in-game currency, which can be used to purchase random card packs and other game items." "Some parents or carers may want to be aware of this," it adds.

It's worth noting that Balatro doesn't let players place bets in-game. Instead, they must accrue points by collecting offbeat joker cards that imbue regular playing cards with new abilities, dish out score multipliers, and generally turn the concept of poker on its head. It's possible to obtain new jokers and other special cards by opening randomized booster packs, but those packs can only be purchased using in-game currency obtained through play. There are no microtransactions in Balatro.

In August, Localthunk said they "hate the thought" of Balatro becoming a "true gambling game" and have created a will that stipulates the IP may never be sold or licensed to any gambling companies or casinos.

The ESRB, which handles video game ratings in Canada, the US, and Mexico, gave Balatro an 'Everyone 10+' rating, noting it contains "gambling themes" but "no interactive elements." "The game has a poker theme, which includes the names of hands, scoring system, and types of playing cards, but does not include making wagers," it added.

Game Developer has reached out to PEGI for more information.
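The no-wagers scoring loop described above (hand points plus card values, boosted by joker multipliers) can be sketched in a few lines. This is a deliberately simplified illustration of the idea, not Balatro's real scoring: the hand names, chip values, and joker effects below are invented for the example.

```python
# Hypothetical base (chips, mult) per poker hand -- NOT Balatro's real values.
HAND_BASE = {"Pair": (10, 2), "Flush": (35, 4), "Full House": (40, 4)}

def score_hand(hand_name: str, card_chips: list[int], joker_mults: list[int]) -> int:
    """Score = (base chips + chips from played cards) * (base mult + joker bonuses)."""
    base_chips, base_mult = HAND_BASE[hand_name]
    chips = base_chips + sum(card_chips)
    mult = base_mult + sum(joker_mults)
    return chips * mult

# A pair of tens with one joker granting +4 mult:
print(score_hand("Pair", [10, 10], [4]))  # (10 + 20) * (2 + 4) = 180
```

The point of the sketch is the structure PEGI is reacting to: scoring keys off real poker hand names, but nothing in the loop involves a wager.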
  • Fab December 2024 Asset Giveaway #2
    gamefromscratch.com
    It is the third Tuesday of the month, and that means it's time for another Unreal Engine Fab marketplace giveaway. You can get three free game development assets. This week's assets are entirely for Unreal Engine; however, the guides below show how to export from Unreal Engine to other engines such as Godot or Unity.

    This month's free assets include:

      • Newtonian Falling and Momentum Damage System
      • Modular Japanese Architecture Pack
      • OLD OFFICE (MODULAR)

    You can see all of these assets in action in the video below. Also, a quick reminder to grab all of the Quixel Megascans assets if you haven't already, as this offer expires at the end of 2024! If you are interested in getting these assets into other game engines, check out our various guides available here:
  • Meshingun 3D Environment Unreal Engine & Unity Bundle Returns
    gamefromscratch.com
    As part of the ongoing Humble Bundle holiday rerun, the Deluxe Dev Dreams 3D Unreal Engine & Unity Humble Bundle is back for 2 days only. The bundle is composed mostly of 3D environments for Unreal Engine, with a couple of Unity versions available as well. Details on exporting from Unreal Engine to various other game engines and tools are available below. Unlike the first run, this bundle has no lower tiers, with only a single $30 USD tier available:

      • Gothic Cemetery Pack
      • Foliage Pack
      • Gothic Texture Pack
      • Asian Temple Pack
      • Medieval Props Pack
      • Medieval Dinnerware Pack
      • Gothic Dungeon Props Vol1
      • Stylized Foliage Pack V.01
      • Cyber-Town Pack
      • Gothic Furniture Props Vol1
      • Flag Generator
      • Book Generator
      • Feudal Japan Interior Props Vol1
      • Egyptian Props Vol1
      • Feudal Japan Megapack
      • Gothic Interior Megapack (UE)
      • Gothic Megapack (UE)
      • Medieval Village Megapack (UE)
      • Stylized Village Fatpack
      • Brooke Industrial Town
      • The Bazaar
      • Gothic Megapack (Unity)
      • Gothic Interior Megapack (Unity)
      • Medieval Village Megapack (Unity)

    If you wish to convert the assets from one game engine to another, consider the guides linked below. You can learn more about the Deluxe Dev Dreams 3D Unreal Engine & Unity Humble Bundle in the video below. Using links on this page helps support GFS (and thanks so much if you do!). If you have any trouble opening a link, simply paste it into a new tab and it should work just fine.
  • This AI Paper from Microsoft and Novartis Introduces Chimera: A Machine Learning Framework for Accurate and Scalable Retrosynthesis Prediction
    www.marktechpost.com
    Chemical synthesis is essential in developing new molecules for medical applications, materials science, and fine chemicals. This process, which involves planning chemical reactions to create desired target molecules, has traditionally relied on human expertise. Recent advancements have turned to computational methods to enhance the efficiency of retrosynthesis: working backward from a target molecule to determine the series of reactions needed to synthesize it. By leveraging modern computational techniques, researchers aim to solve long-standing bottlenecks in synthetic chemistry, making these processes faster and more accurate.

    One of the critical challenges in retrosynthesis is accurately predicting chemical reactions that are rare or less frequently encountered. These reactions, although uncommon, are vital for designing novel chemical pathways. Traditional machine-learning models often fail to predict these reactions due to insufficient representation in the training data. Also, errors in multi-step retrosynthesis planning can cascade, leading to invalid synthetic routes. This limitation hinders the ability to explore innovative and diverse pathways for chemical synthesis, particularly in cases requiring uncommon reactions.

    Existing computational methods for retrosynthesis have primarily focused on single-step models or rule-based expert systems. These methods rely on pre-defined rules or extensive training datasets, which limits their adaptability to new and unique reaction types. For instance, some approaches use graph-based or sequence-based models to predict the most likely transformations.
    While these methods have improved accuracy for common reactions, they often lack the flexibility to account for the complexities and nuances of rare chemical transformations, leaving a gap in comprehensive retrosynthetic planning.

    Researchers from Microsoft Research, Novartis Biomedical Research, and Jagiellonian University developed Chimera, an ensemble framework for retrosynthesis prediction. Chimera integrates outputs from multiple machine-learning models with diverse inductive biases, combining their strengths through a learned ranking mechanism. This approach leverages two newly developed state-of-the-art models: NeuralLoc, which focuses on molecule editing using graph neural networks, and R-SMILES 2, a de-novo model employing a sequence-to-sequence Transformer architecture. By combining these models, Chimera enhances both accuracy and scalability for retrosynthetic predictions.

    The methodology behind Chimera relies on combining outputs from its constituent models through a ranking system that assigns scores based on model agreement and predictive confidence. NeuralLoc encodes molecular structures as graphs, enabling precise prediction of reaction sites and templates. This method ensures that predicted transformations align closely with known chemical rules while maintaining computational efficiency. Meanwhile, R-SMILES 2 utilizes advanced attention mechanisms, including Group-Query Attention, to predict reaction pathways. This model's architecture also incorporates improvements in normalization and activation functions, ensuring superior gradient flow and inference speed. Chimera combines these predictions, using overlap-based scoring to rank potential pathways.
    This integration ensures that the framework balances the strengths of editing-based and de-novo approaches, enabling robust predictions even for complex and rare reactions.

    The performance of Chimera has been rigorously validated against publicly available datasets such as USPTO-50K and USPTO-FULL, as well as the proprietary Pistachio dataset. On USPTO-50K, Chimera achieved a 1.7% improvement in top-10 prediction accuracy over the previous state-of-the-art methods, demonstrating its capability to accurately predict both common and rare reactions. On USPTO-FULL, it further improved top-10 accuracy by 1.6%. Scaling the model to the Pistachio dataset, which contains over three times the data of USPTO-FULL, showed that Chimera maintained high accuracy across a broader range of reactions. Expert comparisons with organic chemists revealed that Chimera's predictions were consistently preferred over those of the individual models, confirming its effectiveness in practical applications.

    The framework was also tested on an internal Novartis dataset of over 10,000 reactions to evaluate its robustness under distribution shifts. In this zero-shot setting, where no additional fine-tuning was performed, Chimera demonstrated superior accuracy compared to its constituent models. This highlights its capability to generalize across datasets and predict viable synthetic pathways even in real-world scenarios. Further, Chimera excelled in multi-step retrosynthesis tasks, achieving close to 100% success rates on benchmarks such as SimpRetro, significantly outperforming the individual models. The framework's ability to find pathways for highly challenging molecules further underscores its potential to transform computational retrosynthesis.

    Chimera represents a groundbreaking advancement in retrosynthesis prediction by addressing the challenges of rare reaction prediction and multi-step planning.
    The framework demonstrates superior accuracy and scalability by integrating diverse models and employing a robust ranking mechanism. With its ability to generalize across datasets and excel in complex retrosynthetic tasks, Chimera is set to accelerate progress in chemical synthesis, paving the way for innovative approaches to molecular design.

    Check out the Paper. All credit for this research goes to the researchers of this project.
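Chimera's learned ranking model is more sophisticated than this, but the overlap-based scoring idea (candidates proposed by several single-step models, rewarded for agreement and for ranking near the top of each list) can be sketched as a simple reciprocal-rank fusion. All function names and SMILES strings below are illustrative, not taken from the paper:

```python
from collections import defaultdict

def ensemble_rank(model_outputs, weights=None):
    """Rank candidate precursor sets proposed by several single-step
    retrosynthesis models. Each element of model_outputs is one model's
    ranked candidate list (canonical SMILES strings). A candidate scores
    higher when more models propose it and when it sits near the top of
    each list (reciprocal-rank contribution)."""
    weights = weights or [1.0] * len(model_outputs)
    scores = defaultdict(float)
    for w, candidates in zip(weights, model_outputs):
        for rank, cand in enumerate(candidates):
            scores[cand] += w / (rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Two hypothetical models propose overlapping candidate lists;
# the candidate both models agree on wins.
edit_model = ["CC(=O)Cl.OCC", "CC(=O)O.OCC"]
denovo_model = ["CC(=O)O.OCC", "CC(=O)OCC"]
print(ensemble_rank([edit_model, denovo_model]))
# → ['CC(=O)O.OCC', 'CC(=O)Cl.OCC', 'CC(=O)OCC']
```

A learned ranker, as in the paper, would replace the fixed reciprocal-rank weighting with scores trained on reaction data.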
  • Meta AI Releases Apollo: A New Family of Video LMMs (Large Multimodal Models) for Video Understanding
    www.marktechpost.com
    While multimodal models (LMMs) have advanced significantly for text and image tasks, video-based models remain underdeveloped. Videos are inherently complex, combining spatial and temporal dimensions that demand more from computational resources. Existing methods often adapt image-based approaches directly or rely on uniform frame sampling, which poorly captures motion and temporal patterns. Moreover, training large-scale video models is computationally expensive, making it difficult to explore design choices efficiently.

    To tackle these issues, researchers from Meta AI and Stanford developed Apollo, a family of video-focused LMMs designed to push the boundaries of video understanding. Apollo addresses these challenges through thoughtful design decisions, improving efficiency and setting a new benchmark for tasks like temporal reasoning and video-based question answering. Meta AI's Apollo models are designed to process videos up to an hour long while achieving strong performance across key video-language tasks.
    Apollo comes in three sizes (1.5B, 3B, and 7B parameters), offering flexibility to accommodate various computational constraints and real-world needs. Key innovations include:
    - Scaling Consistency: design choices made on smaller models are shown to transfer effectively to larger ones, reducing the need for large-scale experiments.
    - Frame-Per-Second (fps) Sampling: a more efficient video sampling technique than uniform frame sampling, ensuring better temporal consistency.
    - Dual Vision Encoders: combining SigLIP for spatial understanding with InternVideo2 for temporal reasoning enables a balanced representation of video data.
    - ApolloBench: a curated benchmark suite that reduces redundancy in evaluation while providing detailed insights into model performance.

    Technical Highlights and Advantages

    The Apollo models are built on a series of well-researched design choices aimed at overcoming the challenges of video-based LMMs:
    - Frame-Per-Second Sampling: unlike uniform frame sampling, fps sampling maintains a consistent temporal flow, allowing Apollo to better understand motion, speed, and the sequence of events in videos.
    - Scaling Consistency: experiments show that design choices made on moderately sized models (2B-4B parameters) generalize well to larger models. This approach reduces computational costs while maintaining performance gains.
    - Dual Vision Encoders: Apollo uses two complementary encoders: SigLIP, which excels at spatial understanding, and InternVideo2, which enhances temporal reasoning. Their combined strengths produce more accurate video representations.
    - Token Resampling: by using a Perceiver Resampler, Apollo efficiently reduces video tokens without losing information. This allows the models to process long videos without excessive computational overhead.
    - Optimized Training: Apollo employs a three-stage training process where video encoders are initially fine-tuned on video data before integrating with text and image datasets.
    This staged approach ensures stable and effective learning.
    - Multi-Turn Conversations: Apollo models can support interactive, multi-turn conversations grounded in video content, making them ideal for applications like video-based chat systems or content analysis.

    Performance Insights

    Apollo's capabilities are validated through strong results on multiple benchmarks, often outperforming larger models:
    - Apollo-1.5B: surpasses models like Phi-3.5-Vision (4.2B) and LongVA-7B. Scores 60.8 on Video-MME, 63.3 on MLVU, and 57.0 on ApolloBench.
    - Apollo-3B: competes with and outperforms many 7B models. Scores 58.4 on Video-MME, 68.7 on MLVU, and 62.7 on ApolloBench; achieves 55.1 on LongVideoBench.
    - Apollo-7B: matches and even surpasses models with over 30B parameters, such as Oryx-34B and VILA1.5-40B. Scores 61.2 on Video-MME, 70.9 on MLVU, and 66.3 on ApolloBench.

    Conclusion

    Apollo marks a significant step forward in video-LMM development. By addressing key challenges such as efficient video sampling and model scalability, Apollo provides a practical and powerful solution for understanding video content. Its ability to outperform larger models highlights the importance of well-researched design and training strategies. The Apollo family offers practical solutions for real-world applications, from video-based question answering to content analysis and interactive systems. Importantly, Meta AI's introduction of ApolloBench provides a more streamlined and effective benchmark for evaluating video-LMMs, paving the way for future research.

    Check out the Paper, Website, Demo, Code, and Models. All credit for this research goes to the researchers of this project.
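Apollo's actual sampling code is not shown in this summary, but the contrast between fps sampling and uniform sampling is easy to illustrate: fps sampling keeps the temporal spacing between frames constant regardless of clip length, while uniform sampling keeps the frame count constant and lets the spacing (and thus perceived motion) vary. Function names and parameters here are illustrative:

```python
def fps_sample(duration_s, native_fps, target_fps, max_frames=None):
    """Pick frame indices at a fixed rate (e.g. 2 fps): longer clips
    yield more frames, but the spacing between frames never changes."""
    step = native_fps / target_fps
    total = int(duration_s * native_fps)
    idx = [int(i * step) for i in range(int(total / step))]
    return idx[:max_frames] if max_frames else idx

def uniform_sample(duration_s, native_fps, num_frames):
    """Pick a fixed number of evenly spaced frames: the frame count is
    constant, but the spacing stretches with clip length."""
    total = int(duration_s * native_fps)
    return [int(i * total / num_frames) for i in range(num_frames)]

# A 10 s and a 60 s clip at 30 fps, sampled at 2 fps vs 8 fixed frames:
print(len(fps_sample(10, 30, 2)), len(fps_sample(60, 30, 2)))  # → 20 120
print(uniform_sample(10, 30, 8))
```

With uniform sampling, the 60 s clip's 8 frames sit 7.5 s apart while the 10 s clip's sit 1.25 s apart, which is why motion cues become inconsistent across clip lengths.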
  • Ways to Deal With Hallucinations in LLM
    towardsai.net
    December 16, 2024. Author(s): Igor Novikov. Originally published on Towards AI.

    One of the major challenges in using LLMs in business is that LLMs hallucinate. How can you entrust your clients to a chatbot that can go mad and tell them something inappropriate at any moment? Or how can you trust your corporate AI assistant if it makes things up randomly? That's a problem, especially given that an LLM can't be fired or held accountable. That's the thing with AI systems: they don't benefit from lying to you in any way, but at the same time, despite sounding intelligent, they are not a person, so they can't be blamed either. Some tout RAG as a cure-all approach, but in reality it only solves one particular cause and doesn't help with the others. Only a combination of several methods can help. Not all hope is lost though; there are ways to work with it, so let's look at that.

    So as not to get too philosophical about what a hallucination is, let's define the most important cases:
    - The model understands the question but gives an incorrect answer.
    - The model didn't understand the question and thus gave an incorrect answer.
    - There is no right or wrong answer, so if you disagree with the model, that doesn't make it incorrect. If you ask "Apple vs Android?", whatever it answers is technically just an opinion.

    Let's start with the second case. These are the reasons why a model can misunderstand the question:
    - The question is crap (ambiguous, unclear, etc.), and therefore the answer is crap.
    Not the model's fault; ask better questions.
    - The model does not have context.
    - Language: the model does not understand the language you are using.
    - Bad luck: in other words, the stochastic distribution led the reasoning in a weird way.

    Now let's look at the first case: why would a model lie, that is, give factually and verifiably incorrect information, if it understands the question?
    - It didn't follow all the logical steps to arrive at a conclusion.
    - It didn't have enough context.
    - The information (context) it was given is incorrect.
    - It has the right information but got confused.
    - It was trained to give incorrect answers (for political and similar reasons).
    - Bad luck: the stochastic distribution led the reasoning in a weird way.
    - It was configured so that it is allowed to fantasize (which can sometimes be desirable).
    - Overfitting and underfitting: the model was trained in a specific field and tries to apply its logic to a different field, leading to incorrect deduction or induction.
    - The model is overwhelmed with data and starts to lose context.

    I'm not going to discuss things that are not a model problem, like bad questions or questions with no right answers. Let's concentrate on what we can try to solve, one by one.

    The model does not have enough context, or the information provided to it is incorrect or incomplete

    This is where RAG comes into play. RAG, when correctly implemented, should provide the model with the necessary context so it can answer. Here is the article on how to do RAG properly. It is important to do it right, with all the required metadata about the information's structure and attributes. It is desirable to use something like GraphRAG, and reranking in the retrieval phase, so that the model is given only relevant context; otherwise, the model can get confused. It is also extremely important to keep the data you provide to the model up to date and to update it continuously, taking versioning into account.
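One common way to keep retrieved context both relevant and non-redundant is a Maximal Marginal Relevance (MMR) reranking pass over the retrieved chunks. A minimal NumPy sketch follows; the vectors and the `lam` relevance/novelty weight are illustrative:

```python
import numpy as np

def mmr(query_vec, doc_vecs, k=2, lam=0.5):
    """Greedily select k documents that are relevant to the query
    (first term) but dissimilar to documents already selected
    (penalty term). lam trades off relevance vs. novelty."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    selected, remaining = [], list(range(len(doc_vecs)))
    while remaining and len(selected) < k:
        best = max(
            remaining,
            key=lambda i: lam * cos(query_vec, doc_vecs[i])
            - (1 - lam) * max((cos(doc_vecs[i], doc_vecs[j]) for j in selected),
                              default=0.0),
        )
        selected.append(best)
        remaining.remove(best)
    return selected

# Doc 1 is a near-duplicate of doc 0; MMR skips it in favor of doc 2.
query = np.array([1.0, 0.0])
docs = [np.array([1.0, 0.0]), np.array([0.999, 0.01]), np.array([0.6, 0.8])]
print(mmr(query, docs, k=2, lam=0.3))  # → [0, 2]
```

With a high `lam` the selection collapses back to plain relevance ranking; lowering it is what filters out near-duplicate chunks before they reach the model.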
    If you have data conflicts, which is not uncommon, the model will start generating conflicting answers as well. There are methods, such as the Maximum Marginal Relevance (MMR) algorithm, that consider both the relevance and the novelty of information for filtering and reordering. However, this is not a panacea, and it is best to address this issue at the data storage stage.

    Language

    Not all models understand all languages equally well. It is always preferable to use English for prompts, as it works best for most models. If you have to use a specific language, you may have to use a model built for it, like Qwen for Chinese.

    A model does not follow all the logical steps to arrive at a conclusion

    You can force the model to follow a particular thinking process with techniques like Self-RAG, Chain of Thought, or SelfCheckGPT. Here is an article about these techniques. The general idea is to ask the model to think in steps and to explain/validate its conclusions and intermediate steps, so it can catch its own errors. Alternatively, you can use the agents model, where several LLM agents communicate with each other and verify each other's outputs at each step.

    A model got confused with the information it had, and bad luck

    These two are actually caused by the same thing, and this is a tricky one. The way models work is that they stochastically predict the next token in a sentence. The process is somewhat random, so it is possible that the model will pick a less probable route and go off course. That is built into the model and the way it works. There are several methods to handle this:
    - MultiQuery: run several queries for the same answer and pick the best one using a relevance score, such as a cross-encoder. If you get three very similar answers and one very different one, it is likely that the outlier was a random hallucination.
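The MultiQuery majority-vote idea described above can be sketched like this; plain string similarity stands in for the cross-encoder relevance score a real system would use, and the answers are invented for illustration:

```python
from difflib import SequenceMatcher

def consensus_answer(answers):
    """Score each answer by its average similarity to the other answers;
    the answer the other runs agree with most wins, and a lone outlier
    (a likely random hallucination) scores low."""
    def sim(a, b):
        return SequenceMatcher(None, a, b).ratio()
    n = len(answers)
    scores = [
        sum(sim(answers[i], answers[j]) for j in range(n) if j != i) / (n - 1)
        for i in range(n)
    ]
    best = max(range(n), key=scores.__getitem__)
    return answers[best]

# Four runs of the same query; one run hallucinated.
answers = [
    "The capital of Australia is Canberra.",
    "Canberra is the capital of Australia.",
    "The capital of Australia is Canberra.",
    "The capital of Australia is Sydney, founded in 1788.",  # the outlier
]
print(consensus_answer(answers))
```

Swapping the string-similarity function for a real cross-encoder (or an embedding-cosine comparison) is what makes the vote robust to paraphrasing rather than exact wording.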
    MultiQuery adds a certain overhead, so you pay a price, but it is a very good method to ensure you don't randomly get a bad answer.
    - Set the model temperature to a lower value to discourage it from going in less probable directions (i.e., fantasizing).

    There is one more cause, which is harder to fix. The model keeps semantically similar ideas close together in vector space. Being asked about facts that have other, close but not actually related facts in their proximity will lead the model down the path of least resistance. The model has an associative memory, so to speak; it thinks in associations, and that mode of thinking is not suitable for tasks like playing chess or math. The model has a fast-thinking brain, per Kahneman's description, but lacks a slow one.

    For example, you ask a model what 3 + 7 is and it answers 37. Why? It makes sense if you look at 3 and 7 in vector space: the closest vector to them is 37. Here the mistake is obvious, but it can be much more subtle. Example (the image by the author shows a model asked about the mother of Afonso II of Portugal): the answer is incorrect. Afonso was the third king of Portugal, not Alfonso; there was no Alfonso II as king of Portugal. And the mother of Afonso II was Dulce of Aragon, not Urraca of Castile. From the LLM's perspective, "Alfonso" is basically the same as "Afonso", and "mother" is a direct match. Therefore, if there is no "mother" close to "Afonso", the LLM will choose the Alfonso/mother combination.

    Here is an article explaining this in detail, along with potential ways to fix it. Also, in general, fine-tuning the model on data from your domain will make this less likely to happen, as the model will be less confused by similar facts in edge cases.

    The model was configured so it is allowed to fantasize

    This can happen either through the master prompt or by setting the model temperature too high.
    So basically you need to:
    - Instruct the model not to give an answer if it is not sure or does not have the information.
    - Ensure nothing in the prompt instructs the model to make up facts and, in general, make the instructions very clear.
    - Set the temperature lower.

    Overfitting and underfitting

    If you use a model trained in the healthcare space to solve programming tasks, it will hallucinate; in other words, it will try to put square bits into round holes, because that is all it knows how to do. That's kind of obvious. The same goes for using a generic model, trained on generic data from the internet, to solve industry-specific tasks. The solution is to use a proper model for your industry and fine-tune/train it in that area. That will improve correctness dramatically in certain cases. I'm not saying you always have to do this, but you might have to. Another case of this is using a model that is too small (in terms of parameters) for your task. Yes, certain tasks may not require a large model, but some certainly do, and you should not use a model smaller than appropriate. Using a model that is too big will cost you, but at least it will work correctly.

    The model is overwhelmed with data and starts to lose context

    You may think that the more data you have the better, but that is not the case at all! The model's context window and attention span are limited. Even recent models with context windows of millions of tokens do not work well: they start to forget things, ignore things in the middle, and so on. The solution here is to use RAG with proper context-size management. You have to pre-select only relevant data, rerank it, and feed it to the LLM. Here is my article that overviews some of the techniques for doing that. Also, some models do not handle long context well at all, and at a certain point the quality of answers starts to degrade with increasing context size. Here is a research paper on that.

    Other general techniques

    Human in the loop

    You can always have someone in the loop to fact-check LLM outputs.
    For example, if you use an LLM for data annotation (which is a great idea), you will need to use it in conjunction with real humans who validate the results, or use your system in co-pilot mode where humans make the final decision. This doesn't scale well, though.

    Oracles

    Alternatively, you can use an automated oracle to fact-check the system's results, if one is available.

    External tools

    Certain things, like calculations and math, should be done outside the LLM, using tools that are provided to it. For example, you can use the LLM to generate a query for a SQL database or Elasticsearch, execute that query, and then use the results to generate the final answer.

    What to read next: the RAG architecture guide and the Advanced RAG guide. Peace!
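The external-tools pattern described above (the LLM writes the query, the database does the math, and the exact result goes into the final answer) can be sketched as follows; the `fake_llm` stub stands in for a real model call, and the table schema is invented:

```python
import sqlite3

def answer_with_sql(question, generate_sql, conn):
    """Tool-use pattern: delegate the arithmetic to the database instead
    of letting the LLM compute it in-context. `generate_sql` is any
    callable mapping a question to a SQL string (here, an LLM stand-in)."""
    sql = generate_sql(question)
    rows = conn.execute(sql).fetchall()
    return f"Query: {sql!r} -> result: {rows}"

# Stubbed 'LLM' that maps one known question to SQL (illustrative only).
def fake_llm(question):
    return "SELECT SUM(amount) FROM orders"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?)", [(3.0,), (7.0,)])
print(answer_with_sql("What is the total order amount?", fake_llm, conn))
```

In production you would also validate the generated SQL (read-only access, allow-listed tables) before executing it, since the model can hallucinate queries just as it hallucinates facts.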