• WWW.GAMESPOT.COM
    Wayne's World 2 4K Blu-Ray Preorders Are Discounted At Amazon
    Wayne's World 2 4K Blu-ray $33.49 (was $45) | Releases May 27 | Preorder at Amazon

    It is party time once again, as Wayne's World 2 is getting a new 4K Blu-ray release on May 27. The 1993 sequel will also come with a standard Blu-ray version of the film, as well as a selection of extra content. Preorders are available now at Amazon, where the set is discounted to just $33.49 (was $45). This new 4K edition of Wayne's World 2 was created from the original 35mm camera negatives. A 1080p version of the film is also included, along with a selection of bonus features and a special slipcase cover.

    Wayne's World 2 4K Blu-ray special features: The release includes a modest collection of extras, the most notable of which is a new commentary track from director Stephen Surjik that's included with both the 4K and 1080p versions of the film. Continue reading at GameSpot for a look at all the included extras.
    0 Comments 0 Shares 43 Views
  • GAMERANT.COM
    My Hero Academia: Best Characters Who Were Given Quirks
    The average person in My Hero Academia is born with a Quirk that they learn to use and eventually master with enough practice. While it's rare for someone to enter the world without a Quirk, it is rarer still for someone to be gifted one, as this is widely seen as unusual, even taboo, within hero society as a whole.
    0 Comments 0 Shares 46 Views
  • WWW.POLYGON.COM
    What time does the Festival of Accord release in Monster Hunter Wilds?
    The “Festival of Accord: Blossomdance” is a seasonal event in Monster Hunter Wilds that will transform the Grand Hub into a pink, cherry blossom-filled wonderland. First announced during the Monster Hunter Wilds showcase in March, this event will be one of the first big changes coming to the game since “Title Update 1.” The time-limited event will change the appearance of the Grand Hub and give players the opportunity to get special items like themed equipment, gestures, and pop-up camp decorations. This guide will tell you what time Festival of Accord: Blossomdance releases in Monster Hunter Wilds in several time zones, as well as what to expect from the update.

    What time does the Festival of Accord: Blossomdance release in Monster Hunter Wilds?

    The March developer roadmap said the Festival of Accord: Blossomdance will be released on Tuesday, April 22, for the west coast of North America. The developers have not confirmed the time when the team will release the update. However, based on server maintenance schedules from previous updates and the release dates listed for different time zones, the team will likely release the event on the evening of April 22 for North American players. We will update this article if Capcom releases any additional information. Based on the limited information we do have, here is our best estimate for when the update will go live in your time zone:

    8 p.m. PDT on April 22 for the West Coast of North America
    11 p.m. EDT on April 22 for the East Coast of North America
    4 a.m. BST on April 23 for the U.K.
    5 a.m. CEST on April 23 for Western Europe/Paris
    12 p.m. JST on April 23 for Japan

    Of course, server maintenance is never a given, so this is just an estimate and it might not work out this way. If there are any big changes, we will update this article to include new information.

    What to expect from the Festival of Accord: Blossomdance update in Monster Hunter Wilds

    From the looks of it, this event will mainly be a chance to scoop up some super adorable gear for your Hunter and Palico and take on some special event quests. Here is everything we know about the event based on the March showcase:

    The appearance of the Grand Hub and its meals will change. The new version will decorate the camp with pink cherry blossoms and allow players to eat sushi-inspired feasts.
    Players can obtain limited equipment and cosmetic items like an adorable charm that looks like stuffed animals, a cherry tree that sits in front of your pop-up tent, and butterfly armor for your Palico.
    The Festival of Accord: Blossomdance will introduce seasonal Event Quests and bring back “most previously available” Event Quests.
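The regional times listed above follow directly from the estimated 8 p.m. PDT start. If that estimate holds, a quick way to double-check the conversions is Python's zoneinfo module (purely illustrative, not anything from Capcom):

```python
# Sanity-check the estimated release time across regions, assuming the
# 8 p.m. PDT, April 22 figure above. Requires Python 3.9+ (zoneinfo).
from datetime import datetime
from zoneinfo import ZoneInfo

release_pt = datetime(2025, 4, 22, 20, 0, tzinfo=ZoneInfo("America/Los_Angeles"))
for label, tz in [("New York", "America/New_York"), ("London", "Europe/London"),
                  ("Paris", "Europe/Paris"), ("Tokyo", "Asia/Tokyo")]:
    local = release_pt.astimezone(ZoneInfo(tz))
    print(label, local.strftime("%b %d, %I:%M %p %Z"))
```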
    0 Comments 0 Shares 38 Views
  • WWW.ENGADGET.COM
    Apple Intelligence is busted on Meta's iOS apps
    You might now be out of luck if you've been relying on Apple's AI tools to help you craft a Facebook post or generate a custom emoji to slap on an Instagram Story. As first reported by Sorcererhat Tech (by way of 9to5Mac), Apple Intelligence features are not currently functional on Meta's iOS apps, including Facebook, Instagram, WhatsApp, and Threads. Engadget has confirmed that Apple Intelligence isn't working in the apps at the time of writing. As things stand, Apple Intelligence's writing tools (which include text generation and proofreading) and features such as Genmoji aren't working in Meta's apps. While people were previously able to include keyboard stickers and Memoji in Instagram Stories, that's no longer the case. Developers can opt out of using Apple Intelligence in their iOS apps, and Meta may have done just that. Perhaps it's looking to nudge folks toward its own Meta AI tools in Facebook et al. Engadget has contacted Meta and Apple for comment. This article originally appeared on Engadget at https://www.engadget.com/ai/apple-intelligence-is-busted-on-metas-ios-apps-165620772.html?src=rss
    0 Comments 0 Shares 34 Views
  • WWW.TECHRADAR.COM
    At €1,499, GMKTec EVO-X2 is officially the cheapest PC with the most powerful AMD AI CPU ever, and it will come with Windows 11
    GMKTec EVO-X2 enters Europe at a low price with discounts, offering top-tier specs with a Ryzen AI Max+ 395 chip.
    0 Comments 0 Shares 53 Views
  • VFXEXPRESS.COM
    Bringing Baby Titan Suko to Life in Godzilla X Kong: The New Empire
    In Godzilla X Kong: The New Empire, even the smallest titan — Suko — demanded a huge creative effort from the artists at Wētā FX. Though Suko is young and naive, his design had to reflect both vulnerability and the raw strength expected from a creature of his kind. Animators carefully balanced his childlike curiosity with a slightly bratty personality to make him feel both authentic and emotionally engaging. The heart of Suko's story is his evolving bond with Kong, mirroring a touching father-son relationship amid the chaos of titan battles. Every movement, from shy gestures to fearless leaps, was crafted to reflect Suko's growth and emotions. Despite being dwarfed by other titans, Suko plays a vital role in the story, proving that even the smallest creature can leave the biggest impression in the cinematic world of monsters.
    0 Comments 0 Shares 42 Views
  • WWW.YANKODESIGN.COM
    This AI-Powered Walking Aid Redefines Rehabilitation with Smart, Hands-Free Patient Recovery
    Healthcare systems around the world are under pressure, and South Korea is no exception. With a single healthcare professional often responsible for 16 to 43 patients, providing personalized, one-on-one rehabilitation becomes a nearly impossible task. Legit, an AI-powered walking aid, steps in to meet this challenge, offering a smarter, more efficient way to deliver high-quality rehab training without overburdening medical staff. Legit is designed to support patients as they approach the walking stage of recovery. It enables them to train independently while still benefiting from expert-level oversight. At the heart of its functionality is the seamless integration of artificial intelligence and wearable technology, allowing for real-time feedback, remote monitoring, and adaptive support tailored to each patient's condition. Designers: eunhye jang, i jeongsu, Yun A Jang, and Yunjae Shin. Before training begins, patients attach a wearable EMG (electromyography) sensor patch to their thigh or calf. This patch detects muscle activity and communicates directly with the walking aid. The AI system analyzes this data in real time, adjusting training programs to suit the patient's progress and condition. This not only personalizes the rehabilitation experience but also ensures that the patient remains safe and supported throughout their session. Training with Legit isn't passive; it's purposeful. Patients follow an LED light beam projected onto the floor, which guides their stride and pace during walking exercises. The walker's built-in display delivers intuitive feedback, showing real-time updates on speed, distance, and muscle engagement. It's like having a digital coach on wheels, one that's always responsive, never tired, and perfectly calibrated to the user's needs. For healthcare professionals, Legit is just as transformative. Through a centralized application, staff can manage multiple devices with ease. They start by selecting an available walker, checking its battery level, and turning it on. The system then displays the latest patient feedback, allowing medical staff to assess progress and identify any concerns. Based on this information, the AI suggests optimized training modes tailored to the patient's rehabilitation stage. Once selected, these modes are linked to the device and the session begins, all from a remote interface. This streamlined process enables minimal staffing without compromising the quality of care. Legit allows patients to train within hospital track systems, where support is nearby if needed, but not constantly required. The walker adjusts in real time, helping patients fine-tune their gait, build strength, and regain mobility in a controlled, monitored environment. The combination of autonomy for the patient and oversight for the staff strikes a perfect balance, making rehab both accessible and effective. More than just a walking aid, Legit represents a new mindset in rehabilitation, one where intelligent systems and human care work hand in hand. By reducing the dependency on constant physical supervision, it frees up valuable time for healthcare workers while empowering patients to take an active role in their recovery.
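No public API for Legit has been published, so purely as an illustration of the feedback loop described above (an EMG reading comes in, the guided pace adjusts, and a session summary is available for remote staff), here is a hypothetical sketch. Every class name, unit, and threshold in it is invented for this example.

```python
# Hypothetical sketch of the closed loop described above: EMG sample in,
# pace adjustment out, session summary available for remote staff review.
# None of these names or numbers come from the actual Legit device.
from dataclasses import dataclass, field

@dataclass
class GaitSession:
    target_pace_mps: float = 0.6          # guided walking pace (metres/second)
    readings: list[float] = field(default_factory=list)

    def on_emg_sample(self, activation: float) -> float:
        """Adjust the LED-guided pace from a normalised muscle-activation reading (0 to 1)."""
        self.readings.append(activation)
        if activation < 0.3:              # muscle under-engaged: slow the guide beam
            self.target_pace_mps = max(0.3, self.target_pace_mps - 0.05)
        elif activation > 0.7:            # strong engagement: nudge the pace up
            self.target_pace_mps = min(1.2, self.target_pace_mps + 0.05)
        return self.target_pace_mps

    def summary(self) -> dict:
        """What a remote staff dashboard might display after the session."""
        avg = sum(self.readings) / len(self.readings) if self.readings else 0.0
        return {"samples": len(self.readings),
                "avg_activation": round(avg, 2),
                "final_pace_mps": self.target_pace_mps}
```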
    0 Comments 0 Shares 41 Views
  • GAMINGBOLT.COM
    Why I’m Excited to Revisit The Elder Scrolls 4: Oblivion’s Open World
    In the almost 20 years since Bethesda Game Studios unleashed The Elder Scrolls 4: Oblivion upon the gaming world, open world titles have become a mainstay of the medium, the dominant game type for most single player releases. At the very least, open world elements are now pervasive across multiple genres and most releases, if not open worlds themselves outright. Given this context, the notion of revisiting a nearly two decade old release from the formative days of open world titles may not exactly thrill a whole lot of people. Especially for a genre as reliant on thoroughly immersing players as open world games, where lies the appeal of playing an early Xbox 360 era experience in a world where the likes of Red Dead Redemption 2 or Cyberpunk 2077 exist? It's a valid question, and to be fair, Oblivion today, even if updated to the extent that the rumoured remake is supposed to be, would not be in the same tier of quality as those games. However, comparing only to the very best is also fallacious. Oblivion, when re-released, does not have to be the absolute best open world game on the market for it to be worth checking out. In fact, even a straightforward port of Oblivion would hold a lot of merit and appeal in 2025, regardless of whether one is a fan of open worlds or fatigued by them at this point. Arguably the biggest reason for this is that the Bethesda style of open world is still very uncommon, even today. While we have had that previously-mentioned explosion of open worlds on the market in the last decade to decade-and-a-half, those open worlds didn't really follow the design philosophy of a BGS joint. Rather, most open worlds today follow the style of design popularized by the likes of Assassin's Creed and Far Cry: modular, broken down into smaller chunks and activities, with clearly demarcated "main" and "side" content. Even some of the most praised open world games today, from The Legend of Zelda: Breath of the Wild to Elden Ring, ultimately follow this modular design for the world, to varying degrees. Oblivion, however, like almost all Bethesda games, does not. Instead, it is a vast expanse of lush, rich fantasy landscape that is full of secrets and untold amounts of content for the player to stumble upon. You're fully inhabiting this world, and you're rarely running up against a gamified checklist of things to do: no towers to fill out your map, no fixed number of enemy forts to clear out an area. You're just going around the world, following what catches your interest, and engaging with the content you happen upon along the way. Playing Oblivion is akin to living out a fantasy story: you don't just come across content, you come across stories, big and small. That is one area where Oblivion really shines. Future Bethesda releases would see the quality of their writing decline greatly, but The Elder Scrolls 4: Oblivion has some incredibly well-written quests, both for the main story and, in particular, for the guilds. The Dark Brotherhood questline, for example, is truly arresting. It's a really great story that the game draws you into to incredible effect, and it's just one of many.
Unlike several other Bethesda games, the main draw of exploring Oblivion's Cyrodiil is not simply to come upon things happening, but also to encounter stories and characters that are actually well-written, to a degree that players who came on board with 2011's The Elder Scrolls 5: Skyrim might find surprising. Oh, and don't worry: the smaller, organic, player-driven emergent storytelling that Bethesda games are so famous for is all still here, and it's hilarious (if not always in the intended fashion). Oblivion was the first game in the series to use Havok physics (which, again, would go on to become a mainstay in Bethesda titles afterwards) and the first to have Radiant AI (Bethesda's label for the hyper-detailed NPC scripts that defined characters' schedules and activities to an at-the-time unprecedented degree). Particularly when those two collide, hilarity of the kind players would expect from an Elder Scrolls release very much ensues. So fret not: this isn't a game where you are getting better authored content at the expense of emergent, player-driven content. Exploring Cyrodiil has its drawbacks, of course. For starters, visually and aesthetically, Cyrodiil is probably the most boring of Bethesda's settings, at least within the Elder Scrolls games. Where Morrowind had imagination-defying landscapes, and where Skyrim showers players with breathtaking natural beauty, Cyrodiil is mostly expanses of green grassland and forest. Inherently, there's nothing wrong with that, particularly since, as mentioned, the world does come full of engaging content. However, it does mean that the line-of-sight-driven exploration that Bethesda games can be so good at begins to fall a little flat in Oblivion. It's hard to go to wherever seems interesting if nowhere seems that interesting to begin with. This is not a problem anyone will encounter in the first couple dozen hours of playing Oblivion, to be fair, but Bethesda releases are games we spend hundreds, sometimes thousands, of hours with, and over that long a timeframe, Oblivion's world wears thin a lot more, and a lot quicker, than Skyrim's or Morrowind's ever did. This isn't helped by the inherent repetition in Oblivion's design. The 2006 classic was the last game in the series to use procedural generation for its dungeon design, for example, and it shows. While dungeons in Skyrim, for comparison, aren't particularly amazing either, they are a marked step above the ones in Oblivion, which can be actively off-putting. This extends all the way through to the Oblivion Gates, which, while having fixed layouts, pull from only a very small selection, meaning even the Oblivion Gates begin to feel repetitive very early in a playthrough. These are both issues, particularly the Oblivion Gates, that can presumably be addressed in a remake, but that depends on how ambitious the remake sets out to be in the first place. It would be nice if these problems were addressed in this Oblivion release, because that would make its world that much more engaging. Even if they are not, however, Oblivion's world still has a lot going for it. It's a game world from an era when having a vast world with multiple settlements within it was the norm. Cyrodiil in Oblivion has nine major cities, all dense, all full of NPCs and unique quests, all a joy to explore and dripping with atmosphere.
And even within the wilderness, even with the repetition inherent to Oblivion, the game compels exploration, simply because of how broken its progression systems are. That's not a mistake; it's exactly what I mean to say. Oblivion's progression is so broken that it can be extremely easy to crack the game wide open. Finding the right gear and loot can turn you into a genuinely unstoppable monster rampaging through the Imperial countryside, regardless of your preferred playing style. Many hours were spent by many back in 2006, when Oblivion first came out, trying to create the most hilariously broken build possible, and that will be as compelling in 2025 as it was back then. While the degree to which Oblivion will be compelling two decades on from release comes down to how extensive the purported remake ends up being, even without any changes or updates, the core game remains a strong experience, as almost any Bethesda Game Studios game inherently is. What this remake, once it is revealed and released, will decide is just how strong. If Bethesda and Virtuos have taken the time to iron out the kinks and address some of the more obvious and easily fixable flaws in the game, then this remake could be the definitive way to experience Cyrodiil, and could be one of the most unique and engaging open worlds today, in spite of being almost two decades old. But even if that isn't the case, the simple fact that it is designed so differently from almost any other open world game on the market right now makes it an exciting proposition, and a virtual world I can't wait to sink countless hours into all over again. Note: The views expressed in this article are those of the author and do not necessarily represent the views of, and should not be attributed to, GamingBolt as an organization.
    0 Comments 0 Shares 39 Views
  • WWW.MARKTECHPOST.COM
    LLMs Can Be Misled by Surprising Data: Google DeepMind Introduces New Techniques to Predict and Reduce Unintended Knowledge Contamination
    Large language models (LLMs) are continually evolving by ingesting vast quantities of text data, enabling them to become more accurate predictors, reasoners, and conversationalists. Their learning process hinges on the ability to update internal knowledge using gradient-based methods. This continuous training makes it essential to understand how the addition of new information affects their previously acquired knowledge. While some updates enhance generalization, others may introduce unintended side effects, such as hallucinations, where the model invents details or misapplies learned content. Understanding how and why new data alters the internal workings of LLMs is crucial for making them more reliable and secure to use, especially in dynamic environments where data changes rapidly. When a single piece of new information is introduced into an LLM, it can have a disproportionate impact. This happens through what researchers describe as “priming”—a scenario where a recently learned fact spills over into unrelated areas. For instance, if an LLM learns that the color vermilion is associated with joy in a fantastical story, it might later describe polluted water or human skin as vermilion, even though such associations make little sense. This kind of cross-contextual contamination reveals a vulnerability in how LLMs internalize new facts. Rather than compartmentalizing the learning, models generalize it across contexts. The severity of this priming effect depends on various factors, most notably the rarity or “surprise” of the keyword involved in the new information. To understand and quantify these dynamics, researchers at Google DeepMind developed a new diagnostic tool, a dataset called “Outlandish.” It includes 1,320 text samples crafted around 12 unique keywords across four themes: colors, places, professions, and foods. Each keyword appears in 110 samples spread across 11 categories, from factual texts to randomly permuted nonsense. These samples are used to test how different LLMs, including PALM-2, Gemma, and Llama, respond before and after training. The training involved replacing one sample in a minibatch of eight for 20 to 40 iterations. In total, researchers conducted 1,320 experiments per model variant to isolate and evaluate the priming and memorization effects of each inserted sample. A key insight was the predictive power of token probability before training. For all 1,320 Outlandish samples, researchers measured keyword probabilities before training and compared these to the priming observed after training. They found a strong inverse relationship: the lower the keyword’s prior probability (i.e., the more surprising it was), the higher the likelihood of priming. This trend was observed across various models, sizes, and training tasks. A clear threshold emerged around a probability of 10⁻³. Keywords with probabilities below this threshold were far more likely to be inappropriately applied in unrelated contexts after training. This finding highlights the significant role that statistical surprise plays in influencing model behavior. Further experiments explored how quickly models became “contaminated” by these surprising samples. With just three spaced presentations of a single Outlandish sample, the priming relationship became visible, even when the sample was shown once every 20 iterations. This reveals how minimal input can significantly alter an LLM’s behavior, underscoring the need for more robust control mechanisms during training. 
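To make the "surprise" measurement above concrete, here is a minimal sketch (my own illustration, not the paper's code) of checking a keyword's next-token probability against the 10⁻³ threshold; the GPT-2 model name and the banana prompt are stand-ins for the paper's actual models and Outlandish samples.

```python
# Minimal sketch: estimate a keyword's token probability under a causal LM
# before training and flag it as "surprising" if it falls below ~1e-3.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # illustrative stand-in; the paper evaluates PALM-2, Gemma, and Llama
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def keyword_probability(context: str, keyword: str) -> float:
    """Probability the model assigns to the keyword's first token after the context."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    kw_first_id = tokenizer(keyword, add_special_tokens=False).input_ids[0]
    with torch.no_grad():
        next_token_logits = model(ctx_ids).logits[0, -1]   # logits for the next token
    return torch.softmax(next_token_logits, dim=-1)[kw_first_id].item()

p = keyword_probability("The color of the ripe banana was", " vermilion")
print(f"p(keyword) = {p:.2e} -> {'surprising (below 1e-3)' if p < 1e-3 else 'not surprising'}")
```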
Additional analysis showed that in PALM-2, memorization and priming were strongly coupled: the more the model memorized a new piece of text, the more it primed unrelated outputs. However, this coupling did not hold as clearly for the Gemma and Llama models, indicating different learning dynamics. Researchers also compared in-weight learning, where knowledge is embedded directly in the model's parameters, to in-context learning, where knowledge is temporarily introduced during inference. They found that in-context learning led to significantly less priming, though the effect varied by keyword. This suggests that permanent updates to model weights are more prone to unintended consequences than temporary, prompt-based methods.

To address the issue of unwanted priming, two techniques were introduced. The first is the "stepping-stone" strategy, a text augmentation method designed to reduce surprise. It breaks down the surprise associated with a low-probability keyword by embedding it within a more elaborate and gradual context; for instance, instead of directly stating that a banana is vermilion, the augmented version might describe it first as a scarlet shade, then as vermilion. Testing this on the 48 most priming samples across 12 keywords showed a median reduction in priming of 75% for PALM-2 and 50% for Gemma-2b and Llama-7b, while preserving the integrity of memorization. The second method, "ignore-topk," is a gradient pruning strategy: during training, only the bottom 92% of parameter updates were retained, and the top 8% were discarded. This counterintuitive approach reduced priming by up to two orders of magnitude while maintaining the model's ability to memorize the new sample, supporting findings in related work that the most influential parameter updates are not necessarily the most beneficial.

This analysis demonstrates that new data can significantly impact model behavior, sometimes in undesirable ways. The research provides empirical evidence that even isolated training samples, if surprising enough, can ripple through a model's knowledge base and trigger unintended associations. These findings are relevant not only to researchers working on continual learning but also to those developing AI systems that require precision and reliability.

Key takeaways from the research:
- 1,320 custom-crafted text samples were used to evaluate the impact of new information on LLMs.
- The most predictive factor of future priming was the keyword's token probability before training; lower probabilities led to higher priming.
- A probability threshold of 10⁻³ was identified, below which priming effects became significantly pronounced.
- Priming effects were measurable after just three training iterations, even with spacing between inputs.
- PALM-2 showed a strong correlation between memorization and priming, while Gemma and Llama exhibited different learning behaviors.
- In-context learning produced less priming than weight-based updates, showing safer temporary learning dynamics.
- The "stepping-stone" strategy reduced priming by up to 75% without compromising learning.
- The "ignore-topk" pruning method reduced priming by nearly two orders of magnitude while maintaining memorization.
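For readers who want a feel for the "ignore-topk" idea, here is a hedged PyTorch sketch of one plausible implementation: before each optimizer step, zero out the largest-magnitude gradient entries and keep the remaining roughly 92%. It is applied per parameter tensor for simplicity and is not the authors' released code.

```python
# Sketch of an "ignore-topk"-style update (one reading of the paper's idea):
# zero the top fraction of gradient entries by magnitude before stepping.
import torch

def ignore_topk_gradients(model: torch.nn.Module, drop_fraction: float = 0.08) -> None:
    """Zero the largest-magnitude gradient entries, tensor by tensor (ties may drop a few extra)."""
    for param in model.parameters():
        if param.grad is None:
            continue
        grad = param.grad
        k = int(drop_fraction * grad.numel())
        if k == 0:
            continue
        # Magnitude of the k-th largest entry becomes the cutoff.
        cutoff = torch.topk(grad.abs().flatten(), k, largest=True).values.min()
        grad.masked_fill_(grad.abs() >= cutoff, 0.0)

# Illustrative use inside a training loop:
#   loss.backward()
#   ignore_topk_gradients(model, drop_fraction=0.08)   # keep ~92% of updates
#   optimizer.step(); optimizer.zero_grad()
```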
Sana Hassan, a consulting intern at Marktechpost and a dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.
    0 Comments 0 Shares 33 Views
  • TOWARDSAI.NET
    Fine-Tuning Language Models for Business: Making Large Language Models Truly Yours
    Author(s): Janahan Sivananthamoorthy. Originally published on Towards AI. Last updated on April 21, 2025 by the Editorial Team. (Header image generated by Grok/X.)

    You know how I was totally geeking out about AI in my last couple of posts? We went down some rabbit holes, from how Large Language Models (LLMs) could be a game-changer in Enterprise Java setups to the seriously cool potential of Agentic AI. And Small Language Models (SLMs)? I was practically shouting from the rooftops about how they could be a big win for businesses. But after all that exploring, a big question kept popping into my head: how do we take these super-smart AI brains and really mold them into our own intelligent tools? Tools that actually get the quirky way our company does things? Maybe our customer support has this really empathetic and understanding tone, even in tricky situations; could AI learn that? Well, it turns out there are a couple of seriously clever tricks to make these AI brains way more attuned to what we need: fine-tuning and Retrieval-Augmented Generation (RAG). Think of fine-tuning as basically giving the AI our company's specific homework so it learns our unique style, potentially leading to… Read the full blog for free on Medium. Published via Towards AI.
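Since the excerpt cuts off before the technical detail, here is a minimal, purely illustrative sketch of the RAG half of that idea: retrieve the company documents closest to a question and assemble them into the prompt a general-purpose LLM will answer from. The embedding step and the final LLM call are assumed to happen elsewhere; nothing here comes from the original article.

```python
# Illustrative RAG-style flow: given precomputed embeddings for company docs
# and for a question, retrieve the closest docs and build the prompt.
from math import sqrt

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / (norm + 1e-9)

def retrieve(question_vec: list[float], doc_vecs: list[list[float]],
             docs: list[str], top_k: int = 3) -> list[str]:
    """Return the top_k documents most similar to the question embedding."""
    ranked = sorted(zip(doc_vecs, docs),
                    key=lambda pair: cosine(question_vec, pair[0]), reverse=True)
    return [doc for _, doc in ranked[:top_k]]

def build_prompt(question: str, context_docs: list[str]) -> str:
    """Stuff retrieved company context into the prompt, with a tone instruction."""
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return ("Answer using only the company context below, in our support team's "
            f"empathetic tone.\n\nContext:\n{context}\n\nQuestion: {question}")
```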
    0 Comments 0 Shares 40 Views