WWW.YANKODESIGN.COM
This AI-Powered Walking Aid Redefines Rehabilitation with Smart, Hands-Free Patient Recovery

Healthcare systems around the world are under pressure, and South Korea is no exception. With a single healthcare professional often responsible for 16 to 43 patients, providing personalized, one-on-one rehabilitation becomes a nearly impossible task. Legit, an AI-powered walking aid, steps in to meet this challenge, offering a smarter, more efficient way to deliver high-quality rehab training without overburdening medical staff.

Legit is designed to support patients as they approach the walking stage of recovery. It enables them to train independently while still benefiting from expert-level oversight. At the heart of its functionality is the seamless integration of artificial intelligence and wearable technology, allowing for real-time feedback, remote monitoring, and adaptive support tailored to each patient’s condition.

Designers: eunhye jang, i jeongsu, Yun A Jang, and Yunjae Shin

Before training begins, patients attach a wearable EMG (electromyography) sensor patch to their thigh or calf. This patch detects muscle activity and communicates directly with the walking aid. The AI system analyzes this data in real time, adjusting training programs to suit the patient’s progress and condition. This not only personalizes the rehabilitation experience but also ensures that the patient remains safe and supported throughout each session.

Training with Legit isn’t passive; it’s purposeful. Patients follow an LED light beam projected onto the floor, which guides their stride and pace during walking exercises. The walker’s built-in display delivers intuitive feedback, showing real-time updates on speed, distance, and muscle engagement. It’s like having a digital coach on wheels: one that’s always responsive, never tired, and perfectly calibrated to the user’s needs.

For healthcare professionals, Legit is just as transformative.
Through a centralized application, staff can manage multiple devices with ease. They start by selecting an available walker, checking its battery level, and turning it on. The system then displays the latest patient feedback, allowing medical staff to assess progress and identify any concerns. Based on this information, the AI suggests optimized training modes tailored to the patient’s rehabilitation stage. Once selected, these modes are linked to the device and the session begins, all from a remote interface. This streamlined process enables minimal staffing without compromising the quality of care.

Legit allows patients to train within hospital track systems, where support is nearby if needed but not constantly required. The walker adjusts in real time, helping patients fine-tune their gait, build strength, and regain mobility in a controlled, monitored environment. The combination of autonomy for the patient and oversight for the staff strikes a perfect balance, making rehab both accessible and effective.

More than just a walking aid, Legit represents a new mindset in rehabilitation, one where intelligent systems and human care work hand in hand. By reducing the dependency on constant physical supervision, it frees up valuable time for healthcare workers while empowering patients to take an active role in their recovery.

The post This AI-Powered Walking Aid Redefines Rehabilitation with Smart, Hands-Free Patient Recovery first appeared on Yanko Design.
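The staff-side flow described above (select an available walker, check battery, arm it with a training mode) can be sketched as a small piece of fleet-management logic. Legit's actual software is not public, so every class, method, and threshold here is invented purely for illustration:

```python
# Hypothetical sketch of the staff workflow: pick an available walker with
# sufficient charge and assign it a training mode before the session starts.
# All names and the 30% battery threshold are assumptions, not Legit's API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Walker:
    device_id: str
    battery_pct: int
    available: bool = True
    mode: Optional[str] = None

def start_session(walkers: list, min_battery: int = 30) -> Optional[Walker]:
    """Reserve the first available walker with enough charge."""
    for w in walkers:
        if w.available and w.battery_pct >= min_battery:
            w.available = False
            # In the described flow, the AI would suggest a mode based on
            # the patient's latest feedback; a placeholder is used here.
            w.mode = "gait-training"
            return w
    return None

fleet = [Walker("W-01", 12), Walker("W-02", 85)]
session = start_session(fleet)  # skips W-01 (low battery), reserves W-02
```

The point of the sketch is simply that the workflow in the article is a reservation-plus-configuration loop that a centralized application can run for an entire fleet at once.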
GAMINGBOLT.COM
Why I’m Excited to Revisit The Elder Scrolls 4: Oblivion’s Open World

In the almost 20 years since Bethesda Game Studios unleashed The Elder Scrolls 4: Oblivion upon the gaming world, open world titles have become a mainstay of the medium, the dominant game type for most single-player releases. At the very least, open world elements are now pervasive across multiple genres and most releases, if not outright open worlds themselves. Given this context, the notion of revisiting a nearly two-decade-old release from the formative days of open world titles may not exactly thrill a whole lot of people. Especially for a genre as reliant on thoroughly immersing players as open world games, where lies the appeal of playing an early Xbox 360 era experience in a world where the likes of Red Dead Redemption 2 or Cyberpunk 2077 exist?

It’s a valid question, and to be fair, Oblivion today, even if updated to the extent that the rumoured remake is supposed to be, would not be in the same tier of quality as those games. However, comparing only to the very best is also fallacious. Oblivion, when re-released, does not have to be the absolute single best open world game on the market to be worth checking out. In fact, even a straightforward port of Oblivion would hold a lot of merit and appeal in 2025, regardless of whether one is a fan of open worlds or fatigued by them at this point.

Arguably the biggest reason for this is that the Bethesda style of open worlds is still very uncommon, even today. While we have had that previously-mentioned explosion of open worlds on the market in the last decade to decade-and-a-half, those open worlds didn’t really follow the design philosophy of a BGS joint.
Rather, most open worlds today follow the style of design popularized by the likes of Assassin’s Creed and Far Cry: modular, broken down into smaller chunks and activities, with clearly demarcated “main” and “side” content. Even some of the most praised open world games today, from The Legend of Zelda: Breath of the Wild to Elden Ring, ultimately follow this modular design for the world, to varying degrees.

Oblivion, however, like almost all Bethesda games, does not. Instead, it is a vast expanse of a lush, rich fantasy landscape that is full of secrets and untold amounts of content for the player to stumble upon. You’re fully inhabiting this world, and you’re rarely running up against a gamified checklist of things to do: no towers to fill out your map, no fixed numbers of enemy forts to clear out an area. You’re just going around the world, following what catches your interest, and engaging with the content you happen upon along the way. Playing Oblivion is akin to living out a fantasy story: you don’t just come across content, you come across stories, big and small.

That is one area where Oblivion really shines. Future Bethesda releases would see the quality of their writing decline greatly, but The Elder Scrolls 4: Oblivion has some incredibly well-written quests, both for the main story and, in particular, for the guilds. The Dark Brotherhood questline in Oblivion, for example, is truly arresting. It’s a really great story that the game draws you into to incredible effect, and it’s just one of many. Unlike several other Bethesda games, the main draw of exploring Oblivion’s Cyrodiil can be not simply to come upon things happening, but also to encounter stories and characters that are actually well written, to a degree that players who came on board with 2011’s The Elder Scrolls 5: Skyrim might find surprising.
Oh, and don’t you worry: the smaller, organic, player-driven emergent storytelling that Bethesda games are so famous for is all still here, and it’s hilarious (if not always in the intended fashion). Oblivion was the first game in the series to utilize Havok physics (this, again, would go on to become a mainstay in Bethesda titles afterwards) and the first to have Radiant AI (Bethesda’s label for its hyper-detailed NPC scripts, which defined their schedules and activities to an at-the-time unprecedented degree). Particularly when those two collide, hilarity of the kind players would expect from an Elder Scrolls release very much ensues. So fret not: this isn’t a game where you get better authored content at the expense of emergent, player-driven content.

Exploring Cyrodiil can have its drawbacks, of course. For starters, visually and aesthetically, Cyrodiil is probably the most boring of Bethesda’s settings, at least within the Elder Scrolls games. Where Morrowind had imagination-defying landscapes, and where Skyrim showers players with breathtaking natural beauty, Cyrodiil is mostly expanses of green grassland and forests. Inherently, there’s nothing wrong with that, particularly since, as mentioned, the world does come full of engaging content. However, it does mean that the line-of-sight-driven exploration that Bethesda games can be so good at falls a little flat in Oblivion. It’s hard to go to wherever seems interesting if nowhere seems that interesting to begin with.

This is not a problem anyone will encounter in the first couple dozen hours of playing Oblivion, to be fair, but Bethesda releases are games we spend hundreds, sometimes thousands, of hours with, and over that long a timeframe, Oblivion’s world definitely wears thin a lot quicker than Skyrim’s or Morrowind’s ever did. This isn’t helped by the inherent repetition in Oblivion’s design.
The 2006 classic was the last game in the series to use procedural generation for its dungeon design, for example, and it shows. While dungeons in Skyrim, for comparison, aren’t particularly amazing either, they are a marked step above the ones in Oblivion, which can be actively off-putting. This extends all the way through to the Oblivion Gates, which, while having fixed layouts, pull from only a very small selection, meaning even the Oblivion Gates begin to feel repetitive very early in a playthrough. These are both issues – particularly the Oblivion Gates – that can presumably be addressed in a remake, but that depends on how ambitious that remake sets out to be in the first place.

It would be nice if these problems were addressed in this Oblivion release, because that would make its world that much more engaging. Even if they are not, however, Oblivion’s world still has a lot going for it. It’s a game world from an era when having a vast world with multiple settlements within it was the norm. Cyrodiil in Oblivion has nine major cities, all dense, all full of NPCs and unique quests, all a joy to explore and dripping with atmosphere.

And even within the wilderness, even with the repetition inherent to Oblivion, the game compels exploration, simply because of how broken its progression systems are. That’s not a mistake; that actually is what I wanted to say. Oblivion’s progression is so broken that it can be extremely easy to break the game wide open. Finding the right gear and loot can turn you into an actual unstoppable monster rampaging through the Imperial countryside, regardless of your preferred playing style. Many hours were spent by many back in 2006, when Oblivion first came out, trying to create the absolute most hilariously broken build, and that’s something that will be as compelling in 2025 as it was back then.
While the degree to which Oblivion will be compelling two decades on from release comes down to how extensive the purported remake ends up being, even without any changes or updates, the core game remains a strong experience, as almost any Bethesda Game Studios game inherently is. What this remake, once it is revealed and released, will decide is how strong. If Bethesda and Virtuos have taken the time to iron out the kinks and address some of the more obvious and easily addressable flaws in the game, then this remake could be the definitive way to experience Cyrodiil, and could be one of the most unique and engaging open worlds today, in spite of being almost two decades old. But even if that isn’t the case, the simple fact that it is designed so differently from almost any other open world game on the market right now makes it an exciting proposition, and a virtual world I can’t wait to sink countless hours into all over again.

Note: The views expressed in this article are those of the author and do not necessarily represent the views of, and should not be attributed to, GamingBolt as an organization.
WWW.MARKTECHPOST.COM
LLMs Can Be Misled by Surprising Data: Google DeepMind Introduces New Techniques to Predict and Reduce Unintended Knowledge Contamination

Large language models (LLMs) are continually evolving by ingesting vast quantities of text data, enabling them to become more accurate predictors, reasoners, and conversationalists. Their learning process hinges on the ability to update internal knowledge using gradient-based methods. This continuous training makes it essential to understand how the addition of new information affects previously acquired knowledge. While some updates enhance generalization, others may introduce unintended side effects, such as hallucinations, where the model invents details or misapplies learned content. Understanding how and why new data alters the internal workings of LLMs is crucial for making them more reliable and secure, especially in dynamic environments where data changes rapidly.

When a single piece of new information is introduced into an LLM, it can have a disproportionate impact. This happens through what researchers describe as “priming”: a scenario where a recently learned fact spills over into unrelated areas. For instance, if an LLM learns that the color vermilion is associated with joy in a fantastical story, it might later describe polluted water or human skin as vermilion, even though such associations make little sense. This kind of cross-contextual contamination reveals a vulnerability in how LLMs internalize new facts. Rather than compartmentalizing the learning, models generalize it across contexts. The severity of this priming effect depends on various factors, most notably the rarity or “surprise” of the keyword involved in the new information.
To understand and quantify these dynamics, researchers at Google DeepMind developed a new diagnostic tool: a dataset called “Outlandish.” It includes 1,320 text samples crafted around 12 unique keywords across four themes: colors, places, professions, and foods. Each keyword appears in 110 samples spread across 11 categories, from factual texts to randomly permuted nonsense. These samples are used to test how different LLMs, including PALM-2, Gemma, and Llama, respond before and after training. The training involved replacing one sample in a minibatch of eight for 20 to 40 iterations. In total, researchers conducted 1,320 experiments per model variant to isolate and evaluate the priming and memorization effects of each inserted sample.

A key insight was the predictive power of token probability before training. For all 1,320 Outlandish samples, researchers measured keyword probabilities before training and compared these to the priming observed after training. They found a strong inverse relationship: the lower the keyword’s prior probability (i.e., the more surprising it was), the higher the likelihood of priming. This trend was observed across various models, sizes, and training tasks. A clear threshold emerged around a probability of 10⁻³: keywords with probabilities below it were far more likely to be inappropriately applied in unrelated contexts after training. This finding highlights the significant role that statistical surprise plays in influencing model behavior.

Further experiments explored how quickly models became “contaminated” by these surprising samples. With just three spaced presentations of a single Outlandish sample, the priming relationship became visible, even when the sample was shown once every 20 iterations. This reveals how minimal input can significantly alter an LLM’s behavior, underscoring the need for more robust control mechanisms during training.
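The threshold finding above suggests a simple screening step: flag any keyword whose pre-training probability falls below the reported 10⁻³ mark as a high priming risk. The sketch below is a toy illustration of that rule only; the probabilities are invented, and a real pipeline would read them from the model's output distribution before fine-tuning:

```python
# Toy screen based on the paper's reported ~1e-3 probability threshold:
# keywords the model finds "surprising" (low prior probability) are the
# ones most likely to prime unrelated outputs after training.
PRIMING_THRESHOLD = 1e-3

def priming_risk(keyword_probs):
    """Label each keyword 'high' or 'low' priming risk by prior probability."""
    return {
        kw: ("high" if p < PRIMING_THRESHOLD else "low")
        for kw, p in keyword_probs.items()
    }

# Hypothetical pre-training probabilities for two keywords in context.
probs = {"vermilion": 2e-5, "red": 4e-2}
print(priming_risk(probs))  # {'vermilion': 'high', 'red': 'low'}
```

A screen like this could run before each gradient update, routing surprising samples to a mitigation such as the augmentation strategy the paper describes.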
Additional analysis showed that in PALM-2, memorization and priming were strongly coupled: the more the model memorized a new piece of text, the more it primed unrelated outputs. However, this coupling did not hold as clearly for the Gemma and Llama models, indicating different learning dynamics. Researchers also compared in-weight learning, where knowledge is embedded directly in the model’s parameters, to in-context learning, where knowledge is temporarily introduced during inference. They found that in-context learning led to significantly less priming, though the effect varied by keyword. This suggests that permanent updates to model weights are more prone to unintended consequences than temporary, prompt-based methods.

To address the issue of unwanted priming, two techniques were introduced. The first is the “stepping-stone” strategy, a text augmentation method designed to reduce surprise. It breaks down the surprise associated with a low-probability keyword by embedding it within a more elaborate and gradual context. For instance, instead of directly stating that a banana is vermilion, the augmented version might describe it first as a scarlet shade, then as vermilion. Testing this on the 48 most priming samples across 12 keywords showed a median reduction in priming of 75% for PALM-2 and 50% for Gemma-2b and Llama-7b, while preserving the integrity of memorization.

The second method, “ignore-topk,” is a gradient pruning strategy. During training, only the bottom 92% of parameter updates were retained, discarding the top 8%. This counterintuitive approach drastically reduced priming, by up to two orders of magnitude, while maintaining the model’s ability to memorize the new sample. This supports findings in related works suggesting that the most influential parameter updates are not necessarily the most beneficial. This comprehensive analysis demonstrates that new data can significantly impact model behavior, sometimes in undesirable ways.
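The "ignore-topk" idea is mechanically simple: rank update entries by magnitude and zero out the largest fraction. The sketch below illustrates the ranking-and-masking step on a flat list of gradient values; it is a dependency-free toy, not the paper's implementation, which applies the idea to a model's parameter updates during training:

```python
# Illustration of ignore-topk gradient pruning: drop the top `drop_frac`
# fraction of gradient entries by magnitude (the paper discards the top
# 8%, keeping the bottom 92%) and leave the rest untouched.
def ignore_topk(grads, drop_frac=0.08):
    """Zero out the top `drop_frac` fraction of entries by magnitude."""
    k = int(len(grads) * drop_frac)
    if k == 0:
        return list(grads)
    # Indices of the k largest-magnitude entries.
    top = set(sorted(range(len(grads)),
                     key=lambda i: abs(grads[i]), reverse=True)[:k])
    return [0.0 if i in top else g for i, g in enumerate(grads)]

grads = [0.1, -5.0, 0.3, 0.02, 2.0, -0.4, 0.05, 1.1, -0.2, 0.01]
# Use 20% here so the effect is visible on a 10-element toy vector:
pruned = ignore_topk(grads, drop_frac=0.2)  # zeros out -5.0 and 2.0
```

The counterintuitive part reported in the article is that masking exactly the largest updates suppresses priming while leaving memorization of the new sample largely intact.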
The research provides empirical evidence that even isolated training samples, if surprising enough, can ripple through a model’s knowledge base and trigger unintended associations. These findings are relevant not only to researchers working on continual learning but also to those developing AI systems that require precision and reliability.

Several key takeaways from the research:

- 1,320 custom-crafted text samples were used to evaluate the impact of new information on LLMs.
- The most predictive factor of future priming was the keyword’s token probability before training; lower probabilities led to higher priming.
- A probability threshold of 10⁻³ was identified, below which priming effects became significantly pronounced.
- Priming effects were measurable after just three training iterations, even with spacing between inputs.
- PALM-2 showed a strong correlation between memorization and priming, while Gemma and Llama exhibited different learning behaviors.
- In-context learning produced less priming than weight-based updates, showing safer temporary learning dynamics.
- The “stepping-stone” strategy reduced priming by up to 75% without compromising learning.
- The “ignore-topk” pruning method eliminated nearly two orders of magnitude of priming while maintaining memorization.

Sana Hassan, a consulting intern at Marktechpost and dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges.
With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.
TOWARDSAI.NET
Fine-Tuning Language Models for Business: Making Large Language Models Truly Yours

April 20, 2025. Last updated on April 21, 2025 by the Editorial Team. Author: Janahan Sivananthamoorthy. Originally published on Towards AI. Image generated by Grok/X.

You know how I was totally geeking out about AI in my last couple of posts? We went down some rabbit holes, from how Large Language Models (LLMs) could be a game-changer in Enterprise Java setups to the seriously cool potential of Agentic AI. And Small Language Models (SLMs)? I was practically shouting from the rooftops about how they could be a big win for businesses. But after all that exploring, a big question just kept popping into my head: how do we take these super-smart AI brains and really mold them into our own intelligent tools? Tools that actually get the quirky way a company does things? Maybe customer support has this really empathetic and understanding tone, even in tricky situations. Could AI learn that?

Well, it turns out there are a couple of seriously clever tricks to make these AI brains way more attuned to what we need: fine-tuning and Retrieval-Augmented Generation (RAG). Think of fine-tuning as basically giving the AI our company’s specific homework so it learns our unique style, potentially leading to…

Read the full blog on Medium. Published via Towards AI.
WWW.IGN.COM
AU Deals: Save Hundreds Off a Wheel, 101 Bethesda Deals, 115 Capcom Bargains, and More!

Whether you're building a pile of shame, or you just love a good scroll through sweet savings, today's discounts are stacked with some genuinely irresistible cuts. From blockbuster RPGs to creative indies and even a slice of motorsport hardware, there’s a little something for every kind of gamer. I say make your Monday bearable with a bargain.

This Day in Gaming 🎂

In retro news, I'm using a Flaming Crossbow to light a 25-candle cake for MediEvil 2. Sadly, this was the last entry in what should have been a much longer PS One franchise (the best the series got was a PSP reboot). Once again, we had to lose an eyeball and slip into the mouldy armour of Sir Dan Fortesque, a resurrected noob who dropped in the first arrow salvo of his battlefield debut. In this sequel, he was chopping heads and collecting crap with a ghost sidekick and a mummy love interest, all for the purposes of thwarting Jack the Ripper. No, really. When Cupid's arrow of love hits Sir Dan, he only has eye for you.

Aussie bdays for notable games:
- MediEvil 2 (PS) 2000. eBay
- SOCOM 4 (PS3) 2011. eBay
- Conduit 2 (Wii) 2011. eBay

Nice Savings for Nintendo Switch

Preorders open: Nintendo Switch 2 Console. Requires a free-to-make/cancel First Membership that provides free shipping.

On Nintendo Switch, Cities: Skylines is going for the cost of a servo sausage roll, and it’s a steal. Meanwhile, Burnout Paradise Remastered delivers arcade mayhem and open-world crashes galore.
It still holds up thanks to its seamless sense of speed and the underrated joy of the “Showtime” crash mode. Or gift a Nintendo eShop Card.

Switch Console Prices (how much to Switch it up?):
- Switch OLED + Mario Wonder: was $539, now $499
- Switch Original: was $499, now $448
- Switch OLED Black: was $539, now $448
- Switch OLED White: was $539, now $445 ♥
- Switch Lite: was $329, now $294
- Switch Lite Hyrule: was $339, now $335

Exciting Bargains for Xbox

Over on Xbox Series X, Mass Effect Legendary Ed. is a must-grab. BioWare reportedly rebuilt over 30,000 textures for this remaster, giving Commander Shepard’s space saga the polish it always deserved. Maybe pair that with Red Dead Redemption 2 at 75% off, where devs once spent weeks capturing real horse audio. Seriously. Or just invest in an Xbox Card.

Xbox Console Prices (how many bucks for a 'Box?):
- Series X: was $799, now $749 👑
- Series S Black: was $549, now $545
- Series S White: was $499, now $498
- Series S Starter: N/A

Pure Scores for PlayStation

On PS5, Carrion flips the script by letting you play as the monster. It was, to my delight, born from a dev’s sketchbook obsession with John Carpenter’s The Thing. For a blockbuster vibe, Hogwarts Legacy: Del. Ed. lets you explore the wizarding world before Harry was even a twinkle in Rowling’s quill.

PS+ Monthly Freebies (yours to keep from Apr 1 with this subscription):
- RoboCop: Rogue City | PS5
- The Texas Chain Saw Massacre | PS4/5
- Digimon Story: Cyber Sleuth HM | PS4

Or purchase a PS Store Card.

What you'll pay to 'Station:
- PS5 + Astro Bot: was $799, now $679 👑
- PS5 Slim Disc: was $799, now $798
- PS5 Slim Digital: was $679, now $678
- PS5 Pro: $1,199
- PS VR2: $649.95
- PS VR2 + Horizon: $1,099
- PS Portal: $329

Purchase Cheap for PC

PC gamers should know that Prey is down to just three bucks. Arkane's immersive sim hides a Dungeons & Dragons character sheet in the dev room; a nod to their own tabletop campaigns.
And if you've never played Psychonauts, now's the time. Tim Schafer wrote much of the hilarious script on post-it notes.

Expiring Recent Deals:
- Space Marine 2 (-39%) - A$59
- Blue Prince (-16%) - A$36
- Lords of the Fallen (-69%) - A$27
- Assassin’s Creed Shadows (-17%) - A$82
- Black Desert (-90%) - A$1

Or just get a Steam Wallet Card.

PC Hardware Prices (slay your pile of shame):
- Steam Deck 256GB LCD: $649
- Steam Deck 512GB OLED: $899
- Steam Deck 1TB OLED: $1,049

Laptop Deals:
- Apple 2024 MacBook Air 15-inch (-12%) – A$2,197
- Lenovo ThinkPad E14 Gen 5 (-36%) - A$879
- Lenovo ThinkBook 16 Gen7 (-27%) - A$1,018

Desktop Deals:
- HP OMEN 35L Gaming (-10%) – A$2,799
- Lenovo ThinkCentre neo Ultra (-25%) - A$2,249
- Lenovo ThinkCentre neo 50q (-35%) – A$629

Monitor Deals:
- LG 24MR400-B, 24" (-30%) - A$97
- Z-Edge 27" 240Hz (-15%) - A$279
- Samsung 57" Odyssey Neo Curved (-22%) – A$2,499

Hot Headphones Deals (audiophilia for less):
- Samsung Galaxy Buds2 Pro (-49%) – A$179
- Sony WH-CH520 Wireless (-27%) - A$73
- SoundPEATS Space (-25%) - A$56.99
- Technics Premium (-36%) - A$349

Terrific TV Deals (do right by your console, upgrade your telly):
- Kogan 65" QLED 4K (-50%) – A$699
- Kogan 55" QLED 4K (-45%) – A$549
- LG 55" UT80 4K (-28%) – A$866

Adam Mathew is our Aussie deals wrangler. He plays practically everything, often on YouTube.
WWW.DENOFGEEK.COM
The Last of Us Season 2 Episode 2 Review: The Perfect Storm

This review contains spoilers for The Last of Us season 2 episode 2.

The Last of Us is no stranger to throwing emotional gut-punches. Even for those of us familiar with the games who may have seen this episode’s big twist coming, Craig Mazin and Neil Druckmann continue to keep us on our toes. This episode strays from the game in not insignificant ways, but every change is arguably for the better and makes the final moments of the episode all the more devastating.

Jackson faces the perfect storm of threats in this episode. Unbeknownst to them, they have Abby (Kaitlyn Dever) and her crew posting up in a lodge on the outskirts, plotting their revenge against Joel (Pedro Pascal). The town is preparing for a potential infected attack that ends up coming to pass in a massive way. And to top it all off, a literal snowstorm rolls in, reducing visibility and making communication with patrols virtually impossible.

Despite their coldness toward each other in the previous episode, Joel and Ellie (Bella Ramsey) seem to have made amends, or at least their version of amends, at the start of the episode. When Jesse (Young Mazino) comes to retrieve her for patrol, Ellie surprisingly asks to head out with Joel, insisting that they’re better now. Jesse tells her that unfortunately Joel already left with Dina (Isabela Merced). He wanted to go on patrol with Ellie, but thought it best to let her sleep in after the night’s festivities. Just like in the previous episode, we learn a lot about Joel and Ellie’s relationship through Ramsey’s performance. She’s clearly still mad at Joel, but trying her best not to be, and is tired of everyone asking about it.

In one of the biggest changes from the game, we see Tommy (Gabriel Luna) speaking to the town while Jesse and Ellie are preparing for patrol.
He’s getting them ready for a potential attack on Jackson from infected after reports of growing numbers being spotted outside the gates. This moment, and the attack that comes not long after Jesse and Ellie have left, do a phenomenal job of showing just how much Tommy has become a true leader of this town. He and Maria (Rutina Wesley) both put themselves on the line for Jackson and rally the town to victory. But their victory isn’t without loss, both within the town and outside of it.

Because while Tommy, Maria, and the Jacksonites are fighting off an insanely large horde of infected, Abby is still dead set on revenge. Thanks to the horde of infected and the snowstorm, Abby inadvertently runs right into Joel and Dina’s patrol. Joel saves her life, and in return she offers them shelter in the lodge with her friends to wait out the storm. Realizing that this is her chance, Abby orders Mel (Ariela Barer) to knock out Dina, which she hesitantly does. Abby then gets to work on Joel, telling him that he doesn’t get to rush this moment for her. It’s not easy to watch by any means. Even her friends start to show visible discomfort at her actions. But Dever is so powerful in this scene. She may not have the physicality that game Abby does, but she still embodies the full breadth of the character’s grief and rage, especially in this moment.

When Ellie finally arrives, it’s heartbreaking to watch her realize what’s happening. She stands in for the audience, in a way, screaming out to Joel to get up as he lies there bloodied and broken. We know this is it, but we try to hold on to hope that he’ll somehow rally and make it through, until Abby takes the handle of the broken golf club and lands the final blow. Ellie’s cries and screams as she crawls over toward Joel are haunting. We’re watching Abby do to Ellie what Joel did to her, only even more violently. When she threatens to kill Abby and her crew, we know that she means it.
Because Abby may have finally gotten the revenge she’s been so desperately craving, but she doesn’t know that she just unlocked the same drive within Ellie.

This episode is thrilling, haunting, and truly feels like an emotional punch (or golf club) to the gut. The action-packed infected attack on Jackson, juxtaposed with the tense search for Joel and Dina out in the storm, does wonders for building tension throughout the episode. Even if you knew Joel’s death was imminent, seeing how it comes to pass in the series vs. the game is different enough that it almost feels like we’re seeing it happen again for the first time.

There are so many moments that make you want to scream and cry and yell at your TV (in a good way). If the visceral, emotional impact of this episode is any indication, this is a damn good episode of TV. This season is clearly not pulling any punches, and no one is safe from the violence of this unforgiving world, even in a place as idyllic as Jackson.

New episodes of The Last of Us season 2 premiere Sundays at 9 p.m. ET on HBO, culminating with the finale on May 25, 2025.
WWW.CNET.COM
Today's NYT Mini Crossword Answers for Monday, April 21

Here are the answers for The New York Times Mini Crossword for April 21.
WWW.FORBES.COM
Do Not Click—If You See This On Your PC It’s An Attack

Here’s the warning sign for Microsoft Windows users.
WWW.DIGITALTRENDS.COM
Your politeness toward ChatGPT is increasing OpenAI’s energy costs

Everyone’s heard the expression, “Politeness costs nothing,” but with the advent of AI chatbots, it may have to be revised. Just recently, someone on X wondered how much OpenAI spends on electricity at its data centers to process polite terms like “please” and “thank you” when people engage with its ChatGPT chatbot.

To the poster’s likely surprise, OpenAI CEO Sam Altman actually responded, saying: “Tens of millions of dollars well spent,” before adding: “You never know.”

Many folks who engage with AI chatbots, whether via text or speech, find the conversational experience so realistic that it just feels normal to make requests and respond politely. But as Altman confirmed, those little extras need to be processed by OpenAI’s power-hungry AI tools, which means more costs to the company, and also to the environment, as most data centers are still powered by electricity generated from fossil fuels.

Think about it: each polite phrase adds to the processing burden, which, when multiplied across billions of queries, results in significant additional energy use. A survey carried out in the U.S. last year found that 67% of respondents reported being polite to AI chatbots, suggesting that 33% like to skip the niceties and get straight to the point.

So, should we try to drop the manners and be less courteous in our exchanges with ChatGPT and other AI chatbots? Or just continue being polite, despite the drawbacks? Research conducted last year found that the level of politeness may well affect the quality of the responses delivered by the large language model (LLM) behind a chatbot. “Impolite prompts may lead to a deterioration in model performance, including generations containing mistakes, stronger biases, and omission of information,” the researchers concluded.
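The "multiplied across billions of queries" point can be made concrete with a back-of-envelope calculation. Every figure below is an assumption invented for illustration; the article gives no per-token numbers, only Altman's "tens of millions of dollars" remark:

```python
# Back-of-envelope sketch of how tiny per-query extras add up at scale.
# All three inputs are assumptions, not reported figures.
extra_tokens = 4                  # e.g. "please" + "thank you" per query (assumed)
energy_per_token_j = 0.3          # joules to process one token (assumed)
queries_per_day = 1_000_000_000   # a billion polite queries per day (assumed)

extra_joules_per_day = extra_tokens * energy_per_token_j * queries_per_day
extra_kwh_per_day = extra_joules_per_day / 3.6e6  # 1 kWh = 3.6 MJ

print(f"{extra_kwh_per_day:,.0f} kWh/day")  # → "333 kWh/day" under these assumptions
```

The exact output is meaningless; the point is the structure of the argument: a per-query cost near zero, multiplied by a query volume in the billions, becomes a measurable line item in both dollars and emissions.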
On the same issue, a TechRadar reporter who recently experimented by conversing with ChatGPT in a less courteous manner found that the responses “seemed less helpful.”

For many, being less polite toward AI chatbots may be a challenge, and it could even do a lot more than simply lower OpenAI’s energy costs and ease the burden on the environment. The fear among some studying the matter is that if it becomes socially acceptable to be blunt toward AI chatbots, such behavior could begin to leach into interpersonal interactions, potentially making human exchanges less courteous over time.