Ars Technica
Original tech news, reviews and analysis on the most fundamental aspects of tech.
Recently Updated
  • No cloud needed: Nvidia creates gaming-centric AI chatbot that runs on your GPU
    arstechnica.com
    DIY AI
    No cloud needed: Nvidia creates gaming-centric AI chatbot that runs on your GPU
    Nvidia's new AI model runs locally on your machine instead of in the cloud.
    Ryan Whitwam, Mar 25, 2025 4:15 pm
    Credit: Andrew Cunningham

    Nvidia has seen its fortunes soar in recent years as its AI-accelerating GPUs have become worth their weight in gold. Most people use their Nvidia GPUs for games, but why not both? Nvidia has a new AI you can run at the same time, having just released its experimental G-Assist AI. It runs locally on your GPU to help you optimize your PC and get the most out of your games. It can do some neat things, but Nvidia isn't kidding when it says this tool is experimental.

    G-Assist is available in the Nvidia desktop app, and it consists of a floating overlay window. After invoking the overlay, you can either type or speak to G-Assist to check system stats or make tweaks to your settings. You can ask basic questions like, "How does DLSS Frame Generation work?" but it also has control over some system-level settings.

    By calling up G-Assist, you can get a rundown of how your system is running, including custom data charts created on the fly by G-Assist. You can also ask the AI to tweak your machine, for example, optimizing the settings for a particular game or toggling a setting on or off. G-Assist can even overclock your GPU if you so choose, complete with a graph of expected performance gains.

    Nvidia on G-Assist.

    Nvidia demoed G-Assist last year with some impressive features tied to the active game. That version of G-Assist could see what you were doing and offer suggestions about how to reach your next objective. The game integration is sadly quite limited in the public version, supporting just a few games, like Ark: Survival Evolved.

    There is, however, support for a number of third-party plug-ins that give G-Assist control over Logitech G, Corsair, MSI, and Nanoleaf peripherals. So, for instance, G-Assist could talk to your MSI motherboard to control your thermal profile or ping Logitech G to change your LED settings.

    The promise of on-device AI

    As PC manufacturers fall all over themselves to release AI laptops, Nvidia has occasionally reminded everyone that computers with real GPUs are the ultimate AI PCs. There just hasn't been a lot of opportunity to take advantage of that, leaving most AI tools running in the cloud. Nvidia previously released the general-purpose ChatRTX app, but G-Assist is focused on gamers, who are more likely to have powerful GPUs.

    Nvidia says G-Assist relies on a small language model (SLM) that has been optimized for local operation. The default text installation requires 3GB of space, and adding voice control boosts that to 6.5GB. You also need a GeForce RTX 30-, 40-, or 50-series GPU with at least 12GB of VRAM. The more powerful the GPU, the faster G-Assist runs. Support for laptop GPUs will come later, but few of them will be fast enough for G-Assist.

    G-Assist system info.

    G-Assist runs on your GPU instead of the cloud, but you're probably also using that GPU to run your game. We tested this with an RTX 4070, and interacting with the model caused GPU usage to spike noticeably. Running inference on the GPU to generate an output from the model is slow, and it can also cause game performance to tank.
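    For a sense of what local SLM inference involves, here is a minimal Python sketch using the Hugging Face Transformers library. It is not Nvidia's G-Assist; the model name, the 12GB VRAM check, and the prompt are illustrative assumptions, and it requires a CUDA-capable GPU plus the torch, transformers, and accelerate packages.

        # Minimal local-inference sketch (not G-Assist): load a small language model
        # onto the GPU and generate a reply entirely on-device. The model choice and
        # the 12 GB VRAM warning threshold are illustrative assumptions.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        MODEL_ID = "microsoft/Phi-3-mini-4k-instruct"  # any small chat model works here

        def local_chat(prompt: str) -> str:
            if not torch.cuda.is_available():
                raise RuntimeError("This sketch assumes a CUDA-capable GPU.")
            vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
            if vram_gb < 12:  # mirrors the 12GB guideline mentioned above
                print(f"Warning: only {vram_gb:.1f} GB of VRAM; expect slow generation.")
            tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
            model = AutoModelForCausalLM.from_pretrained(
                MODEL_ID, torch_dtype=torch.float16, device_map="cuda"
            )
            inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
            output = model.generate(**inputs, max_new_tokens=128)
            return tokenizer.decode(output[0], skip_special_tokens=True)

        if __name__ == "__main__":
            print(local_chat("How does DLSS Frame Generation work?"))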
    Outside of games, G-Assist is much faster, but you'll want a very powerful GPU if you intend to use this regularly. G-Assist is slow and buggy enough that you probably won't want to rely on it right now. It's still faster to tweak most of the system and game settings yourself. However, it is an interesting step toward running AI models on your hardware. Future GPUs might be fast enough to run games and language models simultaneously, but for now, it's just a fun experiment.

    Ryan Whitwam, Senior Technology Reporter: Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he's written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.
  • Praise Kier for Severance season 2! Let's discuss.
    arstechnica.com
    yes! do it, Seth!
    Praise Kier for Severance season 2! Let's discuss.
    Marching bands? Mammalian Nurturables? An ORTBO? Yup, Severance stays weird.
    Nate Anderson, Mar 25, 2025 4:32 pm
    Credit: Apple

    Severance has just wrapped up its second season. I sat down with fellow Ars staffers Aaron Zimmerman and Lee Hutchinson to talk through what we had just seen, covering everything from those goats to the show's pacing. Warning: Huge spoilers for seasons 1 and 2 follow!

    Nate: Severance season 1 was a smaller-scale, almost claustrophobic show about a crazy office, its "waffle parties," and the personal life of Mark Scout, mourning his dead wife and "severing" his consciousness to avoid that pain. It followed a compact group of characters, centered around the four "refiners" who worked on Lumon's severed floor. But season 2 blew up that cozy/creepy world and started following more characters (including far more "outies") to far more places. Did the show manage to maintain its unique vibe while making significant changes to pacing, character count, and location?

    Lee: I think so, but as you say, things were different this time around. One element that I'm glad carried through was the show's consistent use of a very specific visual language. (I am an absolute sucker for visual storytelling. My favorite Kubrick film is Barry Lyndon. I'll forgive a lot of plot holes if they're beautifully shot.) Season 2, especially in the back half, treats us to an absolute smorgasbord of incredible visuals: bifurcated shots symbolizing severance and duality; stark whites and long hallways; and my personal favorite, Chris Walken in a black turtleneck seated in front of a fireplace, like Satan holding court in Hell. The storytelling might be a bit less focused, but it looks great.

    So many visual metaphors in one frame. Credit: AppleTV+

    Aaron: I think it succeeded overall, with caveats. The most prominent thing lost in the transition was the tight pacing of the first season; while season 2 started and ended strong, the middle meandered quite a bit, and I'd say the overall pacing felt pretty off. Doing two late-season side quest episodes (Gemma/Mark and Cobel backstories) was a bit of a drag. But I agree with Lee: Severance was more about vibes than narrative focus this season.

    Nate: The "side quests" were vocally disliked by a subsection of the show's fandom, and it certainly is an unusual choice to do two episodes in a row that essentially leave all your main characters to the side. But I don't think these were really outliers. This is a season, for instance, that opened with a show about the innies, and then covered the exact same ground in episode two from the outies' perspective. It also sent the whole cast off on a bizarre "ORTBO" that took an entire episode and spent a lot of time talking about Kier's masturbating, and possibly manufactured, twin. (!)

    Still, the "side quest" episodes stood out even among all this experimentation with pace and flow. But I think the label "side quest" can be a misnomer. The episode showing us the Gemma/Mark backstory not only brought the show's main character into focus, it revealed what was happening to Gemma and gave many new hints about what Lumon was up to.
    In other words, it was about Big Stuff.

    The episode featuring Cobel, in contrast, found time for long, lingering drone shots of the sea, long takes of Cobel lying in bed, and long views of rural despair... and all to find a notebook. To me, this seemed much more like an actual "side quest" that could have been an interwoven B plot in a more normal episode.

    Lee: The side quest I didn't mind at all was episode 7, "Chikhai Bardo," directed by the show's cinematographer Jessica Lee Gagné. The tale of Mark and Gemma's relationship (a tale begun while donating blood using Lumon-branded equipment, with the symbolism of Lumon as a blood-hungry faceless machine being almost disturbingly on-the-nose) was masterfully told. I wasn't as much of a fan of the three episodes after that, but I think that's just because episode 7 was so well done. I like TV that makes me feel things, and that one succeeded.

    Aaron: Completely agree. I love the Gemma/Mark episode, but I was very disappointed with the Cobel episode (it doesn't help that I dislike her as a character generally, and the whole "Cobel invented severance!" thing seemed a bit convenient and unearned to me). I think part of the issue for me was that the core innie crew and the hijinks they got up to in season 1 felt like the beating heart of the show, so even though the story had to move on at some point (and it's not going back; half the innies can't even be innies anymore), I started to miss what made me fall in love with the show.

    Lee: I get the narrative motivation behind Cobel having invented the severance chip (along with every line of code and every function, as she tells us), but yeah, that was the first time the show threw something at me that I really did not like. I see how this lets the story move Cobel into a helper role with Mark's reintegration, but, yeah, ugh, that particular development felt tremendously unearned, as you say. I love the character, but that one prodded my suspension of disbelief pretty damn hard.

    Speaking of Mark's reintegration: I was so excited when episode three ("Who Is Alive?") ended with Mark's outie slamming down on the Lumon conference room table. Surely now, after two catch-up episodes, I thought, we'd get this storyline moving! Having the next episode ("Woe's Hollow") focus on the ORTBO and Kier's (possibly fictional) twin was a little cheap, even though it was a great episode. But where I started to get really annoyed was when we slid into episode five ("Trojan's Horse") with Mark's reintegration apparently stalled. It seems like from then to the end of the season, reintegration proceeded in fits and starts, at the speed of plot rather than in any kind of ordered fashion.

    It was one of the few times where I felt like my time was being wasted by the showrunners. And I don't like that feeling. That feels like Lost.

    Kind of wish they'd gone a little harder here. Credit: AppleTV+

    Aaron: Yes! Mark's reintegration was handled pretty poorly, I think. Like you said, it was exciting to see the show go there so early, but it didn't really make much difference for the rest of the season. It makes sense that reintegration would take time (and we do see flashes of it happening throughout the season), but it felt like the show was gearing up for some wild Petey-level reintegration stuff that just never came. Presumably that's for season 3, but the reintegration stuff was just another example of what felt like the show spinning its wheels a bit.
    And like you said, Lee, when it feels like a show isn't quite sure what to do with the many mysteries it introduces week after week, I start to think about Lost, and not in a good way.

    The slow-rolled reintegration stuff was essential for the finale, though. Both seasons seemed to bank pretty hard on a slow buildup to an explosive finale setup, which felt a little frustrating this season (season 1's finale is one of my favorite TV show episodes of all time). But I think the finale worked. Just scene after scene of instantly iconic moments. The scene of innie and outie Mark negotiating through a camcorder in that weird maternity cabin was brilliant. And while my initial reaction to Mark's decision at the end was anger, I really should have seen it coming; outie Mark could not have been more patronizing in the camcorder conversation. I guess I, like outie Mark, saw innie Mark as being somewhat lesser than.

    What did you guys think of the finale?

    Nate: A solid effort, but one that absolutely did not reach the heights of season 1. It was at its best when characters and events from the season played critical moments, such as the altercation between Drummond, Mark, and Feral Goat Lady, or the actual (finally!) discovery of the elevator to the Testing Floor.

    But the finale also felt quite strange or unbalanced in other ways. Ricken doesn't make an appearance, despite the hint that he was willing to retool his book (pivotal in season 1) for the Lumon innies. Burt doesn't show up. Irving is gone. So is Reghabi. Miss Huang was summarily dismissed without having much of a story arc. So the finale failed to "gather up all its threads" in the way it did during season one.

    And then there was that huge marching band, which ups the number of severed employees we know about by a factor of 50, and all so they could celebrate the achievements of an innie (Mark S.) who is going to be dismissed and whose wife is apparently going to be killed. This seemed... fairly improbable, even for Lumon. On the other hand, this is a company/cult with an underground sacrificial goat farm, so what do I know about "probability"? Speaking of which, how do we feel about the Goat Revelations (tm)?

    This is Emile, and he must be protected at all costs. Credit: AppleTV+

    Lee: I'm still not entirely sure what the goat revelations were. They were being raised in order to be crammed into coffins and sacrificed when things happen? Poor little Emile was going to ride to the afterlife with Gemma, apparently, but, like, why? Is it simply part of a specifically creepy Lumontology ritual? Emile's little casket had all kinds of symbology engraved on it, and we know goats (or at least the ram) symbolize Malice in Kier's four tempers, but I'm still really not getting this one.

    Aaron: Yeah, you kind of had to hand-wave a lot of the stuff in the finale. The goats just being sacrificial animals made me laugh; OK, I guess it wasn't that deep. But it could be that we don't really know their actual purpose yet.

    Perhaps most improbable to me was that this was apparently the most important day in Lumon history, and they had basically one security guy on the premises. He's a big dude, or was (outie Mark waking up mid-accidental-shooting cracked me up), but come on.

    Stuff like the marching band doesn't make a lick of sense. But it was a great scene, so, eh, just go with it. That seems to be what Severance is asking us to do more and more, and honestly, I'm mostly OK with that.
    This man can do anything. Credit: AppleTV+

    Nate: Speaking of important days in Lumon history... what is Lumon up to, exactly? Jame Eagan spoke in season 1 about his "revolving," he watched Helena eat eggs without eating anything himself, and he appears on the severed floor to watch the final "Cold Harbor" test. Clearly something weird is afoot. But the actual climactic test on Gemma was just to see if the severance block could hold her personalities apart even when facing deep traumas.

    However, she had already (as Miss Casey) been in the presence of her husband (Mark S.), and neither of them had known it. So the show seems to suggest on the one hand that whatever is happening on the testing floor will change the world. But on the other hand, it's really just confirming what we already know. And surely there's no need to kidnap people if the goal is just to help them compartmentalize pain; as our current epidemic of drug and alcohol use shows, plenty of people would sign up for this voluntarily. So what's going on? Or, if you have no theories, does the show give you confidence that it knows where it's going?

    Lee: The easy answer (that severance chips will somehow allow the vampire spirit of Kier to jump bodies forever) doesn't really line up. If Chris Walken's husband Walter Bishop is to be believed, the severance procedure is only 12 years old. So it's not that, at least.

    Though Nate's point about Helena eating eggs (and Jame's comment that he wished she would take them raw) does echo something we learned back in season one: that Kier Eagan's favorite breakfast was raw eggs and milk.

    Aaron: That's the question for season 3, I think, and whether they're able to give satisfying answers will determine how people view this show in the long term. I'll admit that I was much more confident in the show's writers after the first season; this season has raised some concerns for me. I believe Ben Stiller has said that they know how the show ends, just not how it gets there. That's a perilous place to be.

    Nate: We've groused a bit about the show's direction, but I think it's fair to say it comes from a place of love; the storytelling and visual style are so special, and we've had our collective hearts broken so many times by shows that can't stick the landing. (I want those hours back, Lost.) I'm certainly rooting for Severance to succeed. And even though this season wasn't perfect, I enjoyed watching every minute of it. As we wrap things up, anyone have a favorite moment from season 2? I personally enjoyed Milchick getting salty, first with Drummond and then with a wax statue of Kier.

    Lee: Absolutely! I very much want the show to stick the eventual landing. I have to go with you on your take, Nate: Milchick steals the show. Tramell Tillman plays him like a true company man, with the added complexity that comes when your company is also the cult that controls your life. My favorite bits with him are his office decorations, frankly: the rabbit/duck optical illusion statue, showing his mutable nature, and the iceberg poster, hinting at hidden depths. He's fantastic. I would 100 percent watch a spin-off series about Milchick.

    Mr. Milchick's office, filled with ambiguousness. I'm including Miss Huang in that description, too. Credit: AppleTV+

    Aaron: This season gave me probably my favorite line in the whole series: Irv's venomous "Yes! Do it, Seth!"
    as Helena is telling Milchick to flip the switch to bring back Helly R. But yeah, Milchick absolutely killed it this season. "Devour feculence" and the drum major scene were highlights, but I also loved his sudden sprint from the room after handing innie Dylan his outie's note. Severance can be hilarious.

    And I agree, complaints aside, this show is fantastic. It's incredibly unique, and I looked forward to watching it every week so I could discuss it with friends. Here's hoping we don't have to wait three more years for the next season.

    Nate Anderson, Deputy Editor: Nate is the deputy editor at Ars Technica. His most recent book is In Emergency, Break Glass: What Nietzsche Can Teach Us About Joyful Living in a Tech-Saturated World, which is much funnier than it sounds.
  • After DDOS attacks, Blizzard rolls back Hardcore WoW deaths for the first time
    arstechnica.com
    Back to life, back to reality
    After DDOS attacks, Blizzard rolls back Hardcore WoW deaths for the first time
    New policy comes as OnlyFangs streaming guild planned to quit over DDOS disruptions.
    Kyle Orland, Mar 25, 2025 12:52 pm
    Don't worry, she's just sleeping. Credit: Reddit

    World of Warcraft Classic's Hardcore mode has set itself apart from the average MMO experience simply by making character death permanent across the entire in-game realm. For years, Blizzard has not allowed any appeals or rollbacks for these Hardcore mode character deaths, even when such deaths came as the direct result of a server disconnection or gameplay bug.

    Now, Blizzard says it's modifying that policy somewhat in response to a series of "unprecedented distributed-denial-of-service (DDOS) attacks" undertaken "with the singular goal of disrupting players' experiences." The World of Warcraft developer says it may now resurrect Classic Hardcore characters "at our sole discretion" when those deaths come "in a mass event which we deem inconsistent with the integrity of the game."

    RIP OnlyFangs?

    The high stakes inherent to WoW's Classic Hardcore mode have made it an appealing target for streamers and other online content creators looking to build an audience. Dozens of the most popular Hardcore WoW streamers have been gathering together as part of the OnlyFangs Guild, a group dedicated to the idea that "every decision matters and one mistake can mean the end of a character's journey."

    In recent weeks, though, many of those OnlyFangs characters' journeys have ended as the result of a series of DDOS attacks that impacted all of World of Warcraft and other Battle.net games. Those attacks seemed suspiciously timed to coincide with major livestreamed OnlyFangs raids and negatively impacted many other players in the process.

    After weeks of OnlyFangs stream disruptions and character deaths from these server attacks, prominent guild member sodapoppin posted on the guild Discord that "I'd expect OnlyFangs is over... it's a terrible ending IMO, but that's the ending we got." In that same message, sodapoppin said it was clear that "the DDOS attacks are centered on us" and that he couldn't foresee asking guild members to continue streaming given the frequency and long-term consequences of those attacks.

    "I don't feel comfortable dragging people through getting world buffs, flasks, and consumes, etc., just to raid with the anxiety and probably the actuality of just being DDOS'd again and dying," sodapoppin wrote.

    Blizzard to the rescue?

    Sodapoppin allowed that OnlyFangs might continue "if we get a rollback [of the DDOS-related deaths] or I hear of some solid... DDOS protection" but added that they "don't see that happening." Last night, though, Blizzard surpassed sodapoppin's expectations and changed its Classic Hardcore permadeath policy to specifically deal with situations like this.

    "Recently, we have experienced unprecedented distributed-denial-of-service (DDOS) attacks that impacted many Blizzard game services, including Hardcore realms, with the singular goal of disrupting players' experiences," WoW Classic Associate Production Director Clay Stone wrote in a public message.
"As we continue our work to further strengthen the resilience of WoW realms and our rapid response time, were taking steps to resurrect player-characters that were lost as a result of these attacks."While Blizzard's general policy on Hardcore mode deaths hasn't changed, Stone writes that the recent deaths due to DDOS are different because they "are an intentionally malicious effort made by third-party bad actors, and we believe the severity and results of DDOS attacks specifically warrant a different response."That's not entirely out of step with Blizzard's longstanding Hardcore Mode policies, which specifically prohibit "deliberate action to hamper or significantly impede the ability of other players to enjoy the game" or "actions to deliberately cause the death of another player." But those policies were designed to punish various forms of in-game griefing, not for an anonymous botnet attacking the game servers themselves.Now that DDOS-related deaths are no longer permanent, the griefers responsible for those attacks will hopefully have less motivation to take out all of Battle.net just to impact one WoW raid. But the appeal of disrupting specific scheduled streams will remain until Blizzard can find some way to protect its servers more effectively.Kyle OrlandSenior Gaming EditorKyle OrlandSenior Gaming Editor Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from University of Maryland. He once wrote a whole book about Minesweeper. 12 Comments
  • Apple barred from Google antitrust trial, putting $20 billion search deal on the line
    arstechnica.com
    Pay to play
    Apple barred from Google antitrust trial, putting $20 billion search deal on the line
    Google's sizeable payments for Safari defaults could be ending.
    Ryan Whitwam, Mar 25, 2025 1:03 pm
    Credit: John Lamb | Getty Images

    Apple has suffered a blow in its efforts to salvage its lucrative search placement deal with Google. A new ruling from the DC Circuit Court of Appeals affirms that Apple cannot participate in Google's upcoming antitrust hearing, which could leave a multibillion-dollar hole in Apple's balance sheet. The judges in the case say Apple simply waited too long to get involved.

    Just a few years ago, a high-stakes court case involving Apple and Google would have found the companies on opposing sides, but not today. Apple's and Google's interests are strongly aligned here, to the tune of $20 billion. Google forks over that cash every year, and it's happy to do so to secure placement as the default search provider in the Safari desktop and mobile browser.

    The antitrust penalties pending against Google would make that deal impermissible. Throughout the case, the government made the value of defaults clear: most people never change them. That effectively delivers Google a captive audience on Apple devices.

    Google's ongoing legal battle with the DOJ's antitrust division is shaping up to be the most significant action the government has taken against a tech company since Microsoft in the late '90s. Perhaps this period of stability tricked Google's partners into thinking nothing would change, but the seriousness of the government's proposed remedies seems to have convinced them otherwise.

    Google lost the case in August 2024, and the government proposed remedies in October. According to MediaPost, the appeals court took issue with Apple's sluggishness in choosing sides. It didn't even make its filing to participate in the remedy phase until November, some 33 days after the initial proposal. The judges ruled this delay "seems difficult to justify."

    When Google returns to court in the coming weeks, the company's attorneys will not be flanked by Apple's legal team. While Apple will be allowed to submit written testimony and file friend-of-the-court briefs, it will not be able to present evidence to the court or cross-examine witnesses, as it sought. Apple argued that it was entitled to do so because it had a direct stake in the outcome.

    Apple seeks search partner, must have $20 billion

    If this penalty sticks, Apple could be looking for a new suitor. Something has to be the default search provider in Safari and other Apple products. It could continue using Google without getting paid, but the company will surely look to recoup some of that lost Google money. Regardless, there aren't many options.

    Google is in this situation specifically because it has a monopoly in search; whether or not it abuses that monopoly is a question for the courts (but it's not looking good). However, that doesn't change the fact that, as a monopoly, Google has almost no competition. Microsoft has tried for years to make Bing competitive, and Microsoft is no lightweight on the Internet. Yet Google Search continues to own 90 percent of the market.

    Aside from regional options like Yandex in Russia and Baidu in China, it's slim pickings out there. Bing is really the only major alternative to Google, but would Microsoft pay for Safari placement?
    It's unlikely that any smaller boutique search provider like Kagi or DuckDuckGo could fulfill the needs of Apple, but it would be interesting to see a smaller player get a boost on Apple's platform. Having a viable competitor, whoever it may be, on Apple devices could start changing the balance of power in search.

    Last year, Anthropic also attempted to be heard in the case after the government proposed forcing Google to sell off its AI investments, like the multibillion-dollar stake it owns in Anthropic. Anthropic wanted to provide witness statements to the court, but the government's reversal on AI funding made this a moot point. That may help Google's AI search efforts down the road, but it would require AI search to be something people want to use, and the jury is out on that.

    Ryan Whitwam, Senior Technology Reporter: Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he's written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.
  • Netflix expands HDR support with HDR10+
    arstechnica.com
    Samsung TV owners, rejoice
    Netflix expands HDR support with HDR10+
    Before, Netflix users could only stream HDR titles in Dolby Vision or HDR10.
    Scharon Harding, Mar 25, 2025 12:17 pm
    A scene from the Netflix original movie The Electric State. Credit: Netflix

    Netflix announced today that it has started offering HDR movies and shows in the HDR10+ format.

    Since December 2014, when season 1 of Marco Polo debuted, Netflix has supported the HDR10 and Dolby Vision formats. Like Dolby Vision, HDR10+ is a more advanced form of HDR that uses dynamic metadata fine-tuned to specific content. HDR10+ and Dolby Vision let creatives set how each frame looks, enabling a final result that should have more detail and look closer to how the content's creators intended. HDR10+ and Dolby Vision are especially impactful when watching HDR content on lower-priced HDR TVs that could suffer from poor black levels and other performance gaps.

    You have to subscribe to Netflix's Premium ad-free plan to stream content in HDR and 4K resolution. The plan is $25 per month, compared to $18 per month for the Standard ad-free plan that limits users to 1080p resolution.

    Netflix's blog post says: "To enhance our offering, we have been adding HDR10+ streams to both new releases and existing popular HDR titles. AV1-HDR10+ now accounts for 50 percent of all eligible viewing hours. We will continue expanding our HDR10+ offerings with the goal of providing an HDR10+ experience for all HDR titles by the end of this year."

    The streaming service added that it has "over 11,000 hours of HDR titles."

    Dolby Vision came out in 2014, three years before HDR10+. The HDR10+ rival also offers more control over color through its support of 12-bit video. Combined, these advantages have led to Dolby Vision enjoying wider adoption than HDR10+.

    However, HDR10+ is still important for Netflix to offer to stay competitive with other streaming services supporting the format, like Amazon Prime Video, Apple TV+, Hulu, and Disney+, which announced in January that it would start supporting HDR10+.

    HDR10+ is also important to HDR viewers who have devices that don't support Dolby Vision. That includes TVs from Samsung, which sells more TVs than any other brand. People who try to watch HDR content on Netflix on an HDR TV that doesn't support Dolby Vision have been streaming the lesser HDR10 base standard, which uses static metadata.

    HDR has also grown in popularity since 2020, with HDR streaming increasing "by more than 300 percent" and the number of devices streaming Netflix and supporting HDR more than doubling, per Netflix's blog.

    Netflix adding HDR10+ support to its service at no extra cost is a welcome change from other streaming announcements, which often include price hikes, pulled content, and removed features. However, this latest move comes about three months after Netflix raised its Premium subscription price from $23 per month to $25 per month.

    Scharon Harding, Senior Technology Reporter: Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She's been reporting on technology for over 10 years, with bylines at Tom's Hardware, Channelnomics, and CRN UK.
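    As an aside, the fallback behavior described above (a dynamic-metadata format when the device supports one, the static HDR10 base standard otherwise) can be illustrated with a simplified Python sketch. This is not Netflix's actual selection logic; the format names and preference order are assumptions for illustration only.

        # Simplified format-selection sketch (not Netflix's real logic): prefer
        # dynamic-metadata HDR formats, fall back to static HDR10, then SDR.
        PREFERENCE = ["dolby_vision", "hdr10_plus", "hdr10"]  # assumed preference order

        def pick_hdr_format(device_supports, available):
            usable = set(device_supports) & set(available)
            for fmt in PREFERENCE:
                if fmt in usable:
                    return fmt
            return "sdr"  # no common HDR format, so fall back to SDR

        # A TV without Dolby Vision (e.g., many Samsung sets) now gets HDR10+ rather than HDR10.
        print(pick_hdr_format({"hdr10_plus", "hdr10"}, {"dolby_vision", "hdr10_plus", "hdr10"}))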
  • How Polestar engineers EVs that can handle brutal winters
    arstechnica.com
    brrrrrr
    How Polestar engineers EVs that can handle brutal winters
    Heat pumps, throttle maps, and a whole lot of going sideways.
    Michael Teo Van Runkle, Mar 25, 2025 8:44 am
    Credit: Polestar

    Polestar provided flights from Los Angeles to Luleå and accommodation so Ars could visit Polestar's winter testing site. Ars does not accept paid editorial content.

    LULEÅ, Sweden: Staring out the window of a puddle jumper descending from Stockholm into Luleå, I spy frozen seawater for the first time in my life. Not nearly as much as I expected, though, for the middle of February at the northern end of Sweden. I've flown here to drift electric Polestars on an icy lake called Stor-Skabram, near the small outpost of Jokkmokk, fully above the Arctic Circle. Yet the balmy weather serves as a constant reminder of the climate change that inspires much of the narrative around the electric vehicle industry.

    EVs on ice

    An opportunity to get somebody else's cars sideways as much as possible on ice and snow is a particularly enjoyable way to spend a day, if you like driving cars. More importantly, automotive manufacturers rely on this kind of winter testing to fine-tune traction and stability-control programming, ensuring their cars can work well in the depths of the deepest winter. For EVs in particular, winter testing presents a more complex range of equations.

    First of all, an EV can't ever turn the electronic nannies off entirely, because electric motors will rev to the moon with instantaneous torque the very instant their tires lose traction. So while the software uses wheel-speed sensors and regenerative braking, as well as accelerometers that detect yaw rates, each EV needs to maintain progressive output responses to driver inputs that allow for confident performance and safety simultaneously.

    Credit: Polestar

    Then there's the issue of battery performance in cold weather, since chemical cells don't respond to frigid temps as well as simpler mechanical systems. For Polestar, these challenges seem extra important given the company's Scandinavian roots, even while nestled within the current Geely umbrella. (Then again, a bit of contrarianism springs up while considering Polestar's ubiquitous sustainability messaging, given the carbon footprint of flying journalists all the way to the top of the globe to enjoy some winter testing.)

    Screaming around the frozen lake, I quickly forget my moral qualms. Despite temperatures hovering around freezing at midday, the ice measures about a meter thick (39.3 inches). That measurement seems scant from behind the wheel of a heavy EV, even as the Swedes assure me that ice as thin as 25 cm (9.8 in) will suffice for driving cars and just 80 cm (31.5 in) will support train tracks and actual trains.

    And they should know, since Polestar Head of Driving Dynamics Joakim Rydholm told me he spends upwards of four months every winter testing here in Jokkmokk. Each year, Polestar sets up a trio of circuits, two smaller tracks within one larger loop, where I spend the day jumping between the minimalistically named 2, 3, and 4 EVs. Each wears winter tires with 2-millimeter studs to allow for plenty of slip and slide but also enough speed and predictability to be useful.

    Credit: Polestar

    I fall in love with the Polestar 4 most, despite preferring the 2 and 3 much more previously on more typical tarmac conditions.
    Maybe the 4's additional front bias helps for sustaining higher-speed drifts, and the lack of a rear window definitely presents less of a problem while looking out the side for 90 percent of each lap. But on the larger circuit, where the 536 hp (400 kW) 4's sportier dynamics shine brightest, I typically draw down about half of the 100 kWh battery's charge in just about 25 minutes.

    Cold weather adaptation

    The batteries must be warming up, I figure, as I press the pedal to the metal and drift as far and wide as the traction-control programming will allow. Or do the relatively cold ambient temps cut into range? Luckily, Head of Product Beatrice Simonsson awaits after each stint to explain how Polestar ensures that winter weather will not ruin EV performance.

    To start, Polestar uses NMC (lithium nickel manganese cobalt) batteries with prismatic cells, unlike the LFP (lithium iron phosphate) chemistry that many other manufacturers are increasingly turning to, largely for cost reasons. Each Polestar vehicle keeps its cells as close to optimum temperature as possible using a heat pump and radiators to circulate 20 liters (5.28 gallons) of coolant, about 5 liters (1.32 gallons) of which specifically regulate the battery temps.

    Credit: Polestar

    But the biggest surprise that Simonsson reveals involves battery pre-conditioning, which, instead of warming up the NMC batteries, actually focuses mostly on cabin and occupant comfort. She explains that even at 0° C (32° F), using the heat pump to reduce the internal resistance of the battery will only result in a few percent of total range gained. In other words, for short trips, the pre-conditioning process usually eats up more power than it might save. Simonsson also tells me that Polestars will usually run the batteries slightly cooler than the purely optimal temperature to save energy lost to the heat pump.

    The Jokkmokk testing regimen often sees temperatures as low as -30° to -35° C (or almost where Celsius and Fahrenheit meet, at -40). Even at those temps, the motors themselves don't mind, since EV range depends more on cell chemistry than on the mechanical engineering of radial or axial flux motors. NMC cells can charge faster at lower temperatures than LFP, though parking an EV here for an extended time and letting the batteries truly freeze over may result in temporary performance restrictions for output and charging. Even then, Polestar never sets a lower limit, or simply hasn't found a minimum temperature where charging and driving capabilities turn off entirely.

    The power ratings of the three different Polestars wound up mattering less than how their varying drivetrains managed steering and throttle inputs, sensor measurements, and the resulting power delivery.

    Credit: Polestar

    The 3 seems to struggle most, with perhaps too many variables for the computer to confidently handle at pace: front and rear motors, rear torque biasing, more weight, and a higher center of gravity. Rydholm explained from the passenger seat that the accelerometers in the center of the cars come into play all the more in low-traction scenarios, when the g-force calculations need to blend regen up to 0.3 g, for example, or allow for more output with the steering wheel held straight.
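    A quick back-of-the-envelope check on the consumption figure quoted earlier (roughly half of a 100 kWh pack in about 25 minutes of hard lapping) works out to an average draw of around 120 kW. The short Python sketch below shows the arithmetic, using only the numbers from this article.

        # Back-of-the-envelope arithmetic for the track consumption quoted above:
        # about half of a 100 kWh pack drawn down in roughly 25 minutes of ice driving.
        battery_kwh = 100.0      # Polestar 4 pack size cited in the article
        fraction_used = 0.5      # "about half" of the charge
        minutes = 25.0

        energy_used_kwh = battery_kwh * fraction_used         # ~50 kWh
        avg_power_kw = energy_used_kwh / (minutes / 60.0)      # ~120 kW sustained draw

        print(f"~{energy_used_kwh:.0f} kWh used, ~{avg_power_kw:.0f} kW average draw")
        # For scale, that is roughly 30 percent of the 4's 400 kW peak output.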
    Going sideways

    I learned quickly that starting drifts with momentum, rather than mashing the go pedal, worked far more effectively. The 2 in particular benefited from this method, since it weighs about 1,000 pounds (454 kg) less than a 3 or 4.

    Throughout the day, an experimental duo of vehicle-to-load Polestar 2 prototypes also powered the grouping of huts and tipis, saunas, lights, heaters, and even a kitchen on the ice. We also experienced a few ride-along laps in a trio of Arctic Circle editions. Finished in eye-catching livery plus racing seats, upgraded suspension, roof racks, and, most importantly, tires with 4-millimeter studs, the Arctic Circles upped Polestar's Scandinavian rally racing heritage by a serious measure.

    Credit: Polestar

    As much as I hope for road-going versions of the Arctic Circle to hit the market, even the stock Polestars provided more evidence that EVs can work (and be fun, engaging, and borderline rambunctious to drive) in some of the harshest conditions on planet Earth.
  • Europe is looking for alternatives to US cloud providers
    arstechnica.com
    Bad forecast
    Europe is looking for alternatives to US cloud providers
    Some European cloud companies have seen an increase in business.
    Matt Burgess, wired.com, Mar 25, 2025 9:12 am
    Credit: Getty Images

    The global backlash against the second Donald Trump administration keeps on growing. Canadians have boycotted US-made products, anti-Elon Musk posters have appeared across London amid widespread Tesla protests, and European officials have drastically increased military spending as US support for Ukraine falters. Dominant US tech services may be the next focus.

    There are early signs that some European companies and governments are souring on their use of American cloud services provided by the three so-called hyperscalers. Between them, Google Cloud, Microsoft Azure, and Amazon Web Services (AWS) host vast swathes of the Internet and keep thousands of businesses running. However, some organizations appear to be reconsidering their use of these companies' cloud services (including servers, storage, and databases), citing uncertainties around privacy and data-access fears under the Trump administration.

    "There's a huge appetite in Europe to de-risk or decouple the over-dependence on US tech companies, because there is a concern that they could be weaponized against European interests," says Marietje Schaake, a nonresident fellow at Stanford's Cyber Policy Center and a former decade-long member of the European Parliament.

    The moves may already be underway. On March 18, politicians in the Netherlands House of Representatives passed eight motions asking the government to reduce reliance on US tech companies and move to European alternatives. Days before, more than 100 organizations signed an open letter to European officials calling for the continent to become more technologically independent and saying the status quo creates security and reliability risks.

    Two European-based cloud service companies, Exoscale and Elastx, tell WIRED they have seen an uptick in potential customers looking to abandon US cloud providers over the last two weeks, with some already starting to make the jump. Multiple technology advisers say they are having widespread discussions about what it would take to uproot services, data, and systems.

    "We have more demand from across Europe," says Mathias Nöbauer, the CEO of Swiss-based hosting provider Exoscale, adding there has been an increase in new customers seeking to move away from cloud giants. "Some customers were very explicit," Nöbauer says. "Especially customers from Denmark being very explicit that they want to move away from US hyperscalers because of the US administration and what they said about Greenland."

    "It's a big worry about the uncertainty around everything. And from the Europeans' perspective, that the US is maybe not on the same team as us any longer," says Joakim Öhman, the CEO of Swedish cloud provider Elastx. "Those are the drivers that bring people or organizations to look at alternatives."

    Concerns have been raised about the current data-sharing agreement between the EU and US, which is designed to allow information to move between the two continents while protecting people's rights. Multiple previous versions of the agreement have been struck down by European courts. At the end of January, Trump fired three Democrats from the Privacy and Civil Liberties Oversight Board (PCLOB), which helps manage the current agreement.
    The move could undermine or increase uncertainty around the agreement. In addition, Öhman says, he has heard concerns from firms about the CLOUD Act, which can allow US law enforcement to subpoena user data from tech companies, potentially including data that is stored in systems outside of the US.

    Dave Cottlehuber, the founder of SkunkWerks, a small tech infrastructure firm in Austria, says he has been moving the company's few servers and databases away from US providers to European services since the start of the year. "First and foremost, it's about values," Cottlehuber says. "For me, privacy is a right, not a privilege." Cottlehuber says the decision to move is easier for a small business such as his, but he argues it removes some taxes that are paid to the Trump administration. "The best thing I can do is to remove that small contribution of mine, and also at the same time, make sure that my customers' privacy is respected and preserved," Cottlehuber says.

    Steffen Schmidt, the CEO of Medicusdata, a company that provides text-to-speech services to doctors and hospitals in Europe, says that having data in Europe has always been a must, but his customers have been asking for more in recent weeks. "Since the beginning of 2025, in addition to data residency guarantees, customers have actively asked us to use cloud providers that are natively European companies," Schmidt says, adding that some of his services have been moved to Nöbauer's Exoscale.

    Harry Staight, a spokesperson for AWS, says it is "not accurate" that customers are moving from AWS to EU alternatives. "Our customers have control over where they store their data and how it is encrypted, and we make the AWS Cloud sovereign-by-design," Staight says. "AWS services support encryption with customer managed keys that are inaccessible to AWS, which means customers have complete control of who accesses their data." Staight says the membership of the PCLOB does not impact the agreements around EU-US data sharing and that the CLOUD Act has additional safeguards for cloud content. Google and Microsoft declined to comment.

    The potential shift away from US tech firms is not just linked to cloud providers. Since January 15, visitors to the European Alternatives website have increased more than 1,200 percent. The site lists everything from music streaming services to DDoS protection tools, says Marko Saric, a cofounder of European cloud analytics service Plausible. "We can certainly feel that something is going on," Saric says, claiming that during the first 18 days of March the company has beaten the net recurring revenue growth it saw in January and February. "This is organic growth which cannot be explained by any seasonality or our activities," he says.

    While there are signs of movement, the impact is likely to be small, at least for now. Around the world, governments and businesses use multiple cloud services (such as authentication measures, hosting, data storage, and, increasingly, data centers providing AI processing) from the big three cloud and tech service providers. Cottlehuber says that, for large businesses, it may take many months, if not longer, to consider what needs to be moved, the risks involved, plus actually changing systems. "What happens if you have a hundred petabytes of storage? It's going to take years to move over the Internet," he says.
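    Cottlehuber's point about moving a hundred petabytes is easy to sanity-check. The short Python sketch below assumes a perfectly sustained 10 Gbps link, a figure of my own chosen for illustration rather than anything from the article; even under that optimistic assumption, the transfer takes roughly two and a half years.

        # Rough transfer-time estimate for ~100 PB of data; the 10 Gbps link speed
        # is an assumed, optimistic figure for illustration only.
        petabytes = 100
        link_gbps = 10

        bits_total = petabytes * 1e15 * 8           # 100 PB expressed in bits
        seconds = bits_total / (link_gbps * 1e9)    # time at a perfectly sustained 10 Gbps
        days = seconds / 86400
        years = days / 365

        print(f"{petabytes} PB at {link_gbps} Gbps: ~{days:.0f} days (~{years:.1f} years)")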
    For years, European companies have struggled to compete with the likes of Google, Microsoft, and Amazon's cloud services and technical infrastructure, which make billions every year. It may also be difficult to find similar services, at a similar scale, from alternative European cloud firms.

    "If you are deep into the hyperscaler cloud ecosystem, you'll struggle to find equivalent services elsewhere," says Bert Hubert, an entrepreneur and former government regulator, who says he has heard of multiple new cloud migrations to US firms being put on hold or reconsidered. Hubert has argued that it is no longer safe for European governments to be moved to US clouds and that European alternatives can't properly compete. "We sell a lot of fine wood here in Europe. But not that much furniture," he says. However, that too could change.

    Schaake, the former member of the European Parliament, says a combination of new investments, a different approach to buying public services, and a Europe-first approach of investing in a European technology stack could help to stimulate any wider moves on the continent. "The dramatic shift of the Trump administration is very tangible," Schaake says. "The idea that anything could happen and that Europe should fend for itself is clear. Now we need to see the same kind of pace and leadership that we see with defense to actually turn this into meaningful action."

    This story originally appeared on wired.com.

    Matt Burgess, wired.com: Wired.com is your essential daily guide to what's next, delivering the most original and complete take you'll find anywhere on innovation's impact on technology, science, business, and culture.
  • Oops: Google says it might have deleted your Maps Timeline data
    arstechnica.com
    So private, even you don't have the data
    Oops: Google says it might have deleted your Maps Timeline data
    Google Maps switched to local-only Timeline storage in December.
    Ryan Whitwam, Mar 24, 2025 12:56 pm
    POLAND - 2020/03/23: In this photo illustration, a Google Maps logo is seen displayed on a smartphone. (Photo Illustration by Mateusz Slodkowski/SOPA Images/LightRocket via Getty Images) Credit: Getty Images / SOPA Images / Contributor

    The Google Maps Timeline has long been a useful though slightly uncomfortable feature that maintains a complete record of everywhere your phone goes (and probably you with it). Google recently changed the way it stored Timeline data to improve privacy, but the company now confirms that a "technical issue" resulted in many users losing their Timeline history altogether, and there might not be any way to recover it.

    Timeline, previously known as Location History, is very useful if you need to figure out where you were on a particular day or if you just can't remember where you found that neat bar on your last vacation. Many Google users grew quite fond of having access to that data. However, Google had access to it, too. Starting in 2024, Google transitioned to storing Timeline data only on the user's individual smartphone instead of backing it up to the cloud. You can probably see where this is going.

    Users started piping up over the past several weeks, posting on the Google support forums, Reddit, and other social media that their treasured Timeline data had gone missing. Google has been investigating the problem, and the news isn't good. In an email sent out over the weekend, Google confirmed what many already feared: Maps has accidentally deleted Timeline data on countless devices.

    A Google spokesperson confirmed this is the result of a technical issue and not user error or an intentional change. It's unclear how this happened, but we'd wager on a botched Maps update. Google usually rolls out updates in waves, and it's possible that the defective build in this case made it to a large number of devices before it was stopped.

    You have exactly one possible fix for this issue, but only if you planned ahead. When Google began the full changeover to local storage of Timeline data, it added several settings to control the feature. While the data is stored locally by default, you have the option of creating encrypted backups in the cloud. If you did that, you should be able to restore the data.

    Google's email alert, along with the location of Google's backup button. Credit: Google

    To check for backed-up Timeline data, open Maps and go to the Timeline section. There should be a cloud icon at the top with an arrow; if it's a cloud with a line through it, you're out of luck. Tapping the enabled icon should let you download a backup of your data. According to Google, if you did not have encrypted backups enabled, the data is gone forever.

    To cloud or not to cloud?

    Google has taken a more cautious approach to storing location data in recent years.
    The changes to Maps date back to 2023, when the company announced it would no longer log certain types of data, including visits to abortion clinics, domestic violence shelters, and more. Moving Timeline off of its servers and onto individual devices in late 2024 would theoretically protect user privacy if Google were forced to hand over account data to law enforcement.

    However, there are reasons we keep things in the cloud. For one, they're more accessible. When Google transitioned Timeline data to on-device storage, users lost the ability to view their location history on the web. More importantly, it's harder to lose data when it's backed up on a server that Google manages. It's good that Google still supports a secure backup option, but it's not on by default. Again, that's understandable, given the aim of improving privacy, but a lot of people are wishing the backups were automatic today.

    Many longtime Maps users have expressed genuine sorrow over losing years of data to this glitch. Some say they believed they had encrypted backups enabled, only to find they had no data to restore. This is probably a good time to check your Maps settings if you, too, have vast swaths of historic location data living only on your phone.

    Ryan Whitwam, Senior Technology Reporter: Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he's written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.
  • Trump administration accidentally texted secret bombing plans to a reporter
    arstechnica.com
    You've got mail
    Trump administration accidentally texted secret bombing plans to a reporter
    "Shocking recklessness" in leak of detailed Yemen bombing plan in Signal chat.
    Jon Brodkin, Mar 24, 2025 4:43 pm
    Sen. JD Vance (R-Ohio) speaks to reporters after a presidential debate between Joe Biden and Donald Trump at the Georgia Institute of Technology campus on June 27, 2024, in Atlanta, Georgia. Credit: Getty Images | Andrew Harnik

    A prominent journalist knew the US military would start bombing Houthi targets in Yemen two hours before it happened on March 15 because top Trump administration officials accidentally included the reporter on a Signal text chain in which they discussed the war plan.

    Jeffrey Goldberg, editor-in-chief of The Atlantic magazine, described the surprising leak of sensitive military information in an article today. The National Security Council confirmed that the messages were real and said it is investigating how Goldberg was added to a thread in which the war information was discussed.

    "The world found out shortly before 2 p.m. eastern time on March 15 that the United States was bombing Houthi targets across Yemen," Goldberg wrote. "I, however, knew two hours before the first bombs exploded that the attack might be coming. The reason I knew this is that Pete Hegseth, the secretary of defense, had texted me the war plan at 11:44 a.m. The plan included precise information about weapons packages, targets, and timing."

    Goldberg's article quotes numerous messages that appeared to come from Vice President JD Vance, Hegseth, and other Trump administration officials. Goldberg was first added to the text chain on March 11 by Michael Waltz, Trump's national security adviser.

    Goldberg initially "didn't find it particularly strange that he might be reaching out to me," though he considered that "someone could be masquerading as Waltz in order to somehow entrap me." But over the next few days, Goldberg became increasingly convinced that the messages were authentic.

    Vance: "I just hate bailing Europe out again"

    The text chat was labeled "Houthi PC small group," and a message from Waltz indicated that he was convening a principals committee for top officials to discuss plans.

    "I had very strong doubts that this text group was real, because I could not believe that the national-security leadership of the United States would communicate on Signal about imminent war plans," Goldberg wrote. "I also could not believe that the national security adviser to the president would be so reckless as to include the editor-in-chief of The Atlantic in such discussions with senior US officials, up to and including the vice president."

    Using Signal in this way may have violated US law, Goldberg wrote.
"Conceivably, Waltz, by coordinating a national-security-related action over Signal, may have violated several provisions of the Espionage Act, which governs the handling of 'national defense' information, according to several national-security lawyers interviewed by my colleague Shane Harris for this story," he wrote.Signal is not an authorized venue for sharing such information, and Waltz's use of a feature that makes messages disappear after a set period of time "raises questions about whether the officials may have violated federal records law," the article said. Adding a reporter to the thread "created new security and legal issues" by transmitting information to someone who wasn't authorized to see it, "the classic definition of a leak, even if it was unintentional," Goldberg wrote.The account labeled "JD Vance" questioned the war plan in a Signal message on March 14. "I am not sure the president is aware how inconsistent this is with his message on Europe right now," the message said. "There's a further risk that we see a moderate to severe spike in oil prices. I am willing to support the consensus of the team and keep these concerns to myself. But there is a strong argument for delaying this a month, doing the messaging work on why this matters, seeing where the economy is, etc."The Vance account also stated, "3 percent of US trade runs through the suez. 40 percent of European trade does," and "I just hate bailing Europe out again." The Hegseth account responded that "I fully share your loathing of European free-loading. It's PATHETIC," but added that "we are the only ones on the planet (on our side of the ledger) who can do this."An account apparently belonging to Trump advisor Stephen Miller wrote, "As I heard it, the president was clear: green light, but we soon make clear to Egypt and Europe what we expect in return. We also need to figure out how to enforce such a requirement. EG, if Europe doesn't remunerate, then what? If the US successfully restores freedom of navigation at great cost there needs to be some further economic gain extracted in return."Shocking recklessnessGoldberg was mostly convinced that the text chain was real before the detailed war plans were sent. "After reading this chain, I recognized that this conversation possessed a high degree of verisimilitude," Goldberg wrote. "The texts, in their word choice and arguments, sounded as if they were written by the people who purportedly sent them, or by a particularly adept AI text generator. I was still concerned that this could be a disinformation operation, or a simulation of some sort. And I remained mystified that no one in the group seemed to have noticed my presence. But if it was a hoax, the quality of mimicry and the level of foreign-policy insight were impressive."Goldberg declined to directly quote from the Hesgeth message containing war plans. "The information contained in them, if they had been read by an adversary of the United States, could conceivably have been used to harm American military and intelligence personnel, particularly in the broader Middle East, Central Command's area of responsibility," Goldberg wrote. 
"What I will say, in order to illustrate the shocking recklessness of this Signal conversation, is that the Hegseth post contained operational details of forthcoming strikes on Yemen, including information about targets, weapons the US would be deploying, and attack sequencing."The Vance account responded, "I will say a prayer for victory," and two other users posted prayer emoji, according to Goldberg. Shortly after the bombings, Waltz posted in the Signal chat that the operation was a success, and several members of the group responded positively."The Signal chat group, I concluded, was almost certainly real," Goldberg wrote. He removed himself from the group and contacted administration officials about the information leak.NSC reviewing how inadvertent number was addedArs contacted the White House today, and we quickly received a response containing two statements about the Goldberg incident. The statements are the same as those included in The Atlantic article."This appears to be an authentic message chain, and we are reviewing how an inadvertent number was added to the chain," said a statement attributed to a National Security Council spokesperson. "The thread is a demonstration of the deep and thoughtful policy coordination between senior officials. The ongoing success of the Houthi operation demonstrates that there were no threats to troops or national security."The other statement came from a spokesperson for Vance. "The Vice President's first priority is always making sure that the President's advisers are adequately briefing him on the substance of their internal deliberations," the statement said. "Vice President Vance unequivocally supports this administration's foreign policy. The President and the Vice President have had subsequent conversations about this matter and are in complete agreement."According to Goldberg, The Atlantic spoke with several former US officials who said they used Signal to share unclassified information, but "they knew never to share classified or sensitive information on the app, because their phones could have been hacked by a foreign intelligence service.""I have never seen a breach quite like this," Goldberg wrote. "It is not uncommon for national-security officials to communicate on Signal. But the app is used primarily for meeting planning and other logistical mattersnot for detailed and highly confidential discussions of a pending military action. And, of course, I've never heard of an instance in which a journalist has been invited to such a discussion."Jon BrodkinSenior IT ReporterJon BrodkinSenior IT Reporter Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry. 78 Comments
  • After borking my Pixel 4a battery, Google borks me, too
    arstechnica.com
    no cash for you! After borking my Pixel 4a battery, Google borks me, too The devil is in the details. Nate Anderson Mar 24, 2025 5:01 pm | 88 The Pixel 4a. It's finally here! Credit: Google The Pixel 4a. It's finally here! Credit: Google Story textSizeSmallStandardLargeWidth *StandardWideLinksStandardOrange* Subscribers only Learn moreIt is an immutable law of nature that when you receive a corporate email with a subject line like "Changes coming to your Pixel 4a," the changes won't be the sort you like. Indeed, a more honest subject line would usually be: "You're about to get hosed."So I wasn't surprised, as I read further into this January missive from Google, that an "upcoming software update for your Pixel 4a" would "affect the overall performance and stability of its battery."How would my battery be affected? Negatively, of course. "This update will reduce your batterys runtime and charging performance," the email said. "To address this, were providing some options to consider. "Our benevolent Google overlords were about to nerf my phone batterypresumably in the interests of "not having it erupt in flames," though this was never actually made clearbut they recognized the problem, and they were about to provide compensation. This is exactly how these kinds of situations should be handled.Google offered three options: $50 cash money, a $100 credit to Google's online store, or a free battery replacement. It seemed fair enough. Yes, not having my phone for a week or two while I shipped it roundtrip to Google could be annoying, but at least the company was directly mitigating the harm it was about to inflict. Indeed, users might actually end up in better shape than before, given the brand-new battery.So I was feeling relatively sunny toward the giant monopolist when I decided to spring for the 50 simoleons. My thinking was that 1) I didn't want to lose my phone for a couple of weeks, 2) the update might not be that bad, in which case I'd be ahead by 50 bucks, and 3) I could always put the money towards a battery replacement if assumption No. 2 turned out to be mistaken.The navet of youth!I selected my $50 "appeasement" through an online form, and two days later, I received an email from Bharath on the Google Support Team.Bharath wanted me to know that I was eligible for the money and it would soon be in my hands... once I performed a small, almost trivial task: giving some company I had never heard of my name, address, phone number, Social Security number, date of birth, and bank account details.About that $50...Google was not, in fact, just "sending" me $50. I had expected, since the problem involved their phones and their update, that the solution would require little or nothing from me. A check or prepaid credit card would arrive in the mail, perhaps, or a drone might deliver a crisp new bill from the sky. I didn't know and didn't care, so long as it wasn't my problem.But it was my problem. To get the cash, I had to create an account with something called "Payoneer." This is apparently a reputable payments company, but I had never heard of it, and much about its operations is unclear. For instance, I was given three different ways to sign up depending on whether I 1) "already have a Payoneer account from Google," 2) "don't have an account," or 3) "do have a Payoneer account that was not provided nor activated through Google."Say what now?And though Google promised "no transaction fees," Payoneer appears to charge an "annual account fee" of $29.95... 
but only to accounts that receive less than $2,000 through Payoneer in any consecutive 12-month period.Does this fee apply to me if I sign up through the Google offer? I was directed to Payoneer support with any questions, but the company's FAQ on the annual account fee doesn't say.If the fee does apply to me, do I need to sign up for a Payoneer account, give them all of my most personal financial information, wait the "10 to 18 business days" that Google says it will take to get my money, and then return to Payoneer so that I can cancel my account before racking up some $30 charge a year from now? And I'm supposed to do all this just to get.... fifty bucks? One time?It was far simpler for me to get a recent hundred-dollar rebate on a washing machine... and they didn't need my SSN or bank account information.(Reddit users also report that, if you use the wrong web browser to cancel your Payoneer account, you're hit with an error that says: "This end point requires that the body of all requests be formatted as JSON.")Like Lando Calrissian, I realized that this deal was getting worse all the time.I planned to write Bharath back to switch my "appeasement," but then I noticed the fine print: No changes are possible after making a selection.Sono money for me. On the scale of life's crises, losing $50 is a minor one, and I resolved to move on, facing the world with a cheerful heart and a clear mind, undistracted by the many small annoyances our high-tech overlords continually strew upon the path.Then the software update arrived.A decimation situationWhen Google said that the new Pixel 4a update would "reduce your batterys runtime and charging performance," it was not kidding. Indeed, the update basically destroyed the battery.Though my phone was three years old, until January of this year, the battery still held up for all-day usage. The screen was nice, the (smallish) phone size was good, and the device remained plenty fast at all the basic tasks: texting, emails, web browsing, snapping photos. I'm trying to reduce both my consumerism and my e-waste, so I was planning to keep the device for at least another year. And even then, it would make a decent hand-me-down device for my younger kids.After the update, however, the phone burned through a full battery charge in less than two hours. I could pull up a simple podcast app, start playing an episode, and watch the battery percentage decrement every 45 seconds or so. Using the phone was nearly impossible unless one was near a charging cable at all times.To recap: My phone was shot, I had to jump through several hoops to get my money, and I couldn't change my "appeasement" once I realized that it wouldn't work for me.Within the space of three days, I went from 1) being mildly annoyed at the prospect of having my phone messed with remotely to 2) accepting that Google was (probably) doing it for my own safety and was committed to making things right to 3) berating Google for ruining my device and then using a hostile, data collecting "appeasement" program to act like it cared. This was probably not the impression Google hoped to leave in people's minds when issuing the Pixel 4a update. Removing the Pixel 4a's battery can be painful, but not as painful as catching fire. Credit: iFixit Cheap can be quite expensiveThe update itself does not appear to be part of some plan to spy on us or to extract revenue but rather to keep people safe. 
The company tried to remedy the pain with options that, on the surface, felt reasonable, especially given the fact that batteries are well-known as consumable objects that degrade over time. And I've had three solid years of service with the 4a, which wasn't especially expensive to begin with.

That said, I do blame Google in general for the situation. The inflexibility of the approach, the options that aren't tailored for ease of use in specific countries, the outsourced tech support: these are all hallmarks of today's global tech behemoths.

It is more efficient, from an algorithmic, employ-as-few-humans-as-possible perspective, to operate "at scale" by choosing global technical solutions over better local options, by choosing outsourced email support, by trying to avoid fraud (and employee time) through preventing program changes, by asking the users to jump through your hoops, by gobbling up ultra-sensitive information because it makes things easier on your end.

While this makes a certain kind of sense, it's not fun to receive this kind of "efficiency." When everything goes smoothly, it's fine, but whenever there's a problem, or questions arise, these kinds of "efficient, scalable" approaches usually just mean "you're about to get screwed."

In the end, Google is willing to pay me $50, but that money comes with its own cost. I'm not willing to pay with my time nor with the risk of my financial information, and I will increasingly turn to companies that offer a better experience, that care more about data privacy, that build with higher-quality components, and that take good care of customers.

No company is perfect, of course, and this approach costs a bit more, which butts up against my powerful urge to get a great deal on everything. I have to keep relearning the old lesson, as I am once again with this Pixel 4a fiasco, that cheap gear is not always the best value in the long run.

Nate Anderson, Deputy Editor: Nate is the deputy editor at Ars Technica. His most recent book is In Emergency, Break Glass: What Nietzsche Can Teach Us About Joyful Living in a Tech-Saturated World, which is much funnier than it sounds. 88 Comments
  • As preps continue, it's looking more likely NASA will fly the Artemis II mission
    arstechnica.com
    Don't stop me now As preps continue, its looking more likely NASA will fly the Artemis II mission The core stage of NASA's Space Launch System is now integrated with the rocket's twin boosters. Stephen Clark Mar 24, 2025 7:08 pm | 8 The Space Launch System's core stage is seen sandwiched between the rocket's twin solid-fueled boosters inside the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida. Credit: NASA/Frank Michaux The Space Launch System's core stage is seen sandwiched between the rocket's twin solid-fueled boosters inside the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida. Credit: NASA/Frank Michaux Story textSizeSmallStandardLargeWidth *StandardWideLinksStandardOrange* Subscribers only Learn moreLate Saturday night, technicians at Kennedy Space Center in Florida moved the core stage for NASA's second Space Launch System rocket into position between the vehicle's two solid-fueled boosters.Working inside the iconic 52-story-tall Vehicle Assembly Building, ground teams used heavy-duty cranes to first lift the butterscotch orange core stage from its cradle in the VAB's cavernous transfer aisle, the central passageway between the building's four rocket assembly bays. The cranes then rotated the structure vertically, allowing workers to disconnect one of the cranes connected to the bottom of the rocket.That left the rocket hanging on a 325-ton overhead crane, which would lift it over the transom into the building's northeast high bay. The Boeing-built core stage weighs about 94 tons (85 metric tons), measures about 212 feet (65 meters) tall, and will contain 730,000 gallons of cryogenic propellant at liftoff. It is the single largest element for NASA's Artemis II mission, slated to ferry a crew of astronauts around the far side of the Moon as soon as next year.Finally, ground crews lowered the rocket between the Space Launch System's twin solid rocket boosters already stacked on a mobile launch platform inside High Bay 3, where NASA assembled Space Shuttles and Saturn V rockets for Apollo lunar missions.On Sunday, teams inside the VAB connected the core stage to each booster at forward and aft load-bearing attach points. After completing electrical and data connections, engineers will stack a cone-shaped adapter on top of the core stage, followed by the rocket's upper stage, another adapter ring, and finally the Orion spacecraft that will be home to the four-person Artemis II crew for their 10-day journey through deep space. Four RS-25 engines left over from NASA's Space Shuttle program will power the SLS core stage. Credit: NASA/Frank Michaux Through the motionsThis will be the first crewed flight of NASA's Artemis program, which aims to land astronauts on the lunar south pole and eventually build a sustainable human presence on the Moon, with an eye toward future expeditions to Mars. The program's first crewed lunar landing is penciled in for the Artemis III mission, again using SLS and Orion, but adding a new piece: SpaceX's enormous Starship rocket will be used as a human-rated lunar lander. Artemis II won't land, but it will carry people to the vicinity of the Moon for the first time since 1972.The core stage for Artemis II arrived from its factory in Louisiana last year, and NASA started stacking the SLS solid rocket boosters in November. 
Other recent accomplishments on the path toward Artemis II include the installation of the Orion spacecraft's solar panels, and closeouts of the craft's service module at Kennedy Space Center with aerodynamic panels that will jettison during launch.As soon as next month, the Orion spacecraft will travel to a different facility at Kennedy for fueling, then to another building to meet its Launch Abort System before moving to the VAB for stacking atop the Space Launch System. For the Artemis I mission, it took around eight months to complete these activities before delivering Orion to the VAB, so it's fair to be skeptical of NASA's target launch date for Artemis II in April 2026, which is already running years behind schedule.However, the slow march toward launch continues. A few months ago, some well-informed people in the space community thought there was a real possibility the Trump administration could quickly cancel NASA's Space Launch System, the high-priced heavy-lifter designed to send astronauts from the Earth to the Moon. The most immediate possibility involved terminating the SLS program before it flies with Artemis II.This possibility appears to have been overcome by circumstances. The rockets most often mentioned as stand-ins for the Space Launch SystemSpaceX's Starship and Blue Origin's New Glennaren't likely to be cleared for crew missions for at least several years. The Orion spacecraft for the Artemis II mission, seen here with its solar arrays installed for flight, just prior to their enclosure inside aerodynamic fairings to protect them during launch. Credit: NASA/Rad Sinyak The fully reusable Starship holds immense long-term promise to be significantly cheaper and more capable than the Space Launch System, but it suffered back-to-back failures to start the year, raising questions about SpaceX's upgraded Starship design, known as "Version 2" or "Block 2." Once SpaceX irons out the design issues, it must prove it can recover and reuse Starships and test the vehicle's in-orbit refueling capabilities. Blue Origin's New Glenn had a successful debut flight in January, but its next flight is likely six or more months away. Neither rocket will be ready to fly people for at least several years.NASA's existing architecture still has a limited shelf life, and the agency will probably have multiple options for transporting astronauts to and from the Moon in the 2030s. A decision on the long-term future of SLS and Orion isn't expected until the Trump administration's nominee for NASA administrator, Jared Isaacman, takes office after confirmation by the Senate.So, what is the plan for SLS?There are different degrees of cancellation options. The most draconian would be an immediate order to stop work on Artemis II preparations. This is looking less likely than it did a few months ago and would come with its own costs. It would cost untold millions of dollars to disassemble and dispose of parts of Artemis II's SLS rocket and Orion spacecraft. Canceling multibillion-dollar contracts with Boeing, Northrop Grumman, and Lockheed Martin would put NASA on the hook for significant termination costs.Of course, these liabilities would be less than the $4.1 billion NASA's inspector general estimates each of the first four Artemis missions will cost. 
Most of that money has already been spent for Artemis II, but if NASA spends several billion dollars on each Artemis mission, there won't be much money left over to do other cool things.Other options for NASA might be to set a transition point when the Artemis program would move off of the Space Launch System rocket, and perhaps even the Orion spacecraft, and switch to new vehicles. Looking down on the Space Launch System for Artemis II. Credit: NASA/Frank Michaux Another possibility, which seems to be low-hanging fruit for Artemis decision-makers, could be to cancel the development of a larger Exploration Upper Stage for the SLS rocket. If there are a finite number of SLS flights on NASA's schedule, it's difficult to justify the projected $5.7 billion cost of developing the upgraded Block 1B version of the Space Launch System. There are commercial options available to replace the rocket's Boeing-built Exploration Upper Stage, as my colleague Eric Berger aptly described in a feature story last year.For now, it looks like NASA's orange behemoth has a little life left in it. All the hardware for the Artemis II mission has arrived at the launch site in Florida.The Trump administration will release its fiscal year 2026 budget request in the coming weeks. Maybe, then, NASA will also have a permanent administrator, and the veil will lift over the White House's plans for Artemis. Listing image: NASA/Frank Michaux Stephen ClarkSpace ReporterStephen ClarkSpace Reporter Stephen Clark is a space reporter at Ars Technica, covering private space companies and the worlds space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet. 8 Comments
  • You can now download the source code that sparked the AI boom
    arstechnica.com
    AI CAN SEE YOU You can now download the source code that sparked the AI boom CHM releases code for 2012 AlexNet breakthrough that proved "deep learning" could work. Benj Edwards Mar 24, 2025 6:14 pm | 0 Credit: ArtemisDiana via Getty Images Credit: ArtemisDiana via Getty Images Story textSizeSmallStandardLargeWidth *StandardWideLinksStandardOrange* Subscribers only Learn moreOn Thursday, Google and the Computer History Museum (CHM) jointly released the source code for AlexNet, the convolutional neural network (CNN) that many credit with transforming the AI field in 2012 by proving that "deep learning" could achieve things conventional AI techniques could not.Deep learning, which uses multi-layered neural networks that can learn from data without explicit programming, represented a significant departure from traditional AI approaches that relied on hand-crafted rules and features.The Python code, now available on CHM's GitHub page as open source software, offers AI enthusiasts and researchers a glimpse into a key moment of computing history. AlexNet served as a watershed moment in AI because it could accurately identify objects in photographs with unprecedented accuracycorrectly classifying images into one of 1,000 categories like "strawberry," "school bus," or "golden retriever" with significantly fewer errors than previous systems.Like viewing original ENIAC circuitry or plans for Babbage's Difference Engine, examining the AlexNet code may provide future historians insight into how a relatively simple implementation sparked a technology that has reshaped our world. While deep learning has enabled advances in health care, scientific research, and accessibility tools, it has also facilitated concerning developments like deepfakes, automated surveillance, and the potential for widespread job displacement.But in 2012, those negative consequences still felt like far-off sci-fi dreams to many. Instead, experts were simply amazed that a computer could finally recognize images with near-human accuracy.Teaching computers to seeAs the CHM explains in its detailed blog post, AlexNet originated from the work of University of Toronto graduate students Alex Krizhevsky and Ilya Sutskever, along with their advisor Geoffrey Hinton. The project proved that deep learning could outperform traditional computer vision methods.The neural network won the 2012 ImageNet competition by recognizing objects in photos far better than any previous method. Computer vision veteran Yann LeCun, who attended the presentation in Florence, Italy, immediately recognized its importance for the field, reportedly standing up after the presentation and calling AlexNet "an unequivocal turning point in the history of computer vision." As Ars detailed in November, AlexNet marked the convergence of three critical technologies that would define modern AI.According to CHM, the museum began efforts to acquire the historically significant code in 2020, when Hansen Hsu (CHM's curator) reached out to Krizhevsky about releasing the source code due to its historical importance. 
Since Google had acquired the team's company DNNresearch in 2013, it owned the intellectual property rights.The museum worked with Google for five years to negotiate the release and carefully identify which specific version represented the original 2012 implementationan important distinction, as many recreations labeled "AlexNet" exist online but aren't the authentic code used in the breakthrough.How AlexNet workedWhile AlexNet's impact on AI is now legendary, understanding the technical innovation behind it helps explain why it represented such a pivotal moment. The breakthrough wasn't any single revolutionary technique, but rather the elegant combination of existing technologies that had previously developed separately.The project combined three previously separate components: deep neural networks, massive image datasets, and graphics processing units (GPUs). Deep neural networks formed the core architecture of AlexNet, with multiple layers that could learn increasingly complex visual features. The network was named after Krizhevsky, who implemented the system and performed the extensive training process.Unlike traditional AI systems that required programmers to manually specify what features to look for in images, these deep networks could automatically discover patterns at different levels of abstractionfrom simple edges and textures in early layers to complex object parts in deeper layers. While AlexNet used a CNN architecture specialized for processing grid-like data such as images, today's AI systems like ChatGPT and Claude rely primarily on Transformer models. Those models are a 2017 Google Research invention that excels at processing sequential data and capturing long-range dependencies in text and other media through a mechanism called "attention."For training data, AlexNet used ImageNet, a database started by Stanford University professor Dr. Fei-Fei Li in 2006. Li collected millions of Internet images and organized them using a database called WordNet. Workers on Amazon's Mechanical Turk platform helped label the images.The project needed serious computational power to process this data. Krizhevsky ran the training process on two Nvidia graphics cards installed in a computer in his bedroom at his parents' house. Neural networks perform many matrix calculations in parallel, tasks that graphics chips handle well. Nvidia, led by Jensen Huang, had made their graphics chips programmable for non-graphics tasks through their CUDA software, released in 2007.The impact from AlexNet extends beyond computer vision. Deep learning neural networks now power voice synthesis, game-playing systems, language models, and image generators. They're also responsible for potential society-fracturing effects such as filling social networks with AI-generated slop, empowering abusive bullies, and potentially altering the historical record.Where are they now?In the 13 years since their breakthrough, the creators of AlexNet have taken their expertise in different directions, each contributing to the field in unique ways.After AlexNet's success, Krizhevsky, Sutskever, and Hinton formed a company called DNNresearch Inc., which Google acquired in 2013. Each team member has followed a different path since then. Sutskever co-founded OpenAI in 2015, which released ChatGPT in 2022, and more recently launched Safe Superintelligence (SSI), a startup that has secured $1 billion in funding. 
Krizhevsky left Google in 2017 to work on new deep learning techniques at Dessa.Hinton has gained acclaim and notoriety for warning about the potential dangers of future AI systems, resigning from Google in 2023 so he could speak freely about the topic. Last year, Hinton stunned the scientific community when he received the 2024 Nobel Prize in Physics alongside John J. Hopfield for their foundational work in machine learning that dates back to the early 1980s.Regarding who gets the most credit for AlexNet, Hinton described the project roles with characteristic humor to the Computer History Museum: "Ilya thought we should do it, Alex made it work, and I got the Nobel Prize."Benj EdwardsSenior AI ReporterBenj EdwardsSenior AI Reporter Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC. 0 Comments
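For readers who want to see what an AlexNet-style network looks like in code, here is a minimal PyTorch sketch of the design described above: stacked convolutional layers that learn visual features, followed by fully connected layers that map them onto 1,000 ImageNet categories. It is an illustration using the commonly cited AlexNet layer sizes, not the historical code that CHM released.

```python
# Minimal AlexNet-style CNN sketch (illustrative; not the released 2012 code).
import torch
import torch.nn as nn

class AlexNetSketch(nn.Module):
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        # Convolutional layers learn features at increasing levels of abstraction,
        # from simple edges and textures to complex object parts.
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        # Fully connected layers map the learned features to the 1,000 ImageNet classes.
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)       # (N, 256, 6, 6) for 224x224 RGB inputs
        x = torch.flatten(x, 1)
        return self.classifier(x)

# Forward pass on random data just to show the shapes involved.
model = AlexNetSketch()
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 1000])
```

Training a model like this on ImageNet is the part that demanded the GPUs in Krizhevsky's bedroom; the sketch above only defines the architecture and runs a single forward pass on random data.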
  • UK on alert after H5N1 bird flu spills over to sheep in world-first
    arstechnica.com
    Spillover UK on alert after H5N1 bird flu spills over to sheep in world-first The UK sheep had inflamed mammary gland much like infected cows in US. Beth Mole Mar 24, 2025 6:32 pm | 0 Sheep in Yorkshire Dales. Credit: Getty | Edwin Remsberg Sheep in Yorkshire Dales. Credit: Getty | Edwin Remsberg Story textSizeSmallStandardLargeWidth *StandardWideLinksStandardOrange* Subscribers only Learn moreThe H5N1 bird flu has spilled over to a sheep for the first time, infecting a domesticated ruminant in the United Kingdom much like it has in US dairy cows, according to UK officials.The single sheepa ewein Yorkshire, England, was confirmed infected after captive birds on the same property had tested positive for the virus, according to an announcement Monday. The ewe's milk was found to be positive for the virus through a PCR test, which detected genetic signatures of the virus. The ewe also had H5 antibodies in its blood. At the time of the confirmation, the ewe had symptoms of the infection in the way of mastitis, inflammation of the mammary glands.This mirrors what US dairy farmers have been seeing in cows. An outbreak of H5N1 in dairy cows erupted a year ago, on March 25, 2024. Since then, at least 989 herds across 17 states have been infected with bird flu. In previous reports, farmers and researchers have noted that the virus appears to attack the animal's mammary glands and their milk is teeming with the virus.In the US, at least 70 people have been infected with the virus, 41 of whom were dairy workers. In some cases, workers reported having milk splashed on their faces before developing an infection. While nearly all of the cases have been relatively mild so farsome only with eye inflammation (conjunctivitis)one person in the US has died from the infection after being exposed via wild birds.In the UK, officials said further testing of the rest of the sheep's flock has found no other infections. The one infected ewe has been humanely culled to mitigate further risk and to "enable extensive testing.""Strict biosecurity measures have been implemented to prevent the further spread of disease," UK Chief Veterinary Officer Christine Middlemiss said in a statement. "While the risk to livestock remains low, I urge all animal owners to ensure scrupulous cleanliness is in place and to report any signs of infection to the Animal Plant Health Agency immediately."While UK officials believe that the spillover has been contained and there's no onward transmission among sheep, the latest spillover to a new mammalian species is a reminder of the virus's looming threat."Globally, we continue to see that mammals can be infected with avian influenza A(H5N1)," Meera Chand, Emerging Infection Lead at the UK Health Security Agency (UKHSA), said in a statement. In the US, the Department of Agriculture has documented hundreds of infections in wild and captive mammals, from cats to bears, raccoons, and harbor seals.Chand noted that, so far, the spillovers into animals have not easily transmitted to humans. For instance, in the US, despite extensive spread through the dairy industry, no human-to-human transmission has yet been documented. But, experts fear that with more spillovers and exposure to humans, the virus will gain more opportunities to adapt to be more infectious in humans.Chand says that UKHSA and other agencies are monitoring the situation closely in the event the situation takes a turn. 
"UKHSA has established preparations in place for detections of human cases of avian flu and will respond rapidly with NHS and other partners if needed."Beth MoleSenior Health ReporterBeth MoleSenior Health Reporter Beth is Ars Technicas Senior Health Reporter. Beth has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes. 0 Comments
  • MyTerms wants to become the new way we dictate our privacy on the web
    arstechnica.com
    Please sign here, here, and here, then 200 MyTerms wants to become the new way we dictate our privacy on the web It's not a "do not track" request, it's a set of terms you demand from sites. Kevin Purdy Mar 24, 2025 1:06 pm | 21 Credit: Photos.com/Getty Images Credit: Photos.com/Getty Images Story textSizeSmallStandardLargeWidth *StandardWideLinksStandardOrange* Subscribers only Learn moreAuthor, journalist, and long-time Internet freedom advocate Doc Searls wants us to stop asking for privacy from websites, services, and AI and start telling these things what we will and will not accept.Draft standard IEEE P7012, which Searls has nicknamed "MyTerms" (akin to "Wi-Fi"), is a Draft Standard for Machine Readable Personal Privacy Terms. Searls writes on his blog that MyTerms has been in the works since 2017, and a fully readable version should be ready later this year, following conference presentations at VRM Day and the Internet Identity Workshop (IIW).The big concept is that you are the first party to each contract you have with online things. The websites, apps, or services you visit are the second party. You arrive with either a pre-set contract you prefer on your device or pick one when you arrive, and it tells the site what information you will and will not offer up for access to content or services. Presumably, a site can work with that contract, modify itself to meet the terms, or perhaps tell you it can't do that.The easiest way to set your standards, at first, would be to pick something from Customer Commons, which is modeled on the copyleft concept of Creative Commons. Right now, there's just one example up: #NoStalking, which allows for ads but not with data usable for "targeted advertising or tracking beyond the primary service for which you provided it." Ad blocking is not addressed in Searls' post or IEEE summary, but it would presumably exist outside MyTermseven if MyTerms seems to want to reduce the need for ad blocking.Searls and his group are putting up the standards and letting the browsers, extension-makers, website managers, mobile platforms, and other pieces of the tech stack craft the tools. So long as the human is the first party to a contract, the digital thing is the second, a "disinterested non-profit" provides the roster of agreements, and both sides keep records of what they agreed to, the function can take whatever shape the Internet decides.Terms offered, not requests submittedSearls' and his group's standard is a plea for a sensible alternative to the modern reality of accessing web information. It asks us to stop pretending that we're all reading agreements stuffed full with opaque language, agreeing to thousands upon thousands of words' worth of terms every day and willfully offering up information about us. And, of course, it makes people ask if it is due to become another version of Do Not Track.Do Not Track was a request, while MyTerms is inherently a demand. Websites and services could, of course, simply refuse to show or provide content and data if a MyTerms agent is present, or they could ask or demand that people set the least restrictive terms.There is nothing inherently wrong with setting up a user-first privacy scheme and pushing for sites and software to do the right thing and abide by it. People may choose to stick to search engines and sites that agree to MyTerms. 
Time will tell if MyTerms can gain the kind of leverage Searls is aiming for.Kevin PurdySenior Technology ReporterKevin PurdySenior Technology Reporter Kevin is a senior technology reporter at Ars Technica, covering open-source software, PC gaming, home automation, repairability, e-bikes, and tech history. He has previously worked at Lifehacker, Wirecutter, iFixit, and Carbon Switch. 21 Comments
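The article doesn't specify the draft standard's wire format, so the following is a purely hypothetical Python sketch of how a user-side agent might represent a chosen term such as #NoStalking and record the agreement; every field name here is invented for illustration, not taken from IEEE P7012.

```python
# Hypothetical sketch only: the real MyTerms (IEEE P7012) format is not described in
# the article, so the structure and field names below are invented for illustration.
import json
from datetime import datetime, timezone

my_terms = {
    "standard": "IEEE-P7012-draft",           # assumed identifier
    "term": "NoStalking",                     # term picked from a roster like Customer Commons
    "first_party": "user",                    # the visitor is the first party to the contract
    "second_party": "example-news-site.com",  # hypothetical site being visited
    "allows": ["contextual-ads"],
    "forbids": ["targeted-advertising", "cross-site-tracking"],
    "agreed_at": datetime.now(timezone.utc).isoformat(),
}

# Both sides would keep a record of what was agreed to.
print(json.dumps(my_terms, indent=2))
```

Whatever shape the real standard ends up taking, the properties Searls describes are the ones sketched here: the user sets the terms as the first party, the term itself comes from a neutral nonprofit's roster, and both sides keep a record of the agreement.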
  • How a nephew's CD burner inspired early Valve to embrace DRM
    arstechnica.com
    Don't copy that CD How a nephews CD burner inspired early Valve to embrace DRM Valve's Harrington: Unchecked CD duplication "put our entire business model at risk." Kyle Orland Mar 24, 2025 1:16 pm | 6 Dude, you got Half-Life? Can you burn me a copy? Credit: pm-me-your-clocks / Reddit Dude, you got Half-Life? Can you burn me a copy? Credit: pm-me-your-clocks / Reddit Story textSizeSmallStandardLargeWidth *StandardWideLinksStandardOrange* Subscribers only Learn moreBack in 2004, the launch of Half-Life 2 would help launch Steam on the path to eventually becoming the de facto digital rights management (DRM) system for the vast majority of PC games. But years before that, with the 1998 launch of the original Half-Life, Valve cofounder and then-CMO Monica Harrington said she was inspired to take DRM more seriously by her nephew's reaction to the purchase of a new CD-ROM burner.PC Gamer pulled that interesting tidbit from a talk Harrington gave at last week's Game Developers Conference. In her remembering, Harrington's nephew had used funds she had sent for school supplies on a CD replicator, then sent her "a lovely thank you note essentially saying how happy he was to copy and share games with his friends."That was the moment Harrington said she realized this new technology was leading to a "generational shift" in both the availability and acceptability of PC game piracy. While game piracy and DRM definitely existed prior to CD burners (anyone else remember the large codewheels that cluttered many early PC game boxes?), Harrington said the new technologyand the blas attitude her nephew showed toward using it for piracycould "put our entire business model at risk."Shortly after Half-Life launched with a simple CD key verification system in place, Harrington said the company noticed a wave of message board complaints about the game not working. But when Valve cofounder (and Monica's then-husband) Mike Harrington followed up with those complaining posters, he found that "none of them had actually bought the game. So it turned out that the authentication system was working really well," Harrington said. Harrington (left) poses with Scott Walker. Credit: Monica Harrington / Medium Harrington (left) poses with Scott Walker. Credit: Monica Harrington / Medium In a post-talk interview with PC Gamer, Harrington noted that her ex-husband remembers the authentication scheme being in place before they discovered their nephew's newfound love of CD copying. Regardless, Monica said their nephew's experience definitely cemented a new understanding of how everyday players saw game piracy."He was 19 years old. He wasn't thinking about things like companies, business models, or anything like that," Harrington told PC Gamer. "He wasn't thinking about intellectual property. He later apologized profoundly, and I said, 'Oh my God, you have no idea how valuable that was.'"Unfortunately for Valve, the CD key system used in Half-Life DRM was pretty easy to bypass if you knew the right code to use (as our own forum members circa 2001 can attest). Still, it's easy to see how the extra layer of protection Valve put on Half-Life helped inspire Steam's somewhat more robust DRM system for Half-Life 2 years later.The rest of Harrington's GDC talk includes a lot more insider information about the early days of Valve, including a discussion of how rights issues with retail publisher Sierra almost caused Valve to abandon Half-Life 2 in the middle of development. 
VentureBeat has an incredibly detailed write-up of the talk in its entirety, which serves as a great follow-up to Harrington's own lengthy blog post remembrances from last summer.Kyle OrlandSenior Gaming EditorKyle OrlandSenior Gaming Editor Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from University of Maryland. He once wrote a whole book about Minesweeper. 6 Comments
  • Current SEC chair cast only vote against suing Elon Musk, report says
    arstechnica.com
    Musk and the SEC Current SEC chair cast only vote against suing Elon Musk, report says SEC case over late disclosure of Twitter stock buy still moving ahead, for now. Jon Brodkin Mar 24, 2025 1:19 pm | 12 Credit: Getty Images | NurPhoto Credit: Getty Images | NurPhoto Story textSizeSmallStandardLargeWidth *StandardWideLinksStandardOrange* Subscribers only Learn moreA new report says that when the Securities and Exchange Commission sued Elon Musk less than a week before President Trump's inauguration, only one memberthe current chairmanvoted against filing the lawsuit.The vote behind closed doors was 41, with three Democrats and Republican Hester Peirce joining to support the lawsuit over Musk's late disclosure of a Twitter stock purchase in early 2022, Reuters reported today. The one dissent reportedly came from Republican Mark Uyeda, who was subsequently named acting SEC chairman by Trump.Uyeda also asked SEC enforcement staff "to declare that a case they wanted to bring against Elon Musk was not motivated by politics, an unusual request that the staffers refused," Bloomberg reported last month. Reuters said its sources confirmed that "staff refused to sign the pledge, as it is not typical SEC practice."Reuters reported that two of its sources "said Uyeda and his fellow Republican Peirce took issue with what the SEC wanted Musk to paygiving up $150 million in alleged unjust enrichment plus a penalty. Nonetheless, Peirce joined with the three Democrats in voting to sue."An SEC spokesperson declined to comment on the vote when contacted by Ars today. The three current commissioners are Uyeda, Peirce, and Democrat Caroline Crenshaw. Gary Gensler, a Democrat who was chair under Biden, left upon Trump's inauguration. Democrat Jaime Lizrraga also resigned from the SEC in January.SEC v. Musk still moving aheadBefore Musk bought Twitter for $44 billion, he purchased a 9 percent stake in the company and failed to disclose it within 10 days as required under US law. "Defendant Elon Musk failed to timely file with the SEC a beneficial ownership report disclosing his acquisition of more than five percent of the outstanding shares of Twitter's common stock in March 2022, in violation of the federal securities laws," the SEC said in the January 2025 lawsuit filed in US District Court for the District of Columbia. "As a result, Musk was able to continue purchasing shares at artificially low prices, allowing him to underpay by at least $150 million for shares he purchased after his beneficial ownership report was due."The SEC lawsuit against Musk is still moving forward, at least for now. Musk last week received a summons giving him 21 days to respond, according to a court filing.Enforcement priorities are expected to change under the Trump administration, of course. Trump's pick to replace Gensler, Paul Atkins, is waiting for Senate confirmation. Atkins testified to Congress in 2019 that the SEC should reduce its disclosure requirements.Trump last month issued an executive order declaring sweeping power over independent agencies, including the SEC, Federal Trade Commission, and Federal Communications Commission. Trump also fired both FTC Democrats despite a US law and Supreme Court precedent stating that the president cannot fire commission members without good cause.Another Trump executive order targets the alleged "weaponization of the federal government" and ordered an investigation into Biden-era enforcement actions taken by the SEC, FTC, and Justice Department. 
The Trump order's language recalls Musk's oft-repeated claim that the SEC was "harassing" him.Jon BrodkinSenior IT ReporterJon BrodkinSenior IT Reporter Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry. 12 Comments
  • Should we be concerned about the loss of weather balloons?
    arstechnica.com
    Eyes in the sky Should we be concerned about the loss of weather balloons? Most of the time, not a big deal. But in critical times, the losses will be felt. Matt Lanza, The Eyewall Mar 24, 2025 1:31 pm | 25 A radiosonde with mailing instructions. Credit: NWS Pittsburgh

Due to staff reductions, retirements, and a federal hiring freeze, the National Weather Service has announced a series of suspensions involving weather balloon launches in recent weeks. The question is, will this significantly degrade forecasts in the United States and around the world?

On February 27, it was announced that balloon launches would be suspended entirely at Kotzebue, Alaska, due to staffing shortages. In early March, Albany, N.Y., and Gray, Maine, announced periodic disruptions in launches. Since March 7, it appears that Gray has not missed any balloon launches through Saturday. Albany, however, has missed 14 of them, all during the morning launch cycle (12z).

The kicker came on Thursday afternoon when it was announced that all balloon launches would be suspended in Omaha, Neb., and Rapid City, S.D., due to staffing shortages. Additionally, the balloon launches in Aberdeen, S.D.; Grand Junction, Colo.; Green Bay, Wis.; Gaylord, Mich.; North Platte, Neb.; and Riverton, Wyo., would be reduced to once a day from twice a day.

What are weather balloons?

In a normal time, weather balloons would be launched across the country and world twice per day, right at about 8 am ET and 8 pm ET (one hour earlier in winter), or what we call 12z and 00z. That's Zulu time, or noon and midnight in Greenwich, England. Rather than explain the whole reasoning behind why we use Zulu time in meteorology, here's a primer on everything you need to know. Weather balloons are launched around the world at the same time. It's a unique collaboration and example of global cooperation in the sciences, something that has endured for many years.

These weather balloons are loaded up with hydrogen or helium, soar into the sky, up to and beyond jet stream level, getting to a height of over 100,000 feet before they pop. Attached to the weather balloon is a tool known as a radiosonde, or "sonde" for short. This is basically a weather-sensing device that measures all sorts of weather variables like temperature, dewpoint, pressure, and more. Wind speed is usually derived from this based on GPS transmitting from the sonde. Sunday morning's upper air launch map showing a gaping hole over the Rockies and some of the Plains. Credit: University of Wyoming

What goes up must come down, so when the balloon pops, that radiosonde falls from the sky. A parachute is attached to it, slowing its descent and ensuring no one gets plunked on the head by one. If you find a radiosonde, it should be clearly marked, and you can keep it, let the NWS know you found it, or dispose of it properly. In some instances, there may still be a way to mail it back to the NWS (postage and envelope included and prepaid).

How this data is used

In order to run a weather model, you need an accurate snapshot of what we call the initial conditions. What is the weather at time = zero? That's your initialization point.
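To see how those fixed Zulu times map onto a local clock, here is a small sketch using Python's standard-library zoneinfo module (nothing NWS-specific is assumed); because 12z and 00z are fixed in UTC, their US Eastern equivalents shift by an hour between daylight saving time and standard time, as noted above.

```python
# Convert the standard 00z/12z radiosonde launch times to US Eastern time (Python 3.9+).
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

eastern = ZoneInfo("America/New_York")
for hour in (0, 12):  # 00z and 12z
    utc_time = datetime(2025, 3, 24, hour, tzinfo=timezone.utc)
    print(f"{hour:02d}z -> {utc_time.astimezone(eastern):%I:%M %p %Z}")

# During US daylight saving time this prints 08:00 PM EDT and 08:00 AM EDT;
# the same Zulu times fall an hour earlier (7 pm and 7 am EST) in winter.
```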
Not coincidentally, weather models are almost always run at 12z and 00z, to time in line with retrieving the data from these weather balloons. It's a critically important input to almost all weather modeling we use.

The data from balloon launches can be plotted on a chart called a sounding, which gives meteorologists a vertical profile of the atmosphere at a point. During severe weather season, we use these observations to understand the environment we are in, assess risks to model output, and make changes to our own forecasts. During winter, these observations are critical to knowing if a storm will produce snow, sleet, or freezing rain.

Observations from soundings are important inputs for assessing turbulence that may impact air travel, marine weather, fire weather, and air pollution. Other than some tools on some aircraft that we utilize, the data from balloon launches is the only real good verification tool we have for understanding how the upper atmosphere is behaving.

Have we lost weather balloon data before?

We typically lose out on a data point or two each day for various reasons when the balloons are launched. We've also been operating without a weather balloon launch in Chatham, Mass., for a few years because coastal erosion made the site too challenging and unsafe. Tallahassee, Fla., has been pausing balloon launches for almost a year now due to a helium shortage and inability to safely switch to hydrogen gas for launching the balloons. In Denver, balloon launches have been paused since 2022 due to the helium shortage as well.

Those are three sites, though, spread out across the country. We are doubling or tripling the number of sites without launches now, many in critical areas upstream of significant weather.

Can satellites replace weather balloons?

Yes and no. On one hand, satellites today are capable of incredible observations that can rival weather balloons at times. And they also cover the globe constantly, which is important. That being said, satellites cannot completely replace balloon launches. Why? Because the radiosonde data those balloon launches give us basically acts as a verification metric for models in a way that satellites cannot. It also helps calibrate derived satellite data to ensure that what the satellite is seeing is recorded correctly.

But in general, satellites cannot yet replace weather balloons. They merely act to improve upon what weather balloons do. A study done in the middle part of the last decade found that wind observations improved rainfall forecasts by 30 percent. The one tool at that time that made the biggest difference in improving the forecast was radiosondes. Has this changed since then? Yes, almost certainly. Our satellites have better resolution, are capable of getting more data, and send data back more frequently. So certainly, it's improved some. But enough? That's unclear.

An analysis done more recently on the value of dropsondes (the opposite of balloon launches; this time, the sensor is dropped from an aircraft instead of launched from the ground) in forecasting West Coast atmospheric rivers showed a marked improvement in forecasts when those targeted drops occur.
Another study in 2017 showed that aircraft observations actually did a good job filling gaps in the upper air data network. Even with aircraft observations, there were mixed studies done in the wake of the COVID-19 reduction in air travel that suggested no impact could be detected above usual forecast error noise, or that there was some regional degradation in model performance.

But to be quite honest, there have not been many studies that I can find in recent years that assess how the new breed of satellites has (or has not) changed the value of upper-air observations. The NASA GEOS model keeps a record of what data sources are of most impact to model verification with respect to 24-hour forecasts. Number two on the list? Radiosondes. This could be considered probably a loose comp to the GFS model, one of the major weather models used by meteorologists globally.

The verdict

In reality, the verdict in all this is to be determined, particularly statistically. Will it make a meaningful statistical difference in model accuracy? Over time, yes, probably, but not in ways that most people will notice day to day.

However, based on 20 years of experience and a number of conversations about this with others in the field, there are some very real, very serious concerns beyond statistics. One thing is that the suspended weather balloon launches are occurring in relatively important areas for weather impacts downstream. A missed weather balloon launch in Omaha or Albany won't impact the forecast in California. But what if a hurricane is coming? What if a severe weather event is coming? You'll definitely see impacts to forecast quality during major, impactful events. At the very least, these launch suspensions will increase the noise-to-signal ratio with respect to forecasts. The element with the second-highest impact on the NASA GEOS model? Radiosondes. Credit: NASA

In other words, there may be situations where you have a severe weather event expected to kickstart in one place, but the lack of knowing the precise location of an upper air disturbance in the Rockies, thanks to a suspended launch from Grand Junction, Colo., will lead to those storms forming 50 miles farther east than expected. Put another way, losing this data increases the risk profile for more people in terms of knowing about weather, particularly high-impact weather.

Let's say we have a hurricane in the Gulf that is rapidly intensifying, and we are expecting it to turn north and northeast thanks to a strong upper-air disturbance coming out of the Rockies, leading to landfall on the Alabama coast. What if the lack of upper-air observations has led to that disturbance being misplaced by 75 miles? Now, instead of Alabama, the storm is heading toward New Orleans. Is this an extreme example? Honestly, I don't think it is as extreme as you might think. We often have timing and amplitude forecast issues with upper-air disturbances during hurricane season, and the reality is that we may have to make some more frequent last-second adjustments now that we didn't have to in recent years. As a Gulf Coast resident, this is very concerning.

I don't want to overstate things. Weather forecasts aren't going to dramatically degrade day to day because we've reduced some balloon launches across the country. They will degrade, but the general public probably won't notice much difference 90 percent of the time. But that 10 percent of the time?
It's not that the differences will be gigantic. But the impact of those differences could very well be gigantic, putting more people in harm's way and increasing the risk profile for an awful lot of people. That's what this does: It increases the risk profile, it will lead to reduced weather forecast skill scores, and it may lead to an event that surprises a portion of the population that isn't used to being surprised in the 2020s. To me, that makes the value of weather balloons very, very significant, and I find these cuts to be extremely troubling. Should further cuts in staffing lead to further suspensions of weather balloon launches, we will see this problem magnify more often and involve bigger misses. In other words, the impacts here may not be linear, and repeated, growing losses of real-world observational data will lead to very significant degradation in weather model performance that may be noticed more often than described above. This story originally appeared on The Eyewall. Matt Lanza, The Eyewall The Eyewall is dedicated to covering tropical activity in the Atlantic Ocean, Caribbean Sea, and Gulf of Mexico. The site was founded in June 2023 by Matt Lanza and Eric Berger, who work together on the Houston-based forecasting site Space City Weather. 25 Comments
  • Genetic testing company 23andMe declares bankruptcy
    arstechnica.com
    23andIOU Genetic testing company 23andMe declares bankruptcy Former CEO wants to buy it, but the fate of its customers' genetic data is unclear. John Timmer Mar 24, 2025 10:55 am | 10 Credit: Westend61 Credit: Westend61 Story textSizeSmallStandardLargeWidth *StandardWideLinksStandardOrange* Subscribers only Learn moreOn Sunday, the genetic testing and heritage company 23andMe announced that it had entered Chapter 11 bankruptcy and was asking a court to arrange its sale. The company has been losing money for years, and a conflict between its board and CEO about future directions led to the entire board resigning back in September. Said CEO, Anne Wojcicki, has now resigned and will be pursuing an attempt to purchase the company and take it private.At stake is the fate of genetic data from the company's 15 million customers. The company has secured enough funding to continue operations while a buyer is found, and even though US law limits how genetic data can be used, the pending sale has raised significant privacy concerns.Risky businessThe company launched around the time that "gene chips" first allowed people to broadly scan the human genome for sites where variations were common. A few of these variants are associated with diseases, and 23andMe received approval to test for a number of these. But its big selling point for many people was the opportunity to explore their heritage. This relied on looking broadly at the patterns of variation and comparing those to the patterns typically found in different geographic regions. It's an imperfect analysis, but it can often provide a decent big-picture resolution of a person's ancestry.23andMe faced a number of challenges, though. For starters, the gene chips quickly became commodities, allowing a large range of competitors to enter the field, some of which had stronger backgrounds in things like linking genealogies to public records. This commodification also meant that many potential 23andMe partners in the pharmaceutical industry, who might be interested in gene/disease linkages, could affordably build their own databases or simply rely on some of the public resources that have since been developed, like the UK's Biobank.For many direct customers, the test was a "one and done" experienceonce they learned their heritage, there wasn't a strong enough draw for them to pay for any of 23andMe's other services. The company has recently focused on trying to get people to develop diet and fitness plans based on their genetic data, but that hasn't been enough to make the company profitable.Where will the data go?Former CEO Wojcicki, one of the company's founders, is convinced there is still a viable business there and has been interested in taking the company private for some time. Its sale may provide her the opportunity to do so, provided she can line up the finances for it.Given the business challenges it's not clear what other buyer might be interested in 23andMe as a company, raising the prospect that Wojcicki will be outbid by someone who is interested in the company's primary asset: the genetic data of 15 million people around the globe. Within the US, the use of this information is limited by the Genetic Information Nondiscrimination Act, which prevents its use in health insurance decisions and employment. But plenty of other uses may potentially be legal, and customers from overseas may have far fewer protections.John TimmerSenior Science EditorJohn TimmerSenior Science Editor John is Ars Technica's science editor. 
He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots. 10 Comments
  • Can we make AI less power-hungry? These researchers are working on it.
    arstechnica.com
    feeding the beast Can we make AI less power-hungry? These researchers are working on it. As demand surges, figuring out the performance of proprietary models is half the battle. Jacek Krywko Mar 24, 2025 7:00 am | 21 Credit: Igor Borisenko/Getty Images Credit: Igor Borisenko/Getty Images Story textSizeSmallStandardLargeWidth *StandardWideLinksStandardOrange* Subscribers only Learn moreAt the beginning of November 2024, the US Federal Energy Regulatory Commission (FERC) rejected Amazons request to buy an additional 180 megawatts of power directly from the Susquehanna nuclear power plant for a data center located nearby. The rejection was due to the argument that buying power directly instead of getting it through the grid like everyone else works against the interests of other users.Demand for power in the US has been flat for nearly 20 years. But now were seeing load forecasts shooting up. Depending on [what] numbers you want to accept, theyre either skyrocketing or theyre just rapidly increasing, said Mark Christie, a FERC commissioner.Part of the surge in demand comes from data centers, and their increasing thirst for power comes in part from running increasingly sophisticated AI models. As with all world-shaping developments, what set this trend into motion was visionquite literally.The AlexNet momentBack in 2012, Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, AI researchers at the University of Toronto, were busy working on a convolution neural network (CNN) for the ImageNet LSRVC, an image-recognition contest. The contests rules were fairly simple: A team had to build an AI system that could categorize images sourced from a database comprising over a million labeled pictures.The task was extremely challenging at the time, so the team figured they needed a really big neural netway bigger than anything other research teams had attempted. AlexNet, named after the lead researcher, had multiple layers, with over 60 million parameters and 650 thousand neurons. The problem with a behemoth like that was how to train it.What the team had in their lab were a few Nvidia GTX 580s, each with 3GB of memory. As the researchers wrote in their paper, AlexNet was simply too big to fit on any single GPU they had. So they figured out how to split AlexNets training phase between two GPUs working in parallelhalf of the neurons ran on one GPU, and the other half ran on the other GPU.AlexNet won the 2012 competition by a landslide, but the team accomplished something way more profound. The size of AI models was once and for all decoupled from what was possible to do on a single CPU or GPU. The genie was out of the bottle.(The AlexNet source code was recently made available through the Computer History Museum.)The balancing actAfter AlexNet, using multiple GPUs to train AI became a no-brainer. Increasingly powerful AIs used tens of GPUs, then hundreds, thousands, and more. But it took some time before this trend started making its presence felt on the grid. According to an Electric Power Research Institute (EPRI) report, the power consumption of data centers was relatively flat between 2010 and 2020. That doesnt mean the demand for data center services was flat, but the improvements in data centers energy efficiency were sufficient to offset the fact we were using them more.Two key drivers of that efficiency were the increasing adoption of GPU-based computing and improvements in the energy efficiency of those GPUs. That was really core to why Nvidia was born. 
We paired CPUs with accelerators to drive the efficiency onward, said Dion Harris, head of Data Center Product Marketing at Nvidia. In the 20102020 period, Nvidia data center chips became roughly 15 times more efficient, which was enough to keep data center power consumption steady.All that changed with the rise of enormous large language transformer models, starting with ChatGPT in 2022. There was a very big jump when transformers became mainstream, said Mosharaf Chowdhury, a professor at the University of Michigan. (Chowdhury is also at the ML Energy Initiative, a research group focusing on making AI more energy-efficient.)Nvidia has kept up its efficiency improvements, with a ten-fold boost between 2020 and today. The company also kept improving chips that were already deployed. A lot of where this efficiency comes from was software optimization. Only last year, we improved the overall performance of Hopper by about 5x, Harris said. Despite these efficiency gains, based on Lawrence Berkely National Laboratory estimates, the US saw data center power consumption shoot up from around 76 TWh in 2018 to 176 TWh in 2023.The AI lifecycleLLMs work with tens of billions of neurons approaching a number rivalingand perhaps even surpassingthose in the human brain. The GPT 4 is estimated to work with around 100 billion neurons distributed over 100 layers and over 100 trillion parameters that define the strength of connections among the neurons. These parameters are set during training, when the AI is fed huge amounts of data and learns by adjusting these values. Thats followed by the inference phase, where it gets busy processing queries coming in every day.The training phase is a gargantuan computational effortOpen AI supposedly used over 25,000 Nvidia Ampere 100 GPUs running on all cylinders for 100 days. The estimated power consumption is 50 GW-hours, which is enough to power a medium-sized town for a year. According to numbers released by Google, training accounts for 40 percent of the total AI model power consumption over its lifecycle. The remaining 60 percent is inference, where power consumption figures are less spectacular but add up over time.Trimming AI models downThe increasing power consumption has pushed the computer science community to think about how to keep memory and computing requirements down without sacrificing performance too much. One way to go about it is reducing the amount of computation, said Jae-Won Chung, a researcher at the University of Michigan and a member of the ML Energy Initiative.One of the first things researchers tried was a technique called pruning, which aimed to reduce the number of parameters. Yann LeCun, now the chief AI scientist at Meta, proposed this approach back in 1989, terming it (somewhat menacingly) the optimal brain damage. You take a trained model and remove some of its parameters, usually targeting the ones with a value of zero, which add nothing to the overall performance. You take a large model and distill it into a smaller model trying to preserve the quality, Chung explained.You can also make those remaining parameters leaner with a trick called quantization. Parameters in neural nets are usually represented as a single-precision floating point number, occupying 32 bits of computer memory. But you can change the format of parameters to a smaller one that reduces the amount of needed memory and makes the computation faster, Chung said.Shrinking an individual parameter has a minor effect, but when there are billions of them, it adds up. 
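To make the quantization arithmetic above concrete, here is a minimal sketch in Python with NumPy (my own illustration, not code from the article or the ML Energy Initiative). It maps a 32-bit floating-point weight tensor onto 8-bit integers plus a single scale factor, one common target format, cutting storage roughly fourfold at the cost of a small rounding error; the layer size and weight distribution are made-up assumptions.

import numpy as np

# Toy stand-in for one layer's weights: float32, as in most trained models.
rng = np.random.default_rng(0)
weights_fp32 = rng.normal(0.0, 0.02, size=(4096, 4096)).astype(np.float32)

# Symmetric int8 quantization: one scale maps the observed range onto [-127, 127].
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)

# Dequantize to see how much precision the rounding cost us.
recovered = weights_int8.astype(np.float32) * scale
mean_abs_error = np.abs(weights_fp32 - recovered).mean()

print(f"fp32 size: {weights_fp32.nbytes / 1e6:.1f} MB")
print(f"int8 size: {weights_int8.nbytes / 1e6:.1f} MB")   # roughly 4x smaller
print(f"mean absolute rounding error: {mean_abs_error:.6f}")

Run on one layer the savings look modest; applied to billions of parameters, the same four-to-one ratio is what shrinks a model's memory footprint and speeds up inference.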
Its also possible to do quantization-aware training, which performs quantization at the training stage. According to Nvidia, which implemented quantization training in its AI model optimization toolkit, this should cut the memory requirements by 29 to 51 percent.Pruning and quantization belong to a category of optimization techniques that rely on tweaking the way AI models work internallyhow many parameters they use and how memory-intensive their storage is. These techniques are like tuning an engine in a car to make it go faster and use less fuel. But there's another category of techniques that focus on the processes computers use to run those AI models instead of the models themselvesakin to speeding a car up by timing the traffic lights better.Finishing firstApart from optimizing the AI models themselves, we could also optimize the way data centers run them. Splitting the training phase workload evenly among 25 thousand GPUs introduces inefficiencies. When you split the model into 100,000 GPUs, you end up slicing and dicing it in multiple dimensions, and it is very difficult to make every piece exactly the same size, Chung said.GPUs that have been given significantly larger workloads have increased power consumption that is not necessarily balanced out by those with smaller loads. Chung figured that if GPUs with smaller workloads ran slower, consuming much less power, they would finish roughly at the same time as GPUs processing larger workloads operating at full speed. The trick was to pace each GPU in such a way that the whole cluster would finish at the same time.To make that happen, Chung built a software tool called Perseus that identified the scope of the workloads assigned to each GPU in a cluster. Perseus takes the estimated time needed to complete the largest workload on a GPU running at full. It then estimates how much computation must be done on each of the remaining GPUs and determines what speed to run them so they finish at the same. Perseus precisely slows some of the GPUs down, and slowing down means less energy. But the end-to-end speed is the same, Chung said.The team tested Perseus by training the publicly available GPT-3, as well as other large language models and a computer vision AI. The results were promising. Perseus could cut up to 30 percent of energy for the whole thing, Chung said. He said the team is talking about deploying Perseus at Meta, but it takes a long time to deploy something at a large company.Are all those optimizations to the models and the way data centers run them enough to keep us in the green? It takes roughly a year or two to plan and build a data center, but it can take longer than that to build a power plant. So are we winning this race or losing? Its a bit hard to say.Back of the envelopeAs the increasing power consumption of data centers became apparent, research groups tried to quantify the problem. A Lawerence Berkley Laboratory team estimated that data centers annual energy draw in 2028 would be between 325 and 580 TWh in the USthats between 6.7 and 12 percent of the total US electricity consumption. The International Energy Agency thinks it will be around 6 percent by 2026. Goldman Sachs Research says 8 percent by 2030, while EPRI claims between 4.6 and 9.1 percent by 2030.EPRI also warns that the impact will be even worse because data centers tend to be concentrated at locations investors think are advantageous, like Virginia, which already sends 25 percent of its electricity to data centers. 
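Stepping back to the GPU-pacing idea behind Perseus described a few paragraphs above, the intuition can be shown with a toy calculation (mine, not the tool's actual algorithm or measurements): every GPU except the most heavily loaded one can be slowed just enough that the whole cluster finishes together, and because dynamic power falls steeply with clock speed, that slack becomes energy savings. The workload sizes and the cubic power model below are illustrative assumptions only.

# Toy illustration of cluster pacing: slow under-loaded GPUs so all finish together.
workloads = [100.0, 80.0, 95.0, 60.0]     # arbitrary units of compute per GPU
full_speed = 1.0

# The straggler (largest workload) sets the finish time at full speed.
finish_time = max(workloads) / full_speed

for i, work in enumerate(workloads):
    speed = work / finish_time            # just fast enough to finish on time
    # Assume dynamic power scales roughly with the cube of clock speed.
    relative_power = speed ** 3
    print(f"GPU {i}: run at {speed:.2f}x speed, ~{relative_power:.2f}x power")

In this toy, the GPU with 60 percent of the straggler's work runs at 0.6x speed and, under the cubic assumption, draws only about a fifth of full power, yet the job as a whole finishes no later.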
In Ireland, data centers are expected to consume one-third of the electricity produced in the entire country in the near future. And thats just the beginning.Running huge AI models like ChatGPT is one of the most power-intensive things that data centers do, but it accounts for roughly 12 percent of their operations, according to Nvidia. That is expected to change if companies like Google start to weave conversational LLMs into their most popular services. The EPRI report estimates that a single Google search today uses around 0.3 watts of power, while a single Chat GPT query bumps that up to 2.9 watts. Based on those values, the report estimates that an AI-powered Google search would require Google to deploy 400,000 new servers that would consume 22.8 TWh per year.AI searches take 10x the electricity of a non-AI search, Christie, the FERC commissioner, said at a FERC-organized conference. When FERC commissioners are using those numbers, youd think there would be rock-solid science backing them up. But when Ars asked Chowdhury and Chung about their thoughts on these estimates, they exchanged looks and smiled.Closed AI problemChowdhury and Chung don't think those numbers are particularly credible. They feel we know nothing about what's going on inside commercial AI systems like ChatGPT or Gemini, because OpenAI and Google have never released actual power-consumption figures.They didnt publish any real numbers, any academic papers. The only number, 0.3 watts per Google search, appeared in some blog post or other PR-related thingy, Chodwhury said. We dont know how this power consumption was measured, on what hardware, or under what conditions, he said. But at least it came directly from Google.When you take that 10x Google vs ChatGPT equation or whateverone part is half-known, the other part is unknown, and then the division is done by some third party that has no relationship with Google nor with Open AI, Chowdhury said.Googles PR-related thingy was published back in 2009, while the 2.9-watts-per-ChatGPT-query figure was probably based on a comment about the number of GPUs needed to train GPT-4 made by Jensen Huang, Nvidias CEO, in 2024. That means the 10x AI versus non-AI search claim was actually based on power consumption achieved on entirely different generations of hardware separated by 15 years. But the number seemed plausible, so people keep repeating it, Chowdhury said.All reports we have today were done by third parties that are not affiliated with the companies building big AIs, and yet they arrive at weirdly specific numbers. They take numbers that are just estimates, then multiply those by a whole lot of other numbers and get back with statements like AI consumes more energy than Britain, or more than Africa, or something like that. The truth is they dont know that, Chowdhury said.He argues that better numbers would require benchmarking AI models using a formal testing procedure that could be verified through the peer-review process.As it turns out, the ML Energy Initiative defined just such a testing procedure and ran the benchmarks on any AI models they could get ahold of. The group then posted the results online on their ML.ENERGY Leaderboard.AI-efficiency leaderboardTo get good numbers, the first thing the ML Energy Initiative got rid of was the idea of estimating how power-hungry GPU chips are by using their thermal design power (TDP), which is basically their maximum power consumption. 
Using TDP was a bit like rating a cars efficiency based on how much fuel it burned running at full speed. Thats not how people usually drive, and thats not how GPUs work when running AI models. So Chung built ZeusMonitor, an all-in-one solution that measured GPU power consumption on the fly.For the tests, his team used setups with Nvidias A100 and H100 GPUs, the ones most commonly used at data centers today, and measured how much energy they used running various large language models (LLMs), diffusion models that generate pictures or videos based on text input, and many other types of AI systems.The largest LLM included in the leaderboard was Metas Llama 3.1 405B, an open-source chat-based AI with 405 billion parameters. It consumed 3352.92 joules of energy per request running on two H100 GPUs. Thats around 0.93 watt-hourssignificantly less than 2.9 watt-hours quoted for ChatGPT queries. These measurements confirmed the improvements in the energy efficiency of hardware. Mixtral 8x22B was the largest LLM the team managed to run on both Ampere and Hopper platforms. Running the model on two Ampere GPUs resulted in 0.32 watt-hours per request, compared to just 0.15 watt-hours on one Hopper GPU.What remains unknown, however, is the performance of proprietary models like GPT-4, Gemini, or Grok. The ML Energy Initiative team says it's very hard for the research community to start coming up with solutions to the energy efficiency problems when we dont even know what exactly were facing. We can make estimates, but Chung insists they need to be accompanied by error-bound analysis. We dont have anything like that today.The most pressing issue, according to Chung and Chowdhury, is the lack of transparency. Companies like Google or Open AI have no incentive to talk about power consumption. If anything, releasing actual numbers would harm them, Chowdhury said. But people should understand what is actually happening, so maybe we should somehow coax them into releasing some of those numbers.Where rubber meets the roadEnergy efficiency in data centers follows the trend similar to Moores lawonly working at a very large scale, instead of on a single chip, Nvidia's Harris said. The power consumption per rack, a unit used in data centers housing between 10 and 14 Nvidia GPUs, is going up, he said, but the performance-per-watt is getting better.When you consider all the innovations going on in software optimization, cooling systems, MEP (mechanical, electrical, and plumbing), and GPUs themselves, we have a lot of headroom, Harris said. He expects this large-scale variant of Moores law to keep going for quite some time, even without any radical changes in technology.There are also more revolutionary technologies looming on the horizon. The idea that drove companies like Nvidia to their current market status was the concept that you could offload certain tasks from the CPU to dedicated, purpose-built hardware. But now, even GPUs will probably use their own accelerators in the future. Neural nets and other parallel computation tasks could be implemented on photonic chips that use light instead of electrons to process information. Photonic computing devices are orders of magnitude more energy-efficient than the GPUs we have today and can run neural networks literally at the speed of light.Another innovation to look forward to is 2D semiconductors, which enable building incredibly small transistors and stacking them vertically, vastly improving the computation density possible within a given chip area. 
We are looking at a lot of these technologies, trying to assess where we can take them, Harris said. But where rubber really meets the road is how you deploy them at scale. Its probably a bit early to say where the future bang for buck will be.The problem is when we are making a resource more efficient, we simply end up using it more. It is a Jevons paradox, known since the beginnings of the industrial age. But will AI energy consumption increase so much that it causes an apocalypse? Chung doesn't think so. According to Chowdhury, if we run out of energy to power up our progress, we will simply slow down.But people have always been very good at finding the way, Chowdhury added.Jacek KrywkoAssociate WriterJacek KrywkoAssociate Writer Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry. 21 Comments
  • The 2025 Cadillac Optiq: Sensibly sized and improves on the Equinox EV
    arstechnica.com
    babby caddy The 2025 Cadillac Optiq: Sensibly sized and improves on the Equinox EV The AWD Optiq is quite competitive in the sub-$60,000 EV crossover segment. Michael Teo Van Runkle Mar 24, 2025 8:00 am | 5 We've previously tested Cadillac's mid-sized and supersized electric cars, now it's time for the smallest one, the Optiq. Credit: Michael Teo Van Runkle We've previously tested Cadillac's mid-sized and supersized electric cars, now it's time for the smallest one, the Optiq. Credit: Michael Teo Van Runkle Story textSizeSmallStandardLargeWidth *StandardWideLinksStandardOrange* Subscribers only Learn moreCadillac provided flights from Los Angeles to San Fransisco and accommodation so Ars could drive the Optiq. Ars does not accept paid editorial content.Badging on the rear of the new Cadillac Optiq may confuse some American buyers. This crossover is fully electric, so the alphanumeric nomenclature can't refer to engine displacementand not horsepower, either. Instead, 500E4 refers to 500 Newton-meters of torque, the metric units for more familiar pound-feet, plus dual-motor all-wheel drive. Rating the Optiq's output in kilowatts might have at least rendered something at least somewhat more comprehensible, but the designation hints at the Optiq's intended global market, which in turn reveals just how important this crossover EV is for Cadillac's future.The Equinox slots in as an upmarket variant of the Chevrolet Equinox EV, featuring a suite of enhancements unveiled at a Downtown Los Angeles preview last spring. With the exterior design, interior materials, and tech features all known quantities, I arrived to a drive program held in the San Francisco Bay Areaconcurrently with the Escalade IQmore curious to experience how much the Optiq's additional power and refinement can possibly improve on the already solid Equinox.On paper, the Caddy bests its Chevy counterpart despite using much of the same hardware. In this case, an 85-kilowatt-hour battery allows for an EPA-estimated range of 302 miles (486 km) despite output from dual motors matching the AWD Equinox at 300 hp (223 kW), just with a bit more in the torque department at 354 lb-ft (almost, but not quite, that 500Nm figure). Michael Teo Van Runkle Michael Teo Van Runkle Michael Teo Van Runkle Michael Teo Van Runkle Michael Teo Van Runkle Michael Teo Van Runkle The Optiq also weighs 5,192 pounds (2,355 kg) despite a diminutive footprint, proportions which a low and raked windshield angle only help to emphasize (in addition to improving aerodynamic drag). But in practice, the major mechanical difference between the Equinox and Optiq comes down to suspension tuning. Cadillac's marketing materials highlight both the luxury and sporty spirit of this crossover, and the shock dampers needed to live up to those somewhat divergent goals.On the rough roads of San Francisco, and then up to the headlands of Marin County, the Optiq first rode with more supple compliance, drowning out speed bumps and streetcar tracks with ease. Then, when the roads started winding, the adjustable drive modes let me switch up the character, as I set the steering to the lightest mode to avoid torque steer and ramp up feedback from the front tires. Of course, I also selected the maximum acceleration and brake responsiveness, then started hustling through a long series of corners.Almost more impressive than the suspension improvement versus the Equinox, which I drove in Michigan, the Optiq's lack of noise, vibration, and harshness (NVH) stood out throughout the drive. 
This in turn highlighted the Dolby Atmos-enabled sound system, made up of 19 AKG speakers controlled via a 33-inch touchscreen. Though the Escalade IQ absolutely blew the smaller Optiq out of the water, despite lacking Atmos for model-year 2025 due to development timelines, I still wanted to test everything from Pink Floyd's tripped-out Comfortably Numb to the peculiar pitches of Animal Collective, the electro bass of Major Lazer, and some shriller dance pop by Lady Gaga. The 33-inch display is common across most new Cadillacs. CarPlay is absent, but the Google Maps integration is very good. Michael Teo Van Runkle The 33-inch display is common across most new Cadillacs. CarPlay is absent, but the Google Maps integration is very good. Michael Teo Van Runkle There's physical controls for the infotainment if you don't want to use the touchscreen. Michael Teo Van Runkle There's physical controls for the infotainment if you don't want to use the touchscreen. Michael Teo Van Runkle The speakers were let down by the lack of options available via the online streaming we tried during our test drive. Michael Teo Van Runkle The speakers were let down by the lack of options available via the online streaming we tried during our test drive. Michael Teo Van Runkle There's physical controls for the infotainment if you don't want to use the touchscreen. Michael Teo Van Runkle The speakers were let down by the lack of options available via the online streaming we tried during our test drive. Michael Teo Van Runkle Searching through the Amazon Music app hoping to find songs optimized for Dolby Atmos surround sound proved nearly impossible, though. If I owned an Optiq, I'd need to create playlists in advance rather than just aimlessly scrolling (or relying on curated options from Cadillac and Dolby). That type of mindset shift applies to much of EV life, in the end, similar to how Optiq's total range dropping about 5 percent versus the Equinox FWD's 319 miles (513 km) should matter less than many urban buyers may imagine.For the additional torque and dual-motor AWD, the Optiq starts at $55,595 (or $61,695 for this loaded Optiq Sport 2). Compare that to the AWD Equinox with 285 miles of range (459 km) and a starting sticker of $49,400which represents a big jump up from the FWD at $34,995. The Optiq includes far more standard features, especially Super Cruise hands-free driving, which I thoroughly enjoyed activating on the 101 freeway crossing the Golden Gate Bridge.I also experienced zero screen glitches or blackouts, so hopefully the Optiq's additional development time solved some of the struggles seen on the Equinox and its other General Motors (ne Ultium) EVs. Yet similarly to the Equinox and Blazer EVs, and even the Acura ZDX, the Optiq's driving dynamics overall can easily fall onto the more anaesthetized side of the luxury-sporty divide. Sluggish initial responsiveness to the accelerator pedal emphasizes that impression, though the Optiq can sprint about quite quickly once underway. Certainly don't expect the instant torque punch of other EVs, even ones with similar total power output ratings, though. The Super Cruise hands-free driver assist is quite mature now. Michael Teo Van Runkle The Super Cruise hands-free driver assist is quite mature now. Michael Teo Van Runkle The 2025 Optiq is still a CCS1 EV, not NACS. Michael Teo Van Runkle The 2025 Optiq is still a CCS1 EV, not NACS. Michael Teo Van Runkle Will this badging make sense to anyone? Does it matter? 
Michael Teo Van Runkle Will this badging make sense to anyone? Does it matter? Michael Teo Van Runkle The 2025 Optiq is still a CCS1 EV, not NACS. Michael Teo Van Runkle Will this badging make sense to anyone? Does it matter? Michael Teo Van Runkle To me, that personality works best for a Cadillacjust climb in and experience the glidepath of electric luxury. And all without needing to make serious sacrifices to the laws of physics, versus an Escalade IQ that weighs just about twice as much. The back seats fit my 6-foot-1-inch (1.85 m) frame with plenty of leg and headroom, while the rear trunk allows for plenty of storage, including two plastic recesses behind each wheelwell that a Caddy rep certainly didn't describe as perfect for bringing melons home from the grocery store. As with those other GM EVs, though, the Optiq does lack CarPlay. But that controversial decision seems less and less important to me every time I drive a Chevy, GMC, or now Cadillac EV, since the onboard Google software actually does a remarkably accurate job of predicting range and finding charge stations. The lack of CarPlay may turn off some buyers, but compared to more typical Cadillac prices, this crossover looks downright reasonablenot to mention when compared to the rest of the industry, including the now pass Tesla Model Y, the lackluster Audi Q4, and more conceptual Genesis GV60. 5 Comments
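For readers puzzling over the 500E4 badging discussed at the top of this review, the unit conversions are quick to check. The snippet below is just arithmetic on the figures quoted in the review; the conversion factors are standard, and nothing here comes from Cadillac.

# Unit check on the Optiq's "500E4" badging and quoted specs.
NM_PER_LBFT = 1.3558   # newton-meters per pound-foot
KW_PER_HP = 0.7457     # kilowatts per mechanical horsepower

torque_lbft = 354                       # quoted torque
torque_nm = torque_lbft * NM_PER_LBFT   # ~480 Nm: close to, but not quite, 500
power_hp = 300
power_kw = power_hp * KW_PER_HP         # ~224 kW, matching the quoted 223 kW

print(f"{torque_lbft} lb-ft = {torque_nm:.0f} Nm")
print(f"{power_hp} hp = {power_kw:.0f} kW")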
  • David Blaine shows his hand in Do Not Attempt
    arstechnica.com
    now you see him David Blaine shows his hand in Do Not Attempt NatGeo docuseries follows Blaine around the world to learn the secrets of ordinary people doing remarkable feats. Jennifer Ouellette Mar 23, 2025 4:17 pm | 1 Magician David Blaine smiles while running his hand through a flame. Credit: National Geographic/Dana Hayes Magician David Blaine smiles while running his hand through a flame. Credit: National Geographic/Dana Hayes Story textSizeSmallStandardLargeWidth *StandardWideLinksStandardOrange* Subscribers only Learn moreOver the course of his long career, magician and endurance performer David Blaine has taken on all kinds of death-defying feats: catching a bullet in his teeth, fasting for 44 days, or holding his breath for a record-breaking 17 minutes and 4 seconds, to name a few. Viewers will get to see a different side of Blaine as he travels the world to meet kindred spirits from a wide range of cultures in David Blaine Do Not Attempt, a new six-episode docuseries from National Geographic.(Some spoilers below.)The series was shot over three calendar years (2022-2024) in nine different countries, and features Blaine interacting with, and learning from, all manner of daredevils, athletes, street performers, and magicians. In Southeast Asia, for instance, he watches practitioners of an Indonesian martial art called Debus manipulate razor blades in their mouths and eat nails. (There is no trick to this, just conditioned endurance to pain, as Blaine discovers when he attempts to eat nails: his throat was sore for days.) He braves placing scorpions on his body, breaks a bottle with his head, and sets himself on fire in Brazil while jumping off a high bridge.One of the elements that sets this series apart from Blaine's previous magical specials is his willingness to be filmed practicing and training to do the various featured stunts, including early failed attempts. This makes him seem more vulnerable and immensely likableeven if it made him personally uncomfortable during filming. David Blaine and Amandeep Singh prepare to break bottles with their fists. National Geographic David Blaine and Amandeep Singh prepare to break bottles with their fists. National Geographic Fire Ramesh demonstrates spitting a fireball for Blaine. National Geographic/Aditya Kapoor Fire Ramesh demonstrates spitting a fireball for Blaine. National Geographic/Aditya Kapoor Blaine performs a triple suicide slide with Sam Sam Thubane and Kayla Oliphant Blaine performs a triple suicide slide with Sam Sam Thubane and Kayla OliphantFire Ramesh demonstrates spitting a fireball for Blaine. National Geographic/Aditya Kapoor Blaine performs a triple suicide slide with Sam Sam Thubane and Kayla Oliphant Blaine poses with a "bee beard" and a deck of cards. National Geographic/Doug McKenzie Blaine learns the trick to sticking a knife up his nose. National Geographic "I've always kept that part hidden," Blaine told Ars. "Normally I work for a few years and I develop [a stunt] until I feel pretty good about it, and then I go and do the stunt and push myself as far as possible. But in this scenario, it was so many places, so many people, so many events, so many feats, so many things to learn so fast. So it was me in a way that I never liked to show myself: awkward and uncomfortable and screaming and laughing. It's the things that as a magician, I always hide. As a magician, I try to be very monotone and let the audience react, but I was in that audience reacting. 
So for this series, I was the spectator to the magic, and it was for me very uncomfortable. But I was watching these amazing performerswhat I consider to be magicians."Safety firstThe task of keeping Blaine and the entire crew safe in what are unquestionably dangerous situations falls to safety expert Sebastian "Bas" Pot. "I joke that my title is Glorifed Nanny," Pot told Ars. "I specialize in taking people to very remote locations where they want to do insane things. I have three basic rules: no one dies, everyone gets paid, and we all smile and laugh every day. If I achieve those three things, my job is done." He deliberately keeps himself out of the shot; there is only one scene in Do Not Attempt where we see Pot's face as he's discussing the risks of a stunt with Blaine.Blaine has always taken on risks, but because he has historically hidden his preparation from public view, viewers might not realize how cautious he really is. "What people tend to forget about guys like David is that they're very calculated," said Pot. The biggest difference between working with Blaine and other clients? "Normally I'll do everything, I will never ask anyone to do anything that I wouldn't do myself," said Pot. "David is taking huge risks and there's a lot that he does that I wouldn't do."Like Blaine, Pot also emphasized the importance of repetition to safety. In addition, "A huge amount of it is keeping the calm on set, listening and observing and not getting caught up in the excitement of what's going on," he said" While he uses some basic technology for tasks like measuring wind speed, checking for concussion, or monitoring vital signs, for the most part keeping the set safe "is very much about switching off from the technology," he said. Ken Stornes leaps from a platform in a Norwegian death dive. National Geographic/Dana Hayes Ken Stornes leaps from a platform in a Norwegian death dive. National Geographic/Dana Hayes David Blaine jumps belly-first into a pile of snow. National Geographic David Blaine jumps belly-first into a pile of snow. National Geographic Inka Cagnasso coaches Blaine inside a wind tunnel. National Geographic/Dana Hayes Inka Cagnasso coaches Blaine inside a wind tunnel. National Geographic/Dana Hayes David Blaine jumps belly-first into a pile of snow. National Geographic Inka Cagnasso coaches Blaine inside a wind tunnel. National Geographic/Dana Hayes Salla Hakanp walks under the ice National Geographic Blaine pounds against a frozen-over hole in the ice. National Geographic/Dana Hayes And when everyone else on set is watching Blaine, "I'm looking outwards, because I've got enough eyes on him," said Pot. There was only one bad accident during filming, involving a skydiving crew member during the Arctic Circle episode who suffered a spinal fracture after a bad landing. The crew member recuperated and was back in the wind tunnel practicing within a month.This is the episode where Blaine attempts a Viking "death dive" into a snow drift under the tutelage of a Norwegian man named Ken Stornes, with one key difference: Stornes jumps from much greater heights. He also participates in a sky dive. But the episode mostly focuses on Blaine's training with free divers under the ice to prepare for a stunt in which Blaine swims from one point under Finnish ice to another, pulling himself along with a rope while holding his breath. A large part of his motivation for attempting it was his failed 2006 "Drowned Alive" seven-day stunt in front of Lincoln Center in New York. 
(He sustained liver and kidney damage as a result.)"One of my favorite quotes is Churchill, when he says, 'Success is the ability to go from one failure to the next failure with enthusiasm,'" said Blaine. "That's what this entire series is. It's these incredible artists and performers and conservationists and people that do these incredible feats, but it's the thousands of hours of work, training, failure, repeat that you don't see that makes what they do seem magical. There's no guidebook for what they're doing. But they've developed these things to the point that when I was watching them, I'm crying with joy. I can't believe that what I'm seeing is really happening in front of my eyes. It is magical. And it's because of the amount of repetition, work, failure, repeat that they put in behind the curtain that you don't see."This time, Blaine succeeded. "It was an incredible experience with these artists that have taken this harsh environment and turned it into a wonderland," said Blaine of his Arctic experience. "The free divers go under three and a half feet of ice, hold their breath. There's no way out. They have to find the exit point.""When you stop and look, you forget that you're in this extreme environment and suddenly it's the most beautiful surroundings, unlike anything that I've ever seen," he said. "It's almost like being in outer space. And when you're in that extreme and dangerous situation, there's this camaraderie, they're all in it together. At the same time, they're all very alert. There's no distractions. Nobody's thinking about messages, phones, bills. Everybody's right there in that moment. And you're very aware of everything around you in a way that normally in the real world doesn't exist." David Blaine watches as Paty and Jaki Valente dive off the Joatinga Bridge in Brazil. National Geographic David Blaine watches as Paty and Jaki Valente dive off the Joatinga Bridge in Brazil. National Geographic Andre Franco lights Blaine's shins on fire. National Geographic/Dana Hayes Andre Franco lights Blaine's shins on fire. National Geographic/Dana Hayes David Blaine watches as Paty and Jaki Valente dive off the Joatinga Bridge in Brazil. National Geographic Andre Franco lights Blaine's shins on fire. National Geographic/Dana Hayes Blaine is covered in fire gel as he prepares to light himself on fire. National Geographic/Dan Winters Blaine is covered in fire gel as he prepares to light himself on fire. National Geographic/Dan Winters Blaine walks off the edge of Joatinga Bridge while on fire. Blaine walks off the edge of Joatinga Bridge while on fire.Blaine is covered in fire gel as he prepares to light himself on fire. National Geographic/Dan Winters Blaine walks off the edge of Joatinga Bridge while on fire.Blaine admits that his attitude towards risk has changed somewhat with age. "I'm older and I have a daughter, and therefore I don't want to do something where, oh, it went wrong and it's the worst case scenario," he said. "So I have been very careful. If something seemed like the risk wasn't worth it, I backed away. For some of these things, I would just have to watch, study, learn, take time off, come back. I wouldn't do it unless I felt that the master who was sharing their skillset with me felt that I could pull it off. There was a trust and I was able to listen and follow exactly. That ability to listen to directions and commit to something is a very necessary part to pulling something off like this."Granted, he didn't always listen. 
When he deliberately attracted a swarm of bees to make a "bee beard," he was advised to wear a white tee shirt to avoid getting stung. But black is Blaine's signature color and he decided to stick with it. He did indeed get stung about a dozen times but took the pain in stride. "He takes responsibility for him," Pot (who is a beekeeper) said of that decision. "I'd tell a crew member to go change their tee shirt and they would."The dedication to proper preparation and training is evident throughout Do Not Attempt, but particularly in the Southeast Asia-centric episode where Blaine attempts to kiss a venomous King Cobrawhat Pot considers to be the most dangerous stunt in the series. "The one person I've ever had die was a snake expert in Venezuela years ago, who got bitten by his own snake because he chose not to follow the safety protocols we had put in place," said Pot.Kissing a cobraSo there were weeks of preparation before Blaine even attempted the stunt, guided by an Indonesian Debus practitioner named Fiitz, who can read the creatures' body language so effortlessly he seems to be dancing with the snakes. The final shot (see clip below) took ten days to film. Anti-venom was naturally on hand, but while anti-venom might save your life if you're bitten by a King Cobra, "the journey you're going to on will be hell," Pol said. "You can still have massive necrosis, lose a limb, it might take weeksthere's no guarantees at all. [to recover]." And administering anti-venom can induce cardiac shock if it's not done correctly. "You don't want some random set medic reading instructions off Google on how to give anti-venom" said Pot. David Blaine kisses a King Cobra with the expert guidance of Debus practitioner Fiitz. Blaine's genuine appreciation for the many performers he encounters in his journey is evident in every frame. "[The experience] changed me in a way that you can't simply explain," Blaine said. "It was incredible to discover these kindred spirits all around the world, people who had these amazing passions. Many of them had to go against what everybody said was possible. Many of them had to fail, repeat, embarrass themselves, risk everything, and learn. That was one of the greatest experiences: discovering this unification of all these people from all different parts of the world that I felt had that theme in common. It was nice to be there firsthand, getting a glimpse into their world or seeing what drives them.""The other part that was really special: I became a person that gets to watch real magic happening in front of my eyes," Blaine continued. "When I'm up in the sky watching [a skydiver named] Inka, I'm actually crying tears of joy because it's so compelling and so beautiful. So many of these places around the world had these amazing performers. Across the board, each place, every continent, every person, every performer has given me a gift that I'll cherish for the rest of my life."David Blaine Do Not Attempt premieres tonight on National Geographic and starts streaming tomorrow on Disney+ and Hulu.Jennifer OuelletteSenior WriterJennifer OuelletteSenior Writer Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban. 1 Comments
  • This launcher is about to displace the V-2 as Germany's largest rocket
    arstechnica.com
    True north This launcher is about to displace the V-2 as Germanys largest rocket Isar Aerospace's first Spectrum rocket will launch from Andya Spaceport in Norway. Stephen Clark Mar 23, 2025 12:06 pm | 9 Isar Aerospace's Spectrum rocket on the launch pad at Andya Spaceport in Norway. Credit: Isar Aerospace/Robin Brillert/Wingmen Media Isar Aerospace's Spectrum rocket on the launch pad at Andya Spaceport in Norway. Credit: Isar Aerospace/Robin Brillert/Wingmen Media Story textSizeSmallStandardLargeWidth *StandardWideLinksStandardOrange* Subscribers only Learn moreSeven years ago, three classmates at the Technical University of Munich believed their student engineering project might hold some promise in the private sector.At the time, one of the co-founders, Daniel Metzler, led a team of 40 students working on rocket engines and launching sounding rockets. Josef Fleischmann was on the team that won the first SpaceX Hyperloop competition. Together with another classmate, Markus Brandl, they crafted rocket parts in a campus workshop before taking the leap and establishing Isar Aerospace, named for the river running through the Bavarian capital.Now, Isar's big moment has arrived. The company's orbital-class first rocket, named Spectrum, is set to lift off from a shoreline launch pad in Norway as soon as Monday.The three-hour launch window opens at 12:30 pm local time in Norway, or 7:30 am EDT in the United States. "The launch date remains subject to weather, safety and range infrastructure," Isar said in a statement. Isar's Spectrum rocket rolls out to its launch pad in Norway. Credit: Isar Aerospace Isar said it received a launch license from theNorwegian Civil Aviation Authority on March 14, following the final qualification test on the Spectrum rocket in February to validate its readiness for flight.Notably, this will be the first orbital launch attempt from a launch pad in Western Europe. The French-run Guiana Space Center in South America is the primary spaceport for European rockets. Virgin Orbit staged an airborne launch attempt from an airport in the United Kingdom in 2023, and the Plesetsk Cosmodrome is located in European Russia.No guaranteesSuccess is never assured on the inaugural launch of a new rocket. Isar is the first in a wave of European launch startups to arrive at this point. The company developed the Spectrum rocket with mostly private funding, although Isar received multimillion-euro investments from the European Space Agency, the German government, and the NATO Innovation Fund.All told, Isar says it has raised more than 400 million euros, or $435 million at today's currency exchange rate, more than any other European launch startup.We are approaching the most important moment of our journey so far, and I would like to thank all our team, partners, customers and investors who have been accompanying and trusting us," said Daniel Metzler, Isar's co-founder and CEO, in a statement.Most privately-developed rockets have failed to reach orbit on the first try. Several US launch companies that evolved in a similar mold as Isarsuch as Rocket Lab, Firefly Aerospace, and Astrafaltered on the way to orbit on their rockets' first flights."With this mission, Isar Aerospace aims to collect as much data and experience as possible on its in-house developed launch vehicle. 
It is the first integrated test of all systems," said Alexandre Dalloneau, Isar's vice president of mission and launch operations."The test results will feed into the iterations and development of future Spectrum vehicles, which are being built and tested in parallel," Isar said in a statement. Look familiar? Isar Aerospace's Spectrum rocket is powered by nine first stage engines arranged in an "octaweb" configuration patterned on SpaceX's Falcon 9 rocket. Credit: Isar Aerospace/Wingmen Media Europe has struggled to regain its footing after SpaceX took over the dominant position in the global commercial launch market, a segment led for three decades by Europe's Ariane rocket family before SpaceX proved the reliability of the lower-cost, partially reusable Falcon 9 launcher. The continent's new Ariane 6 rocket, funded by ESA and built by a consortium owned by multinational firms Airbus and Safran, is more expensive than the Falcon 9 and years behind schedule. It finally debuted last year.One ton to LEOIsar's Spectrum rocket is not as powerful as the SpaceX's Falcon 9 or Arianespace's Ariane 6. But even SpaceX had to start somewhere. Its small Falcon 1 rocket failed three times before tasting success. Spectrum is somewhat larger and more capable than Falcon 1, with performance in line with Firefly's Alpha rocket.The fully assembled Spectrum rocket stands about 92 feet (28 meters) tall and measures more than 6 feet (2 meters) in diameter. The expendable launcher is designed to haul payloads up to 1 metric ton (2,200 pounds) into low-Earth orbit. Spectrum is powered by nine Aquila engines on its first stage, and one engine on the second stage, burning a mixture of propane and liquid oxygen propellants.There are no customer satellites aboard the first Spectrum test flight. The rocket will climb into a polar orbit from Andya Spaceport in northern Norway, but Isar hasn't published a launch timeline or the exact parameters of the target orbit.While modest in size next to Europe's Ariane launcher family, Isar's Spectrum is the largest German rocket since the V-2, the World War II weapon of terror launched by Nazi Germany against targets in Great Britain, Belgium, and other places. In the 80 years since the war, German industry developed a handful of small sounding rockets, and manufactured upper stages for Ariane rockets.But German governments have long shunned spending on launchers at levels commensurate with the nation's place as a top contributor to ESA. France took the lead in the continent's postwar rocket industry, providing the lion's share of funding for Ariane, and taking responsibility for building engines and booster stages.Now, 80 years to the week since the last V-2 launch of World War II, Germany again has a homegrown liquid-fueled rocket on the launch pad. This time, it's for a much different purpose.As a first step, Isar and other companies in Europe are vying to inject competition with Arianespace into the European launch market. This will begin with small government-funded satellites that otherwise would have likely launched on rideshare flights by SpaceX or Arianespace.In 2022, the German space agency (known as DLR) announced the selection of research and demo payloads slated to fly on Spectrum's second launch. 
The Norwegian Space Agency revealed a contract earlier this month for Isar to launch a pair of satellites for the country's Arctic Ocean Surveillance program.Within the next few days, ESA is expected to release an "invitation to tender" for European industry to submit proposals for the European Launcher Challenge. This summer, ESA will select winners from Europe's crop of launch startups to demonstrate their rockets can deliver the agency's scientific satellites to orbit. This is the first time ESA has experimented with a fully commercial business model, with launch service contracts to private companies. Isar is a leading contender to win the launcher challenge, alongside other European companies like Rocket Factory Augsburg, HyImpulse, MaiaSpace, and others.Previously, ESA has provided billions of euros to Europe's big incumbent rocket companies for development of new generations of Ariane rockets. Now, ESA wants follow the path of NASA, which has used fixed-price service contracts to foster commercial cargo and crew transportation to the International Space Station, and most recently, privately-owned landers on the Moon."Whatever the outcome, Isar Aerospace's upcoming Spectrum launch will be historic: the first commercial orbital launch from mainland Europe," Josef Aschbacher, ESA's director general, posted on X. "The support and co-funding the European Space Agency has given Isar Aerospace and other launch service provider startups is paying off for increased autonomy in Europe. Wishing Isar Aerospace a great launch day with fair weather and most importantly, that the data they receive from the liftoff will speed next iterations of their rockets."Toni Tolker-Nielsen, ESA's acting director of space transportation, called this moment a "paradigm shift" for Europe's launcher strategy."In the last 40 years, we have had these ESA-developed launchers that we have been relying on," Tolker-Nielsen told Ars in an interview. "So we started with Ariane 1 up to Ariane 6. Vega C came onboard. And it's been working like that for the last 40 years. Now, we are moving into in the '30s, and the next decades, to have privately-developed launchers."Isar Aerospace's first Spectrum rocket will lift off from the remote Andya Spaceport in Norway, a gorgeous location that might be the world's most picturesque launch site. Nestled on the western coast of an island inside the Arctic Circle, Andya offers an open path over the Norwegian Sea for rockets to fly north, where they can place satellites into polar orbit.The spaceport is operated by Andya Space, a company 90 percent owned by the Norwegian government through the Ministry for Trade, Industry, and Fisheries. Until now, Andya Spaceport has been used for launches of suborbital sounding rockets. The geography of Norway permits northerly launches from Andya Spaceport. Credit: Andya Space No better time than nowIsar's first launch comes amid an abrupt turn in European strategic policy as the continent's leaders struggle with how to respond to moves by President Donald Trump in his first two months in office. In recent weeks, the Trump administration put European leaders on their heels with sudden policy reversals and unpredictable statements on Ukraine, NATO, and the US government's long-term backstopping of European security.Friedrich Merz, set to become Germany's next chancellor,said last monththat Europe should strive to "achieve independence" from the United States. 
"It is clear that the Americans, at least this part of the Americans, this administration, are largely indifferent to the fate of Europe."Last week, Merz shepherded a bill through German parliament to amend the country's constitution, allowing for a significant increase in German defense spending. The incoming chancellor said the change is "nothing less than the first major step towards a new European defense community."The erosion of Europe's trust in the Trump administration prompted rumors that the US government could trigger a "kill switch" to turn off combat capabilities of F-35 fighter jets sold to US allies. This would have previously seemed like a far-fetched conspiracy theory, but some European officials felt compelled to make statements denying the kill switch reports. Still, the recent turbulence in trans-Atlantic relations has some US allies rethinking their plans to buy more US-made fighter jets and weapons systems."Reliable and predictable orders should go to European manufacturers whenever possible," Merz said. Robert Habeck, Germany's vice chancellor and economics minister, tours Isar Aerospace in Ottobrunn, Germany, in 2023. : German Economics Minister Robert Habeck (Bndnis 90/Die Grnen) walks past a prototype rocket during a visit to the space company Isar Aerospace. Credit: Marijan Murat/picture alliance via Getty Images This uncertainty extends to space, where it is most apparent in the launch industry. SpaceX, founded and led by Trump ally Elon Musk, dominates the global commercial launch business. European governments have repeatedly turned to SpaceX to launch multiple defense and scientific satellites over the last several years, while Europe encountered delays with its homegrown Ariane 6 and Vega rockets.Until 2022, Europe and Russia jointly operated Soyuz rockets from the Guiana Space Center in South America to deploy government and commercial payloads to orbit. The partnership ended with Russia's invasion of Ukraine.Europe's flagship Ariane 5 rocket retired in 2023, a year before its replacementthe Ariane 6debuted on its first test flight from the Guiana Space Center. The first operational flight of the Ariane 6 delivered a French military spy satellite to orbit March 6. The smaller Vega C rocket successfully launched in December, two years after officials grounded the vehicle due to an in-flight failure.ESA funded development of the Ariane 6 and Vega C in partnership with ArianeGroup, a joint venture between Airbus and Safran, and the Italian defense contractor Avio.For the moment, Europe's launcher program is back on track to provide autonomous access to space, a capability European officials consider a strategic imperative. Philippe Baptiste, France's minister for research and higher education, said after the Ariane 6 flight earlier this month that the launch was "proof" of European space sovereignty."The return of Donald Trump to the White House, with Elon Musk at his side, already has significant consequences on our research partnerships, on our commercial partnerships," Baptiste said in his remarkably pointed prepared remarks. "If we want to maintain our independence, ensure our security, and preserve our sovereignty, we must equip ourselves with the means for strategic autonomy, and space is an essential part of this."The problem? Ariane 6 and Vega C are costly, lack a path to reusability, and aren't geared to match SpaceX's blistering launch cadence. If Europe wants autonomous access to space, European taxpayers will have to pay a premium. 
Isar's Spectrum also isn't reusable, but European officials hope competition from new startups will produce fresh launch options, and perhaps stimulate an inspired response from Europe's entrenched launch companies.
"In today's geopolitical climate, our first test flight is about much more than a rocket launch: Space is one of the most critical platforms for our security, resilience and technological advancement," Metzler said. "In the next days, Isar Aerospace will lay the foundations to regain much needed independent and competitive access to space from Europe."
Tolker-Nielsen, in charge of ESA's space transportation division, said this is the first of many steps for Europe to develop a thriving commercial launch sector.
"This launch is a milestone, which is very important," he said. "It's the first conclusion of all this work, so I will be looking carefully on that. I cross my fingers that it goes well."
Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world's space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.
  • Trump administration's blockchain plan for USAID is a real head-scratcher
    arstechnica.com
a solution in search of a problem Trump administration's blockchain plan for USAID is a real head-scratcher Whatever happens to USAID, it will apparently "leverage blockchain technology." Vittoria Elliott, wired.com Mar 23, 2025 7:05 am Credit: Pete Kiehart via Getty
According to a memo circulating among State Department staff and reviewed by WIRED, the Trump administration plans to rename the United States Agency for International Development (USAID) as US International Humanitarian Assistance (IHA), and to bring it directly under the secretary of state. The document, on which Politico first reported, states that as part of its reorganization, the agency will "leverage blockchain technology" as part of its procurement process.
"All distributions would also be secured and traced via blockchain technology to radically increase security, transparency, and traceability," the memo reads. "This approach would encourage innovation and efficiency among implementing partners and allow for more flexible and responsive programming focused on tangible impact rather than simply completing activities and inputs."
The memo does not make clear what specifically this means: whether it would encompass doing cash transfers in some kind of cryptocurrency or stablecoin, for example, or simply mean using a blockchain ledger to track aid disbursement.
The memo comes as staffers at USAID are trying to understand their future. The agency was an early target of the so-called Department of Government Efficiency (DOGE), which has effectively been headed by centibillionaire Elon Musk. Shortly after President Trump's inauguration, the State Department put the entire agency's staff on administrative leave, slashed its workforce, and halted a portion of payments to partner organizations around the world, including those doing lifesaving work. Since then, a federal judge has issued a preliminary injunction against the dismantling of the agency, but the memo appears to indicate that the administration has plans to continue its mission of drastically cutting USAID and fully folding it into the State Department.
The plans for the blockchain have also caught staffers off guard. Few blockchain-based projects have managed to achieve large-scale use in the humanitarian sector. Linda Raftree, a consultant who helps humanitarian organizations adopt new technology, says there's a reason for that: the incorporation of blockchain technology is often unnecessary.
"It feels like a fake technological solution for a problem that doesn't exist," she says. "I don't think we were ever able to find an instance where people were using blockchain where they couldn't use existing tools."
Giulio Coppi, a senior humanitarian officer at the nonprofit Access Now who has researched the use of blockchain in humanitarian work, says that blockchain technologies, while sometimes effective, offer no obvious advantages over other tools organizations could use, such as an existing payments system or another database tool. "There's no proven advantage that it's cheaper or better," he says. "The way it's been presented is this tech solutionist approach that has been proven over and over again to not have any substantial impact in reality."
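The memo leaves the "ledger" reading undefined, so purely as a thought experiment, here is a minimal sketch of the simplest thing that phrase could mean: an append-only log in which each disbursement record carries a hash of the previous record, so later tampering is detectable. Everything here, including the field names and amounts, is invented for illustration; as the critics quoted above note, an ordinary database with access controls can do the same job.

    # Hypothetical illustration only: a hash-linked, append-only disbursement log.
    import hashlib
    import json

    def add_entry(ledger, record):
        """Append a record, chaining it to the hash of the previous entry."""
        prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
        body = {"record": record, "prev_hash": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        ledger.append({**body, "hash": digest})

    def verify(ledger):
        """Recompute every hash; any edited or reordered entry breaks the chain."""
        prev_hash = "0" * 64
        for entry in ledger:
            body = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["hash"] != digest:
                return False
            prev_hash = entry["hash"]
        return True

    ledger = []
    add_entry(ledger, {"partner": "example-ngo", "amount_usd": 50000, "purpose": "cash assistance"})
    add_entry(ledger, {"partner": "example-ngo", "amount_usd": 25000, "purpose": "medical supplies"})
    print(verify(ledger))  # True; changing any amount after the fact makes this False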
There have been, however, some successful instances of using blockchain technology in the humanitarian sector. In 2022, the United Nations High Commissioner for Refugees (UNHCR) ran a small pilot to give cash assistance to Ukrainians displaced by the Russia-Ukraine war in a stablecoin. Other pilots have been tested in Kenya by the Kenya Red Cross Society. The International Committee of the Red Cross, which works with the Kenya team, also helped to develop the Humanitarian Token Solution (HTS).
One representative from an NGO that uses blockchain technology, but wasn't authorized to speak to the media with regard to issues relating to USAID, says that particularly with money transfers, stablecoins can be faster and easier than other methods of reaching communities impacted by a disaster. "However, introducing new systems means you're setting up a new burden for the many organizations that USAID partners with," they say. The relative cost of new systems is harder for small NGOs, which would often include the kind of local organizations that would be at the front line of response to disasters.
The proposed adoption of blockchain technology seems related to an emphasis on exerting tight controls over aid. The memo seems, for example, to propose that funding should be contingent on outcomes, reading, "Tying payment to outcomes and results rather than inputs would ensure taxpayer dollars deliver maximum impact." A USAID employee, who asked to remain anonymous because they were not authorized to speak to the media, says that many of USAID's contracts already function this way, with organizations being paid after performing their work. However, that's not possible in all situations. "Those kinds of agreements are often not flexible enough for the environments we work in," they say, noting that in conflict or disaster zones, situations can change quickly, meaning that what an organization may be able to do or need to do can fluctuate.
Raftree says this language appears to be misleading, and bolsters claims made by Musk and the administration that USAID was corrupt. "It's not like USAID was delivering tons of cash to people who hadn't done things," she says.
This story originally appeared on wired.com.
Vittoria Elliott, wired.com. Wired.com is your essential daily guide to what's next, delivering the most original and complete take you'll find anywhere on innovation's impact on technology, science, business and culture.
  • Sometimes, it's the little tech annoyances that sting the most
    arstechnica.com
veni, vidi, vici Sometimes, it's the little tech annoyances that sting the most macOS wouldn't remember mouse settings? This means war! Nate Anderson Mar 22, 2025 7:07 am Credit: Getty Images
Anyone who has suffered the indignity of a splinter, a blister, or a paper cut knows that small things can sometimes be hugely annoying. You aren't going to die from any of these conditions, but it's still hard to focus when, say, the back of your right foot is rubbing a new blister against the inside of your not-quite-broken-in-yet hiking boots.
I found myself in the computing version of this situation yesterday, when I was trying to work on a new Mac Mini and was brought up short by the fact that my third mouse button (that is, clicking on the scroll wheel) did nothing. This was odd, because I have for many years assigned this button to "Mission Control" on macOS, a feature that tiles every open window on your machine, making it quick and easy to switch apps. When I got the new Mini, I immediately added this to my settings. Boom!
And yet there I was, a couple hours later, clicking the middle mouse button by reflex and getting no result. This seemed quite odd; had I only imagined that I made the settings change? I made the alteration again in System Settings and went back to work.
But after a reboot later that day to install an OS update, I found that my shortcut setting for Mission Control had once again been wiped away. This wasn't happening with any other settings changes, and it was strangely vexing.
When it happened a third time, I switched into full "research and destroy the problem" mode. One of my Ars colleagues commiserated with me, writing, "This kind of powerful-annoying stuff is just so common. I swear at least once every few months, some shortcut or whatever just stops working, and sometimes, after a week or so, it starts working again. No rhyme, reason, or apparent causality except that computers are just [unprintable expletives]."
But even if computers are [unprintable expletives], their problems have often been encountered and fixed by some other poor soul.
I turned to the Internet for help... and immediately stumbled upon an Apple discussion thread called "MacOS mouse shortcuts are reset upon restart or shutdown." The poster, and most of those replying, said that the odd behavior had only appeared in macOS Sequoia. One reply claimed to have identified the source of the bug and offered a fix:
1. Set your Mission Control mouse shortcuts as usual.
2. Go to ~/Library/Containers/com.apple.Desktop-Settings.extension/Data/Library/Preferences folder.
3. Copy com.apple.symbolichotkeys.plist file.
4. Go to ~/Library/Preferences
5. Paste the com.apple.symbolichotkeys.plist file. Re-write the previous one.
The bug is that the macOS core seems to save the shortcut preferences directly into the step 4 folder, but it should be saved in the step 2 and 4 folders at the same time.
Unfortunately, I didn't have any such .plist file in ~/Library/Containers/com.apple.Desktop-Settings.extension/Data/Library/Preferences. However, a second intrepid user found the file in a different location, writing:
This solution worked for me but NOTE that to find the plist file to copy I had to go to ~/Library/Containers/Desktop & Dock/Data/Library/Preferences instead. For whatever reason the other folder (com.apple.Desktop-Settings.extension) didn't exist. Perhaps they moved it in 15.3 (but didn't fix the bug!)?
Here, at last, was the answer. I found the proper .plist file and copied it over to ~/Library/Preferences, then I rebooted the computer. Everything worked.
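For anyone who would rather script the copy than click through Finder, here is a minimal sketch of the workaround described in that thread. It assumes the thread's two candidate folder names, which may differ between macOS versions, so check the paths on your own machine before overwriting anything.

    # Sketch of the forum workaround: copy the shortcuts plist from the
    # container folder over the stale copy in ~/Library/Preferences.
    # The candidate source paths come from the thread quoted above and are
    # not guaranteed to exist on every macOS version.
    import shutil
    from pathlib import Path

    home = Path.home()
    plist_name = "com.apple.symbolichotkeys.plist"

    candidates = [
        home / "Library/Containers/com.apple.Desktop-Settings.extension/Data/Library/Preferences" / plist_name,
        home / "Library/Containers/Desktop & Dock/Data/Library/Preferences" / plist_name,
    ]
    destination = home / "Library/Preferences" / plist_name

    source = next((path for path in candidates if path.exists()), None)
    if source is None:
        print("No shortcut plist found in either container folder; nothing copied.")
    else:
        shutil.copy2(source, destination)  # overwrites the stale copy
        print(f"Copied {source} over {destination}; reboot and test the shortcut.")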
Sweet success! Jovial victoriousness! Ebullient wonderment!
...and then I went on with my day.
Who moved my cheese?
This trivial annoyance reminded me of several things.
First, despite Apple's "it just works" ethos, it doesn't always work; Macs are computers like any other, their software filled with spaghetti code and poorly defined variables. Errors creep in. This one was a bit surprising, however, in that it has already persisted across three point releases of the operating system even though the fix is in Apple's own forums and appears to be as simple as storing a file in the correct spot. I am tempted to draw grandiose lessons from the incident about whether Apple's attention to iOS is leading to sloppiness in macOS, but I won't.
Second, we really take the working of these ultra-complex systems for granted. I'm old enough to remember the Bad Old Days of trying to get Wing Commander running on a PC and having to muck about with HIMEM.SYS files just to get the game to load. Young Nate, with his modem and shared phone line that could lose an hour-long download if someone else in the house picked up the phone, would have loved to deal only with small problems like a mouse setting not sticking. So perhaps Current Nate has gotten soft.
Third, it is really irritating to have one's muscle memory routine interrupted. Every time I clicked that middle mouse button and nothing happened, I felt the sharp shock of annoyance that my devices should betray me in this way. Even though my brain knew that the clicks were no longer producing their expected results, my fingers clicked anyway out of instinct until I broke down, went back into System Settings, and made the change again. The brain/body system rebels against anything that forces its expected reactions to change. (Though over time, of course, humans are great adapters. But in the short term... not so much.)
Fourth, the corollary to this level of irritation is the feeling of triumph when the problem, however small it might be, is successfully fixed. In the slightly less technical realm, our refrigerator has two sliding plastic doors that twist closed together in order to protect (?) the cheese drawer. But years of cheese consumption had apparently led to tiny bits of cheese getting down into the mechanism and gumming it up something terrible. Last week, I spent 45 minutes taking the doors and the drawer apart, washing all the pieces in hot, soapy water, and scrubbing at them for a good 10 minutes until the last of the ground-in cheese residue was cleared away. When I reassembled the whole contraption, and the doors swung as smoothly as if they were new, I felt the same sense of elation as when I defeated the macOS mouse bug. There's something about figuring out how a system works, identifying the current problem, and then addressing that problem that just tickles the brain in a certain way.
Fifth and finally, I was reminded that for all of the Internet's many (many!) problems, it is still full of people taking time to share their knowledge just to help others. So thanks, random Internet commenters who showed me how to fix my problem! I owe you a debt of gratitude.
Since I enjoyed fixing my little problem so much, I thought I'd share it with you, gentle Ars readers.
What minor tech irritations have you overcome recently?
Nate Anderson is the deputy editor at Ars Technica. His most recent book is In Emergency, Break Glass: What Nietzsche Can Teach Us About Joyful Living in a Tech-Saturated World, which is much funnier than it sounds.
  • Anthropic's new AI search feature digs through the web for answers
    arstechnica.com
DEEP RESEARCH JR. Anthropic's new AI search feature digs through the web for answers Anthropic Claude just caught up with a ChatGPT feature from 2023, but will it be accurate? Benj Edwards Mar 21, 2025 3:08 pm Credit: Anthropic
On Thursday, Anthropic introduced web search capabilities for its AI assistant Claude, enabling the assistant to access current information online. Previously, the latest AI model that powers Claude could only rely on data absorbed during its neural network training process, having a "knowledge cutoff" of October 2024.
Claude's web search is currently available in feature preview for paid users in the United States, with plans to expand to free users and additional countries in the future. After users enable the feature in their profile settings, Claude will automatically determine when to use web search to answer a query or find more recent information.
The new feature works with Claude 3.7 Sonnet and requires a paid subscription. The addition brings Claude in line with competitors like Microsoft Copilot and ChatGPT, which already offer similar functionality. ChatGPT first added the ability to grab web search results as a plugin in March 2023, so this new feature is a long time coming. A Claude AI web search demonstration video from Anthropic.
"This was sorely needed," wrote independent AI researcher Simon Willison on his blog. "ChatGPT, Gemini and Grok all had this ability already, and despite Anthropic's excellent model quality it was one of the big remaining reasons to keep other models in daily rotation."
Interestingly, the web search feature seems somewhat "agentic" in the sense that it can autonomously loop through several attempts at searching the web to drill down for an answer; you might call it a very simplified version of the "Deep Research" agent trend that recently came to Google Gemini and ChatGPT. A screenshot example of what Anthropic Claude's web search process looks like, captured March 21, 2025. Credit: Benj Edwards
Anthropic positions the web search feature as potentially good for various use cases, including for "sales teams" doing account planning, "financial analysts" assessing market data, "researchers" building grant proposals, and "shoppers" comparing prices and features of products.
Anthropic's blog mentions, "Sales teams can transform account planning and drive higher win rates through informed conversations with prospects by analyzing industry trends to learn key initiatives and pain points," which sounds like Claude may have had a hand in writing about itself.
Caution over citations and sources
Claude users should be warned that large language models (LLMs) like those that power Claude are notorious for sneaking in plausible-sounding confabulated sources. A recent survey of citation accuracy by LLM-based web search assistants showed a 60 percent error rate. That particular study did not include Anthropic's new search feature because it took place before this current release.
When using web search, Claude provides citations for information it includes from online sources, ostensibly helping users verify facts. From our informal and unscientific testing, Claude's search results appeared fairly accurate and detailed at a glance, but that is no guarantee of overall accuracy. Anthropic did not release any search accuracy benchmarks, so independent researchers will likely examine that over time.
A screenshot example of what Anthropic Claude's web search citations look like, captured March 21, 2025. Credit: Benj Edwards
Even if Claude search were, say, 99 percent accurate (a number we are making up as an illustration), the 1 percent chance it is wrong may come back to haunt you later if you trust it blindly. Before accepting any source of information delivered by Claude (or any AI assistant) for any meaningful purpose, vet it very carefully using multiple independent non-AI sources.
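To make the stakes of that made-up 99 percent figure concrete, the per-answer error rate compounds across every answer you trust. A quick back-of-the-envelope calculation (ours, not Anthropic's, and assuming errors are independent):

    # Illustrative arithmetic only, using the article's invented 99 percent figure.
    per_answer_accuracy = 0.99
    for n in (10, 100, 1000):
        p_at_least_one_error = 1 - per_answer_accuracy ** n
        print(f"{n:>4} trusted answers: {p_at_least_one_error:.0%} chance at least one is wrong")
    # Roughly 10% after 10 answers, 63% after 100, and near-certainty after 1,000.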
A partnership with Brave under the hood
Behind the scenes, it looks like Anthropic partnered with Brave Search to power the search feature. Brave Search comes from Brave Software, a company perhaps best known for its web browser app, and markets itself as a "private search engine," which feels in line with how Anthropic likes to market itself as an ethical alternative to Big Tech products.
Simon Willison discovered the connection between Anthropic and Brave through Anthropic's subprocessor list (a list of third-party services that Anthropic uses for data processing), which added Brave Search on March 19.
He further demonstrated the connection on his blog by asking Claude to search for pelican facts. He wrote, "It ran a search for 'Interesting pelican facts' and the ten results it showed as citations were an exact match for that search on Brave." He also found evidence in Claude's own outputs, which referenced "BraveSearchParams" properties.
The Brave engine under the hood has implications for individuals, organizations, or companies that might want to block Claude from accessing their sites, since presumably Brave's web crawler is doing the web indexing. Anthropic did not mention how sites or companies could opt out of the feature. We have reached out to Anthropic for clarification.
Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.
  • California bill would force ISPs to offer 100Mbps plans for $15 a month
    arstechnica.com
Affordable broadband California bill would force ISPs to offer 100Mbps plans for $15 a month Like New York law, Calif. bill demands cheap plans for people with low incomes. Jon Brodkin Mar 21, 2025 4:46 pm Credit: Adrienne Bresnahan | Getty Images
A proposed state law in California would force Internet service providers to offer $15 monthly plans to people with low incomes. The bill is similar to a New York law that took effect in January but has a higher minimum speed requirement: The proposed $15 plans for low-income California residents would have to come with download speeds of 100Mbps and upload speeds of 20Mbps.
Broadband lobby groups fear that many states will enact such requirements after New York won a multiyear court battle to enforce its law. The Supreme Court has rejected telecom industry challenges to the New York law twice.
The California bill was proposed in January by Democratic Assemblymember Tasha Boerner, but the original version simply declared an intent to require affordable home Internet service and contained no specifics on required speeds or prices. The requirement for specific speeds and a $15 price is being added to the bill with an amendment that was provided to Ars today by Boerner's office. The amendment should be in the official record by early next week, a Boerner spokesperson said.
"Every California Internet service provider shall offer for purchase to eligible households within their California service territory affordable home Internet service that meets minimum speed requirements," the amended bill says. Each ISP would also be required to "make commercially reasonable efforts to promote and advertise" these plans, including via a "prominent display" on its website and promotional materials sent to consumers in eligible households.
The amendment defines affordable home Internet service as a plan costing no more than $15 a month, including all recurring taxes and fees. The speed requirements are at least 100Mbps downstream and 20Mbps upstream, with "sufficient speed and latency to support distance learning and telehealth services." The plans would have to be offered to households in which at least one resident participates in a qualified public assistance program.
Several states consider price requirements
While the California proposal will face opposition from ISPs and is not guaranteed to become law, the amended bill has higher speed requirements for the $15 plan than the existing New York law that inspired it. The New York law lets ISPs comply either by offering $15 broadband plans with download speeds of at least 25Mbps, or $20-per-month service with 200Mbps speeds. The New York law doesn't specify minimum upload speeds.
AT&T stopped offering its 5G home Internet service in New York entirely instead of complying with the law. But AT&T wouldn't be able to pull home Internet service out of California so easily because it offers DSL and fiber Internet in the state, and it is still classified as a carrier of last resort for landline phone service.
The California bill says ISPs must file annual reports starting January 1, 2027, to describe their affordable plans and specify the number of households that purchased the service and the number of households that were rejected based on eligibility verification. The bill seems to assume that ISPs will offer the plans before 2027 but doesn't specify an earlier date.
Boerner's office told us the rule would take effect on January 1, 2026. Boerner's office is also working on an exemption for small ISPs, but hasn't settled on final details.
Meanwhile, a Massachusetts bill proposes requiring that ISPs provide at least 100Mbps speeds for $15 a month or 200Mbps for $20 a month. A Vermont bill would require 25Mbps speeds for $15 a month or 200Mbps for $20 a month.
Telco groups told the Supreme Court last year that the New York law "will likely lead to more rate regulation absent the Court's intervention" as other states will copy New York. They subsequently claimed that AT&T's New York exit proves the law is having a negative effect. But the Supreme Court twice declined to hear the industry challenge, allowing New York to enforce the law.
Jon Brodkin is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.
  • Cloudflare turns AI against itself with endless maze of irrelevant facts
    arstechnica.com
Follow the left wall Cloudflare turns AI against itself with endless maze of irrelevant facts New approach punishes AI companies that ignore "no crawl" directives. Benj Edwards Mar 21, 2025 5:14 pm Credit: iambuff via Getty Images
On Wednesday, web infrastructure provider Cloudflare announced a new feature called "AI Labyrinth" that aims to combat unauthorized AI data scraping by serving fake AI-generated content to bots. The tool will attempt to thwart AI companies that crawl websites without permission to collect training data for large language models that power AI assistants like ChatGPT.
Cloudflare, founded in 2009, is probably best known as a company that provides infrastructure and security services for websites, particularly protection against distributed denial-of-service (DDoS) attacks and other malicious traffic.
Instead of simply blocking bots, Cloudflare's new system lures them into a "maze" of realistic-looking but irrelevant pages, wasting the crawler's computing resources. The approach is a notable shift from the standard block-and-defend strategy used by most website protection services. Cloudflare says blocking bots sometimes backfires because it alerts the crawler's operators that they've been detected.
"When we detect unauthorized crawling, rather than blocking the request, we will link to a series of AI-generated pages that are convincing enough to entice a crawler to traverse them," writes Cloudflare. "But while real looking, this content is not actually the content of the site we are protecting, so the crawler wastes time and resources."
The company says the content served to bots is deliberately irrelevant to the website being crawled, but it is carefully sourced or generated using real scientific facts, such as neutral information about biology, physics, or mathematics, to avoid spreading misinformation (whether this approach effectively prevents misinformation, however, remains unproven). Cloudflare creates this content using its Workers AI service, a commercial platform that runs AI tasks.
Cloudflare designed the trap pages and links to remain invisible and inaccessible to regular visitors, so people browsing the web don't run into them by accident.
A smarter honeypot
AI Labyrinth functions as what Cloudflare calls a "next-generation honeypot." Traditional honeypots are invisible links that human visitors can't see but bots parsing HTML code might follow. But Cloudflare says modern bots have become adept at spotting these simple traps, necessitating more sophisticated deception. The false links contain appropriate meta directives to prevent search engine indexing while remaining attractive to data-scraping bots.
"No real human would go four links deep into a maze of AI-generated nonsense," Cloudflare explains. "Any visitor that does is very likely to be a bot, so this gives us a brand-new tool to identify and fingerprint bad bots."
This identification feeds into a machine learning feedback loop: data gathered from AI Labyrinth is used to continuously enhance bot detection across Cloudflare's network, improving customer protection over time. Customers on any Cloudflare plan, even the free tier, can enable the feature with a single toggle in their dashboard settings.
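To make the honeypot idea concrete, here is a toy sketch of the general technique: an index page that carries a link humans never see, trap pages that tell well-behaved search engines to stay away, and a log of whatever follows the link anyway. This is our own illustration built with Flask, not Cloudflare's implementation, and the routes and markup are invented for the example.

    # Toy honeypot sketch (not Cloudflare's code): hidden link, noindex trap
    # pages, and logging of clients that follow the trap.
    from flask import Flask, request

    app = Flask(__name__)
    suspected_bots = set()

    @app.route("/")
    def index():
        # The trap link is present in the HTML a crawler parses but hidden
        # from human visitors via CSS.
        return (
            "<html><body><h1>Welcome</h1>"
            '<a href="/trap/start" style="display:none" aria-hidden="true">archive</a>'
            "</body></html>"
        )

    @app.route("/trap/<page>")
    def trap(page):
        # Anything that reaches a trap page is logged as a likely scraper.
        suspected_bots.add((request.remote_addr, request.headers.get("User-Agent", "unknown")))
        # Meta directives keep legitimate search engines from indexing the
        # filler, while another link keeps an aggressive crawler digging.
        return (
            '<html><head><meta name="robots" content="noindex, nofollow"></head>'
            f"<body><p>Filler content for {page}.</p>"
            f'<a href="/trap/{page}-next">more</a></body></html>'
        )

    if __name__ == "__main__":
        app.run(port=8080)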
A growing problem
Cloudflare's AI Labyrinth joins a growing field of tools designed to counter aggressive AI web crawling. In January, we reported on "Nepenthes," software that similarly lures AI crawlers into mazes of fake content. Both approaches share the core concept of wasting crawler resources rather than simply blocking them. However, while Nepenthes' anonymous creator described it as "aggressive malware" meant to trap bots for months, Cloudflare positions its tool as a legitimate security feature that can be enabled easily on its commercial service.
The scale of AI crawling on the web appears substantial, according to Cloudflare's data, which lines up with anecdotal reports we've heard from sources. The company says that AI crawlers generate more than 50 billion requests to its network daily, amounting to nearly 1 percent of all web traffic it processes. Many of these crawlers collect website data to train large language models without permission from site owners, a practice that has sparked numerous lawsuits from content creators and publishers.
The technique represents an interesting defensive application of AI, protecting website owners and creators rather than threatening their intellectual property. However, it's unclear how quickly AI crawlers might adapt to detect and avoid such traps, potentially forcing Cloudflare to increase the complexity of its deception tactics. Also, wasting AI company resources might not please people who are critical of the perceived energy and environmental costs of running AI models.
Cloudflare describes this as just "the first iteration" of using AI defensively against bots. Future plans include making the fake content harder to detect and integrating the fake pages more seamlessly into website structures. The cat-and-mouse game between websites and data scrapers continues, with AI now being used on both sides of the battle.
Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.
  • Measles arrives in Kansas, spreads quickly in undervaccinated counties
    arstechnica.com
    "Fluid situation" Measles arrives in Kansas, spreads quickly in undervaccinated counties Since a single case last week, at least 9 more have been reported with more pending. Beth Mole Mar 21, 2025 5:53 pm | 24 Boxes and vials of the Measles, Mumps, Rubella Virus Vaccine at a vaccine clinic put on by Lubbock Public Health Department on March 1, 2025 in Lubbock, Texas. Credit: Getty | Jan Sonnenmair Boxes and vials of the Measles, Mumps, Rubella Virus Vaccine at a vaccine clinic put on by Lubbock Public Health Department on March 1, 2025 in Lubbock, Texas. Credit: Getty | Jan Sonnenmair Story textSizeSmallStandardLargeWidth *StandardWideLinksStandardOrange* Subscribers only Learn moreMeasles has arrived in Kansas and is spreading swiftly in communities with very low vaccination rates. Since last week, the state has tallied 10 cases across three counties with more pending.On March 13, health officials announced the state's first measles case since 2018. The case was reported in Stevens County, which sits in the southwest corner of the state. As of now, it's unclear if the case is connected to the mushrooming outbreak that began in West Texas.That initial case in Kansas already shows potential to mushroom on its own. Stevens County contains two school districts, both of which have extremely low vaccination rates among kindergartners. By the time children enter kindergarten, they should have their two doses of Measles, Mumps, and Rubella (MMR) vaccine, which together are 97 percent effective against measles. In the 20232024 school year, rates of kindergartners with their two shots stood at 83 percent in the Hugoton school district and 80 percent in the Moscow school district, according to state data. Those rates are significantly below the 95 percent threshold needed to block the onward community spread of measlesone of the most infectious viruses known to humankind.As of today, March 21, Stevens County has reported three more casestwo confirmed and one epidemiologically linked probable casebringing the total to four cases. And there's more to come."We do have pending cases at this time," the county's health department wrote in a Facebook update this afternoon. "We want to keep our community informedthis is a fluid situation and we are focused on working closely with the identified positives and their contacts."On the west border of Stevens sits Morton County, which on Wednesday reported three confirmed cases linked to the first case reported last week in Stevens. Morton County has two school districts, Elkhart and Rolla. The vaccination coverage for kindergartners in Elkhart in 20232024 was also a low 83 percent, while the coverage in Rolla was not reported.On Thursday, the county on the northern border of Stevens, Grant County, also reported three confirmed cases, which were also linked to the first case in Stevens. Grant County is in a much better position to handle the outbreak than its neighbors; its one school district, Ulysses, reported 100 percent vaccination coverage for kindergartners in the 20232024 school year.Outbreak riskSo far, details about the fast-rising cases are scant. The Kansas Department of Health and Environment (KDHE) has not published another press release about the cases since March 13. 
Outbreak risk
So far, details about the fast-rising cases are scant. The Kansas Department of Health and Environment (KDHE) has not published another press release about the cases since March 13. Ars Technica reached out to KDHE for more information but did not hear back before this story's publication.
The outlet KWCH 12 News out of Wichita published a story Thursday, when there were just six cases reported in Grant and Stevens Counties, saying that all six were in unvaccinated people and that no one had been hospitalized. On Friday, KWCH updated the story to note that the case count had increased to 10 and that the health department now considers the situation an outbreak.
Measles is an extremely infectious virus that can linger in airspace and on surfaces for up to two hours after an infected person has been in an area. Among unvaccinated people exposed to the virus, 90 percent will become infected.
Vaccination rates have slipped nationwide, creating pockets that have lost herd immunity and are vulnerable to fast-spreading, difficult-to-stop outbreaks. In the past, strong vaccination rates prevented such spread, and in 2000, the virus was declared eliminated, meaning there was no continuous spread of the virus over a 12-month period. Experts now fear that the US will lose its elimination status, meaning measles will once again be considered endemic to the country.
So far this year, the Centers for Disease Control and Prevention has documented 378 measles cases as of Thursday, March 20. That figure is already out of date.
On Friday, the Texas health department reported 309 cases in its ongoing outbreak. Forty people have been hospitalized, and one unvaccinated child with no underlying medical conditions has died. The outbreak has spilled over to New Mexico and Oklahoma. In New Mexico, officials reported Friday that the case count has risen to 42 cases, with two hospitalizations and one death in an unvaccinated adult. In Oklahoma, the case count stands at four.
Beth Mole is Ars Technica's Senior Health Reporter. Beth has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes.
  • Infantile amnesia occurs despite babies showing memory activity
    arstechnica.com
I'm drawing a blank Infantile amnesia occurs despite babies showing memory activity It looks like humans actively suppress our earliest memories. John Timmer Mar 21, 2025 3:41 pm Credit: Plume creative
For many of us, memories of our childhood have become a bit hazy, if not vanishing entirely. But nobody really remembers much before the age of 4, because nearly all humans experience what's termed "infantile amnesia," in which memories that might have formed before that age seemingly vanish as we move through adolescence. And it's not just us; the phenomenon appears to occur in a number of our fellow mammals.
The simplest explanation for this would be that the systems that form long-term memories are simply immature and don't start working effectively until children hit the age of 4. But a recent animal experiment suggests that the situation in mice is more complex: the memories are there, they're just not normally accessible, although they can be re-activated. Now, a study that put human infants in an MRI tube suggests that memory activity starts by the age of 1, suggesting that the results in mice may apply to us.
Less than total recall
Mice are one of the species that we know experience infantile amnesia. And, thanks to over a century of research on mice, we have some sophisticated genetic tools that allow us to explore what's actually involved in the apparent absence of the animals' earliest memories.
A paper that came out last year describes a series of experiments that start by having very young mice learn to associate seeing a light come on with receiving a mild shock. If nothing else is done with those mice, that association will apparently be forgotten later in life due to infantile amnesia.
But in this case, the researchers could do something. Neural activity normally results in the activation of a set of genes. In these mice, the researchers engineered it so one of the genes that gets activated encodes a protein that can modify DNA. When this protein is made, it results in permanent changes to a second gene that was inserted in the animal's DNA. Once activated through this process, the gene leads to the production of a light-activated ion channel.
In practical terms, it means that if any neurons are activated in the area of the brain that stores memories of locations, they will make copies of a protein that allows ions to cross the cell membrane when exposed to light of the right wavelength. Since the flow of ions across the membrane is the primary component of a nerve impulse, this allows light exposure to trigger nerve impulses. (This sort of experimental manipulation is generically termed "optogenetics.")
In these experiments, the young mice would start making the ion channel specifically in those cells that were activated as the animal learned its way around the maze. If exposed to the right light weeks or months later, those cells would start sending nerve impulses again, just as they would if they were re-activating the memory. In short, if the mice were forming memories as infants, the researchers should be able to replay those memories later in life simply by exposing the right cells to light.
It worked. If you activated this memory in the mice after they matured, they once again behaved as if the light coming on is associated with a shock.
The memory was still there; it just wasn't normally accessible to the mice.
Don't shock the baby
Obviously, genetically manipulating human infants and giving them shocks wouldn't fly with an ethics review board. So, the new work relied on a standard test used for memory in infants: if an image is familiar to them, they tend to look at it longer. So, the researchers put the babies in an MRI tube with video screens and monitored activity in the hippocampus, the area of the brain that handles these sorts of memories. The babies were shown a series of pictures, some of which repeated after a long enough lag to ensure that the infant couldn't track them via short-term working memory.
If you did the analysis purely on whether the babies stared at images that were familiar, you'd come up empty, with any effect buried in the statistical noise. But there was a significant correlation between staring longer and activity in the hippocampus, suggesting that the kids were more likely to stare at something that had triggered the memory formation process during their first viewing.
There was a lot of noise in the data, but when broken down by age, it appeared that older infants were much more likely to form memories, with the ability starting roughly when they hit 1 year old. So, there does appear to be a period where the hippocampus hasn't matured enough to form long-term memories. It's just that this period ends a couple of years before infantile amnesia stops.
It also suggests that humans may share this feature with mice: memories formed during this window between the onset of memory formation and the end of infantile amnesia are probably still there. We just don't have a way to access them unless something external to the brain manages to trigger them.
The larger questions, however, remain unanswered. We don't know what mechanism suppresses these memories while letting those formed later operate normally, although having a well-described system in mice should help us start to address that. But the "why" will likely remain very difficult to answer. It's not obvious whether this selective amnesia is simply a necessary consequence of mammalian brain development, or if it actually provides us with some benefits.
Science, 2025. DOI: 10.1126/science.adt7570 (About DOIs).
John Timmer is Ars Technica's science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.
  • Italy demands Google poison DNS under strict Piracy Shield law
    arstechnica.com
That's a spicy DNS Italy demands Google poison DNS under strict Piracy Shield law A lawsuit claims Google has not blocked football streams as required in Italy. Ryan Whitwam Mar 21, 2025 3:52 pm Credit: Aurich Lawson
Italy is using its Piracy Shield law to go after Google, with a court ordering the Internet giant to immediately begin poisoning its public DNS servers. This is just the latest phase of a campaign that has also targeted Italian ISPs and other international firms like Cloudflare. The goal is to prevent illegal football streams, but the effort has already caused collateral damage. Regardless, Italy's communication regulator praises the ruling and hopes to continue sticking it to international tech firms.
The Court of Milan issued this ruling in response to a complaint that Google failed to block pirate websites after they were identified by the national communication regulator, known as AGCOM. The court found that the sites in question were involved in the illegal streaming of Serie A football matches, which has been a focus of anti-piracy crusaders in Italy for years. Since Google offers a public DNS service, it is subject to the site-blocking law.
Piracy Shield is often labeled as draconian by opponents because blocking content via DNS is messy. It blocks the entire domain, which has led to confusion when users rely on popular platforms to distribute pirated content. Just last year, Italian ISPs briefly blocked the entire Google Drive domain because someone, somewhere used it to share copyrighted material. This is often called DNS poisoning or spoofing in the context of online attacks, and the outcome is the same if it's being done under legal authority: a DNS record is altered to prevent someone typing a domain name from being routed to the correct IP address.
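From a user's point of view, that kind of blocking is easy to observe: the same domain resolves normally on one public resolver and comes back empty, wrong, or not at all on the poisoned one. The sketch below shows the general idea using the third-party dnspython package and a placeholder domain; it is not a list of the sites named in the order.

    # Illustration of spotting resolver-level blocking by comparing two public
    # resolvers; example.com is a placeholder, not a blocked site.
    import dns.exception
    import dns.resolver

    def lookup(domain, nameserver):
        resolver = dns.resolver.Resolver()
        resolver.nameservers = [nameserver]
        resolver.lifetime = 5
        try:
            return sorted(rr.address for rr in resolver.resolve(domain, "A"))
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.exception.Timeout) as exc:
            return [f"no answer ({type(exc).__name__})"]

    domain = "example.com"  # placeholder domain
    print("Google DNS (8.8.8.8):", lookup(domain, "8.8.8.8"))
    print("Quad9 (9.9.9.9):     ", lookup(domain, "9.9.9.9"))
    # Under an order like AGCOM's, a blocked domain would return an error or a
    # redirect address from the poisoned resolver while other resolvers still
    # answer normally.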
Spotted by TorrentFreak, AGCOM Commissioner Massimiliano Capitanio took to LinkedIn to celebrate the ruling, as well as the existence of the Italian Piracy Shield. "The Judge confirmed the value of AGCOM's investigations, once again giving legitimacy to a system for the protection of copyright that is unique in the world," said Capitanio.
Capitanio went on to complain that Google has routinely ignored AGCOM's listing of pirate sites, which are supposed to be blocked in 30 minutes or less under the law. He noted the violation was so clear-cut that the order was issued without giving Google a chance to respond, known as inaudita altera parte in Italian courts.
This decision follows a similar case against Internet backbone firm Cloudflare. In January, the Court of Milan found that Cloudflare's CDN, DNS server, and WARP VPN were facilitating piracy. The court threatened Cloudflare with fines of up to 10,000 euros per day if it did not begin blocking the sites.
Google could face similar sanctions, but AGCOM has had difficulty getting international tech behemoths to acknowledge their legal obligations in the country. We've reached out to Google for comment and will update this report if we hear back.
Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he's written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.