Ars Technica
Original tech news, reviews and analysis on the most fundamental aspects of tech.
Recent Updates
  • ARSTECHNICA.COM
    Ted Cruz wants to overhaul $42B broadband program, nix low-cost requirement
The Grants They Are A-Changin'
Cruz claims grant program is "boondoggle," urges Biden admin to halt activities.
Jon Brodkin | Nov 22, 2024 4:31 pm

After winning reelection, Sen. Ted Cruz (R-Texas) speaks to a crowd at an election watch party on November 5, 2024 in Houston, Texas. Credit: Getty Images | Danielle Villasana

Emboldened by Donald Trump's election win, Republicans are seeking big changes to a $42.45 billion broadband deployment program. Their plan could delay distribution of government funding and remove or relax a requirement that ISPs accepting subsidies must offer low-cost Internet plans.

US Senator Ted Cruz (R-Texas) today issued a press release titled, "Sen. Cruz Warns Biden-Harris NTIA: Big Changes Ahead for Multi-Billion-Dollar Broadband Boondoggle." Cruz, who will soon be chair of the Senate Commerce Committee, is angry about how the National Telecommunications and Information Administration has implemented the Broadband Equity, Access, and Deployment (BEAD) program that was created by Congress in November 2021.

The NTIA announced this week that it has approved the funding plans submitted by all 50 states, the District of Columbia, and five US territories, which are slated to receive federal money and dole it out to broadband providers for network expansions. Texas was the last state to gain approval in what the NTIA called "a major milestone on the road to connecting everyone in America to affordable, reliable high-speed Internet service."

Republicans including Cruz and incoming Federal Communications Commission Chairman Brendan Carr have criticized the NTIA for not distributing the money faster. But Cruz's promise of a revamp creates uncertainty about the distribution of funds. Cruz sent a letter yesterday to NTIA Administrator Alan Davidson in which he asked the agency to halt the program rollout until Trump takes over. Cruz also accused the NTIA of "technology bias" because the agency decided that fiber networks should be prioritized over other types of technology.

Cruz: Stop what you're doing

"It is incumbent on you to bear these upcoming changes in mind during this transition term," Cruz wrote. "I therefore urge the NTIA to pause unlawful, extraneous BEAD activities and avoid locking states into in [sic] any final actions until you provide a detailed, transparent response to my original inquiry and take immediate, measurable steps to address these issues."

An NTIA spokesperson told Ars today that the agency received Cruz's letter and is reviewing it. The NTIA's update on the BEAD program earlier this week said the state approvals show that "all 56 states and territories are taking the next steps to request access to their allocated BEAD funding and select the providers who will build and upgrade the high-speed Internet networks of the future."

Cruz's letter alleged that the agency "repeatedly ignored the text of the Infrastructure Investment and Jobs Act" when designing the BEAD program.

"This past August, I sent you an inquiry regarding NTIA's decision to hoard nearly $1 billion in BEAD funding to build a central planning bureaucracy that proceeded to impose extraneous mandates on the states and prevent the expeditious delivery of Internet access to unserved communities," Cruz wrote. "Instead of working to reverse course on the botched BEAD program, your agency responded by doubling down on its extralegal requirements and evading congressional inquiries."

Cruz said he "will monitor this matter" as Commerce Committee chairman. "Fortunately, as President-elect Trump has already signaled, substantial changes are on the horizon for this program," Cruz wrote. "With anticipated new leadership at both NTIA and in Congress, the BEAD program will soon be 'unburdened by what has been' and states will no longer be subject to the unlawful and onerous bureaucratic obstacles imposed by the Biden-Harris NTIA."

GOP mad about low-cost plan rule

As we wrote in July, Republicans are angry at the NTIA over its enforcement of the requirement that subsidized ISPs offer a low-cost plan. The NTIA countered that it followed the law written by Congress. The US law that ordered the NTIA to stand up the program requires that Internet providers receiving federal funds offer at least one "low-cost broadband service option for eligible subscribers."

The law also says the NTIA may not "regulate the rates charged for broadband service," and Republicans claim the NTIA is violating this restriction. A July 23 letter sent by over 30 broadband industry trade groups also alleged that the administration is illegally regulating broadband prices. ISPs pointed to NTIA guidance that "strongly encouraged" states to set a fixed rate of $30 per month for the low-cost service option.

"The statute requires that there be a low-cost service option," Davidson reportedly said at a congressional hearing in May. "We do not believe the states are regulating rates here. We believe that this is a condition to get a federal grant. Nobody's requiring a service provider to follow these rates; people do not have to participate in the program."

With Republicans gaining full control of Congress, they could amend the law to require changes. The Trump administration could also make changes on its own after new leadership at the NTIA is in place.

Cruz's letter referenced plans to eliminate the "rate regulation" and other requirements set by the Biden administration. That includes what Cruz called "extreme technology bias," a reference to the NTIA's preference for fiber broadband projects over other kinds of networks like cable, wireless, or satellite.

Cruz wrote:

Congress will review the BEAD program early next year, with specific attention to NTIA's extreme technology bias in defining "priority broadband projects" and "reliable broadband service"; imposition of statutorily-prohibited rate regulation; unionized workforce and DEI labor requirements; climate change assessments; excessive per-location costs; and other central planning mandates. In turn, states will be able to expand connectivity on terms that meet the real needs of their communities, without irrelevant requirements that tie up resources, create confusion, and slow deployment.

Cruz alleges race-based discrimination

While the FCC is not administering the BEAD program, Carr took aim at it today in a post on X. "VP Harris led the $42 billion program for expanding Internet infrastructure into a thicket of red tape and saddled it with progressive policy goals that have nothing to do with quickly connecting Americans," Carr wrote.

Cruz separately sent another letter to the NTIA yesterday criticizing its plan for distributing $1.25 billion from the Digital Equity Competitive Grant Program. Cruz claimed that the NTIA's consideration of race when issuing grants violates the Fifth Amendment, writing that the "federal government is forbidden from engaging in impermissible race-based discrimination under the equal protection component of the Fifth Amendment's Due Process Clause."

The nonprofit Benton Institute for Broadband & Society urged the NTIA to stay the course. In a press release, the Benton Institute said the NTIA is following the law:

The primary problem that Senator Cruz identifies in his letter is that the NTIA's notice of funding opportunity incorporates "covered populations" language which includes "individuals who are members of a racial or ethnic minority group." But it was Congress, in its wisdom, that defined the covered populations the Digital Equity Act programs are designed to address, including "individuals who are members of a racial or ethnic minority group." In fact, the law goes further to define covered populations to include low-income people, seniors, veterans, people with disabilities, and rural Americans (among others) and outlines the critical steps that NTIA must follow to advance digital literacy and improve internet adoption. It's the law, and NTIA is merely following the law as Congress intended.

Jon Brodkin is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.
  • ARSTECHNICA.COM
    The good, the bad, and the ugly behind the push for more smart displays
Op-ed
Opinion: Apple could really change the game here.
Scharon Harding | Nov 22, 2024 5:40 pm

Amazon's Echo Show 21. Credit: Amazon

After a couple of years without much happening, smart displays are in the news again. Aside from smart TVs, consumer screens that connect to the Internet have never reached a mainstream audience. However, there seems to be a renewed push to make smart displays more popular. The approaches that some companies are taking are better than those of others, revealing a good, bad, and ugly side behind the push.

Note that for this article, we'll exclude smart TVs when discussing smart displays. Unlike the majority of smart displays, smart TVs are mainstream tech. So for this piece, we'll mostly focus on devices like the Google Nest Hub Max or Amazon Echo Show (as pictured above).

The good

When it comes to emerging technology, a great gauge for whether innovation is happening is how well a product solves a real user problem. Products seeking a problem to solve, or that are glorified vehicles for ads and tracking, don't qualify.

If reports that Apple is working on its first smart display are true, there may be potential for it to solve the problem of managing multiple smart home devices from different companies.

Apple has declined to comment on reports from Bloomberg's Mark Gurman of an Apple smart display under development. But Gurman recently claimed that the display will be able to be mounted on walls and "use AI to navigate apps." Gurman said that it would incorporate Apple's smart home framework HomeKit, which supports "hundreds of accessories" and can control third-party devices, like smart security cameras, thermostats, and lights. Per the November 12 report:

The product will be marketed as a way to control home appliances, chat with Siri, and hold intercom sessions via Apple's FaceTime software. It will also be loaded with Apple apps, including ones for web browsing, listening to news updates and playing music. Users will be able to access their notes and calendar information, and the device can turn into a slideshow display for their photos.

If released, the device, said to be shaped like a 6-inch iPhone, would compete with the Nest Hub and Echo Show. Apple entering the smart display business could bring a heightened focus on privacy and push other companies to make privacy a bigger focus, too. Apple has already given us a peek at how it might handle smart home privacy with the HomePod. "All communication between HomePod and Apple servers is encrypted, and anonymous IDs protect your identity," Apple's HomePod privacy policy states.

Apple's supposed smart display would also likely, and hopefully, leverage HomeKit Secure Video, which has already been adopted by non-Apple smart products and "ensures that activity detected by your security cameras is analyzed and encrypted by your Apple devices at home before being securely stored in iCloud," per Apple. This could help address concerns around the security of things like managing footage from smart doorbells.

Looking further ahead, I'm curious how an Apple smart display could impact Google's efforts here. Google hasn't released a new smart home display since 2019's Nest Hub Max. And with voice assistants like Google Assistant losing popularity, Google has seemed more interested in Pixel Tablets lately than smart displays. Recent sleuthing, however, has spotted code pointing to a new Nest Hub Max amid suspicion that Google is canceling future Pixel Tablets.

If Apple put out its own smart home display, how might Google respond? And how might generative AI impact interest or final products from either side? Surely, the Nest Hub Max isn't where smart home hubs max out.

The bad

Over the past couple of years, we've seen more web-connected desktop monitors released. Some were driven by the growth of videoconferencing during the pandemic. Others are more about providing access to common consumer apps, like Netflix, without needing to connect to a personal system. Neither gives me enough reason to put another device on my network.

Smart monitors for videoconferencing could be useful for workplaces, or for someone less technically inclined who wants to see loved ones. But for most, a monitor dedicated to web calls isn't practical or necessary. The demise of devices like the Facebook Portal and Lenovo ThinkSmart View Plus (which Lenovo is no longer selling) supports that view.

Meanwhile, smart monitors like Samsung's M-series or LG's MyView series have the same ads and privacy concerns as smart TVs. As we've discussed at Ars before, smart TVs are increasingly used to push ads and track users, giving TV operating system (OS) owners, such as LG and Samsung, an alternative revenue stream beyond hardware sales.

LG has a whole lineup of smart monitors like this one. Credit: LG

Samsung and LG smart monitors use the same OSes as their respective smart TVs. LG and Samsung TVs are better at keeping ads to a respectable minimum than other, often cheaper, TVs. However, LG and Samsung have been seeking ways to grow their ad businesses.

For the most part, smart monitors don't seem to fill a gap in demand the way a well-executed, privacy-focused smart home hub might. Interest in tracking users and selling ads via TVs is what has made dumb TVs exceptionally hard to find. I'd hate for dumb monitors to become elusive one day, too.

The ugly

Amazon markets its Echo Show displays as hubs for managing smart homes, calendars, and shopping lists. However, Amazon doesn't have a good reputation for maintaining user privacy. And with Amazon under pressure to make Alexa financially successful, it wouldn't be surprising if more ads or subscription fees were eventually forced on Echo Show owners.

This week, Amazon announced the Echo Show 21, its biggest smart display yet. The bigger size makes the device more appropriate for watching stuff on platforms like Netflix and (as Amazon would love) Amazon Prime Video. Since Amazon owns the Echo Show OS, it could track user habits to fuel its ad business and generate insights for its other businesses. Additionally, Echo Shows encourage tasks like researching and saving recipes and making shopping and to-do lists, all representing e-commerce opportunities for Amazon.

Amazon can use its smart display to track streaming habits. Credit: Amazon

Amazon says it doesn't sell customer data, but it also says it may use user data for targeted ads, to inform its own business decisions, and to share non-user-specific trends with third parties.

Amazon has also been building a generative AI version of Alexa that is expected to require a subscription fee and seek more user information. However, Amazon hasn't done much to make Alexa easier to trust. As I wrote when Amazon first demoed gen AI Alexa:

The use of visual IDs to enable using Alexa without a wake word heightens the dependence on cameras and microphones, yet Amazon hasn't disclosed any revamped approaches to customer privacy. The company was previously caught keeping recordings, including children's, forever, and Amazon workers have been caught listening to Alexa audio and spying on Ring users. Alexa audio has even been used in criminal trials.

Amazon says it doesn't send images or videos to the cloud and emphasizes Echo Show devices' microphone/camera off button and integrated physical camera shutter.

The free version of Alexa is expected to stay available when the generative AI alternative is released. But it remains possible that Amazon could eventually lock features behind a paywall or remove them.

Smart displays push

With smart home enthusiasts more excited than ever about Matter, and with smart display talk already on the rise, I'm expecting more discussion around what makes a good, bad, or ugly smart display in 2025. As tech companies push these devices out, Ars will focus on whether those devices are solving problems and whether they're doing so with privacy and other user needs at the forefront. Smart displays built around company needs rather than users' will see limited interest from technologists.

Scharon Harding is Ars Technica's Senior Product Reviewer, writing news, reviews, and analysis on consumer technology, including laptops, mechanical keyboards, and monitors. She's based in Brooklyn.
  • ARSTECHNICA.COM
We're closer to re-creating the sounds of Parasaurolophus
It's all in the crest
Preliminary model suggests the dinosaur bellowed like a large trumpet or saxophone, or perhaps a clarinet.
Jennifer Ouellette | Nov 21, 2024 4:30 pm

A 3D-printed model of the Parasaurolophus skull at 1:3 scale to the original fossil. The white model is the nasal passages inside the skull. Credit: Hongjun Lin

The duck-billed dinosaur Parasaurolophus is distinctive for its prominent crest, which some scientists have suggested served as a kind of resonating chamber to produce low-frequency sounds. Nobody really knows what Parasaurolophus sounded like, however. Hongjun Lin of New York University is trying to change that by constructing his own model of the dinosaur's crest and its acoustical characteristics. Lin has not yet reproduced the call of Parasaurolophus, but he talked about his progress thus far at a virtual meeting of the Acoustical Society of America.

Lin was inspired in part by the dinosaur sounds featured in the Jurassic Park film franchise, which were a combination of sounds from other animals like baby whales and crocodiles. "I've been fascinated by giant animals ever since I was a kid. I'd spend hours reading books, watching movies, and imagining what it would be like if dinosaurs were still around today," he said during a press briefing. "It wasn't until college that I realized the sounds we hear in movies and shows, while mesmerizing, are completely fabricated using sounds from modern animals. That's when I decided to dive deeper and explore what dinosaurs might have actually sounded like."

A skull and partial skeleton of Parasaurolophus were first discovered in 1920 along the Red Deer River in Alberta, Canada, and another partial skull was discovered the following year in New Mexico. There are now three known species of Parasaurolophus; the name means "near crested lizard." While no complete skeleton has yet been found, paleontologists have concluded that the adult dinosaur likely stood about 16 feet tall and weighed between 6,000 and 8,000 pounds. Parasaurolophus was an herbivore that could walk on all four legs while foraging for food but may have run on two legs.

It's that distinctive crest that has most fascinated scientists over the last century, particularly its purpose. Past hypotheses have included its use as a snorkel or as a breathing tube while foraging for food; as an air trap to keep water out of the lungs; or as an air reservoir so the dinosaur could remain underwater for longer periods. Other scientists suggested the crest was designed to help move and support the head, or perhaps was used as a weapon while combating other Parasaurolophus. All of these, plus a few others, have largely been discredited.

A near-crested lizard

Reconstruction of the environment where Parasaurolophus walkeri lived during the Cretaceous. Credit: Marco Antonio Pineda/CC BY-SA 4.0

The most intriguing hypothesis is that the crest served as a resonating chamber, first proposed in 1931 by Swedish paleontologist Carl Wiman, who noted that the crest's structure was similar to that of a swan's and thus could have been used for vocalization. Lin stumbled upon a 1981 paper by David Weishampel expanding on the notion, predicting that the dinosaur's calls would have fallen in the frequency range of 55 to 720 Hertz. Weishampel's model treated the crest as an open pipe/closed pipe system. Lin did a bit more research, and a 2013 paper convinced him that Weishampel's model was due for an update.
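To get a rough feel for what an open pipe/closed pipe treatment implies, the standard resonance formulas can be evaluated for a crest-scale tube. The sketch below is not from Lin's or Weishampel's work; the 1.5-meter effective tube length is an assumption chosen purely to show how frequencies in the 55 to 720 Hz window could arise.

```python
# Resonance modes of an idealized tube, as in an open pipe / closed pipe
# model of the crest. The 1.5 m effective length is an assumed,
# illustrative figure, not a measurement of a fossil.
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def open_pipe_modes(length_m, n_modes=5):
    """Tube open at both ends: f_n = n * c / (2 * L)."""
    return [n * SPEED_OF_SOUND / (2 * length_m) for n in range(1, n_modes + 1)]

def closed_pipe_modes(length_m, n_modes=5):
    """Tube closed at one end: odd harmonics only, f_n = (2n - 1) * c / (4 * L)."""
    return [(2 * n - 1) * SPEED_OF_SOUND / (4 * length_m) for n in range(1, n_modes + 1)]

LENGTH_M = 1.5  # assumed effective air-passage length
print("open-open modes (Hz):  ", [round(f, 1) for f in open_pipe_modes(LENGTH_M)])
print("closed-open modes (Hz):", [round(f, 1) for f in closed_pipe_modes(LENGTH_M)])
```

With those assumptions, the lowest closed-pipe mode sits near 57 Hz, and the first few modes of either configuration stay within Weishampel's predicted range, the sort of low bellow the resonating-chamber hypothesis implies.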
Lin created a physical setup to empirically test his own mathematical model of what might be happening acoustically inside Parasaurolophus' crest, dubbed the "Linophone," although it is not a perfect anatomical replica of the dinosaur's crest. The setup consisted of two connected open pipes designed to mimic the vibrations of vocal cords. Lin conducted frequency sweeps using a speaker to generate the sounds and recorded the resonance data with microphones at three different locations. An oscilloscope transferred that data back to his computer.

He found that the crest did indeed seem to be useful for resonance, similar to the crests in modern birds. "If I were to guess what this dinosaur sounded like, it would be a brass instrument like a huge trumpet or saxophone," said Lin, based on the simple pipe-like structure of his model. However, the presence of soft tissue-like vocal cords could mean that the sound was closer to that of a clarinet.

Lin is still refining his mathematical model, and he thinks he should be able to extend it to studying other creatures with similar vocal structures. "Once we have a working model, we'll move toward using fossil scans" to further improve it, Lin said, although he noted that one challenge is that soft tissue like vocal cords is often poorly preserved. His ultimate goal is to re-create the sound of the Parasaurolophus, and perhaps even to design his own accessible plug-in for adding dinosaur sounds to his musical compositions.

Jennifer Ouellette is a senior reporter at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.
  • ARSTECHNICA.COM
    Surgeons remove 2.5-inch hairball from teen with rare Rapunzel syndrome
Dangling danger
The teen didn't return for follow-up. Instead, she planned to see a hypnotherapist.
Beth Mole | Nov 21, 2024 5:02 pm

Credit: Getty | Ada Summer

After a month of unexplained bouts of stomach pain, an otherwise healthy 16-year-old girl arrived at the emergency department of Massachusetts General Hospital actively retching and in severe pain.

A CT scan showed nothing unusual in her innards, and her urine and blood tests were normal. The same was found two weeks prior, when she had arrived at a different hospital complaining of stomach pain. She was discharged home with instructions to take painkillers, a medication for peptic ulcers, and another to prevent nausea and vomiting. The painkiller didn't help, and she didn't take the other two medications.

Her pain worsened, and something was clearly wrong. When she arrived at Mass General, her stomach was tender, and her heart rate was elevated. When doctors tried to give her a combination of medications for common causes of abdominal pain, she immediately vomited them back up.

So, her doctors set out to unravel the mystery, starting by considering the most common conditions that could explain her abdominal pain before moving on to the rarer possibilities. In a case study recently published in the New England Journal of Medicine, doctors recounted how they combed through a list that included constipation, gastritis, disorders of the gut-brain interaction, delayed stomach emptying brought on by an infection, lactose intolerance, gallbladder disease, pancreatitis, and celiac disease. But each one could be dismissed fairly easily. Her pain was severe and came on abruptly. She had no fever or diarrhea and no recent history of an infection. Her gallbladder and pancreas looked normal on imaging.

Hairy details

Then there were the rarer causes: mechanical problems. With tenderness and intermittent severe pain, an obstruction in her gut seemed like a good fit. And this led them to one of the rarest and most unexpected possibilities: Rapunzel syndrome.

Based on the girl's presentation, doctors suspected that a bezoar had formed in her stomach, growing over time and intermittently blocking the passage of food, causing pain. A bezoar is a foreign mass formed from accumulated material that's been ingested. A bezoar can form from a clump of dietary fiber (a phytobezoar) or from a glob of pharmaceutical products, like an extended-release capsule, enteric-coated aspirin, or iron (a pharmacobezoar). Then there's the third option: a tangle of hair (a trichobezoar).

Hair is resistant to digestion and isn't easily moved through the digestive system. As such, it often gets lodged in folds of the gastric lining, denatures, and then traps food and gunk to form a mass. Over time, it will continue to collect material, growing into a thick, matted wad.

Of all the bezoars, trichobezoars are the most common. But none of them are particularly easy to spot. On CT scans, bezoars can be indistinguishable from food in the stomach unless there's an oral contrast material. To look for a possible bezoar in the teen, her doctors ordered an esophagogastroduodenoscopy, in which a scope is put down into the stomach through the mouth. With that, they got a clear shot of the problem: a trichobezoar. (The image is here, but a warning: it's graphic.)

Tangled tail

But this trichobezoar was particularly rare; hair from the mottled mat had dangled down from the stomach and into the small bowel, an extremely uncommon condition called Rapunzel syndrome, named after the fairy-tale character who lets down her long hair. It carries a host of complications beyond acute abdominal pain, including perforation of the stomach and intestines, and acute pancreatitis. The only resolution is surgical removal. In the teen's case, the trichobezoar came out during surgery using a gastrostomy tube. Surgeons recovered a hairball about 2.5 inches wide, along with the dangling hair that reached into the small intestine.

For any patient with a trichobezoar, the most important next step is to address any psychiatric disorders that might underlie hair-eating behavior. Hair eating is often linked to a condition called trichotillomania, a repetitive behavior disorder marked by hair pulling. Sometimes, the disorder can be diagnosed by signs of hair loss: bald patches, irritated scalp areas, or hair at different growth stages. But, for the most part, it's an extremely difficult condition to diagnose, as patients feel substantial shame and embarrassment about the condition and will often go to great lengths to hide it.

Another possibility is that the teen had pica, a disorder marked by persistent eating of nonfood, nonnutritive substances. Intriguingly, the teen noted that she had had pica as a toddler. But doctors were skeptical that pica could explain her condition, given that hair was the only nonfood material in the bezoar.

The teen's doctors would have liked to get to the bottom of her condition and referred her to a psychiatrist after she successfully recovered from surgery. But unfortunately, she did not return for follow-up care and told her doctors she would instead see a hypnotherapist that her friends recommended.

Beth Mole is Ars Technica's Senior Health Reporter. Beth has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes.
  • ARSTECHNICA.COM
    Microsoft pushes full-screen ads for Copilot+ PCs on Windows 10 users
just circling back
Microsoft has frequently used this kind of reminder to encourage upgrades.
Andrew Cunningham | Nov 20, 2024 1:45 pm

One of several full-screen messages that have been sent to Windows 10 users over the last few days. Credit: Kyle Orland

Windows 10's free, guaranteed security updates stop in October 2025, less than a year from now. Windows 10 users with supported PCs have been offered the Windows 11 upgrade plenty of times before. But now Microsoft is apparently making a fresh push to get users to upgrade, sending them full-screen reminders recommending they buy new computers.

The reminders, which users have seen within the last few days, all mention the end of Windows 10 support but otherwise seem to differ from computer to computer. My Ars colleague Kyle Orland got one focused on Windows 11's gaming features, while posters on X (formerly Twitter) got screens that emphasized the ease of migrating from old PCs to new ones and other Windows 11 features. One specifically recommended upgrading to a Copilot+ PC, which supports a handful of extra AI features that other Windows 11 PCs don't, but other messages didn't mention Copilot+ specifically.

None of the messages mention upgrading to Windows 11 directly, though Kyle said his PC meets Windows 11's requirements. These messages may be intended mostly for people using older PCs that can't officially install the Windows 11 update.

The full-screen reminders also don't mention the one official escape hatch that Microsoft provides for Windows 10 users: the Extended Security Update (ESU) program, which will offer a single additional year of security updates to home users for a one-time fee of $30. Schools, businesses, and other organizations will be able to get up to three years of ESUs, but years two and three of the program aren't being offered to regular consumers.

Though this is a fresh wave of full-screen update reminders, it's far from the first time Microsoft has used this tactic. Microsoft sent a wave of full-screen Windows 11 upgrade messages to Windows 10 users in early 2023. Toward the end of Windows 10's free upgrade period in 2016, users of Windows 7 and 8 were shown a full-screen message reminding them to update before the offer expired. In 2014, Windows XP users were warned of the upcoming end of support with a pop-up message; Windows 7 users received similar pop-ups in 2019.

Paying for ESUs or buying a new PC isn't the only way to keep getting Windows updates after October 2025. Supported PCs can install Windows 11 directly, though sometimes Windows 10 PCs will need some configuration changes. And the experience of running Windows 11 on an older "unsupported" PC can occasionally be annoying, but the day-to-day user experience is surprisingly good most of the time.

Andrew Cunningham is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.
  • ARSTECHNICA.COM
    Study: Why Aztec death whistles sound like human screams
Putting the "psycho" in acoustics
The basic mechanism relies on the Venturi effect, producing a unique rough and piercing sound.
Jennifer Ouellette | Nov 20, 2024 2:37 pm

The skull-shaped body of the Aztec death whistle may represent Mictlantecuhtli, the Aztec Lord of the Underworld. Credit: Sascha Frühholz

Archaeologists have discovered numerous ceramic or clay whistles at Aztec sites, dubbed "death whistles" because of their distinctive skull shapes. A new paper published in the journal Communications Psychology examines the acoustical elements of the unique shrieking sounds produced by those whistles, as well as how human listeners are emotionally affected by the sounds. The findings support the hypothesis that such whistles may have been used in Aztec religious rituals or perhaps as mythological symbols.

Archaeologists unearthed the first Aztec death whistles, also known as ehecachichtlis, in 1999 while excavating the Tlatelolco site in Mexico City. They found the body of a sacrificial victim, a 20-year-old male who had been beheaded, at the base of the main stairway of a temple dedicated to the wind god Ehecatl. The skeleton was clutching two ceramic skull-shaped whistles, one in each hand, along with other artifacts. More skull whistles were subsequently found, and they've found their way into popular culture. For instance, in Ghostbusters: Afterlife (2021), Egon Spengler had such a whistle in his secret laboratory collection.

Scholars have puzzled over the purpose of the skull whistles, although given the dearth of concrete evidence, most suggestions are highly speculative. One hypothesis is that they were used in battle, with hundreds of warriors blowing their whistles simultaneously as a battle cry. Music archaeologist Arnd Adje Both has dismissed that idea, suggesting instead that the whistle's purpose was more likely tied to ceremonial or religious practices, like human sacrifice. Yet another hypothesis proposes that the whistles were intended as symbols of a deity. The skull shape, for instance, might allude to the Aztec god of the underworld, Mictlantecuhtli.

Aztec death whistles don't fit into any existing Western classification for wind instruments; they seem to be a unique kind of "air spring" whistle, based on CT scans of some of the artifacts. Sascha Frühholz, a cognitive and affective neuroscientist at the University of Zürich, and several colleagues wanted to learn more about the physical mechanisms behind the whistle's distinctive sound, as well as how humans perceive said sound, a field known as psychoacoustics. "The whistles have a very unique construction, and we don't know of any comparable musical instrument from other pre-Columbian cultures or from other historical and contemporary contexts," said Frühholz.

A symbolic sound?

Human sacrifice with original skull whistle (small red box and enlarged rotated view in lower right) discovered 1987-89 at the Ehecatl-Quetzalcoatl temple in Mexico City. Credit: Salvador Guillien Arroyo, Proyecto Tlatelolco

For their acoustic analysis, Frühholz et al. obtained sound recordings from two Aztec skull whistles excavated from Tlatelolco, as well as from three noise whistles (part of Aztec fire snake incense ladles). They took CT scans of whistles in the collection of the Ethnological Museum in Berlin, enabling them to create both 3D digital reconstructions and physical clay replicas. They were also able to acquire three additional artisanal clay whistles for experimental purposes.

Human participants then blew into the replicas with low-, medium-, and high-intensity air pressure, and the ensuing sounds were recorded. Those recordings were compared to existing databases of a broad range of sounds: animals, natural soundscapes, water sounds, urban noise, synthetic sounds (as from computers, pinball machines, printers, etc.), and various ancient instruments, among other samples. Finally, a group of 70 human listeners rated a random selection of sounds from a collection of over 2,500 samples.

The CT scans showed that skull whistles have an internal tube-like air duct with a constricted passage, a counter-pressure chamber, a collision chamber, and a bell cavity. The unusual construction suggests that the basic principle at play is the Venturi effect, in which air (or a generic fluid) speeds up as it flows through a constricted passage, thereby reducing the pressure. "At high playing intensities and air speeds, this leads to acoustic distortions and to a rough and piercing sound character that seems uniquely produced by the skull whistles," the authors wrote.
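To make the Venturi mechanism concrete, here is a minimal numerical sketch of the continuity and Bernoulli relations for incompressible flow through a constriction. The duct areas, inlet speeds, and 10:1 constriction ratio are invented for illustration; they are not measurements of the whistles.

```python
# Venturi effect in a constricted duct: by continuity (A1*v1 = A2*v2) the
# flow speeds up in the narrow throat, and by Bernoulli the static
# pressure drops there. All dimensions are assumed, illustrative values.
RHO_AIR = 1.2  # kg/m^3, approximate density of air

def venturi(v_inlet, area_inlet_mm2, area_throat_mm2):
    """Return (throat speed in m/s, static pressure drop in Pa)."""
    v_throat = v_inlet * area_inlet_mm2 / area_throat_mm2  # continuity
    dp = 0.5 * RHO_AIR * (v_throat**2 - v_inlet**2)        # Bernoulli
    return v_throat, dp

# Gentle vs. forceful blowing through an assumed 10:1 constriction
for v_in in (2.0, 10.0):
    v_t, dp = venturi(v_in, area_inlet_mm2=50.0, area_throat_mm2=5.0)
    print(f"inlet {v_in:4.1f} m/s -> throat {v_t:5.1f} m/s, pressure drop {dp:6.0f} Pa")
```

Because the pressure drop grows with the square of the airspeed, forceful blowing raises it dramatically, which fits the authors' observation that high playing intensities push the whistle into its distorted, rough regime.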
(e) Digitalization and 3D reconstruction of the skull whistle replicas using CT scans of the replicas. (f) 3D models of an original skull whistle demonstrate the air flow dynamics, construction similarity, and sound generation process. Credit: Sascha Frühholz et al., 2024

That is consistent with the rough, piercing sounds of the recordings of original skull whistles, per the authors. The spectral signal contains features of pink noise, along with high-pitched frequencies. There were only minor differences between recordings of the original skull whistles and the replicas. The whistle sound is most similar to natural sounds and electronic music effects, and least similar to other instruments like Mexican flutes. Animal, human, and synthetic sounds fall somewhere in between. Finally, the whistle sounds corresponded to a distinct pitch of the modulation power spectrum (MPS) with psychoacoustic significance, associated with primate screams, terrifying music, and the like.

Perhaps, then, it is not surprising that human listeners consistently rated skull whistle sounds as having a negative emotional quality, as well as sounding largely unnatural, scary, or aversive. This was further bolstered by a follow-up experiment in which 32 participants listened to skull whistles and other sounds while undergoing an fMRI. Per Frühholz et al., there was a strong response in brain regions associated with the affective neural system, as well as regions that associate sounds with symbolic meaning. So the death whistles combine basic psycho-affective influences with more complex mental processes involving symbolism.

"This is consistent with the tradition of many ancient cultures to capture natural sounds in musical instruments, and could explain the ritual dimension of the death whistle sound for mimicking mythological entities," said Frühholz. "Unfortunately, we could not perform our psychological and neuroscientific experiments with humans from ancient Aztec cultures. But the basic mechanisms of affective response to scary sounds are common to humans from all historical contexts."

Communications Psychology, 2024.
DOI: 10.1038/s44271-024-00157-7 (About DOIs).

Jennifer Ouellette is a senior reporter at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.
  • ARSTECHNICA.COM
    Google stops letting sites like Forbes rule search for Best CBD Gummies
Best Hail-Mary Revenue for Publishers 2024
If you've noticed strange sites on "Best" product searches, so has Google.
Kevin Purdy | Nov 20, 2024 2:47 pm

Credit: Getty Images

"Updating our site reputation abuse policy" is how Google, in almost wondrously opaque fashion, announced yesterday that big changes have come to some big websites, especially those that rely on their domain authority to promote lucrative third-party product recommendations.

If you've searched for reviews and seen results that make you ask why so many old-fashioned news sites seem to be "reviewing" products lately, especially products outside those sites' expertise, that's what Google is targeting.

"This is a tactic where third-party content is published on a host site in an attempt to take advantage of the host's already-established ranking signals," Google's post on its Search Central blog reads. "The goal of this tactic is for the content to rank better than it could otherwise on a different site, and leads to a bad search experience for users."

Search firm Sistrix put the traffic lost to the third-party review content inside Forbes, The Wall Street Journal, CNN, Fortune, and Time at $7.5 million last week, according to AdWeek. Search rankings dropped by up to 97 percent at Time's affiliate review site, Time Stamped, and 43 percent at Forbes Advisor. The drops are isolated to the affiliate subdomains of the sites, so their news-minded primary URLs still rank where relevant.

Trusted names in CBD gummies and pet insurance

The "site reputation abuse" Google is targeting takes many forms, but it has one common theme: using an established site's domain history to quietly sell things. Forbes, a well-established business news site, has an ownership stake in Forbes Marketplace (named Forbes Advisor in site copy) but does not fully own it.

On the strength of Forbes' long-existing and well-linked site, Forbes Marketplace/Advisor has dominated the search term "best cbd gummies" for "an eternity," according to SEO analyst Lily Ray. Forbes has similarly dominated "best pet insurance," and long came up as the second result for "how to get rid of roaches," as detailed in a blog post by Lars Lofgren. If people click on this high-ranking result, and then click on a link to buy a product or request a roach removal consultation, Forbes typically gets a cut.

Forbes Marketplace had seemingly also provided SEO-minded review services to CNN and USA Today, as detailed by Lofgren. Lofgren's term for this business, "Parasite SEO," took hold in corners critical of the trend. Ars has contacted Forbes for comment and will update this post with any response.

The unfair, exploitative nature of parasite SEO

Google writes that it had reviewed "situations where there might be varying degrees of first-party involvement" (most publishers' review sites indicate some kind of oversight or editorial standards linked to the primary site). But however the arrangement is structured, "no amount of first-party involvement alters the fundamental third-party nature of the content or the unfair, exploitative nature of attempting to take advantage of the host sites' ranking signals."

As such, using third-party content to take advantage of a high search quality ranking, outside the site's primary focus, is considered spam. That delivers a major hit to a site's Google ranking, and the impact is already being felt.

The SEO reordering does not affect more established kinds of third-party content, like wire service reports, syndication, or well-marked sponsored content, as detailed in Google's spam policy section about site reputation abuse. As seen on the SEO subreddit and on social media, Google has given sites running afoul of its updated policy a "Manual Action" rather than relying only on its algorithm to catch the often opaque arrangements.

Kevin Purdy is a senior technology reporter at Ars Technica, covering open-source software, PC gaming, home automation, repairability, e-bikes, and tech history. He has previously worked at Lifehacker, Wirecutter, iFixit, and Carbon Switch.
  • ARSTECHNICA.COM
    Qubit that makes most errors obvious now available to customers
Qubits on rails
Can a small machine that makes error correction easier upend the market?
John Timmer | Nov 20, 2024 3:58 pm

A graphic representation of the two resonance cavities that can hold photons, along with a channel that lets the photon move between them. Credit: Quantum Circuits

We're nearing the end of the year, and there is typically a flood of announcements regarding quantum computers around now, in part because some companies want to live up to promised schedules. Most of these involve evolutionary improvements on previous generations of hardware. But this year, we have something new: the first company to market with a new qubit technology.

The technology is called a dual-rail qubit, and it is intended to make the most common form of error trivially easy to detect in hardware, thus making error correction far more efficient. And, while tech giant Amazon has been experimenting with them, a startup called Quantum Circuits is the first to give the public access to dual-rail qubits via a cloud service.

While the tech is interesting on its own, it also provides us with a window into how the field as a whole is thinking about getting error-corrected quantum computing to work.

What's a dual-rail qubit?

Dual-rail qubits are variants of the hardware used in transmons, the qubits favored by companies like Google and IBM. The basic hardware unit links a loop of superconducting wire to a tiny cavity that allows microwave photons to resonate. This setup allows the presence of microwave photons in the resonator to influence the behavior of the current in the wire and vice versa. In a transmon, microwave photons are used to control the current. But other companies have hardware that does the reverse, controlling the state of the photons by altering the current.

Dual-rail qubits use two of these systems linked together, allowing photons to move from one resonator to the other. Using the superconducting loops, it's possible to control the probability that a photon will end up in the left or right resonator. The actual location of the photon will remain unknown until it's measured, allowing the system as a whole to hold a single bit of quantum information: a qubit.

This has an obvious disadvantage: You have to build twice as much hardware for the same number of qubits. So why bother? Because the vast majority of errors involve the loss of the photon, and that's easily detected. "It's about 90 percent or more [of the errors]," said Quantum Circuits' Andrei Petrenko. "So it's a huge advantage that we have with photon loss over other errors. And that's actually what makes the error correction a lot more efficient: the fact that photon losses are by far the dominant error."

Petrenko said that, without doing a measurement that would disrupt the storage of the qubit, it's possible to determine if there is an odd number of photons in the hardware. If that isn't the case, you know an error has occurred, most likely a photon loss (gains of photons are rare but do occur).
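A toy simulation can show why flagging photon loss in hardware matters so much. Only the "roughly 90 percent of errors are photon loss" ratio comes from the article; the per-cycle error probability below is an invented placeholder, not a published Quantum Circuits figure.

```python
import random

# Toy Monte Carlo: in a dual-rail qubit, a lost photon changes the photon-
# number parity across the two cavities, so a parity check flags it without
# reading out the qubit's state. Phase flips (the remaining ~10 percent of
# errors) stay invisible to that check. The error probability is assumed.
ERROR_PROB = 0.01           # assumed chance of any error per qubit per cycle
PHOTON_LOSS_FRACTION = 0.9  # ~90 percent of errors are photon loss (per article)

def run_cycles(n_cycles, seed=0):
    rng = random.Random(seed)
    flagged = silent = 0
    for _ in range(n_cycles):
        if rng.random() < ERROR_PROB:
            if rng.random() < PHOTON_LOSS_FRACTION:
                flagged += 1  # parity check sees an even photon count
            else:
                silent += 1   # phase flip: must be caught by error correction
    return flagged, silent

flagged, silent = run_cycles(1_000_000)
print(f"hardware-flagged errors: {flagged}, silent errors: {silent}")
```

In this sketch, only the silent minority of errors has to be hunted down by full error-correction measurements, which is the efficiency gain Petrenko describes.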
For simple algorithms, a flagged error would be a signal to simply start over.

But it does not eliminate the need for error correction if we want to do more complex computations that can't make it to completion without encountering an error. There's still the remaining 10 percent of errors, which are primarily something called a phase flip that is distinct to quantum systems. Bit flips are even more rare in dual-rail setups. Finally, simply knowing that a photon was lost doesn't tell you everything you need to know to fix the problem; error-correction measurements of other parts of the logical qubit are still needed to fix any problems.

The layout of the new machine. Each qubit (gray square) involves a left and right resonance chamber (blue dots) that a photon can move between. Each of the qubits has connections that allow entanglement with its nearest neighbors. Credit: Quantum Circuits

In fact, the initial hardware that's being made available is too small to even approach useful computations. Instead, Quantum Circuits chose to link eight qubits with nearest-neighbor connections in order to allow it to host a single logical qubit that enables error correction. Put differently: This machine is meant to enable people to learn how to use the unique features of dual-rail qubits to improve error correction.

One consequence of having this distinctive hardware is that the software stack that controls operations needs to take advantage of its error-detection capabilities. None of the other hardware on the market can be directly queried to determine whether it has encountered an error. So, Quantum Circuits has had to develop its own software stack to allow users to actually benefit from dual-rail qubits. Petrenko said that the company also chose to provide access to its hardware via its own cloud service because it wanted to connect directly with early adopters in order to better understand their needs and expectations.

Numbers or noise?

Given that a number of companies have already released multiple revisions of their quantum hardware and have scaled them into hundreds of individual qubits, it may seem a bit strange to see a company enter the market now with a machine that has just a handful of qubits. But amazingly, Quantum Circuits isn't alone in planning a relatively late entry into the market with hardware that hosts only a few qubits.

Having talked with several of them, I can say there is a logic to what they're doing. What follows is my attempt to convey that logic in a general form, without focusing on any single company's case.

Everyone agrees that the future of quantum computation is error correction, which requires linking together multiple hardware qubits into a single unit termed a logical qubit. To get really robust, error-free performance, you have two choices. One is to devote lots of hardware qubits to the logical qubit, so you can handle multiple errors at once. Or you can lower the error rate of the hardware, so that you can get a logical qubit with equivalent performance while using fewer hardware qubits. (The two options aren't mutually exclusive, and everyone will need to do a bit of both.)

The two options pose very different challenges. Improving the hardware error rate means diving into the physics of individual qubits and the hardware that controls them. In other words, getting lasers that have fewer of the inevitable fluctuations in frequency and energy. Or figuring out how to manufacture loops of superconducting wire with fewer defects or handle stray charges on the surface of electronics.
These are relatively hard problems.

By contrast, scaling qubit count largely involves being able to consistently do something you already know how to do. So, if you already know how to make good superconducting wire, you simply need to make a few thousand instances of that wire instead of a few dozen. The electronics that will trap an atom can be designed in a way that makes it easier to produce them thousands of times. These are mostly engineering problems, and generally of similar complexity to problems we've already solved to make the electronics revolution happen.

In other words, within limits, scaling is a much easier problem to solve than errors. It's still going to be extremely difficult to get the millions of hardware qubits we'd need to error correct complex algorithms on today's hardware. But if we can get the error rate down a bit, we can use smaller logical qubits and might only need 10,000 hardware qubits, which will be more approachable.
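The trade-off between error rate and qubit count can be made concrete with the standard surface-code scaling heuristic, in which the logical error rate falls off as roughly 0.1 * (p / p_th)^((d+1)/2) and a distance-d logical qubit needs on the order of 2*d^2 physical qubits. The article does not tie its numbers to any particular code, so the heuristic itself, the threshold, and the target values below are all outside assumptions for illustration only.

```python
# Rough estimate of physical qubits per logical qubit as a function of the
# hardware error rate, using a common surface-code heuristic. The article
# names no specific code; the threshold and target below are assumptions.
P_THRESHOLD = 1e-2       # assumed error-correction threshold
TARGET_LOGICAL = 1e-12   # assumed acceptable logical error rate per step

def qubits_per_logical(p_physical):
    """Smallest odd code distance d meeting the target, and ~2*d^2 qubits."""
    d = 3
    while 0.1 * (p_physical / P_THRESHOLD) ** ((d + 1) / 2) > TARGET_LOGICAL:
        d += 2  # surface-code distances are odd
    return d, 2 * d * d

for p in (5e-3, 1e-3, 1e-4):
    d, n = qubits_per_logical(p)
    print(f"physical error rate {p:.0e}: distance {d}, ~{n} qubits per logical qubit")
```

Under these assumptions, cutting the hardware error rate from 5e-3 to 1e-4 shrinks a logical qubit from roughly ten thousand physical qubits to a few hundred, which is exactly the lever the late entrants are betting on.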
Errors first

And there's evidence that even the early entries in quantum computing have reasoned the same way. Google has been working on iterations of the same chip design since its 2019 quantum supremacy announcement, focusing on understanding the errors that occur on improved versions of that chip. IBM made hitting the 1,000-qubit mark a major goal but has since been focused on reducing the error rate in smaller processors. Someone at a quantum computing startup once told us it would be trivial to trap more atoms in its hardware and boost the qubit count, but there wasn't much point in doing so given the error rates of the qubits on the then-current generation machine.

The new companies entering this market now are making the argument that they have a technology that will either radically reduce the error rate or make handling the errors that do occur much easier. Quantum Circuits clearly falls into the latter category, as dual-rail qubits are entirely about making the most common form of error trivial to detect. The former category includes companies like Oxford Ionics, which has indicated it can perform single-qubit gates with a fidelity of over 99.9991 percent. Or Alice & Bob, which stores qubits in the behavior of multiple photons in a single resonance cavity, making them very robust to the loss of individual photons.

These companies are betting that they have distinct technology that will let them handle error-rate issues more effectively than established players. That will lower the total scaling they need to do, and scaling will be an easier problem overall, and one that they may already have the pieces in place to handle. Quantum Circuits' Petrenko, for example, told Ars, "I think that we're at the point where we've gone through a number of iterations of this qubit architecture where we've de-risked a number of the engineering roadblocks." And Oxford Ionics told us that if it could make the electronics it uses to trap ions in its hardware once, it would be easy to mass manufacture them.

None of this should imply that these companies will have it easy compared to a startup that already has experience with both reducing errors and scaling, or a giant like Google or IBM that has the resources to do both. But it does explain why, even at this stage in quantum computing's development, we're still seeing startups enter the field.

John Timmer is Ars Technica's science editor. He has a Bachelor of Arts in Biochemistry from Columbia University and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.
  • ARSTECHNICA.COM
Automatic braking systems save lives. Now they'll need to work at 62 mph.
    slow down Automatic braking systems save lives. Now theyll need to work at 62 mph. Regulators have ordered an expansion of the tech, but the auto industry says the upgrade wont be easy. Aarian Marshall, WIRED.com Nov 19, 2024 2:45 pm | 82 At a test site, the driverless, electrically powered Cube minibus drives toward a man looking at his smartphone. (The minibus stopped.) Credit: Christophe Gateau/picture alliance via Getty Images At a test site, the driverless, electrically powered Cube minibus drives toward a man looking at his smartphone. (The minibus stopped.) Credit: Christophe Gateau/picture alliance via Getty Images Story textSizeSmallStandardLargeWidth *StandardWideLinksStandardOrange* Subscribers only Learn moreThe world is full of feel-bad news. Heres something to feel good about: Automatic emergency braking is one of the great car safety-tech success stories.Auto-braking systems, called AEB for short, use sensors including cameras, radar, and lidar to sense when a crash is about to happen and warn driversthen automatically apply the brakes if drivers dont respond. Its a handy thing to have in those vital few moments before your car careens into the back of another. One industry group estimates that US automakers' move to install AEB on most carssomething they did voluntarily, in cooperation with road safety advocateswill prevent 42,000 crashes and 20,000 injuries by 2025. A new report from AAA finds these emergency braking systems are getting even betterand challenges automakers to perfect them at even higher speeds.AAA researchers tested three model year 2018 and 2017 vehicles versus three model year 2024 vehicles, and found the AEB systems in the newer cars were twice as likely as the old systems to avoid collisions at speeds up to 35 miles per hour. In fact, the new systems avoided all of the tested collisions at speeds between 12 and 35 mph. The majority of the newer cars avoided hitting a non-moving target at 45 mph, too.The systems are headed the right way, says Greg Brannon, the director of automotive research at AAA.Now new regulations will require AEB systems to get even more intelligent. Earlier this year, the US National Highway Traffic Safety Administration, which crafts the countrys road safety rules, announced that by 2029, it will require all cars to be able to stop and avoid contact with any vehicle in front of them at even faster speeds: 62 mph. The Feds will also require automakers to build AEB systems that can detect pedestrians in the daytime and at night. And automakers will have to build tech that applies brakes automatically at speeds up to 45 mph when it senses an imminent collision with a person, and 90 mph when it senses one with a car.The rule will require automakers to build systems that can operate at highway speeds. As a result, it should do more good; according to the NHTSA, if manufacturers deploy auto-braking systems that work at higher speeds, it would save at least 360 lives each year and prevent 24,000 injuries.But no story can be all good news. Auto industry officials argue that meeting that 2029 target will be really very hard. Thats practically impossible with available technology, John Bozzella, the president and CEO of the auto industry lobbying group the Alliance for Automotive Innovation, wrote earlier this year in a letter to Congress. The government estimated that installing more advanced AEB systems on its cars would cost an additional $350 per vehicle. 
The auto lobbying group estimates prices could range up to $4,200 per car instead, and it has filed a petition to request changes to the final federal rules.

In response to WIRED's questions, a spokesperson for NHTSA said that more advanced AEB systems will significantly reduce injury or property damage and the associated costs from these crashes. The spokesperson said the agency is working expeditiously to reply to the group's petition.

Auto safety experts say that if automakers (and the suppliers who build their technology) pull off more advanced automatic emergency braking, they'll have to walk a tightrope: developing tech that avoids crashes without ballooning costs. They'll also have to avoid false positives, or "phantom braking," in which the system incorrectly identifies a nonhazard as a hazard and throws on the brakes for no apparent reason. These events can frustrate and annoy drivers, and at higher speeds, give them serious cases of whiplash.

"That is a really big concern: that as you increase the number of situations in which the system has to operate, you have more of these false warnings," says David Kidd, a senior research scientist at the Insurance Institute for Highway Safety (IIHS), an insurance-industry-funded scientific and educational organization.

Otherwise, drivers will get mad. "The mainstream manufacturers have to be a little careful because they don't want to create customer dissatisfaction by making the system too twitchy," says Brannon, at AAA. Tesla drivers, for example, have proven very tolerant of beta testing and quirks. Your average driver, maybe less so.

Based on its own research, IIHS has pushed automakers to install AEB systems able to operate at faster speeds. Kidd says IIHS research suggests there have been no systemic, industry-wide issues with safety and automatic emergency braking, and fewer and fewer drivers seem to be turning off their AEB systems out of annoyance. (The new rules make it so drivers can't turn them off.) But US regulators have investigated a handful of automakers, including General Motors and Honda, for automatic emergency braking problems that have reportedly injured more than 100 people, though the automakers have reportedly fixed the issues.

New complexities

Getting cars to fast-brake at even higher speeds will require a series of tech advances, experts say. AEB works by bringing in data from sensors. That information is then turned over to automakers' custom-tuned classification systems, which are trained to recognize certain situations and road users ("that's a stopped car in the middle of the road up ahead," or "there's a person walking across the road up there") and intervene.

So to get AEB to work in higher-speed situations, the tech will have to see farther down the road. Most of today's new cars come loaded with sensors, including cameras and radar, which can collect vital data. But the auto industry trade group argues that the feds have underestimated the amount of new hardware (including, possibly, more expensive lidar units) that will have to be added to cars.

Brake-makers will have to tinker with components to allow quicker stops, which will require the pressurized fluid that moves through a brake's hydraulic lines to move even faster. Detecting hazards at greater distances could require different types of hardware, including sometimes-expensive sensors. "Some vehicles might just need a software update, and some might not have the right sensor suite," says Bhavana Chakraborty, an engineering director at Bosch, an automotive supplier that builds safety systems.
Those without the right hardware will need updates across the board, she says, to get to the levels of safety demanded by the federal government.

Bosch and other suppliers advise automakers on how to use the systems they build, but manufacturers are ultimately in charge of the other AEB secret sauce: algorithms. Each automaker tunes its safety system, using its own calculations to determine how and when its vehicles will automatically avoid collisions.

What's next

Even the US feds' 2029 rules don't fulfill all road safety advocates' dreams. The regulations don't require safety systems to recognize bicyclists, though some automakers are already building that in voluntarily. And unlike European vehicles, US AEB systems won't undergo tests that determine how well they work while turning. The European New Car Assessment Programme started testing AEB for turning effectiveness last year and has for several years required automakers to build systems that totally avoid crashes at higher speeds. Some automakers are already building systems that pass these tests, says Kidd, the IIHS scientist: a good sign that they'll be able to pull it off on US roads, too.

"I don't think there's any doubt that these will make the roads safer," Kidd says. A good news story after all.

This story originally appeared on wired.com.

Aarian Marshall, WIRED.com. Wired.com is your essential daily guide to what's next, delivering the most original and complete take you'll find anywhere on innovation's impact on technology, science, business and culture.
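The sense-warn-brake loop described above boils down to time-to-collision arithmetic. Here is a minimal sketch in Python of that logic; the thresholds and the decision function are invented for illustration, since production systems fuse multiple sensor tracks and use proprietary, tuned values.

```python
# Illustrative time-to-collision (TTC) logic of the kind AEB systems rely on.
# All thresholds below are made-up teaching values, not regulatory numbers.

MPH_TO_MPS = 0.44704  # miles per hour -> meters per second

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:
        return float("inf")  # not closing on the target; no threat
    return gap_m / closing_speed_mps

def aeb_decision(gap_m: float, ego_mph: float, lead_mph: float,
                 warn_ttc: float = 2.0, brake_ttc: float = 1.2) -> str:
    """Warn the driver first, then brake if the projected impact is imminent."""
    closing = (ego_mph - lead_mph) * MPH_TO_MPS
    ttc = time_to_collision(gap_m, closing)
    if ttc < brake_ttc:
        return "BRAKE"    # driver hasn't responded; apply brakes automatically
    if ttc < warn_ttc:
        return "WARN"     # alert the driver
    return "MONITOR"

# The rule's headline scenario: approaching a stopped vehicle at 62 mph.
for gap_m in (80, 40, 25):
    print(f"{gap_m} m gap: {aeb_decision(gap_m, ego_mph=62, lead_mph=0)}")
```

At 62 mph the car covers roughly 28 meters every second, which is one way to see why the higher-speed requirement forces sensors to track hazards much farther down the road.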
  • ARSTECHNICA.COM
    Niantic uses Pokémon Go player data to build AI navigation system
    gotta catch 'em all Niantic uses Pokémon Go player data to build AI navigation system Visual scans of the world have helped Niantic build what it calls a "Large Geospatial Model." Benj Edwards Nov 19, 2024 3:34 pm | 10

Last week, Niantic announced plans to create an AI model for navigating the physical world using scans collected from players of its mobile games, such as Pokémon Go, and from users of its Scaniverse app, reports 404 Media.

All AI models require training data. So far, companies have collected data from websites, YouTube videos, books, audio sources, and more, but this is perhaps the first we've heard of AI training data collected through a mobile gaming app.

"Over the past five years, Niantic has focused on building our Visual Positioning System (VPS), which uses a single image from a phone to determine its position and orientation using a 3D map built from people scanning interesting locations in our games and Scaniverse," Niantic wrote in a company blog post.

The company calls its creation a "Large Geospatial Model" (LGM), drawing parallels to large language models (LLMs) like the kind that power ChatGPT. Where language models process text, Niantic's model will process physical spaces using geolocated images collected through its apps.

The scale of Niantic's data collection reveals the company's sizable presence in the AR space. The model draws from over 10 million scanned locations worldwide, with users capturing roughly 1 million new scans weekly through Pokémon Go and Scaniverse. These scans come from a pedestrian perspective, capturing areas inaccessible to cars and street-view cameras.

First-person scans

The company reports it has trained more than 50 million neural networks, each one representing a specific location or viewing angle. These networks compress thousands of mapping images into digital representations of physical spaces. Together, they contain over 150 trillion parameters: adjustable values that help the networks recognize and understand locations. Multiple networks can contribute to mapping a single location, and Niantic plans to combine its knowledge into one comprehensive model that can understand any location, even from unfamiliar angles.

"Imagine yourself standing behind a church," Niantic wrote in its blog post. "The closest local model has seen only the front entrance of that church, and thus, it will not be able to tell you where you are. But on a global scale, we have seen thousands of churches captured by local models worldwide. No church is the same, but many share common characteristics. An LGM accesses that distributed knowledge."

The technology builds on Niantic's existing Lightship Visual Positioning System, which lets players place virtual items in real-world locations with centimeter-level precision. A recent Pokémon Go feature called Pokémon Playgrounds demonstrates this capability, allowing users to leave Pokémon at specific spots for others to find.

Niantic suggests the technology could support augmented reality products, robotics, and autonomous systems, with additional applications in spatial planning, logistics, and remote collaboration.

Did Niantic's millions of players have any idea their scans would be fed into an AI system? Judging from this Reddit thread reacting to the 404 Media article, it seems that many are not surprised. "Definitely wasn't unwittingly," wrote one Redditor.
"Most of us knew their business model didn't revolve around supporting the actual players."No doubt the process was covered by Pokmon Go's data collection terms of service, but the larger reaction to this news will likely be a developing story over time.Benj EdwardsSenior AI ReporterBenj EdwardsSenior AI Reporter Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a widely-cited tech historian. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC. 10 Comments Prev story
  • ARSTECHNICA.COM
    Microsoft and Atom Computing combine for quantum error correction demo
    Atomic power? Microsoft and Atom Computing combine for quantum error correction demo New work provides a good view of where the field currently stands. John Timmer Nov 19, 2024 4:00 pm | 4

The first-generation tech demo of Atom's hardware. Things have progressed considerably since. Credit: Atom Computing

In September, Microsoft made an unusual combination of announcements. It demonstrated progress with quantum error correction, something that will be needed for the technology to move much beyond the interesting demo phase, using hardware from a quantum computing startup called Quantinuum. At the same time, however, the company also announced that it was forming a partnership with a different startup, Atom Computing, which uses a different technology to make qubits available for computations.

Given that, it was probably inevitable that the folks in Redmond, Washington, would want to show that similar error correction techniques would also work with Atom Computing's hardware. It didn't take long, as the two companies are releasing a draft manuscript describing their work on error correction today. The paper serves both as a good summary of where things currently stand in the world of error correction and as a good look at some of the distinct features of computation using neutral atoms.

Atoms and errors

While we have various technologies that provide a way of storing and manipulating bits of quantum information, none of them can be operated error-free. At present, errors make it difficult to perform even the simplest computations that are clearly beyond the capabilities of classical computers. More sophisticated algorithms would inevitably encounter an error before they could be completed, a situation that would remain true even if we could somehow improve the hardware error rates of qubits by a factor of 1,000 (something we're unlikely to ever be able to do).

The solution to this is to use what are called logical qubits, which distribute quantum information across multiple hardware qubits and allow the detection and correction of errors when they occur. Since multiple qubits get linked together to operate as a single logical unit, the hardware error rate still matters. If it's too high, then adding more hardware qubits just means that errors will pop up faster than they can possibly be corrected.

We're now at the point where, for a number of technologies, hardware error rates have passed the break-even point, and adding more hardware qubits can lower the error rate of a logical qubit based on them. This was demonstrated using neutral atom qubits by an academic lab at Harvard University about a year ago. The new manuscript demonstrates that it also works on a commercial machine from Atom Computing.

Neutral atoms, which can be held in place using a lattice of laser light, have a number of distinct advantages when it comes to quantum computing. Every single atom will behave identically, meaning that you don't have to manage the device-to-device variability that's inevitable with fabricated electronic qubits. Atoms can also be moved around, allowing any atom to be entangled with any other. This any-to-any connectivity can enable more efficient algorithms and error-correction schemes.
The quantum information is typically stored in the spin of the atom's nucleus, which is shielded from environmental influences by the cloud of electrons that surrounds it, making these relatively long-lived qubits.

Operations, including gates and readout, are performed using lasers. The way the physics works, the spacing of the atoms determines how the laser affects them. If two atoms are a critical distance apart, the laser can perform a single operation, called a two-qubit gate, that affects both of their states. Anywhere outside this distance, a laser only affects each atom individually. This allows fine control over gate operations.

That said, operations are relatively slow compared to some electronic qubits, and atoms can occasionally be lost entirely. The optical traps that hold atoms in place are also contingent upon the atom being in its ground state; if any atom ends up stuck in a different state, it will be able to drift off and be lost. This is actually somewhat useful, in that it converts an unexpected state into a clear error.

Atom Computing's system. Rows of atoms are held far enough apart that a single laser sent across them (green bar) only operates on individual atoms. If the atoms are moved to the interaction zone (red bar), a laser can perform gates on pairs of atoms. Spaces where atoms can be held can be left empty to avoid performing unneeded operations. Credit: Reichardt, et al.

The machine used in the new demonstration hosts 256 of these neutral atoms. Atom Computing has them arranged in sets of parallel rows, with space in between to let the atoms be shuffled around. For single-qubit gates, it's possible to shine a laser across the rows, causing every atom it touches to undergo that operation. For two-qubit gates, pairs of atoms get moved to the end of the row and positioned a specific distance apart, at which point a laser will cause the gate to be performed on every pair present.

Atom's hardware also allows a constant supply of new atoms to be brought in to replace any that are lost. It's also possible to image the atom array between operations to determine whether any atoms have been lost and whether any are in the wrong state.

It's only logical

As a general rule, the more hardware qubits you dedicate to each logical qubit, the more simultaneous errors you can identify. This identification enables two ways of handling an error. In the first, you simply discard any calculation with an error and start over. In the second, you can use information about the error to try to fix it, although the repair involves additional operations that can potentially trigger a separate error.

For this work, the Microsoft/Atom team used relatively small logical qubits (meaning they used very few hardware qubits), which meant they could fit more of them within the 256 hardware qubits the machine made available. They also checked the error rate of both error detection with discard and error detection with correction.

The research team did two main demonstrations. One was placing 24 of these logical qubits into what's called a cat state, named after Schrödinger's hypothetical feline. This is when a quantum object simultaneously has a non-zero probability of being in two mutually exclusive states. In this case, the researchers placed 24 logical qubits in an entangled cat state, the largest ensemble of this sort yet created. Separately, they implemented what's called the Bernstein-Vazirani algorithm.
The classical version of this algorithm requires individual queries to identify each bit in a string of them; the quantum version obtains the entire string with a single query, so it is a notable case where a quantum speedup is possible.

Both of these showed a similar pattern. When done directly on the hardware, with each qubit being a single atom, there was an appreciable error rate. By detecting errors and discarding those calculations where they occurred, it was possible to significantly improve the error rate of the remaining calculations. Note that this doesn't eliminate errors, as it's possible for multiple errors to occur simultaneously, altering the value of the qubit without leaving an indication that can be spotted with these small logical qubits.

Discarding has its limits; as calculations become increasingly complex, involving more qubits or operations, it will inevitably mean every calculation will have an error, so you'd end up wanting to discard everything. Which is why we'll ultimately need to correct the errors.

In these experiments, however, the process of correcting the error (taking an entirely new atom and setting it into the appropriate state) was also error-prone. So, while it could be done, it ended up having an overall error rate that was intermediate between the approach of catching and discarding errors and the rate when operations were done directly on the hardware.

In the end, the current hardware has an error rate that's good enough that error correction actually improves the probability that a set of operations can be performed without producing an error, but not good enough that we can perform the sort of complex operations that would lead quantum computers to have an advantage in useful calculations. And that's not just true for Atom's hardware; similar things can be said for other error-correction demonstrations done on different machines.

There are two ways to go beyond these current limits. One is simply to improve the error rates of the hardware qubits further, as fewer total errors make it more likely that we can catch and correct them. The second is to increase the qubit counts so that we can host larger, more robust logical qubits. We're obviously going to need to do both, and Atom's partnership with Microsoft was formed in the hope that it will help both companies get there faster.

John Timmer, Senior Science Editor. John is Ars Technica's science editor. He has a Bachelor of Arts in Biochemistry from Columbia University and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle or a scenic location for communing with his hiking boots.
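To make the Bernstein-Vazirani comparison concrete: the oracle hides a bit string s and answers any query x with the parity of s·x. A small classical sketch in Python follows (the 24-bit size just echoes the demo's 24 logical qubits; the quantum one-query result is stated in a comment rather than simulated):

```python
# Classical vs. quantum query counts for Bernstein-Vazirani, as described above.
import random

n = 24  # echoing the demo's 24 logical qubits
s = [random.randint(0, 1) for _ in range(n)]  # the hidden bit string

queries = 0
def oracle(x):
    """Answers a query x with the parity of the dot product s.x (mod 2)."""
    global queries
    queries += 1
    return sum(si * xi for si, xi in zip(s, x)) % 2

# Classically, probe with unit vectors e_i: oracle(e_i) = s_i, so n queries.
recovered = []
for i in range(n):
    e = [0] * n
    e[i] = 1
    recovered.append(oracle(e))

assert recovered == s
print(f"classical queries: {queries}")  # n = 24, one per bit
print("quantum queries: 1")  # one superposed query reveals all of s at once
```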
  • ARSTECHNICA.COM
    The key moment came 38 minutes after Starship roared off the launch pad
    Turning point The key moment came 38 minutes after Starship roared off the launch pad SpaceX wasn't able to catch the Super Heavy booster, but Starship is on the cusp of orbital flight. Stephen Clark Nov 19, 2024 11:57 pm | 36

The sixth flight of Starship lifts off from SpaceX's Starbase launch site at Boca Chica Beach, Texas. Credit: SpaceX

SpaceX launched its sixth Starship rocket Tuesday, proving for the first time that the stainless steel ship can maneuver in space and paving the way for an even larger, upgraded vehicle slated to debut on the next test flight.

The only hiccup was an abortive attempt to catch the rocket's Super Heavy booster back at the launch site in South Texas, something SpaceX achieved on the previous flight on October 13. The Starship upper stage flew halfway around the world, reaching an altitude of 118 miles (190 kilometers), before plunging through the atmosphere for a pinpoint slow-speed splashdown in the Indian Ocean.

The sixth flight of the world's largest launcher (standing 398 feet, or 121.3 meters, tall) began with a lumbering liftoff from SpaceX's Starbase facility near the US-Mexico border at 4 pm CST (22:00 UTC) Tuesday. The rocket headed east over the Gulf of Mexico, propelled by 33 Raptor engines clustered on the bottom of its Super Heavy first stage.

A few miles away, President-elect Donald Trump joined SpaceX founder Elon Musk to witness the launch. The SpaceX boss became one of Trump's closest allies in this year's presidential election, giving the world's richest man extraordinary influence in US space policy. Sen. Ted Cruz (R-Texas) was there, too, among other lawmakers. Gen. Chance Saltzman, the top commander in the US Space Force, stood nearby, chatting with Trump and other VIPs.

Elon Musk, SpaceX's CEO, President-elect Donald Trump, and Gen. Chance Saltzman of the US Space Force watch the sixth launch of Starship Tuesday. Credit: Brandon Bell/Getty Images

From their viewing platform, they watched Starship climb into a clear autumn sky. At full power, the 33 Raptors chugged more than 40,000 pounds of super-cold liquid methane and liquid oxygen per second. The engines generated 16.7 million pounds of thrust, 60 percent more than the Soviet N1, the second-largest rocket in history.

Eight minutes later, the rocket's upper stage, itself also known as Starship, was in space, completing the program's fourth straight near-flawless launch. The first two test flights faltered before reaching their planned trajectory.

A brief but crucial demo

As exciting as it was, we've seen all that before. One of the most important new things engineers wanted to test on this flight occurred about 38 minutes after liftoff. That's when Starship reignited one of its six Raptor engines for a brief burn to make a slight adjustment to its flight path.
The burn lasted only a few seconds, and the impulse was small (just a 48 mph, or 77 km/hour, change in velocity, or delta-V), but it demonstrated the ship can safely deorbit itself on future missions.

With this achievement, Starship will likely soon be cleared to travel into orbit around Earth and deploy Starlink internet satellites or conduct in-space refueling experiments, two of the near-term objectives on SpaceX's Starship development roadmap.

Launching Starlinks aboard Starship will allow SpaceX to expand the capacity and reach of its commercial consumer broadband network, which, in turn, provides revenue for Musk to reinvest into Starship. Orbital refueling is an enabler for Starship voyages beyond low-Earth orbit, fulfilling SpaceX's multibillion-dollar contract with NASA to provide a human-rated Moon lander for the agency's Artemis program. Likewise, transferring cryogenic propellants in orbit is a prerequisite for sending Starships to Mars, making real Musk's dream of creating a settlement on the red planet.

Artist's illustration of Starship on the surface of the Moon. Credit: SpaceX

Until now, SpaceX has intentionally launched Starships to speeds just shy of the blistering velocities needed to maintain orbit. Engineers wanted to test the Raptor's ability to reignite in space on the third Starship test flight in March, but the ship lost control of its orientation, and SpaceX canceled the engine firing.

Before going for a full orbital flight, officials needed to confirm Starship could steer itself back into the atmosphere for reentry, ensuring it wouldn't present any risk to the public with an unguided descent over a populated area. After Tuesday, SpaceX can check this off its to-do list.

"Congrats to SpaceX on Starship's sixth test flight," NASA Administrator Bill Nelson posted on X. "Exciting to see the Raptor engine restart in space: major progress towards orbital flight. Starship's success is Artemis' success. Together, we will return humanity to the Moon & set our sights on Mars."

While it lacks the pizazz of a fiery launch or landing, the engine relight unlocks a new phase of Starship development. SpaceX has now proven the rocket is capable of reaching space with a fair measure of reliability. Next, engineers will fine-tune how to reliably recover the booster and the ship, and learn how to use them.

Acid test

SpaceX appears well on the way to doing this. While SpaceX didn't catch the Super Heavy booster with the launch tower's mechanical arms Tuesday, engineers have shown they can do it. The challenge of catching Starship itself back at the launch pad is more daunting. The ship starts its reentry thousands of miles from Starbase, traveling approximately 17,000 mph (27,000 km/hour), and must thread the gap between the tower's catch arms within a matter of inches.

The good news here is SpaceX has now twice proven it can bring Starship back to a precision splashdown in the Indian Ocean. In October, the ship settled into the sea in darkness. SpaceX moved the launch time for Tuesday's flight to the late afternoon, setting up for a splashdown shortly after sunrise northwest of Australia.

The shift in time paid off with some stunning new visuals. Cameras mounted on the outside of Starship beamed dazzling live views back to SpaceX through the Starlink network, showing a now-familiar glow of plasma encasing the spacecraft as it plowed deeper into the atmosphere. But this time, daylight revealed the ship's flaps moving to control its belly-first descent toward the ocean.
After passing through a deck of low clouds, Starship reignited its Raptor engines and tilted from horizontal to vertical, making contact with the water tail-first within view of a floating buoy and a nearby aircraft in position to observe the moment.

The ship made it through reentry despite flying with a substandard heat shield. Starship's thermal protection system is made up of thousands of ceramic tiles that protect the ship from temperatures as high as 2,600° Fahrenheit (1,430° Celsius).

Kate Tice, a SpaceX engineer hosting the company's live broadcast of the mission, said teams at Starbase removed 2,100 heat shield tiles from Starship ahead of Tuesday's launch. Their removal exposed wider swaths of the ship's stainless steel skin to superheated plasma, and SpaceX teams were eager to see how well the spacecraft held up during reentry. In the language of flight testing, this approach is called exploring the corners of the envelope, where engineers evaluate how a new airplane or rocket performs in extreme conditions.

"Don't be surprised if we see some wackadoodle stuff happen here," Tice said. There was nothing of the sort. One of the ship's flaps appeared to suffer some heating damage, but it remained intact and functional, and the harm looked to be less substantial than damage seen on previous flights.

Many of the removed tiles came from the sides of Starship where SpaceX plans to place catch fittings on future vehicles. These are the hardware protuberances that will catch on the top side of the launch tower's mechanical arms, similar to fittings used on the Super Heavy booster.

"The next flight, we want to better understand where we can install catch hardware, not necessarily to actually do the catch but to see how that hardware holds up in those spots," Tice said. "Today's flight will help inform, does the stainless steel hold up like we think it may, based on experiments that we conducted on Flight 5?"

Musk wrote on his social media platform X that SpaceX could try to bring Starship back to Starbase for a catch on the eighth test flight, which is likely to occur in the first half of 2025.

"We will do one more ocean landing of the ship," Musk said. "If that goes well, then SpaceX will attempt to catch the ship with the tower."

The heat shield, Musk added, is a focal point of SpaceX's attention. The delicate heat-absorbing tiles used on the belly of the Space Shuttle proved vexing to NASA technicians. Early in the shuttle's development, NASA had trouble keeping tiles adhered to the shuttle's aluminum skin. Each of the shuttle's tiles was custom-machined to fit a specific location on the orbiter, complicating refurbishment between flights. Starship's tiles are all hexagonal in shape and agnostic to where technicians place them on the vehicle.

"The biggest technology challenge remaining for Starship is a fully & immediately reusable heat shield," Musk wrote on X. "Being able to land the ship, refill propellant & launch right away with no refurbishment or laborious inspection. That is the acid test."

This photo of the Starship vehicle for Flight 6, numbered Ship 31, shows exposed portions of the vehicle's stainless steel skin after tile removal. Credit: SpaceX

There were no details available Tuesday night on what caused the Super Heavy booster to divert from its planned catch on the launch tower.
After detaching from the Starship upper stage less than three minutes into the flight, the booster reversed course to begin the journey back to Starbase. Then, SpaceX's flight director announced the rocket would fly itself into the Gulf rather than back to the launch site: "Booster offshore divert."

The booster finished its descent with a seemingly perfect landing burn using a subset of its Raptor engines. As expected after the water landing, the booster (itself 233 feet, or 71 meters, tall) toppled and broke apart in a dramatic fireball visible to onshore spectators.

In an update posted to its website after the launch, SpaceX said automated health checks of hardware on the launch and catch tower triggered the aborted catch attempt. The company did not say which system failed the health check. As a safety measure, SpaceX must send a manual command for the booster to come back to land, in order to prevent a malfunction from endangering people or property.

Turning it up to 11

There will be plenty more opportunities for booster catches in the coming months as SpaceX ramps up its launch cadence at Starbase. Gwynne Shotwell, SpaceX's president and chief operating officer, hinted at the scale of the company's ambitions last week. "We just passed 400 launches on Falcon, and I would not be surprised if we fly 400 Starship launches in the next four years," she said at the Baron Investment Conference.

The next batch of test flights will use an improved version of Starship designated Block 2, or V2. Starship Block 2 comes with larger propellant tanks, redesigned forward flaps, and a better heat shield.

The new-generation Starship will hold more than 11 million pounds of fuel and oxidizer, about a million pounds more than the capacity of Starship Block 1. The booster and ship will produce more thrust, and Block 2 will measure 408 feet (124.4 meters) tall, stretching the height of the full stack by a little more than 10 feet.

Put together, these modifications should give Starship the ability to heave a payload of up to 220,000 pounds (100 metric tons) into low-Earth orbit, about twice the carrying capacity of the first-generation ship. Further down the line, SpaceX plans to introduce Starship Block 3 to again double the ship's payload capacity.

Just as importantly, these changes are designed to make it easier for SpaceX to recover and reuse the Super Heavy booster and the Starship upper stage. SpaceX's goal of fielding a fully reusable launcher builds on the partial reuse the company pioneered with its Falcon 9 rocket. This should dramatically bring down launch costs, according to SpaceX's vision.

With Tuesday's flight, it's clear Starship works. Now it's time to see what it can do.

Updated with additional details, quotes, and images.

Stephen Clark, Space Reporter. Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world's space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.
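For a sense of scale, it helps to put the article's own figures side by side: the deorbit-style burn changed the ship's speed by just 48 mph, while reentry begins near 17,000 mph. A quick Python unit-conversion check, using only numbers quoted above:

```python
# Unit sanity check on the figures quoted in the story (no SpaceX data beyond
# the article's own numbers).
MPH_TO_MPS = 0.44704

deorbit_dv = 48 * MPH_TO_MPS      # ~21.5 m/s: the brief Raptor relight
reentry_v = 17000 * MPH_TO_MPS    # ~7,600 m/s: approximate reentry speed

print(f"deorbit burn delta-V: {deorbit_dv:.1f} m/s")
print(f"reentry speed:        {reentry_v:.0f} m/s")
# The burn is roughly 0.3% of reentry speed; a small retrograde nudge is
# enough to steer where (and whether) the ship comes back down.
print(f"ratio: {deorbit_dv / reentry_v:.4f}")
```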
  • ARSTECHNICA.COM
    Musi fans refuse to update iPhones until Apple unblocks controversial app
    Musi, come back Musi fans refuse to update iPhones until Apple unblocks controversial app Musi doesn't risk extinction over App Store removal, Apple says. Ashley Belanger Nov 19, 2024 4:24 pm | 38

Credit: nicoletaionescu | iStock / Getty Images Plus

"Who up missing Musi?" a Reddit user posted in a community shocked by the free music streaming app's sudden removal from Apple's App Store in September.

Apple kicked Musi out of the App Store after receiving several copyright complaints. Musi works by streaming music from YouTube (seemingly avoiding paying to license songs), and YouTube was unsurprisingly chief among those urging Apple to stop allowing the alleged infringement.

Musi was previously only available through the App Store. Once Musi was removed, anyone who had downloaded it could continue using the app uninterrupted. But if the app was ever offloaded during an update, or if the user got a new phone, there would be no way to regain access to the Musi app or their playlists.

Some Musi fans only learned that Apple booted Musi after they updated their phones and the app got offloaded with no option to re-download. Panicked, these users turned to the Musi subreddit for answers, where Musi's support staff has consistently responded with reassurances that Musi is working to bring the app back to the App Store. For many Musi users learning from others' mistakes, the Reddit discussions leave them with no choice but to refuse to update their phones or risk losing their favorite app.

It may take months before Musi fans can exit this legal limbo. After Apple gave in to the pressure, Musi sued in October, hoping to quickly secure an injunction that would force Apple to reinstate Musi in the App Store until the copyright allegations were decided. But a hearing on that motion isn't scheduled until January, making it appear unlikely that Musi will be available to download again until sometime next year.

Musi claimed Apple breached its contract by removing the app before investigating YouTube's claims. The music-streaming app is concerned that the longer the litigation drags on, the more likely its users will move on. A mass exodus of users "risks extinction," Musi argued, telling the court the app fears a potentially substantial loss in revenue over allegedly unsubstantiated copyright claims.

But Apple filed its opposition to the injunction last Friday, urging the court to agree that because Musi fans who still have the app installed can continue streaming, Musi is not at risk of "extinction."

"Musi asserts that its app is still in use by its preexisting customer base, and so Musi is presumably still earning revenue from ads," Apple's opposition filing said. "Moreover, Musi provides no evidence relating to its financial condition and no evidence that it is unable to survive until a decision on the merits in this case."

According to Apple, Musi is not being transparent about its finances, but public reporting showed the app "earned more than $100 million in advertising revenue between January 2023 and spring 2024 and employs 10 people at most."

Apple warned that granting Musi's injunction puts Apple at risk of copyright violations. The App Store owner claimed that it takes no sides in this dispute, which is largely between Musi and YouTube.
But to Apple, it would be unreasonable to expect the company to investigate every copyright notice it receives when thousands of third parties send notices annually. That's partly why Apple's contract stipulates that any app can be removed from the App Store "at any time, with or without cause." Apple further claimed that Musi has not taken serious steps to address YouTube's or any other rights holders' concerns.

"The public interest in the preservation of intellectual property rights weighs heavily against the injunction sought here, which would force Apple to distribute an app over the repeated and consistent objections of non-parties who allege their rights are infringed by the app," Apple argued.

Musi fans vow loyalty

For Musi fans expressing their suffering on Reddit, Musi appears to be irreplaceable. Unlike other free apps that continually play ads, Musi only serves ads when the app is initially opened, then allows uninterrupted listening. One Musi user also noted that Musi allows an unlimited number of videos in a playlist, where YouTube caps playlists at 5,000 videos.

"Musi is the only playback system I have to play all 9k of my videos/songs in the same library," the Musi fan said. "I honestly don't just use Musi just cause its free. It has features no other app has, especially if you like to watch music videos while you listen to music."

"Spotify isn't cutting it," one Reddit user whined.

"I hate Spotify," another user agreed.

"I think of Musi every other day," said a third user, who apparently lost the app after purchasing a new phone. "Since I got my new iPhone, I have to settle for other music apps just to get by (not enough, of course) to listen to music in my car driving. I will be patiently waiting once Musi is available to redownload."

Some Musi fans who still have access gloat in the threads, while others warn the litigation could soon doom the app for everyone. Musi continues to, perhaps optimistically, tell users that the app is coming back, reassuring anyone whose app was accidentally offloaded that their libraries remain linked through iCloud and will be restored if it does.

Some users buy into Musi's promises, while others seem skeptical that Musi can take on Apple. To many users still clinging to their Musi app, updating their phones has become too risky until the litigation resolves.

"Please," one Musi fan begged. "Musi come back!!!"

Ashley Belanger, Senior Policy Reporter. Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
  • ARSTECHNICA.COM
    A year after ditching waitlist, Starlink says it is sold out in parts of US
    Just you wait A year after ditching waitlist, Starlink says it is sold out in parts of US SpaceX's Starlink doesn't have enough capacity for everyone who wants it. Jon Brodkin Nov 19, 2024 5:11 pm | 32

The standard Starlink satellite dish. Credit: Starlink

The Starlink waitlist is back in certain parts of the US, including several large cities on the West Coast and in Texas. The Starlink availability map says the service is sold out in and around Seattle; Spokane, Washington; Portland, Oregon; San Diego; Sacramento, California; and Austin, Texas. Neighboring cities and towns are included in the sold-out zones.

There are additional sold-out areas in small parts of Colorado, Montana, and North Carolina. As PCMag noted yesterday, the change comes about a year after Starlink added capacity and removed its waitlist throughout the US.

Elsewhere in North America, there are some sold-out areas in Canada and Mexico. Across the Atlantic, Starlink is sold out in London and neighboring cities. Starlink is not yet available in most of Africa, and some of the areas where it is available are sold out.

Starlink is generally seen as most useful in rural areas with less access to wired broadband, but it seems to be attracting interest in more heavily populated areas, too. While detailed region-by-region subscriber numbers aren't available publicly, SpaceX President Gwynne Shotwell said last week that Starlink has nearly 5 million users worldwide.

Capacity problems

It's been clear for a while that Starlink has enough capacity in much of its network and capacity problems in other areas. This is reflected in pricing: Starlink charges a $100 "congestion charge" in busy areas and used to offer lower monthly prices in areas with excess capacity. The SpaceX division offers broadband from over 6,600 satellites and is frequently launching more.

It's still possible to order in waitlisted areas, but it's unclear how long people will have to wait. A message in the checkout system says, "Starlink is at capacity in your area. Order now to reserve your Starlink. You will receive a notification once your Starlink is ready to ship." A $99 deposit is required.

PCMag notes that users can "bypass the waitlist by subscribing to the pricier Starlink Roam tier." However, they could run into performance problems in congested areas with Roam, which is marketed for use while traveling, not as a fixed home Internet service. Starlink could also block Roam service in specific areas.

Roam costs $599 up front for hardware and $50 a month for 50GB of data, or $165 a month for unlimited service. Residential Starlink has a hardware price of $349 and a monthly service price of $120.

Jon Brodkin, Senior IT Reporter. Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.
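To see what PCMag's waitlist-bypass suggestion actually costs, here is a quick first-year comparison using only the prices quoted above (a rough sketch: taxes, the $99 deposit, and any congestion charge are ignored):

```python
# First-year cost = hardware + 12 months of service, per the quoted prices.
def first_year_cost(hardware_usd: int, monthly_usd: int) -> int:
    return hardware_usd + 12 * monthly_usd

print("Residential:    $", first_year_cost(349, 120))  # $1,789
print("Roam 50GB:      $", first_year_cost(599, 50))   # $1,199
print("Roam unlimited: $", first_year_cost(599, 165))  # $2,579
```

On these numbers, capped Roam undercuts residential service in year one, while unlimited Roam costs considerably more, which helps explain why it appeals mainly as a stopgap for waitlisted addresses.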
  • ARSTECHNICA.COM
    Valve developers discuss why Half-Life 2: Episode 3 was abandoned
    Allergic to "3" Valve developers discuss why Half-Life 2: Episode 3 was abandoned Anniversary doc also includes footage of unused ice gun, blob enemies. Kyle Orland Nov 18, 2024 4:06 pm | 32

The ice gun would have been the main mechanical gimmick in Half-Life 2: Episode 3. Credit: Valve

After Ars spent Half-Life 2's 20th anniversary week looking back at the game's history and impact, Valve marked the occasion with a meaty two-hour YouTube documentary featuring insider memories from the team behind the game itself. Near the end of that documentary, longtime Valve watchers also get a chance to see footage of the long-promised but never-delivered Half-Life 2: Episode 3, and to hear more about what led the project to be abandoned.

The Episode 3 footage included in the documentary focuses heavily on a new ice gun that would have served as the episode's main new feature. Players would have been able to use that gun to freeze enemies, set up ice walls as makeshift cover, or construct icy ledges to make their way down sheer cliff faces. The developers also describe a so-called "Silver Surfer mode" that would have let players extrude a line of ice in their path, then slide along it at slippery speeds.

The Episode 3 developers were also working on a new, blob-like enemy that could absorb other blobs to grow, or split into segments to get around small barriers or pass through grates.

Missing the moment

According to the documentary, Valve spent about six months working on Episode 3 before deciding to pull all hands in to work on Left 4 Dead. At that point, the Episode 3 project was still an unordered set of playable levels set in the Arctic, with few story beats and concepts between them. Developers quoted in the documentary said it would have taken years of additional work to get the episode into a releasable state.

By the time work on Left 4 Dead was wrapping up in 2008, Valve was still publicly saying that it hoped Episode 3 would be ready by 2010. But after so much time spent away from the Episode 3 project, developers found it was hard to restart the momentum for a prototype that now felt somewhat dated.

The technology behind these blob-like enemies ended up being reused for the paint in Portal 2. Credit: Valve

Looking back, Valve engineer David Speyrer said it was "tragic and almost comical" that "by the time we considered going back to Episode 3, the argument was made like, 'Well, we missed it. It's too late now. And we really need to make a new engine to continue the Half-Life series and all that.' And now that just seems, in hindsight, so wrong. We could have definitely gone back and spent two years to make Episode 3."

Despite the new weapons and mechanics that were already in the works for Episode 3, many developers quoted in the documentary cite a kind of fatigue that had set in after so much time and effort focused on a single franchise. "A lot of us had been doing Half-Life for eight-plus years," designer and composer Kelly Bailey noted.

That lengthy focus on a single franchise helps explain why some Valve developers were eager to work on anything else by that time in their careers. "I think everybody that worked on Half-Life misses working on that thing," engineer Scott Dalton said. "But it's also hard not to be like, 'Man, I've kind of seen every way that you can fight an Antlion,' or whatever.
And so you wanna get some space away from it until you can come back to it with fresh eyes."

After the first two Half-Life 2 episodes were received less well than the base game itself, many developers cited in the documentary also said they felt pressure to go "much bigger" for Episode 3. Living up to that pressure, and doing justice to fan expectations for the conclusion of the three-episode saga, proved to be too much for the team.

"You can't get lazy and say, 'Oh, we're moving the story forward,'" Valve co-founder Gabe Newell said of the pressure. "That's copping out of your obligation to gamers, right? Yes, of course they love the story. They love many, many aspects of it. But sort of saying that your reason to do it is because people want to know what happens next... you know, we could've shipped it, like, it wouldn't have been that hard."

"You know, the failure was... my personal failure was being stumped," Newell continued. "Like, I couldn't figure out why doing Episode 3 was pushing anything forward."

Kyle Orland, Senior Gaming Editor. Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.
  • ARSTECHNICA.COM
    Trust in scientists hasn't recovered from COVID. Some humility could help.
    Humbling findings Trust in scientists hasn't recovered from COVID. Some humility could help. Intellectual humility could win back much-needed trust in science, study finds. Beth Mole Nov 18, 2024 4:52 pm | 110

Illustration of a scientist speaking in front of an audience. Credit: Getty | BRO Vector

Scientists could win back trust lost during the COVID-19 pandemic if they just showed a little intellectual humility, according to a study published Monday in Nature Human Behaviour.

It's no secret that scientists (and science generally) took a hit during the health crisis. Public confidence in scientists fell from 87 percent in April 2020 to a low of 73 percent in October 2023, according to survey data from the Pew Research Center. And the latest Pew data released last week suggests it will be an uphill battle to regain what was lost, with confidence in scientists rebounding only three percentage points, to 76 percent, in a poll from October.

Building trust

The new study may guide the way forward, though. It encompasses five smaller studies probing perceptions of scientists' trustworthiness, which previous research has linked to willingness to follow research-based recommendations.

"These are anxiety-provoking times for people, and they feel uncertain about who to trust and which recommendations to follow," said study co-author Karina Schumann, a psychology professor at the University of Pittsburgh. "We wanted to know what can help people feel more confident putting their faith in scientists working to find solutions to some of the complex global challenges we are facing."

Schumann and her colleagues homed in on the role of intellectual humility. Unlike general humility, intellectual humility focuses on the limitations of one's knowledge. Specifically, a scientist with high intellectual humility would show a willingness to admit gaps in their knowledge, listen to input from others, and update their views based on new evidence. These characteristics may be viewed by the public as particularly critical among scientists, given that science is rife with uncertainties and lacks complete and unequivocal conclusions, especially from individual studies.

There's also good reason to think that scientists may be doing a poor job of displaying intellectual humility. The latest survey data from Pew found that 47 percent of Americans perceive scientists as feeling superior to others, and 52 percent indicated that scientists communicate poorly.

Study series

For a look into how intellectual humility could help, Schumann and her colleagues first surveyed 298 people to see if there was a link between viewing scientists as intellectually humble and believing in scientific topics considered polarizing. This sub-study (study 1) found strong links between the perceived intellectual humility of scientists, trustworthiness, and support for human-driven climate change, lifesaving vaccinations, and genetically modified foods.

In studies 2 through 4, the researchers experimentally tested expressions of intellectual humility (IH), at either high or low levels, and how they affected perceived trustworthiness. In study 2, for instance, 317 participants read one of three articles involving a fictional scientist named Susan Moore, who was researching treatments for long COVID.
There was a neutral article that functioned as a control, plus articles with cues that Dr. Moore had either high or low IH. The cues for high IH included text such as: "Dr. Moore is not afraid to admit when she doesn't yet know something." For low IH, the article included statements such as: "Dr. Moore is not afraid to assert what she knows."

The high-IH article spurred significantly more trust in Dr. Moore than the low-IH article, the researchers found. However, there wasn't a statistically significant difference in trust between the control and high-IH groups. This might suggest that people have a default assumption of high IH in scientists absent other cues, or that they are especially annoyed by low IH, or arrogance, among scientists.

Study 3 essentially replicated study 2, but with the tweak that the articles varied whether the fictional scientist was male or female, in case gendered expectations affected how people perceived humility and trustworthiness. The results from 369 participants indicated that gender didn't affect the link between IH and trust. Similarly, in study 4, with 371 participants, the researchers varied the race/ethnicity of the scientist, finding again that the link between IH and trust remained.

"Together, these four studies offer compelling evidence that perceptions of scientists' IH play an important role in both trust in scientists and willingness to follow their research-based recommendations," the authors concluded.

Next steps

In the final study, involving 679 participants, the researchers examined different ways that scientists might express IH, including whether the IH was expressed as a personal trait, as limitations of research methods, or as limitations of research results. Unexpectedly, the strategies of expressing IH by highlighting limitations in the methods and results of research both increased perceptions of IH but shook trust in the research itself. Only personal IH successfully boosted perceptions of IH without backfiring, the authors report.

The finding suggests that more research is needed to guide scientists on how best to express high IH. But it's clear that low IH is not good. "[W]e encourage scientists to be particularly mindful of displaying low IH, such as by expressing overconfidence, being unwilling to course correct or disrespecting others' views," the researchers caution.

Overall, Schumann said she was encouraged by the team's findings. "They suggest that the public understands that science isn't about having all the answers; it's about asking the right questions, admitting what we don't yet understand, and learning as we go. Although we still have much to discover about how scientists can authentically convey intellectual humility, we now know people sense that a lack of intellectual humility undermines the very aspects of science that make it valuable and rigorous. This is a great place to build from."

Beth Mole, Senior Health Reporter. Beth is Ars Technica's Senior Health Reporter. Beth has a Ph.D. in microbiology from the University of North Carolina at Chapel Hill and attended the Science Communication program at the University of California, Santa Cruz. She specializes in covering infectious diseases, public health, and microbes.
  • ARSTECHNICA.COM
    The ISS has been leaking air for 5 years, and engineers still don't know why
    Closing doors The ISS has been leaking air for 5 years, and engineers still don't know why "This is an engineering problem, and good engineers should be able to agree on it." Stephen Clark Nov 18, 2024 5:19 pm | 39

The Zvezda service module, seen here near the top of this image, is one of the oldest parts of the International Space Station. Credit: NASA

Officials from NASA and Russia's space agency don't see eye to eye on the causes and risks of small but persistent air leaks on the International Space Station. That was the word from the new chair of NASA's International Space Station Advisory Committee last week. The air leaks are located in the transfer tunnel of the space station's Russian Zvezda service module, one of the oldest elements of the complex.

US and Russian officials "don't have a common understanding of what the likely root cause is, or the severity of the consequences of these leaks," said Bob Cabana, a retired NASA astronaut who took the helm of the advisory committee earlier this year. Cabana replaced former Apollo astronaut Tom Stafford, who chaired the committee before he died in March.

The transfer tunnel, known by the Russian acronym PrK, connects the Zvezda module with a docking port where Soyuz crew and Progress resupply spacecraft attach to the station.

Air has been leaking from the transfer tunnel since September 2019. On several occasions, Russian cosmonauts have repaired the cracks and temporarily reduced the leak rate. In February, the leak rate jumped to 2.4 pounds per day, then increased to 3.7 pounds per day in April.

This prompted managers to elevate the transfer tunnel leak to the highest level of risk in the space station program's risk management system. This 5x5 "risk matrix" classifies the likelihood and consequence of risks. Ars reported in June that the leaks are now classified as a "5" in terms of both likelihood and consequence.

NASA reported in September that the latest round of repairs cut the leak rate by a third, but it did not eliminate the problem.

An engineering problem

"The Russian position is that the most probable cause of the PrK cracks is high cyclic fatigue caused by micro-vibrations," Cabana said on November 13. "NASA believes the PrK cracks are likely multi-causal, including pressure and mechanical stress, residual stress, material properties, and environmental exposures."

The ISS is aging. Zvezda and the PrK launched in July 2000 and will mark a quarter-century in orbit next year. NASA wants to keep the space station operating until at least 2030, while Roscosmos, Russia's space agency, has committed only through 2028.

Roscosmos has shared sample metals, welds, and investigation reports with NASA to assist in the study of the cracks and leaks. In a report published in September, NASA's inspector general said the agency's ISS Vehicle Office at Johnson Space Center in Houston considers the leaks "not an immediate risk to the structural integrity of the station." This is because managers have implemented mitigations to protect the rest of the station in the event of a structural failure of the PrK.

Crew members aboard the space station are keeping the hatch leading to the PrK closed when they don't need to access the Progress cargo freighter docked at the other end of the transfer tunnel.
Russian cosmonauts must open the hatch to unpack supplies from the Progress or load trash into the ship for disposal. But NASA and Roscosmos disagree on when the leak rate would become untenable. When that happens, the space station crew will have to permanently close the hatch to seal off the PrK and prevent a major failure from affecting the rest of the complex.

"The station is not young," said Michael Barratt, a NASA astronaut who returned from the space station last month. "It's been up there for quite a while, and you expect some wear and tear, and we're seeing that."

"The Russians believe that continued operations are safe, but they can't prove to our satisfaction that they are," said Cabana, who was the senior civil servant at NASA until his retirement in 2023. "And the US believes that it's not safe, but we can't prove that to the Russian satisfaction that that's the case."

"So while the Russian team continues to search for and seal the leaks, it does not believe catastrophic disintegration of the PrK is realistic," Cabana said. "And NASA has expressed concerns about the structural integrity of the PrK and the possibility of a catastrophic failure."

Closing the PrK hatch permanently would eliminate the use of one of the space station's four Russian docking ports.

NASA has chartered a team of independent experts to assess the cracks and leaks and help determine the root cause, Cabana said. "This is an engineering problem, and good engineers should be able to agree on it."

As a precaution, Barratt said space station crews are also closing the hatch separating the US and Russian sections of the space station when cosmonauts are working in the PrK.

"The way it's affected us, mostly, is as they go in and open that to unload a cargo vehicle that's docked to it, they've also taken time to inspect and try to repair when they can," Barratt said. "We've taken a very conservative approach to closing the hatch between the US side and the Russian side for those time periods."

"It's not a comfortable thing, but it is the best agreement between all the smart people on both sides, and it's something that we as a crew live with and adapt."

Stephen Clark, Space Reporter. Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world's space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.
    AI-generated shows could replace lost DVD revenue, Ben Affleck says
How 'bout them apples? AI-generated shows could replace lost DVD revenue, Ben Affleck says. AI won't replace human artistry, says actor, but it will wildly drive down costs. By Benj Edwards, Nov 18, 2024 5:49 pm

Credit: Donald Iain Smith via Getty Images

Last week, actor and director Ben Affleck shared his views on AI's role in filmmaking during the 2024 CNBC Delivering Alpha investor summit, arguing that AI models will transform visual effects but won't replace creative filmmaking anytime soon. A video clip of Affleck's opinion began circulating widely on social media not long after.

"Didn't expect Ben Affleck to have the most articulate and realistic explanation where video models and Hollywood is going," wrote one X user.

In the clip, Affleck spoke of current AI models' abilities as imitators and conceptual translators: mimics that are typically better at translating one style into another than at originating deeply creative material.

"AI can write excellent imitative verse, but it cannot write Shakespeare," Affleck told CNBC's David Faber. "The function of having two, three, or four actors in a room and the taste to discern and construct that entirely eludes AI's capability."

Affleck sees AI models as "craftsmen" rather than artists (although some might find the term "craftsman" in his analogy somewhat imprecise). He explained that while AI can learn through imitation, like a craftsman studying furniture-making techniques, it lacks the creative judgment that defines artistry. "Craftsman is knowing how to work. Art is knowing when to stop," he said.

"It's not going to replace human beings making films," Affleck stated. Instead, he sees AI taking over "the more laborious, less creative and more costly aspects of filmmaking," which could lower barriers to entry and make it easier for emerging filmmakers to create movies like Good Will Hunting.

Films will become dramatically cheaper to make

While it may seem on its surface like Affleck was attacking generative AI capabilities in the tech industry, he did not deny the impact it may have on filmmaking. For example, he predicted that AI would reduce costs and speed up production schedules, potentially allowing shows like HBO's House of the Dragon to release two seasons in the time it currently takes to make one.

The visual effects industry faces the biggest disruption from these efficiency gains, according to Affleck. "I wouldn't like to be in the visual effects business. They're in trouble," he warned, predicting that expensive effects work will become much cheaper through AI automation.

Based on what we've seen of AI video generators, where someone can easily apply AI-generated effects to existing video, this outcome seems plausible.
But current AI video synthesis tools like those from Runway may need improvements in getting desired results with some consistency and control, instead of forcing users to repeat generations while hoping for a usable result.

AI-generated content: A new revenue stream?

Affleck thinks that AI technology could create a new source of revenue for studios, potentially replacing lost DVD sales, which he says once provided a large chunk of industry revenue but dropped dramatically over the past decade due to the rise of streaming video services.

For example, although he had previously mentioned that AI would not replace human taste in filmmaking, Affleck described a scenario where a future viewer might pay to generate custom episodes of their favorite shows, though he acknowledged such content may be "janky and a little bit weird."

He also imagined a scenario where companies may sell fans licenses to create custom AI-generated content or AI-generated TikTok videos with character likenesses, similar to how studios sell superhero costumes today.

Even so, Affleck maintains that human creativity will remain central to filmmaking. He explained that AI models currently work by "cross-pollinating things that exist" without truly creating anything new, at least not yet. This limitation, combined with AI's lack of artistic judgment, means that he thinks traditional filmmaking crafted by human directors and actors will persist.

Benj Edwards, Senior AI Reporter. Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a widely cited tech historian. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.
Review: Amazon's 2024 Kindle Paperwhite makes the best e-reader a little better
speed reader: Review: Amazon's 2024 Kindle Paperwhite makes the best e-reader a little better. If you use any Kindle other than the 2021 Paperwhite, this is a huge upgrade. By Andrew Cunningham, Nov 15, 2024 7:51 am

Credit: Andrew Cunningham

I've never particularly loved Amazon, either as a retail behemoth or as a hardware and software company, but despite that, I still probably get more excited about new Kindle releases than I do about most other gadgets at this point.

Some of that is because I rely on my Kindle for distraction-free reading and because I'm constantly highlighting things and taking notes, so even minor improvements have a major impact on my day-to-day experience. And some of it is because the Kindle's relatively limited tech has left it without a lot of headroom for shoving in additional ads or other paid add-ons; Kindles do include lockscreen ads and "special offers," but those can be permanently turned off with a nominal $20 fee, and even when you don't turn them off, they don't degrade the device's performance or intrude on the actual reading experience. This isn't to say that Kindles are perfect, just that it's rare for me to be roughly as annoyed by a software platform's ads and tracking as I was a decade ago.

Enter the new 12th-generation $160 Kindle Paperwhite, which, like most Paperwhites, is the Kindle that most people should buy.

The 11th-gen Paperwhite update, released in late 2021 for $140, was a big quality-of-life upgrade, with a bigger 6.8-inch screen, adjustable color temperature, USB-C, more frontlight LEDs, and (in the more expensive Signature Edition) an auto-brightness sensor and wireless charging.

The new one has all of that stuff, plus an even bigger 7-inch screen. But the killer feature might be that this is the first Kindle I've used that has ever felt genuinely zippy. Obviously you don't need to run out and buy a new Kindle just because it feels fast. But for owners of older Paperwhites (if you last upgraded, say, back in 2018 when the 10th-gen Paperwhite first went waterproof, or if you have an even older model), in a lot of ways this feels like a totally different e-reader.

A fast Kindle?

From left to right: 2024 Paperwhite, 2021 Paperwhite, and 2018 Paperwhite. Note not just the increase in screen size, but also how the screen corners get a little more rounded with each release. Credit: Andrew Cunningham

I don't want to oversell how fast the new Kindle is, because it's still not like an E Ink screen can really compete with an LCD or OLED panel for smoothness of animations or UI responsiveness. But even compared to the 2021 Paperwhite, tapping buttons, opening menus, opening books, and turning pages feels considerably snappier: not quite instantaneous, but without the unexplained pauses and hesitation that longtime Kindle owners will be accustomed to. For those who type out notes in their books, even the onscreen keyboard feels fluid and responsive.

Compared to the 2018 Paperwhite (again, the first waterproofed model, and the last one with a 6-inch screen and micro-USB port), the difference is night and day.
While it still feels basically fine for reading books, I find that the older Kindle can sometimes pause for so long when opening menus or switching between things that I wonder whether it's still working or has totally locked up and frozen.

"Kindle benchmarks" aren't really a thing, but I attempted to quantify the performance improvements by running some old browser benchmarks using the Kindle's limited built-in web browser and Google's ancient Octane 2.0 test (the 2018, 2021, and 2024 Kindles are all running the same software update here, 5.17.0, so this should be a reasonably good apples-to-apples comparison of single-core processor speed).

The new Kindle is actually way faster than older models. Credit: Andrew Cunningham

The 2021 Kindle was roughly 30 percent faster than the 2018 Kindle. The new Paperwhite is nearly twice as fast as the 2021 Paperwhite, and well over twice as fast as the 2018 Paperwhite. That alone is enough to explain the tangible difference in responsiveness between the devices.

Turning to the new Paperwhite's other improvements: compared side by side, the new screen is appreciably bigger, more noticeably so than the 0.2-inch size difference might suggest. And it doesn't make the Paperwhite much larger, though it is a tiny bit taller in a way that will wreck compatibility with existing cases. But you only really appreciate the upgrade if you're coming from one of the older 6-inch Kindles.

Amazon's product pages and press releases brag of improved contrast, and the new Paperwhite does produce slightly deeper, less washed-out shades of black than the 2021 model. Most of the time, you'll only really notice this if you're using the two devices side by side. But if you use Dark Mode frequently, the upgrade is more noticeable, since the background can get quite a bit darker while keeping the text brighter and easier to read.

The new Paperwhite, like the 2021 model, uses USB-C for charging. Wireless charging is an optional feature of the more expensive Signature Edition. Credit: Andrew Cunningham

To my eyes, the screen brightness and the warm light in the new Kindle look identical to the ones from 2021, and after years of using a Kindle with a warm light regularly, I would hate to have to go back to a model without one. The bluish default color temperature makes it look less like paper, and it's a bit harder on the eyes in dim lighting.

The new Paperwhite still has a USB-C port, like the 2021 Paperwhite, and still has a soft-touch texture on the back that's pleasant to hold for long reading sessions.

The upgrader's Kindle

The back of the new Kindle Paperwhite. Credit: Andrew Cunningham

If you're using pretty much any Kindle other than the 2021 Kindle Paperwhite, this new version is going to feel like a huge improvement over whatever you're currently using (unless you're a physical-button holdout, but for better or worse, that decision has clearly been made). The 7-inch screen is a lot bigger than whatever you're using, the warm light is easier on the eyes, and the optional auto-brightness sensor and wireless charging capability are nice-to-haves if you want to pay more for the Signature Edition. And all of that frustrating Kindle slowdown is just gone, thanks to a considerably faster processor.

If you're using the 2021 Kindle Paperwhite, on the other hand, you probably don't need to consider an upgrade. There are things I really like about the new Paperwhite, but it's really just building on the foundation laid by the 2021 model.
In fact, the availability of a newer model might make a used or refurbished 2021 Paperwhite the best entry-level Kindle you can buy, rather than the marginally improved but still much less capable $110 baseline Kindle that Amazon just introduced.

In any case, the new Paperwhite is still the best combination of features and price that Amazon offers in its e-reader lineup, despite the small price increase. The cheaper Kindle is smaller, not waterproof, and has no warm light; we're reserving judgment on the Kindle Colorsoft until we can try it for ourselves, but early user reviews complain about the crispness of black-and-white text and other things that may or may not be software bugs. If you just want to read a book, the Paperwhite is still the best way to do it.

The good:
- A great reading experience backed up by Kindle's strong library and app ecosystem.
- Larger screen.
- Ads are relatively easy to ignore and inexpensive to permanently dismiss.
- Improved display contrast isn't super noticeable most of the time, but it does make a difference in dark mode.

The bad:
- No interesting screen tech upgrades like color or pen support; this one's just for reading.
- Breaks compatibility with older Kindle accessories.

The ugly:
- The price keeps creeping upward with every refresh.

Andrew Cunningham, Senior Technology Reporter. Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.
I, too, installed an open source garage door opener, and I'm loving it
Open source closed garage: I, too, installed an open source garage door opener, and I'm loving it. OpenGarage restored my home automations and gave me a whole bunch of new ideas. By Kevin Purdy, Nov 15, 2024 7:05 am

Hark! The top portion of a garage door has entered my view, and I shall alert my owner to it. Credit: Kevin Purdy

Like Ars Senior Technology Editor Lee Hutchinson, I have a garage. The door on that garage is opened and closed by a device made by a company that, as with Lee's, offers you a way to open and close it with a smartphone app. But that app doesn't work with my preferred home automation system, Home Assistant, and it also looks and works like an app made by a garage door company.

I had looked into the ratgdo that Lee installed and raved about, but hooking it up to my particular Genie/Aladdin system would have required installing limit switches. So I instead installed an OpenGarage unit ($50 plus shipping). My garage opener now works with Home Assistant (and thereby pretty much anything else), it's not subject to the whims of API access, and I've got a few ideas for how to make it even better. Allow me to walk you through what I did, why I did it, and what I might do next.

Thanks, I'll take it from here, Genie

Genie, maker of my Wi-Fi-capable garage door opener (sold as an "Aladdin Connect" system), is not in the same boat as the Chamberlain/myQ setup that inspired Lee's project. There was a working Aladdin Connect integration in Home Assistant, until the company changed its API in January 2024. Genie said it would release its own official Home Assistant integration in June, and it did, but then it was quickly pulled back, seemingly for licensing issues. Since then, there have been no updates on the matter. (I have emailed Genie for comment and will update this post if I receive a reply.)

This is not egregious behavior, at least on the scale of garage door opener firms. And Aladdin's app works with Google Home and Amazon Alexa, but not with Home Assistant or my secondary/lazy option, HomeKit/Apple Home. It also logs me out "for security" more often than I'd like and tells me this only after an iPhone shortcut refuses to fire. It has some decent features, but without deeper integrations, I can't do things like have the brighter ceiling lights turn on when the door opens, or flash indoor lights if the garage door stays open too long. At least not without Google or Amazon.

I've seen OpenGarage passed around the Home Assistant forums and subreddits over the years. It is, as the name implies, fully open source: hardware design, firmware, app code, API, everything. It is a tiny ESP board with an ultrasonic distance sensor and a circuit relay attached. You can control and monitor it from a web browser (mobile or desktop), from IFTTT or MQTT, and with the latest firmware, you can get email alerts. I decided to pull out the 6-foot ladder and give it a go.

Prototypes of the OpenGarage unit. To me, they look like little USB-powered owls, just with very stubby wings. Credit: OpenGarage

Installing the little watching owl

You generally mount the OpenGarage unit to the roof of your garage, so the distance sensor can detect if your garage door has rolled up in front of it.
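The sensing idea is simple enough to sketch in a few lines. What follows is a minimal illustration of the concept, not OpenGarage's actual firmware logic, and the threshold values are hypothetical numbers you'd tune for your own ceiling height and car:

    # Minimal sketch of the OpenGarage sensing idea: a ceiling-mounted
    # ultrasonic sensor reports the distance to whatever is below it.
    # Thresholds are hypothetical; tune them for your own garage.

    DOOR_THRESHOLD_CM = 50   # a rolled-up door panel sits close to the sensor
    CAR_THRESHOLD_CM = 150   # a car roof sits roughly 1-1.5 m below the ceiling

    def classify(distance_cm: float) -> tuple[str, bool]:
        """Return (door_state, car_present) for one distance reading."""
        if distance_cm < DOOR_THRESHOLD_CM:
            # The rolled-up door panel is blocking the sensor's view,
            # so we can't see whether the car is there.
            return ("open", False)
        if distance_cm < CAR_THRESHOLD_CM:
            # Door is out of the way, but something tall (the car) is below.
            return ("closed", True)
        # The sensor sees all the way down to the empty floor.
        return ("closed", False)

    if __name__ == "__main__":
        for reading in (32.0, 120.0, 240.0):
            print(reading, classify(reading))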
There are options for mounting with magnetic contact sensors or a side view of a roll-up door, or you can figure out some other way in which two different sensor depth distances would indicate an open or closed door. If you've got a Security+ 2.0 door (the kind with the yellow antenna, generally), you'll need an adapter, too.

The toughest part of an overhead install is finding a spot that gives the unit a view of your garage door, not too close to rails or other obstructing objects, but close enough for the contact wires and micro-USB cable to reach. Ideally, too, it has a view of your car when the door is closed and the car is inside, so it can report the car's presence. I've yet to find the right thing to do with the "car is inside or not" data, but the seed is planted.

OpenGarage's introduction and explanation video.

My garage setup, like most of them, is pretty simple. There's a big red glowing button on the wall near the door, and there are two very thin wires running from it to the opener. On the opener, there are four ports that you can open up with a screwdriver press. Most of the wires are headed to the safety sensor at the door bottom, while two come in from the opener button. After stripping a bit of wire to expose more cable, I pressed the contact wires from the OpenGarage into those same opener ports.

The wire terminal on my Genie garage opener. The green and pink wires lead to the OpenGarage unit. Credit: Kevin Purdy

After that, I connected the wires to the OpenGarage unit's screw terminals, then did some pencil work on the garage ceiling to figure out how far I could run the contact and micro-USB power cables, getting the proper door view while maintaining some right-angle sense of order up there. When I had reached a decent compromise between cable tension and placement, I screwed the sensor into an overhead stud and used a staple gun to secure the wires. It doesn't look like a pro installed it, but it's not half bad.

Where I ended up installing my OpenGarage unit. Key points: above the garage door when open, view of the car below, not too close to rails, able to reach power and opener contact. Credit: Kevin Purdy

A very versatile board

If you've got everything placed and wired up correctly, opening the OpenGarage access point or IP address should give you an interface that shows you the status of your garage, your car (optional), and its Wi-Fi and external connections.

The landing screen for the OpenGarage. You can only open the door or change settings if you know the device key (which you should change immediately). Credit: Kevin Purdy

It's a handy webpage and a basic opener (provided you know the secret device key you set), but OpenGarage is more powerful in how it uses that data. The device can keep a cloud connection open to Blynk or the maker's own OpenThings.io cloud server. You can hook it up to MQTT or an IFTTT channel. It can send you alerts when your garage has been open a certain amount of time or if it's open after a certain time of day.

You're telling me you can just... see the state of these things, at all times, on your own network? Credit: Kevin Purdy

You really don't need a corporate garage coder

For me, the greatest benefit is in hooking OpenGarage up to Home Assistant. I've added an opener button to my standard dashboard (one that requires a long-press or two actions to open). I've restored the automation that turns on the overhead bulbs for five minutes when the garage door opens.
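Because the unit is just a small web server on your LAN, you can also skip the dashboard entirely and script it. Here's a minimal sketch using Python's requests library; the IP address and device key are placeholders, and the /jc (status JSON) and /cc (button click) endpoints reflect my reading of OpenGarage's published HTTP API, so check the firmware docs for your version before relying on them:

    # Sketch: poll a local OpenGarage unit and simulate a button press.
    # The /jc and /cc endpoints follow OpenGarage's documented API as I
    # understand it; verify against the docs for your firmware version.
    import requests

    OG_HOST = "http://192.168.1.50"  # placeholder: your unit's local IP
    DEVICE_KEY = "changeme"          # placeholder: your secret device key

    def door_status() -> dict:
        # /jc returns controller variables as JSON (distance reading,
        # door state, vehicle presence, and so on).
        resp = requests.get(f"{OG_HOST}/jc", timeout=5)
        resp.raise_for_status()
        return resp.json()

    def click_door() -> None:
        # /cc with the device key acts like a press of the wall button.
        resp = requests.get(
            f"{OG_HOST}/cc", params={"dkey": DEVICE_KEY, "click": 1}, timeout=5
        )
        resp.raise_for_status()

    if __name__ == "__main__":
        print("raw status:", door_status())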
And I can dig in further if I want, like an alert that it's Monday night at 10 pm and I've yet to open the garage door, indicating I forgot to put the trash out. Or maybe some kind of NFC tag to allow for easy opening while on a bike, if that's not a security nightmare (it might be).

Not for nothing, but OpenGarage is also a deeply likable bit of indie kit. It's a two-person operation, with Ray Wang building on his work with the open and handy OpenSprinkler project, trading Arduino for the ESP8266, and doing some 3D printing to fit the sensors and switches, and Samer Albahra providing the mobile app, documentation, and other help. Their enthusiasm for DIY home control has likely brought out the same in others, and certainly in me.

Kevin Purdy, Senior Technology Reporter. Kevin is a senior technology reporter at Ars Technica, covering open-source software, PC gaming, home automation, repairability, e-bikes, and tech history. He has previously worked at Lifehacker, Wirecutter, iFixit, and Carbon Switch.
    OpenAI accused of trying to profit off AI model inspection in court
Experiencing some technical difficulties: OpenAI accused of trying to profit off AI model inspection in court. How do you get an AI model to confess what's inside? By Ashley Belanger, Nov 15, 2024 8:45 am

Credit: Aurich Lawson | Getty Images

Since ChatGPT became an instant hit roughly two years ago, tech companies around the world have rushed to release AI products while the public is still in awe of AI's seemingly radical potential to enhance their daily lives.

But at the same time, governments globally have warned it can be hard to predict how rapidly popularizing AI can harm society. Novel uses could suddenly debut and displace workers, fuel disinformation, stifle competition, or threaten national security, and those are just some of the obvious potential harms.

While governments scramble to establish systems to detect harmful applications (ideally before AI models are deployed), some of the earliest lawsuits over ChatGPT show just how hard it is for the public to crack open an AI model and find evidence of harms once a model is released into the wild. That task is seemingly only made harder by an increasingly thirsty AI industry intent on shielding models from competitors to maximize profits from emerging capabilities.

The less the public knows, the seemingly harder and more expensive it is to hold companies accountable for irresponsible AI releases. This fall, ChatGPT maker OpenAI was even accused of trying to profit off discovery by seeking to charge litigants retail prices to inspect AI models alleged to be causing harms.

In a lawsuit raised by The New York Times over copyright concerns, OpenAI suggested the same model inspection protocol used in a similar lawsuit raised by book authors.

Under that protocol, the NYT could hire an expert to review highly confidential OpenAI technical materials "on a secure computer in a secured room without Internet access or network access to other computers at a secure location" of OpenAI's choosing. In this closed-off arena, the expert would have limited time and limited queries to try to get the AI model to confess what's inside.

The NYT seemingly had few concerns about the actual inspection process but balked at OpenAI's intended protocol capping the number of queries their expert could make through an application programming interface at $15,000 worth of retail credits. Once litigants hit that cap, OpenAI suggested that the parties split the costs of remaining queries, charging the NYT and co-plaintiffs half-retail prices to finish the rest of their discovery.

In September, the NYT told the court that the parties had reached an "impasse" over this protocol, alleging that "OpenAI seeks to hide its infringement by professing an undue, yet unquantified, 'expense.'" According to the NYT, plaintiffs would need $800,000 worth of retail credits to seek the evidence they need to prove their case, but there's allegedly no way it would actually cost OpenAI that much.

"OpenAI has refused to state what its actual costs would be, and instead improperly focuses on what it charges its customers for retail services as part of its (for profit) business," the NYT claimed in a court filing.
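For a sense of the sums in dispute, here is a back-of-the-envelope calculation using only the figures cited in the filings. One assumption is flagged in the comments: the filings quoted here don't spell out who pays for the initial $15,000 of capped credits, so this sketch charges it to the plaintiffs at retail:

    # Rough math on the disputed inspection protocol, using figures from
    # the filings: a $15,000 cap on retail API credits, half-retail
    # pricing past the cap, and the NYT's estimate that $800,000 worth
    # of retail credits would be needed overall.
    retail_needed = 800_000  # NYT's estimate of total retail-priced queries
    cap = 15_000             # initial cap in OpenAI's proposed protocol
    split = 0.5              # plaintiffs pay half retail once the cap is hit

    # Assumption: plaintiffs also pay the initial capped credits at retail;
    # the filings quoted in this story don't say who covers that portion.
    plaintiffs_total = cap + (retail_needed - cap) * split
    print(f"Plaintiffs' share under the proposed protocol: ${plaintiffs_total:,.0f}")
    # -> Plaintiffs' share under the proposed protocol: $407,500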
In its defense, OpenAI has said that setting the initial cap is necessary to reduce the burden on OpenAI and prevent a NYT fishing expedition. The ChatGPT maker alleged that plaintiffs "are requesting hundreds of thousands of dollars of credits to run an arbitrary and unsubstantiated, and likely unnecessary, number of searches on OpenAI's models, all at OpenAI's expense."

How this court debate resolves could have implications for future cases where the public seeks to inspect models causing alleged harms. If a court agrees that OpenAI can charge retail prices for model inspection, it could deter lawsuits from any plaintiffs who can't afford to pay an AI expert or commercial prices for model inspection.

Lucas Hansen, co-founder of CivAI, a company that seeks to enhance public awareness of what AI can actually do, told Ars that a lot of inspection can probably be done on public models. But often, public models are fine-tuned, perhaps censoring certain queries and making it harder to find information that a model was trained on, which is the goal of NYT's suit. By gaining API access to original models instead, litigants could have an easier time finding evidence to prove alleged harms.

It's unclear exactly what it costs OpenAI to provide that level of access. Hansen told Ars that the cost of training and experimenting with models "dwarfs" the cost of running models to provide full capability solutions. Developers have noted in forums that the costs of API queries quickly add up, with one claiming OpenAI's pricing is "killing the motivation to work with the APIs."

The NYT's lawyers and OpenAI declined to comment on the ongoing litigation.

US hurdles for AI safety testing

Of course, OpenAI is not the only AI company facing lawsuits over popular products. Artists have sued makers of image generators for allegedly threatening their livelihoods, and several chatbots have been accused of defamation. Other emerging harms include very visible examples (like explicit AI deepfakes, harming everyone from celebrities like Taylor Swift to middle schoolers) as well as underreported harms, like allegedly biased HR software.

A recent Gallup survey suggests that Americans are more trusting of AI than ever but still twice as likely to believe AI does "more harm than good" than that the benefits outweigh the harms. Hansen's CivAI creates demos and interactive software for education campaigns helping the public to understand firsthand the real dangers of AI. He told Ars that while it's hard for outsiders to trust a study from "some random organization doing really technical work" to expose harms, CivAI provides a controlled way for people to see for themselves how AI systems can be misused.

"It's easier for people to trust the results, because they can do it themselves," Hansen told Ars.

Hansen also advises lawmakers grappling with AI risks. In February, CivAI joined the Artificial Intelligence Safety Institute Consortium, a group including Fortune 500 companies, government agencies, nonprofits, and academic research teams that help to advise the US AI Safety Institute (AISI). But so far, Hansen said, CivAI has not been very active in that consortium beyond scheduling a talk to share demos.

The AISI is supposed to protect the US from risky AI models by conducting safety testing to detect harms before models are deployed.
Testing should "address risks to human rights, civil rights, and civil liberties, such as those related to privacy, discrimination and bias, freedom of expression, and the safety of individuals and groups," President Joe Biden said in a national security memo last month, urging that safety testing was critical to support unrivaled AI innovation.

"For the United States to benefit maximally from AI, Americans must know when they can trust systems to perform safely and reliably," Biden said.

But the AISI's safety testing is voluntary, and while companies like OpenAI and Anthropic have agreed to the voluntary testing, not every company has. Hansen is worried that the AISI is under-resourced and under-budgeted to achieve its broad goals of safeguarding America from untold AI harms.

"The AI Safety Institute predicted that they'll need about $50 million in funding, and that was before the national security memo, and it does not seem like they're going to be getting that at all," Hansen told Ars.

Biden had $50 million budgeted for the AISI in 2025, but Donald Trump has threatened to dismantle Biden's AI safety plan upon taking office.

The AISI was probably never going to be funded well enough to detect and deter all AI harms, but with its future unclear, even the limited safety testing the US had planned could be stalled at a time when the AI industry continues moving full speed ahead.

That could largely leave the public at the mercy of AI companies' internal safety testing. As frontier models from big companies will likely remain under society's microscope, OpenAI has promised to increase investments in safety testing and help establish industry-leading safety standards.

According to OpenAI, that effort includes making models safer over time and less prone to producing harmful outputs, even with jailbreaks. But OpenAI has a lot of work to do in that area, as Hansen told Ars that he has a "standard jailbreak" for OpenAI's most popular release, ChatGPT, "that almost always works" to produce harmful outputs.

The AISI did not respond to Ars' request for comment.

NYT nowhere near done inspecting OpenAI models

For the public, who often become guinea pigs when AI acts unpredictably, risks remain, as the NYT case suggests that the costs of fighting AI companies could go up while technical hiccups delay resolutions. Last week, an OpenAI filing showed that the NYT's attempts to inspect pre-training data in a very, very tightly controlled environment like the one recommended for model inspection were allegedly continuously disrupted.

"The process has not gone smoothly, and they are running into a variety of obstacles to, and obstructions of, their review," the court filing describing the NYT's position said. "These severe and repeated technical issues have made it impossible to effectively and efficiently search across OpenAI's training datasets in order to ascertain the full scope of OpenAI's infringement. In the first week of the inspection alone, Plaintiffs experienced nearly a dozen disruptions to the inspection environment, which resulted in many hours when News Plaintiffs had no access to the training datasets and no ability to run continuous searches."

OpenAI was additionally accused of refusing to install software the litigants needed and of randomly shutting down ongoing searches. Frustrated after more than 27 days of inspecting data and getting "nowhere near done," the NYT keeps pushing the court to order OpenAI to provide the data instead.
In response, OpenAI said plaintiffs' concerns were either "resolved" or discussions remained "ongoing," suggesting there was no need for the court to intervene.

So far, the NYT claims that it has found millions of plaintiffs' works in the ChatGPT pre-training data but has been unable to confirm the full extent of the alleged infringement due to the technical difficulties. Meanwhile, costs keep accruing in every direction.

"While News Plaintiffs continue to bear the burden and expense of examining the training datasets, their requests with respect to the inspection environment would be significantly reduced if OpenAI admitted that they trained their models on all, or the vast majority, of News Plaintiffs' copyrighted content," the court filing said.

Ashley Belanger, Senior Policy Reporter. Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
    These are the lasting things that Half-Life 2 gave us, besides headcrabs and crowbars
Half-Life 2 Week: These are the lasting things that Half-Life 2 gave us, besides headcrabs and crowbars. Beyond the game itself (which rocks), Half-Life 2 had a big impact on PC gaming. By Kevin Purdy, Nov 16, 2024 6:45 am

This article is part of our 20th anniversary of Half-Life 2 series. Credit: Aurich Lawson

It's Half-Life 2 week at Ars Technica! This Saturday, November 16, is the 20th anniversary of the release of Half-Life 2, a game of historical importance for the artistic medium and technology of computer games. Each day up through the 16th, we'll be running a new article looking back at the game and its impact.

"Well, I just hate the idea that our games might waste people's time. Why spend four years of your life building something that isn't innovative and is basically pointless?"

Valve Software founder Gabe Newell is quoted by Geoff Keighley (yes, the Game Awards guy, back then a GameSpot writer) as saying this in June 1999, six months after the original Half-Life launched. Newell gave his team no real budget or deadline, only the assignment to follow up the best PC game of all time and redefine the genre.

When Half-Life 2 arrived in November 2004, the Collector's Edition contained about 2.6GB of files. The game, however, contained so many things that would seem brand new in gaming, or just brave, that it's hard to even list them. Except I'm going to try that right here. Some will be hard to pin definitively in time to Half-Life 2 (HL2). But like many great games, HL2 refined existing ideas, borrowed others, and had a few of its own to show off.

Note that some aspects of the game itself, its status as Steam's big push title, and what it's like to play it today are covered by other writers during Ars' multi-day celebration of the game's 20th anniversary. That includes the Gravity Gun.

How many film and gaming careers were launched by people learning how to make the Scout do something goofy? Credit: Valve

The Source Engine

It's hard to imagine another game developer building an engine with such a forward-thinking mission as Source. Rather than just build the thing that runs its next game, Valve crafted Source to be modular, such that its core could be continually improved (and shipped out over Steam) and newer technologies could be optionally ported into games both new and old, without breaking older titles that worked perfectly fine.

Source started development during the late stages of the original Half-Life, but its impact goes far beyond the series. Team Fortress 2, Counter-Strike: Global Offensive, Portal 1/2, and Left 4 Dead, from Valve alone, take up multiple slots on lists of the all-time best games. The Stanley Parable, Vampire: The Masquerade - Bloodlines, and a whole lot of other games used Source, too. Countless future game developers, level designers, and mod makers cut their teeth on the very open and freely available Source tools.

And then, of course, where would we be as a society were it not for Source Filmmaker and Garry's Mod, without which we would never have "Save as .dmx" and Skibidi Toilet.

Half-Life: Alyx is a technical marvel of the VR age, but it's pulled along by the emotional bonds of Alyx and Russell, and the quest to save Eli Vance. Credit: Valve
A shooter with family dynamics

Novelist Marc Laidlaw has made it clear, multiple times, that he did not truly create the Half-Life story when he joined Valve; "it was all there when I got there, in embryo," he told Rock Paper Shotgun. Laidlaw helped the developers tell their story through level design and wrote short, funny, unnerving dialogue.

For Half-Life 2, Laidlaw and the devs were tasked with creating some honest-to-goodness characters, something you didn't get very often in first-person shooters (they were all dead in 1994's System Shock). So in walked the father/daughter team of Eli and Alyx Vance, and the extended Black Mesa family, including folks like Dr. Kleiner.

These real and makeshift family members gave the mute protagonist Gordon Freeman stakes in wanting to fix the future. And Laidlaw's basic dramatic unit set a precedent for lots of shooty-yet-soft-hearted games down the road: Mass Effect, The Last of Us, Gears of War, Red Dead Redemption, and far more.

Remember when a Boston-area medical manufacturing firm, run by a Half-Life fan, got everyone thinking a sequel was coming? Fun times. Credit: Black Mesa

Intense speculation about what Valve is actually doing

Another unique thing Laidlaw helped develop in PC gaming: intense grief and longing for a sequel that both does and does not exist, channeled through endless speculation about Valve's processes and general radio silence.

Half-Life 2 got Episodes but never a true numbered Half-Life 3 sequel. The likelihood of 3 took a hit when Laidlaw unexpectedly announced his retirement in January 2016. Then it got even less likely, or maybe just sad, when Laidlaw posted a barely disguised snapshot of a dream of "Epistle 3" to his blog (since deleted and later transposed on Pastebin).

Laidlaw has expressed regret about this move. Fans have expressed regret that Half-Life 3 somehow seems even less likely, having seen Valve's premier writer post such a seemingly despondent bit of primary-source fan fiction.

Fans of a popular game being eager for a sequel isn't itself a unique thing, but Half-Life 3's quantum existence is. Valve published its new employee handbook from around 2012 on the web, and in it, you can read about the company's boldly flat structure. To summarize greatly: Projects only get started if someone can get enough fellow employees to wheel their desks over and work on it with them. The company doesn't take canceled or stalled games to heart; in its handbook, it's almost celebrated that killing Prospero was one of its first major decisions.

So the fact that Half-Life 3 exists only as something that hasn't been formally canceled is uniquely frustrating. HL2's last (chronological) chapter left off on a global-scale cliffhanger, and the only reason a sequel doesn't exist is because too many other things are more appealing than developing a new first-person shooter. If you worked at Valve, you tell yourself, maybe you could change this! Maybe.

What, you're telling me now it's illegal to break in, take source code, and then ask for a job? This is a police state! Credit: Valve
Source code leak drama

The Wikipedia pages "List of commercial video games with available source code" and its cousin "Later released source code" show that, up until 2003, most of the notable games whose source code became publicly available were either altruistic efforts at preservation or, for some reason, accidental inclusions of source code on demos or in dummy files on the game disc.

And then, in late 2003, Axel Gembe, a Valve and Half-Life superfan, hacked into Valve's servers, grabbed the Half-Life 2 source code that existed at the time, and posted it to the web. It not only showed off parts of the game Valve wanted to keep under wraps, but it showed just how far behind the game's development was relative to the release date that had blown by weeks earlier. Valve's response was typically atypical: It acknowledged the source code as real, asked its biggest fans for help, and then released the game a year later, to critical and commercial success.

The leak further ensconced Valve as a different kind of company, one with a particularly dedicated fanbase. It also seems to have taught companies a lesson about hardening their servers and development environments. Early builds of games still leak (witness Space Marine 2 this past July), but full source code leaks, coming from network intrusions, are something you don't see quite so often.

Pre-loading a game before release

It would be hard to go back in time and tell our pre-broadband selves about pre-loading. You download entire games, over the Internet, and then they're ready to play one second after the release time: no store lines, no drive back home, no stuffed servers or crashed discs. It seems like a remarkable bit of trust, though it's really just a way to lessen server load on release day.

It's hard to pin down which game first offered pre-loading in the modern sense, but HL2, being a major launch title for Valve's Steam service and a title with heavy demand, definitely popularized the concept.

Always-online for single-player games

Here's one way that Half-Life 2 moved the industry forward that some folks might want to move back.

Technically, you can play HL2 without an Internet connection, and maybe for long periods of time. But for most people, playing HL2 without a persistent net connection involves activating the game on Steam, letting it fully update, and then turning on Steam's Offline Mode to play it. There's no time limit, but you need to keep Steam active while playing.

It's not so much the particular connection demands of HL2 that make it notable, but the pathway that it and Steam created, down which other companies moved, treating gaming as something that by default happens with at least some connection, and preferably a persistent one.

It's Game of the Year. Which year? Most of them, really (until Disco Elysium shows up). Credit: Valve

A place on all-time video game rankings forever

Half-Life 2 introduced many groundbreaking things at once (deep facial animations and expressions, an accessible physics engine, a compelling global-scale but family-minded story) while also being a tremendously enjoyable game to play through. This has made it hard for anyone to suggest another game to go above it on any "all-time greatest games" list, especially those with a PC focus.

Not that they don't try. PC Gamer has HL2 at 7 out of 100, mostly because it has lost an understandable amount of Hotness in 20 years.
IGN has it at No. 9 (while its descendant Portal 2 takes third place). Metacritic, however fallible, slots it in universal second place for PC games.

So give Half-Life 2 even more credit for fostering innovation in the "arbitrary ranked list of games" genre. Rock Paper Shotgun's top 100 is cited as the best games to play on PC today, as the site pays no mind to what was important or influential. And yet Half-Life 2, as a game you can play in 2024, is still on that list. It's really something, that game.

Kevin Purdy, Senior Technology Reporter. Kevin is a senior technology reporter at Ars Technica, covering open-source software, PC gaming, home automation, repairability, e-bikes, and tech history. He has previously worked at Lifehacker, Wirecutter, iFixit, and Carbon Switch.
    Silo S2 expands its dystopian world
A whole new world: Silo S2 expands its dystopian world. Ars chats with cinematographer Baz Irvine about creating a fresh look for the sophomore season. By Jennifer Ouellette, Nov 16, 2024 10:09 am

Credit: YouTube/Apple TV+

The second season of Silo, Apple TV+'s dystopian sci-fi drama, is off to a powerful start with yesterday's premiere. Based on the trilogy by novelist Hugh Howey, the series was one of the more refreshing surprises on streaming television in 2023: a twist-filled combination of political thriller and police procedural set in a post-apocalyptic world. It looks like S2 will be leaning more heavily into sci-fi thriller territory, expanding its storytelling (and its striking cinematography) beyond the original silo.

(Spoilers for S1 below, as well as the first five minutes of the S2 premiere.)

As previously reported, Silo is set in a self-sustaining underground city inhabited by a community whose recorded history only goes back 140 years, generations after the silo was built by the founders. Outside is a toxic hellscape that is only visible on big screens in the silo's topmost level. Inside, 10,000 people live together under a pact: Anyone who says they want to "go out" is immediately granted that wish, cast outside in an environment suit on a one-way trip to clean the cameras. But those who make that choice inevitably die soon after because of the toxic environment.

Mechanical keeps the power on and life support from collapsing, and that is where we met mechanical savant Juliette Nichols (Rebecca Ferguson), at one with the giant geothermal generator that spins in the silo's core. There were hints at what came before: relics like mechanical wristwatches, or electronics far beyond the technical means of the silo's current inhabitants, thanks to a rebellion 140 years ago that destroyed the silo's records in the process. The few computers are managed by the IT department, run by Bernard Holland (Tim Robbins).

Over the course of the first season, Juliette reluctantly became sheriff and investigated the murder of her lover, George (Ferdinand Kingsley), who collected forbidden historical artifacts, as well as the murder of silo mayor Ruth Jahns (Geraldine James). Many twists ensued, including the existence of a secret group dedicated to remembering the past whose members were being systematically killed. Juliette also began to suspect that the desolate landscape seen through the silo's camera system was a lie and there was actually a lush green landscape outside.

In the season one finale, Juliette made a deal with Holland: She would choose to go outside in exchange for the truth about what happened to George and the continued safety of her friends in Mechanical. The final twist: Juliette survived her outside excursion and realized that the dystopian hellscape was the reality, and the lush green Eden was the lie. And she learned that their silo was one of many, with a ruined city visible in the background.

That's where the second season picks up. Apple TV+ released the first five minutes of footage last week:

Official sneak peek for the second season of Apple TV+'s sci-fi drama Silo.

The opening battle, with all new characters, clearly took place in one of the other silos (Silo 17), and the residents desperate to break out did so only to meet their deaths. The footage ends with Juliette walking past their skeletons toward the entrance to Silo 17.
We know from the official trailer that rebellion is also brewing back in her own silo as rumors spread that she is alive.

The expansion of Silo's world was an opportunity for cinematographer Baz Irvine (who worked on four key episodes this season) to play with lenses, color palettes, lighting, and other elements to bring unique looks to the different settings.

Ars Technica: How did you make things visually different from last season? What were your guidelines going into this for the cinematography?

Baz Irvine: There's a few different things going on. I love season one, but we were going to open it up [in S2]. We were going to introduce this new silo, so that was going to be a whole other world that had to look immediately familiar, but also completely different. We start season one with an exterior of the dystopian, future-blasted planet. On the technical point, I saw two things I could do very simply. I felt that the format of season one was two to one, so not quite letterbox, not quite widescreen. When I saw the sets and I saw the art, everything the amazing art department had done, I was like, guys, this needs to be widescreen. I think at the time there was still a little bit of reticence from Apple and a few of the other streamers to commit to full widescreen, but I persuaded them.

I also changed the lenses because I wanted to keep the retro feel, the dystopian future, but retro feel. I chose slightly different lenses to give me a wider field of view. I talked to my director, Michael Dinner, and we talked about how at times, as brilliant as season one was, it was a bit theatrical, a bit presentational. Here's the silo, here's the silo, here's the silo.... So what you want to do is stop worrying about the silo. It is incredible, and it's in the back of every shot. We wanted to make it more visceral. There was going to be a lot more action. The start of episode one is a full-blown battle. Apple released the first five minutes. It actually stops at a very critical point, but you can see that it's the previous world of the other silo, Silo 17.

We still wanted to see the scope and the scale. As a cinematographer, you've got to get your head around something that's very unusual: the silo is vertical. When we shoot stuff, we go outside, everything's horizontal. So as a cinematographer, you think horizontally; you frame the skyline, you frame the buildings. But in the silo, it's all up there and it's all down there, but it doesn't exist. A bit of the set exists, but you have to go, oh, okay, what can I see if I point the camera up here, what will VFX brilliantly give me? What can I see down there? So that was another big discussion.

The initial view of what's outside the silo. Credit: YouTube/Apple TV+

What's actually outside the silo. Credit: YouTube/Apple TV+

Ars Technica: When you talk about wanting to make it more visceral, what does that mean specifically in a cinematography context?

Baz Irvine: It's just such a lovely word. Season one had an almost European aesthetic. It was a lot of very beautiful, slow-developing shots. Of course it was world-building. It was the first time the silo was on the screen. So as a filmmaker, you have a certain responsibility to give the audience a sense of where you are. Season two, we know where we are. Well, we don't with the other silo, but we discover it.
That, for me, meant not being ahead of the action. So with Juliette, Rebecca Ferguson's character, we discover what she sees with her, rather than showing it ahead of time. We're trying to be a point of view, almost hand-held. When she's running, we're running with her. When she's trying to smash her helmet, we are very much with her.

On another level, visceral for me also means responding to action, not being too prescriptive about what the camera should do, but when you see the blocking of a scene and you feel it's going a certain way and there's a certain energy, responding to that and getting in there. The silo, as I said, is always going to be in the background, but we're not trying to fetishize the silo too much. We're going to look down, we're going to look up, we're going to use crane moves, but just get in with the action. Just be with the people. That means slightly longer lenses, longer focal lengths at times. And from my point of view, the fall-off in focus just looks so beautiful. So I think that's what visceral means. I bet you somebody else would say something completely different.

Ars Technica: Other specific choices you made included using a muted green palette and torchlight (flashlight), so there is this sense of isolation and mystery and a spooky, more immersive atmosphere.

Baz Irvine: The challenge that I could see when I read the script is that a large part of season two is in the new Silo 17. Silo 17 hasn't been occupied for 35 years. It's been in this dormant, strange, half-lit state. It's overgrown with plants and ivy. Some of the references for that were what Chernobyl looked like 20 years down the line. When humanity leaves, nature just takes over. But as a counterpoint, we needed it to feel dark. Most of the electricity has gone; most of the lights have gone out. I needed to have some lighting motivation to give some sense of the shape of the silo, so that we weren't plummeting into darkness for the whole episode. So I came up with this idea: the overhead lights that power the silo, that light the silo, were in broken-down mode. They were on reserve power. They'd gone a bit green because that's what the bulb technology would've done.

Episode one introduces us to people living in a different silo. Credit: YouTube/Apple TV+

The residents of Silo 17 seem to have met a sticky end. Credit: YouTube/Apple TV+

Part of the reason to do that is that when you're cutting between two silos that were built identically, you've got to have something to show that you're in a different world. Yes, it's empty, and yes, it's desolate and it's eerie, and there's strange clanking noises. But I wanted to make it very clear from a lighting point of view that they were two different places.

The other thing that you will discover in episode one, when Juliette's character is finally working her way through Silo 17, is that she has a flashlight and she breaks into an apartment. As she scans the walls, she starts to notice, oh, it's not like her silo; there are beautiful murals and art. We really wanted to play into this idea that every silo was different. They had different groups of people, potentially from different parts of the States. This silo in a way developed quite an artistic community.
Murals and frescoes were very much part of this silo. It's not something that is obvious; it's just the odd little scan of a flashlight that gives you this sense. But also, Silo 17 is scary. It's sort of alive, but is there life in it? That is a big question.

Ars Technica: You talk about not wanting it all to be in darkness. I'm now thinking of that infamous Game of Thrones episode where the night battle footage was so dark viewers couldn't follow what was going on. That's clearly a big challenge for a cinematographer. Where do you find the balance?

Baz Irvine: This is the eternal dilemma for cinematographers. It's getting notes back from the grownups going, it's too dark, it's too dark. Well, maybe if you were watching it in a dark room and it wasn't bright outside, it would be fine. You have to balance things. I've also got Rebecca Ferguson walking around the silo, and it can't be in so much shadow that you can't recognize her. So there's a type of darkness that, in the film world, I know how to convey. It's very subtle. It is underexposed, but I used very soft top light. I didn't want hard shadows. By using that light and filling in little details in the background, I can then take the lighting down. I had an amazing colorist at Company 3 in Toronto, and we had a chat about how dark we could go.

We had to be very dark in places because a couple of times in this season, the electricity gets pulled altogether in the old silo as well. You can't pull the plug and then suddenly everybody's visible. But it is a film aesthetic that, as a cinematographer, you just learn: how dark can I go? When am I going to get in trouble? Please can I stay on the job, but make it as dark as possible? You mentioned Game of Thrones; clearly audiences have become more used to seeing imagery that I would consider more photographic, more bold generally. I try to tap into that as much as possible. If you have one character with a flashlight, then suddenly that changes everything, because you point a flashlight at a surface and the light bounces back into the face. You have to use all the tools that you can.

Ars Technica: In season one there were different looks (lighting and textures) for the different social hierarchies. Does that continue in season two?

Baz Irvine: I tried to push that a little bit more in season two. I loved the idea of that J.G. Ballard High-Rise thing: the rich at the top, everything inverted. The silo is crazy tall. We worked it out; it's about a kilometer and a half.

Mechanical is the fun bit because Mechanical is the bottom of the silo. Down there, we wet the walls and wet the floors so that the more greeny, orangey colors you associate with fluorescent lights and more mechanical fixtures would reflect. You keep the light levels low because you get this lovely sheen off the walls. As you move up through the middle, where a lot of the action takes place, the lighting is more normal. I'm not really trying to push it one way or another.

Then you go up top where Judicial lives, where the money and power are. You're a lot closer to the light source because there is only this one huge light source that shines down into the silo. So up there the air is more rarefied. It's like you're on top of a Swiss mountain. It just feels cleaner.
There's less atmosphere, slightly bluer light, different color temperatures on the practical lighting in offices. It's less chaotic, a more modern aesthetic up there. You've got to be careful not to overplay it. Once you establish colors, you run with it, and it just becomes second nature. It was a lot of fun to be able to demarcate; as long as you remembered where you were, that was always the trick.

Ars Technica: What were the most notable challenges and highlights for you, without giving away anything beyond episode one?

Baz Irvine: I think the big thing about episode one is that it's like a silent movie. Rebecca Ferguson has maybe two lines, or maybe she doesn't actually say anything. It's a journey of discovery, and there are some quite scary, terrifying things that happen. There's a lot of action. Also, we find out there's water in Silo 17. Silo 17 is flooded. You don't find that out until she slips and falls and you think she's fallen to her death. From the outset, we knew that there would be an extensive amount of filming underwater, or on the surface of the water, that would need to take place. We had to do a massive amount of testing, looking at textures of water, what equipment we could use, how we could get the depth, the width. We built a huge tank at one of our studios in London and used Pinewood's famous underwater tank for the fall.

Also, there was the challenge of trying to do shots of that scale outside, because we actually built sets. We could probably see 50 feet beyond Rebecca. We had the scorched surface, but beyond that is VFX. So we had huge blue screens and all these different cranes and things called Manitous with massive frames, and we had to control the sun. That was very challenging. You can really go down a very cliched path when trying to imagine what the fallout of a massive nuclear attack would look like. But we didn't want to overplay it too much; we wanted to embed it in some sort of reality so that you didn't suddenly feel at the start of episode one, oh my, you're on the surface of Mars. It had to feel real, but also just completely different from the interior world of the silo.

Ars Technica: I assume that there's a lot more exciting stuff coming in the other episodes that we can't talk about.

Baz Irvine: There is so much exciting stuff. There's a lot of action. The silo cafeteria, by the way, is just incredible because you have this huge screen. When I turned up, I was thinking, okay, well, this is clearly going to be some big VFX blue screen. It is not. It is a projected image. The work that they did to make it feel like it was a camera mounted to the top of the silo, showing the world outside, and the different times of day we just literally dialed in. Can I have dusk, please? Can I have late afternoon with a little bit of cloud? It was such a fun toy box to play with.

New episodes of Silo S2 will premiere every Friday through January 17, 2025, on Apple TV+.

Jennifer Ouellette, Senior Writer: Jennifer is a senior reporter at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban. 11 Comments
  • ARSTECHNICA.COM
I played Half-Life 2 for the first time this year: here's how it went
Half-Life 2 Week

I played Half-Life 2 for the first time this year: here's how it went

Wake up and smell the ashes, Ms. Washenko. Anna Washenko, Nov 15, 2024 9:00 am | 109

This article is part of our 20th anniversary of Half-Life 2 series. Credit: Aurich Lawson

It's Half-Life 2 week at Ars Technica! This Saturday, November 16, is the 20th anniversary of the release of Half-Life 2, a game of historical importance for the artistic medium and technology of computer games. Each day up through the 16th, we'll be running a new article looking back at the game and its impact.

The time has finally come to close one of the most notable gaps in my gaming history. Despite more than a decade of writing about video games and even more years enjoying them, I never got around to playing Half-Life 2. Not only have I not played it, but I've managed to keep myself in the dark about pretty much everything to do with it. I always assumed that one day I would get around to playing this classic, and I wanted the experience to be as close as possible to what it would have been back in 2004. So my only knowledge about Half-Life 2 before starting this project was 1) the game is set in the same universe as Portal, a game I love, 2) the protagonist is named Gordon Freeman, and he looks uncannily like a silent, bespectacled young Hugh Laurie, and 3) there's something called the Gravity Gun.

That's it. I didn't even know exactly what the Gravity Gun did, only that it existed.

So, the time has come for me to learn what the fuss is all about. I've cataloged my off-the-cuff reactions as well as my more analytical thoughts about Half-Life 2, both as a standalone project and as a catalyst for setting new standards in design. But if you're looking for the TL;DR of whether I think the game holds up, my answer is: it depends.

Beginning a classic with a clunk

A red letter day indeed! Time to experience this iconic piece of video game history. I spend most of the intro sequence in the train station soaking in the atmosphere of the dystopian City 17. A few minutes in, though, I think I'm supposed to sneak past a guard, because I'm a fugitive trying to escape this freaky Big Brother building, and I swear Barney told me to avoid detection. Instead, the guard immediately sees me and whomps me on the head for not putting a bottle into the trash. Not an auspicious beginning.

I make it to Dr. Kleiner's lab for a little bit of story exposition. I like this rag-tag group of geniuses and the whole vibe of a secret scientific rebellion. I also appreciate that it's not a static cutscene, so I can poke around the lab while I listen or observe the characters interacting.

After a failed teleport and getting a crowbar from Barney, I then spend a long time getting shot and dying in a train yard. Like, an embarrassingly long time. Perhaps I was assuming at this early stage that Half-Life 2 would be like Portal with real guns, because I figured this area had to be a puzzle. I'm not sure how I missed the one portion of the environment that I could slip through, but I convinced myself that I was supposed to leap across the tops of the train cars, Frogger style. And Gordon might have many skills, but his jumping leaves something to be desired.

Finally, I realize that there's a gap in the cars, and I move along.
This canal setting is striking, but I keep being unsure which areas of the map I can access. I've heard that the level design is one of the most lauded parts of Half-Life 2, but this is proving to be a genuine struggle for me. When I played Portal, I sometimes was unsure how to progress, but because that game is presented in the austere confines of a science experiment, I felt like I was supposed to be challenged. In Half-Life 2, though, where there are higher stakes and I'm running for my life, getting stuck just makes me feel dumb and annoyed. And I'm doubly annoyed because this escape sequence would probably feel amazing if I didn't keep getting lost. Again, not the thrilling start I was hoping for.

Killing a barnacle by feeding it an explosive barrel is a definite high point. I may have cackled. This is the sort of clever environmental interaction I expected to see from the minds that later made Portal. Headcrabs, on the other hand, are just obnoxious. My dinky little pea-shooter pistol doesn't feel like great protection. What's a rogue physicist gotta do to get a shotgun?

From airboats to zombies

After a break, I return armed with a renewed determination to grok this game and, more importantly, with an airboat. For 90 percent of the Water Hazard chapter, I am feeling like a badass. I'm cruising in my watery ride, flying over ramps, and watching a silo collapse overhead. Especially in those rare moments when the 2000s electro jams punctuate my fights, I feel like a true action hero.

The airboat sequence was divisive in 2004, but this writer enjoyed it. Credit: Anna Washenko

Next I reach the Black Mesa East chapter, which is a perfect interlude. The game's approach to world-building is probably the area where my feelings align most closely with those of Half-Life 2 veterans. It is spectacular. Heading down into the lab may be the best elevator ride I've taken in a game. Judith is talking science, and outside the shaft, I see humans and vortigaunts conducting fascinating experiments. Small vignettes like those are a perfect way to introduce more information about the rebellion. They give subtle context to a game that doesn't do much to explain itself and doesn't need to.

Also, Dog is the best boy. Seriously, I've seen modern games where the animations didn't have as much personality as when Alyx treats her robot protector like an actual dog, and he shakes in delight. My only sadness is that Dog doesn't accompany me to Ravenholm.

Dog is, in fact, the best boy. Credit: Valve

I do wish Dog had come with me to Ravenholm. I learned after the fact that this chapter is one of the most iconic and beloved, but I had the opposite reaction. Survival horror is not my jam. These whirling death traps are sweet, but I hate jump scares, and I don't love any of my weapons for the encounters.

That brings me to something I don't want to say, but in the spirit of journalistic honesty, I must: I don't adore the Gravity Gun. Obviously it was the game's signature creation, and probably what most of you recall most fondly, but I did not fully grasp its potential immediately. Based on the tutorial in Black Mesa East, I assumed it would mostly be a component of puzzle-solving and traversal rather than a key part of combat. I only started using it as a weapon in Ravenholm because I ran out of ammo for everything else.

It's not that I don't get the appeal. Slicing zombies up with a saw blade or bashing them with paint cans is satisfying; no complaints there.
But I found the tool inconsistent, which discouraged me from experimenting as much as the developers may have hoped. I'm pretty sure I do as much damage to myself as to enemies when trying to lob exploding barrels. I want to be able to fling corpses around and can't (for reasons that became apparent later, but in the moment it felt limiting). Later chapters reinforced my uncertainty, when I couldn't pull a car to me, yet a push blast had enough power to overturn the vehicle.

And once again, I had a rough time with navigation. Maybe I was missing what other people would have seen as obvious cues, the way I'm attuned to finding climbing paths marked by color in modern games; controversial as that yellow marking convention may be, its absence is felt when you're struggling to read an environment whose visual language emphasizes realism over readability. Or maybe I've gotten over-reliant on the tools of the sprawling RPGs I favor these days, where you have a mini-map and quest markers to help you manage all the threads. But for an agonizingly long time, I stared at an electrified fence and wires that seemed to lead nowhere before realizing that I was supposed to enter the building where Father Grigori first appeared on the balcony. A giant bonfire of corpses out front seemed like a clear 'do not enter' sign, so it didn't occur to me that I could go inside. Alas.

Speaking of which, Father Grigori is the best part of the section. He's a total bro, giving me a shotgun at long last. I feel kind of bad when I just abandon him to his murderous flock at the end of the chapter. I hope he survives?

Familiarity and finding my footing

The new weapons are coming fast and furious now. I'm impressed at how good the combat feel is. I like the pulse rifle a lot, and it has become my go-to for most long-distance enemies. I wish I could aim down sights, but at least it feels impactful at range. Although I don't usually favor the slow cadence of a revolver in other games, I also enjoy the magnum. The SMG serves well as a workhorse, while the rocket launcher and crossbow are satisfying tools when the right situation arises.

But my favorite weapon, far more than the Gravity Gun, is the shotgun. Especially at point-blank range and into a fast zombie's head. Chef's kiss. Maybe it's my love of Doom (2016) peeking through, but any time I can go charging into a crowd with my shotgun, I'm a happy camper.

While the worldbuilding in Half-Life 2 is stellar, I don't think the writing reaches the same high. Just about every brief encounter with allies starts with someone breathlessly gasping, "Gordon? Gordon Freeman?" It's the sort of repetition that would make for an effective and dangerous drinking game.

I was surprised when I entered another vehicle section. I liked the airboat, even though the chapter ran a touch long, but this dune buggy feels a lot jankier. At least it starts with a gun attached. I love the idea of the magnet crane puzzle. I wish it didn't control like something from Octodad, but I do get my buggy up out of the sand.

The "floor is lava" sequence involves placing objects with the Gravity Gun to avoid disturbing an army of angry antlions by stepping on the sand. Credit: Anna Washenko

Things start turning around for me once I reach the sandy version of 'the floor is lava.' That's a cute idea.
Although I keep wanting to rotate objects and place them with more control, the way Ultrahand allows in The Legend of Zelda: Tears of the Kingdom, I understand that Half-Life 2 crawled so TotK could run. But that knowledge doesn't mean I have a better time using the mechanic. Toward the end of the sequence, I got bored by the slow pace of creating a bridge and just barreled ahead, willing to face a firefight just to move things along.

At this point, however, things take a decided turn for the better when I get my other favorite weapon of the game: my own antlion army! Commanding them is so fabulously ridiculous. The scene where hordes of antlions leap over high walls to attack gunmen on the towers leading into Nova Prospekt may be my favorite moment so far in the entire game. If this is how all of you felt flinging radiators around with the Gravity Gun back in the day, then I get why you love it so. I'm sure I won't be able to keep this glorious power indefinitely, but I would happily finish the rest of the story with just antlions and a shotgun if they'd let me.

The Nova Prospekt area is the first time I really see a clear line connecting Half-Life 2 to Portal. None of the puzzles or characters thus far gave me Portal vibes, but I definitely get them here, especially once turrets come into play. By this point, I'm finally navigating the space with some confidence. That might be the result of logging enough hours, or maybe it was just the sense that GLaDOS could start talking to me at any moment. Whatever the reason, I think I'm finding my groove at last.

Nova Prospekt is one of the first areas Valve made when it developed Half-Life 2, so it's not surprising it bears a lot of similarity to environments and vibes in both the original Half-Life and in Portal. Credit: Anna Washenko

Somehow I am not surprised by Judith's sudden but inevitable betrayal in this chapter. Alyx not getting along with her in the Black Mesa East chapter felt pretty telling. But then she's just going to let Judith enter teleport coordinates unsupervised? Alyx, you're supposed to be smart! What do you know: now Judith has re-kidnapped Eli. Color me shocked.

Onward and upward to the end

It's nice having human minions. They're no antlions, but I like how the world has shifted to a real uprising. It reminds me of the big charge at the end of Mass Effect 3, running and gunning through a bombed-out city with bug-like baddies overhead. Snipers are not a welcome addition to the enemy roster. Not sure why Barney's whining so much. You could throw some grenades, too, my dude.

Barney tags along with my minions as we reach the Overwatch Nexus. Destroying floor turrets is probably the first time I've struggled with combat. These are the least precise grenades of all time. Once we make it through the interior sequence, it's time to face down the striders. I can't imagine bringing down the swarm of them on a harder difficulty. My health takes a beating as I run around the wreckage desperately looking for ammo reloads and medkits. In theory, this is probably a great setpiece, but I'm just stressed out. Things go a little better once the combat is paired with traversal, and the final showdown on the roof does feel like a gratifying close to a boss fight.

On to the Citadel. Why on earth would I get into one of these pods? That's a terrible idea. But apparently that's what I'm going to do.
I hope I'm not supposed to be navigating this pod in any way, because I'm just taking in the vibes. It's another transit moment with glimpses into what the enemies have been getting up to while the rebellion rages outside. It's eerie; I like it.

Battling the striders as the game moves toward its finale. Credit: Anna Washenko

The Gravity Gun is the core of Half-Life 2, so it makes sense that a supercharged version is all I have for the final push. I appreciate that I can use it to fling bodies, but my reaction is a little muted since this was an idea I'd had from the start. But I do find the new angle of sucking up energy orbs to be pretty rad.

I arrive in Dr. Breen's office, and it looks grim for our heroes. Judith redeeming herself surprises me more than her betrayal, which is nice. When Breen runs off, I'm mentally preparing myself as I chase him for a final boss showdown. Surely, something extra bonkers with the Gravity Gun awaits me. I climb the teleportation tower, I pelt Breen's device with energy orbs, I'm waiting for the other shoe to drop, and... huh?

Context is everything

In the moment, I was torn between feeling that the opaque ending was genius and that it was an absolute cop-out. It was certainly not how I expected the game to end. But on reflection, that wound up being a fitting final thought as the credits rolled, because I think 'expectations' were at the heart of my conflicted reactions to finally playing Half-Life 2. I've rarely felt so much pressure to have a particular response. I wanted to love this game. I wanted to share the awe that so many players feel for it. I wanted to have an epic experience that matched the epic legacy Half-Life 2 has in gaming history.

I didn't.

Instead, I had whiplash, swinging between moments of delight and stretches of being stymied or even downright pissed off. I was tense, often dreading rather than eagerly awaiting each next twist. Aside from a handful of high points, I'm not sure I'd say playing Half-Life 2 was fun.

As I mentioned at the start, the big question I felt I had to answer was whether Half-Life 2 felt relevant today or whether it only holds up under the rosy glow of nostalgia. And my answer is, "It depends." As an enigmatic person once said, "The right man in the wrong place can make all the difference in the world." It's all in the context.

Moments that were jaw-dropping in 2004 have less impact for someone like me who's played the many titles that copied, standardized, and perfected Half-Life 2's revelations. Intellectually, I understand that the Gravity Gun was a literal game-changer and that a physics engine deployed at this scale was unheard of. But funnily enough, a modern player is even less likely to see those innovations as so, well, innovative when a game has as much polish as Half-Life 2 does. Half-Life 2 has almost no rough edges in its execution. Everything works the way it was intended. Since that polish means the new ideas don't feel like experiments, and since I've seen them in other games in the intervening years, they don't register as notable.

Just as you don't need to be a fan of Aristotle's Poetics to appreciate drama, you don't need to love Half-Life 2 to appreciate its legacy. As a fun game to play, whether it holds up will come down to you and your context. However, as a showcase of the technology of the time and a masterclass in world-building, yes, Half-Life 2 holds up today. 109 Comments
  • ARSTECHNICA.COM
FTC to launch investigation into Microsoft's cloud business
antitrust again

FTC to launch investigation into Microsoft's cloud business

Microsoft is accused of using punitive licensing terms for Azure. Arash Massoudi, James Fontanella-Khan, Stephen Morris, and Stefania Palma, Financial Times, Nov 15, 2024 9:44 am | 31

A Microsoft office (not to be confused with Microsoft Office). Credit: Julien GONG Min / Flickr

The Federal Trade Commission is preparing to launch an investigation into anti-competitive practices at Microsoft's cloud computing business, as the US regulator continues to pursue Big Tech in the final weeks of Joe Biden's presidency.

The FTC is examining allegations that Microsoft is abusing its market power in productivity software by imposing punitive licensing terms to prevent customers from moving their data from its Azure cloud service to competitors' platforms, according to people with direct knowledge of the matter. Tactics being examined include substantially increasing subscription fees for those that leave, charging steep exit fees, and allegedly making its Office 365 products incompatible with rival clouds, they added. The FTC has yet to formally request documents or other information from Microsoft as part of the inquiry, the people said.

A move to challenge Microsoft's cloud business practices would mark the latest broadside against Big Tech by the FTC's chair, Lina Khan, who has centered her tenure on aggressively curbing the monopolistic powers of the likes of Meta and Amazon. Khan, who has become a public enemy for much of Wall Street's dealmaking community, is set to be replaced after President-elect Donald Trump enters the White House next year. While any successor to Khan may not adopt as tough a stance, potential contenders are expected to continue targeting Big Tech companies that have attracted bipartisan ire in Washington. The Republican Party has accused online platforms of allegedly censoring conservative voices.

The decision to launch a formal probe would come after the FTC sought feedback from industry participants and the public on cloud computing providers' business practices. The results, published in November last year, revealed that most responses raised concerns around competition, the agency said at the time, including software licensing practices that curb the ability to use some software in other cloud providers' ecosystems. The FTC also highlighted fees charged on users transferring data out of certain cloud systems and minimum-spend contracts, which offer discounts to companies in return for a set level of spending.

Microsoft has also attracted scrutiny from international regulators over similar matters. The UK's Competition and Markets Authority is investigating Microsoft and Amazon after its fellow watchdog Ofcom found that customers complained about being locked in to a single provider, with providers offering discounts for exclusivity and charging high egress fees to leave. In the EU, Microsoft avoided a formal probe into its cloud business after agreeing to a multimillion-dollar deal with a group of rival cloud providers in July.

The FTC in 2022 sued to block Microsoft's $75 billion acquisition of video game maker Activision Blizzard over concerns the deal would harm competitors to its Xbox consoles and cloud-gaming business. A federal court shot down the FTC's attempt to block the deal, a decision the agency is appealing.
A revised version of the deal in the meantime closed last year following its clearance by the UK's CMA.

Since its inception 20 years ago, cloud infrastructure and services have grown to become one of the most lucrative business lines for Big Tech as companies outsource their data storage and computing online. More recently, this has been turbocharged by demand for processing power to train and run artificial intelligence models. Spending on cloud services soared to $561 billion in 2023, with market researcher Gartner forecasting it will grow to $675 billion this year and $825 billion in 2025. Microsoft has about a 20 percent share of the global cloud market, trailing leader Amazon Web Services at 31 percent but holding almost double Google Cloud's 12 percent.

There is fierce rivalry between the trio and smaller providers. Last month, Microsoft accused Google of running "shadow campaigns" seeking to undermine its position with regulators by secretly bankrolling hostile lobbying groups. Microsoft also alleged that Google tried to derail its settlement with EU cloud providers by offering them $500 million in cash and credit to reject its deal and continue pursuing litigation.

The FTC and Microsoft declined to comment. © 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Arash Massoudi, James Fontanella-Khan, Stephen Morris, and Stefania Palma, Financial Times. 31 Comments
  • ARSTECHNICA.COM
    As ABL Space departs launch, the 1-ton rocket wars have a clear winner
Anything but launch

As ABL Space departs launch, the 1-ton rocket wars have a clear winner

"Our path to making a big contribution as a commercial launch company narrowed considerably." Eric Berger, Nov 15, 2024 10:39 am | 15

Hot fire test of the integrated second stage for ABL Space Systems' RS1 rocket in the fall of 2020. Credit: ABL Space Systems

A 7-year-old launch company that has yet to have a rocket successfully lift off announced a radical pivot on Thursday. Its new plan? Focusing on missile defense. The founder and president of ABL Space Systems, Dan Piemont, announced the decision on LinkedIn, adding, "We're consolidating our operational footprint and parting ways with some talented members of our team." He said companies interested in hiring great people in Los Angeles or Mojave, California, should reach out.

A bright beginning

With a background in economics and physics, Piemont founded ABL in 2017 with the aim of developing a ship-and-shoot rocket. The idea was to set up mobile ground systems in remote locations on short notice and launch on demand for the US military and other customers. Piemont proved successful at raising money, bringing hundreds of millions of dollars into the company, including from Lockheed Martin. At one point the private company was valued at $2.4 billion, and in 2021 Lockheed purchased a block buy of up to 58 launches of the RS1 vehicle. This rocket was intended to carry up to 1.35 metric tons to low-Earth orbit.

ABL made its first RS1 launch attempt in January 2023 from Kodiak, Alaska, but a catastrophic fire shortly after liftoff quickly doomed the rocket. A second attempt was precluded in July of this year after an explosion during a static-fire test in Alaska. The company laid off some of its staff in August to control costs.

"From a personal perspective, if you've never been a part of something like that, it can be difficult to understand the magnitude," Piemont said of these two launch campaigns. "Physically taxing operations happen at all hours across many sites. Seemingly insurmountable problems arise every week, and are overcome. The accomplishments of the RS1 program, from the individual level to the company level, have been unbelievable."

Shifting launch market

As the company was failing in its efforts to reach orbit, the launch market was also changing, Piemont said. Although he did not directly mention SpaceX and its Falcon 9 rocket, Piemont said ABL's ability to impact the launch industry has diminished over the last seven years. "Take a look around," Piemont wrote. "US rockets fly every couple of days, with perfect success. It's revolutionary. While there is still a need for more providers in certain market segments, those opportunities are decreasing. To succeed in such a demanding effort as scaling up an orbital launch program, you need deep motivation around your mission and potential impact, from many stakeholders. As the launch market matured, those motivations thinned and our path to making a big contribution as a commercial launch company narrowed considerably."

Over the last half decade or so, three US companies have credibly vied to develop rockets in the 1-ton class in terms of lift capacity. ABL has been competing alongside Relativity Space and Firefly to bring its rockets to market. ABL never took off.
In March 2023, Relativity reached space with the Terran 1 rocket but, due to second-stage issues, failed to reach orbit. Within weeks, Relativity announced it was shifting its focus to a medium-lift rocket, Terran R. Since then, the California-based launch company has moved along, but there are persistent rumors that it faces a cash crunch.

Of the three, only Firefly has enjoyed success. The company's Alpha rocket has reached orbit on multiple occasions, and just this week Firefly announced that it completed a $175 million Series D fundraising round, resulting in a valuation of more than $2 billion. The 1-ton rocket wars are over: Firefly has won.

Focusing on defense

Just as Relativity pivoted away from this class of rocket, ABL will now also shift its focus, this time in an even more radical direction. US defense spending on missile production and defense has skyrocketed since Russia's invasion of Ukraine in 2022, and ABL will now seek to tap into this potentially lucrative market.

"We have made the decision to focus our efforts on national defense, and specifically on missile defense technologies," Piemont said. "We'll have more to share soon on our roadmap and traction in this area. For now, suffice to say we see considerable opportunity to leverage RS1, GS0, the E2 engine, and the rest of the technology we've developed to date to enable a new type of research effort around missile defense technologies."

Eric Berger, Senior Space Editor: Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston. 15 Comments
  • ARSTECHNICA.COM
A lot of people are mistaking Elon Musk's Starlink satellites for UAPs
What's that?

A lot of people are mistaking Elon Musk's Starlink satellites for UAPs

"We were able to assess that they were all in those cases looking at Starlink flares." Stephen Clark, Nov 15, 2024 3:24 pm | 35

Starlink satellites' passage is seen in the sky in southern Poland on November 1, 2024. Credit: Jakub Porzycki/NurPhoto

SpaceX's Starlink Internet satellites are responsible for more and more public reports of unidentified anomalous phenomena (UAPs), but most recent cases remain unsolved, according to a US government report released Thursday. Starlinks often move across the sky in "trains" that appear like gleaming gems in the blackness of space. They are particularly visible to the naked eye shortly after each Starlink launch.

In recent years, leaks and disclosures from government officials have revitalized open discussion about mysterious lights and objects, some of which move in, to put it bluntly, unquestionably weird ways. Some of these images, particularly those from sophisticated instruments on military fighter jets, have made their way into the national discourse. The New Yorker, Ars' sister publication, has a thorough report on how UAPs (you might know them better as UFOs) became mainstream.

All of this attention has renewed questions about whether these sightings are evidence of extraterrestrial life or a national security threat from a foreign power. The Pentagon's All-domain Anomaly Resolution Office (AARO), created in 2022 to collect and study information related to UAPs, says it has "discovered no evidence of extraterrestrial beings, activity, or technology." NASA commissioned an advisory board to study the topic. Last year, a senior official said the agency has found "no convincing evidence" any of the UAPs have an extraterrestrial origin.

Lawmakers from both parties have convened hearings and passed legislation to nudge the Pentagon to become more open about UAPs. On Wednesday, a House committee questioned a panel of former government officials on the matter. The former officials all urged the government to continue studying UAPs and warned against excessive government secrecy. One of the requirements levied by Congress in 2021 called for an annual report from the Department of Defense and the Office of the Director of National Intelligence on UAP sightings submitted by the public. AARO released this year's report Thursday. AARO said it received 757 UAP reports over a 13-month reporting period from mid-2023 through mid-2024. More than half of these reports remain unexplained, AARO said.

That's just Elon

But many UAP cases have verifiable explanations as airplanes, drones, or satellites, and lawmakers argue AARO might be able to solve more of the cases with more funding. Airspace is busier than ever with air travel and consumer drones. More satellites are zooming around the planet as government agencies and companies like SpaceX deploy their constellations for Internet connectivity and surveillance. There's more stuff up there to see.

"AARO increasingly receives cases that it is able to resolve to the Starlink satellite constellation," the office said in this year's annual report. "For example, a commercial pilot reported white flashing lights in the night sky," AARO said. "The pilot did not report an altitude or speed, and no data or imagery was recorded.
AARO assessed that this sighting of flashing lights correlated with a Starlink satellite launch from Cape Canaveral, Florida, the same evening, about one hour prior to the sighting."

Jon Kosloski, director of AARO, said officials compared the parameters of these sightings with Starlink launches. When SpaceX releases Starlink satellites in orbit, the spacecraft are initially clustered together and reflect more sunlight down to Earth. This makes the satellites easier to see during twilight hours, before they raise their orbits and become dimmer. "We found some of those correlations in time, the direction that they were looking, and the location," Kosloski said. "And we were able to assess that they were all in those cases looking at Starlink flares."

SpaceX has more than 6,600 Starlink satellites in low-Earth orbit, more than half of all active spacecraft. Thousands more satellites for Amazon's Kuiper broadband constellation and Chinese Internet networks are slated to launch in the next few years. "AARO is investigating if other unresolved cases may be attributed to the expansion of the Starlink and other mega-constellations in low-Earth orbit," the report said.

The Starlink network is still relatively new; SpaceX launched the first Starlinks five years ago. Kosloski said he expects the number of erroneous UAP reports caused by satellites to go down as pilots and others come to understand what the Starlinks look like. "It looks interesting and potentially anomalous. But we can model that, and we can show pilots what that anomaly looks like, so that that doesn't get reported to us necessarily," Kosloski said.

Stephen Clark, Space Reporter: Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world's space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet. 35 Comments
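The kind of check Kosloski describes amounts to an ephemeris lookup: given a sighting's time, location, and viewing direction, you can compute where any cataloged satellite sat in the observer's sky and see whether the report lines up. Below is a minimal sketch of that idea in Python using the open-source skyfield library and Celestrak's public Starlink element sets. This is not AARO's actual tooling, and the sighting time and observer location are hypothetical placeholders; a fuller check would also confirm the satellite was sunlit while the observer was in twilight.

    # Minimal sketch: was any Starlink satellite above the horizon for a
    # reported sighting? Not AARO's tooling; time and place are hypothetical.
    from skyfield.api import load, wgs84

    # Publicly available Starlink orbital elements from Celestrak.
    url = "https://celestrak.org/NORAD/elements/gp.php?GROUP=starlink&FORMAT=tle"
    satellites = load.tle_file(url)

    ts = load.timescale()
    observer = wgs84.latlon(28.39, -80.60)  # hypothetical observer near Cape Canaveral
    t = ts.utc(2024, 6, 1, 2, 30, 0)        # hypothetical reported sighting time (UTC)

    # Compute each satellite's altitude/azimuth as seen by the observer.
    # A report is consistent with a Starlink pass if a satellite was well above
    # the horizon, roughly along the reported bearing, at the reported time.
    for sat in satellites[:200]:  # first 200 entries keep the sketch quick
        alt, az, distance = (sat - observer).at(t).altaz()
        if alt.degrees > 10:
            print(f"{sat.name}: alt {alt.degrees:.0f} deg, az {az.degrees:.0f} deg, "
                  f"range {distance.km:.0f} km")
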
  • ARSTECHNICA.COM
    Microsoft finally releases generic install ISOs for the Arm version of Windows
start your usb drives

Microsoft makes it easier to do a clean Windows install on Arm-based PCs

Generic install media brings Arm PCs closer to feeling like any old x86 PC. Andrew Cunningham, Nov 14, 2024 2:22 pm | 2

Credit: Microsoft

For some PC buyers, doing a clean install of Windows right out of the box is part of the setup ritual. But for Arm-based PCs, including the Copilot+ PCs with Snapdragon X Plus and Elite chips in them, it hasn't been possible in the same way. Microsoft (mostly) hasn't offered generic install media that can be used to reinstall Windows on an Arm PC from scratch.

Microsoft is fixing that today: the company finally has a download page for the official Arm release of Windows 11, linked to but separate from the ISOs for the x86 versions of Windows. These are useful not just for because-I-feel-like-it clean installs but also for reinstalling Windows after you've upgraded your SSD and for setting up Windows virtual machines on Arm-based PCs and Macs.

Previously, Microsoft did offer install media for some Windows Insider Preview Arm builds, though those are beta versions of Windows that may or may not be feature-complete or stable. Various apps, scripts, and websites also exist to grab files from Microsoft's servers and build "unofficial" ISOs for the Arm version of Windows, though obviously this is more complicated than just downloading a single file directly.

As usual when you do a from-scratch installation of Windows, you'll need to make sure you can find all the drivers for your hardware so that everything functions like it's supposed to. Some of these drivers may be downloaded automatically through Windows Update if you've got an Internet connection; others may need to be grabbed manually from your computer manufacturer's website.

If your Arm PC shipped with Windows 11, you should have no problem installing a fresh copy of the operating system. If your PC shipped with Windows 10 instead, Windows 11 ought to be supported most of the time, but some early Windows 10 Arm PCs don't meet the operating system's hardware requirements. You need at least a Snapdragon 850 processor; you can check the full Arm compatibility list here.

Andrew Cunningham, Senior Technology Reporter: Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue. 2 Comments
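Before grabbing an ISO, it helps to confirm which architecture a machine actually runs, since the Arm64 and x86-64 downloads are separate. A minimal sketch of that check in Python (an illustration, not Microsoft's guidance; it assumes Python 3 is installed on the Windows box):

    # Minimal sketch: report this Windows machine's CPU architecture so you know
    # whether to grab the Arm64 or the x86-64 install media. Illustrative only.
    import platform

    arch = platform.machine()  # 'ARM64' on Snapdragon PCs, 'AMD64' on x86-64 PCs
    print(f"Detected architecture: {arch}")
    if arch.upper() == "ARM64":
        print("Use the Windows 11 Arm64 ISO.")
    else:
        print("Use the standard x86-64 install media instead.")
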
  • ARSTECHNICA.COM
    Trump team puts EV tax credit on the block, Tesla is on board: Report
like we said

Trump team puts EV tax credit on the block, Tesla is on board: Report

Elon Musk is on record as saying it would hurt competitors more than Tesla. Jonathan M. Gitlin, Nov 14, 2024 2:40 pm | 141

Credit: Getty Images

Some electric vehicles and plug-in hybrids are set to get less affordable from next year, it seems. As expected, the incoming Trump administration has set its sights on killing off the IRS clean vehicle tax credit, according to a report in Reuters this afternoon.

The clean vehicle tax credit was overhauled as part of President Joe Biden's signature climate legislation. Until then, the size of a plug-in vehicle's tax credit was based on its battery capacity, with a credit of up to $7,500 available. But from 2023, the rules changed, requiring a certain amount of domestic production to qualify, as well as adding price and income caps to address criticism that the tax credit mostly subsidized the already-wealthy. Far fewer vehicles are now eligible for the rebate at time of purchase, particularly after the US Treasury Department got tougher about Chinese content, although a loophole means that none of these conditions apply to leased EVs.

Ending the tax credit is not something the incoming administration can do via executive action; Congress controls government spending, and this would require new legislation. But the budget reconciliation process results in bills that cannot be filibustered, and Reuters says that the Trump transition team will likely use this route as part of a larger revamp of tax laws.

Tesla was a major beneficiary of the new clean vehicle tax credit; under the previous scheme, an OEM was only eligible until it sold its 200,000th plug-in vehicle, at which point the credit available to its customers began to sunset. Tesla, which exclusively sells plug-in vehicles, was unsurprisingly the first to reach this threshold, at which point its EVs became more expensive than competitors' cars. But the sales cap was eliminated under the new rules.

One might expect the company would be up in arms over this proposal. But according to Reuters, that's not the case: Tesla is in favor of ending the clean vehicle tax credit, and CEO Elon Musk has previously said such a move would be far more damaging to rival companies than to Tesla.

Jonathan M. Gitlin, Automotive Editor: Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica's automotive coverage. He lives in Washington, DC. 141 Comments
  • ARSTECHNICA.COM
GOG's Preservation Program is the DRM-free store refocusing on the classics
The classic PC games market is "in a sorry state," according to DRM-free and classics-minded storefront GOG. Small games that aren't currently selling get abandoned, and compatibility issues arise as technology moves forward or as one-off development ideas age like milk.

Classic games are only 20 percent of GOG's catalog, and the firm hasn't actually called itself "Good Old Games" in 12 years. And yet, today, GOG announces that it is making "a significant commitment of resources" toward a new GOG Preservation Program. It starts with 100 games for which GOG's own developers are working to create current and future compatibility, keeping them DRM-free and giving them ongoing tech support, along with granting them a "Good Old Game: Preserved by GOG" stamp.

GOG is not shifting its mission of providing a DRM-free alternative to Steam, Epic, and other PC storefronts, at least not entirely. But it is demonstrably excited about a new focus that ties back to its original name, inspired in some part by its work on Alpha Protocol. "We think we can significantly impact the classics industry by focusing our resources on it and creating superior products," writes Arthur Dejardin, head of sales and marketing at GOG. "If we wanted to spread the DRM-free gospel by focusing on getting new AAA games on GOG instead, we would make little progress with the same amount of effort and money (we've been trying various versions of that for the last 5 years)."

GOG Preservation Program's launch video.

Getting knights, demons, and zombies up to snuff

What kind of games? Scanning the list of Good Old Games, most of them are, by all accounts, both good and old. Personally, I'm glad to see the Jagged Alliance games, System Shock 2, Warcraft I & II, Dungeon Keeper Gold and Theme Park, SimCity 3000 Unlimited, and the Wing Commander series (particularly, personally, Privateer). Most of them are, understandably, Windows-only, though Mac support extends to 34 titles so far, and Linux may pick up many more through Proton compatibility, beyond the 19 native titles to date.
  • ARSTECHNICA.COM
    How Valve made Half-Life 2 and set a new standard for future games
Half-Life 2 Week

How Valve made Half-Life 2 and set a new standard for future games

From physics to greyboxing, Half-Life 2 broke a lot of new ground. Samuel Axon, Nov 13, 2024 12:09 pm | 43

This article is part of our 20th anniversary of Half-Life 2 series. Credit: Aurich Lawson

It's Half-Life 2 week at Ars Technica! This Saturday, November 16, is the 20th anniversary of the release of Half-Life 2, a game of historical importance for the artistic medium and technology of computer games. Each day up through the 16th, we'll be running a new article looking back at the game and its impact.

There has been some debate about which product was the first modern triple-A video game, but ask most people, and one answer is sure to at least be a contender: Valve's Half-Life 2. For Western PC games, Half-Life 2 set a standard that held strong in developers' ambitions and players' expectations for well over a decade. Despite that, there's only so much new ground it truly broke in terms of how games are made and designed; it's just that most games didn't have the same commitment to scope, scale, and polish all at the same time. To kick off a week of articles looking back at the influential classic, we're going to go over the way it was made and, just as importantly, the thought that went into its design, both of which were highly influential.

A story of cabals and Electronics Boutique

Development, design, and production practices in the games industry have always varied widely by studio. But because of the success of Half-Life 2, some of the approaches that Valve took were copied elsewhere in the industry after they were shared in blog posts and conference talks at events like the Game Developers Conference (GDC).

The cabals of Valve

Valve is famous for influencing many things in gaming, but it was most influential in its relatively flat and democratic team structure, and that played out even during Half-Life 2's development back in the early 2000s. While many studios are broken up into clear departments big and small for different disciplines (such as art, level design, combat design, narrative design, AI programming, and so on), many parts of Valve's Half-Life 2 team consisted of a half-dozen multi-disciplinary small groups the company internally called cabals.

Each major chapter in Half-Life 2 had its own four-to-five-person cabal made up of level designers and programmers. These groups built their levels largely independently while frequently showing their work to other cabals for feedback and cross-pollination of good ideas. They all worked within constraints set in a pre-production phase that laid out elements like the main story beats, some of the weapons, and so on.

Each major chapter, like this battle-in-the-streets one toward the end of the game, was designed by a largely independent cabal.
Credit: Valve

Additionally, similarly sized design cabals, often staffed with representatives from the chapter cabals, worked on aspects of the game's design that crossed multiple levels, such as weapons. There was even a "Cabal Cabal" made up of representatives from each of the six chapter teams to critique the work coming from all the teams.

Ruthless playtesting

Many game designers, especially back in the '80s or '90s, worked largely in isolation, determining privately what they thought would be fun and then shipping a finished product to an audience to find out if it really was. By contrast, Valve put a great deal of emphasis on playtesting. To be clear: Valve did not invent playtesting. But it made playtesting a key part of the design process in a way that is now quite common.

The Half-Life 2 team would send representatives to public places where potential fans might hang out, like Electronics Boutique stores, and would approach them and say something along the lines of, "Would you like to play Half-Life 2?" (Most said yes!)

A photo from an actual early 2000s playtest of an in-development Half-Life 2, courtesy of a presentation slide from a Valve GDC talk. Credit: Valve

The volunteer playtesters were brought to a room set up like a real player's living room and told to sit at the computer desk and simply play the game. Behind them, the level's cabal would sit and watch a feed of the gameplay on a TV. The designers weren't allowed to talk to the testers; they simply took notes. Through this process, they learned which designs and ideas worked and which ones simply confused the players. They then made iterative changes, playtested the level again, and repeated that process until they were happy with the outcome.

Today's developers sometimes take a more sophisticated approach to sourcing players for their playtests, making sure they're putting their games in front of a wider range of people to make the games more accessible beyond a dedicated enthusiast core. But nonetheless, playtesting across the industry today is at the level it is because of Valve's refinement of the process.

The alpha wave

For a game as ambitious as Half-Life 2 was, it's surprising just how polished it was when it hit the market. That iterative mindset was a big part of it, but it extended beyond those consumer playtests. Valve made sure to allocate a significant amount of time for iteration and refinement on an alpha build, which in this case meant a version of the game that could be played from beginning to end. When speaking to other developers about the process, representatives of Valve said that if you're working on a game for just a year, you should try to get to the alpha point by the end of eight months so you have four for refinement.

Apparently, this made a big impact on Half-Life 2's overall quality. It also helped address natural downsides of the cabal structure, like the fact that chapters developed by largely independent teams offered an inconsistent experience in terms of difficulty curve. With processes like this, Valve modeled several things that would be standard in triple-A game development for years to come, though not all of them were done by Valve first.

For example, the approach to in-game cutscenes reverberates today. Different cabals focused on designing the levels versus planning out cutscenes in which characters would walk around the room and interact with one another, all while the player could freely explore the environment.
Nova Prospekt was one of the first levels completed during Half-Life 2's development. Credit: Valve

The team that focused on story performances worked with level designers to block out the walking paths for characters, and the level designers had to use those paths as a constraint, building the levels around them. That meant changes to level layouts couldn't create situations where new character animations would have to be made. That approach is still used by many studios today.

As is what is now called greyboxing: the practice of designing levels without high-effort artwork so that artists can come in and pretty the levels up after the layout is settled, rather than having to constantly go back and forth with designers as those designers find the fun. Valve didn't invent this, but it was a big part of the process, and its in-development levels were filled with the color orange, not just gray.

Finding the DNA of Half-Life 2 in 20 years of games

When Half-Life 2 hit the market via the newly launched Steam digital distribution platform (more on that later this week), it was widely praised. Critics and players at the time loved it, calling it a must-have title and one that defined the PC gaming experience. Several of the things that came out of its development process, and that players remember most from Half-Life 2, became staples over the past 20 years.

For instance, the game set a new standard for character animations in fully interactive cutscenes, especially with facial animations. Today, far more advanced motion capture is a common practice in triple-A games, to the point that games that don't use it (like Bethesda Game Studios' titles) are widely criticized by players simply for not taking that route, even if motion capture doesn't necessarily make practical sense for those games' scope and design.

And Half-Life 2's gravity gun, which dramatically built on past games' physics mechanics, is in many ways a concept that developers are still playing with and expanding on today. Ultrahand, the flagship player ability in 2023's The Legend of Zelda: Tears of the Kingdom, could be seen as a substantial evolution of the gravity gun. In addition to offering players the ability to pick up and place objects in the world, it gives them the power to attach objects to one another to build creative contraptions.

There's also Half-Life 2's approach to using environmental lines and art cues to guide the player's attention through realistic-looking environments. The game was lauded for that at the time, and the approach was used by many popular games in the years to come. Today, many studios have moved on to much more explicit player cues, like the yellow climbing holds in so many recent AAA titles. As you'll see in an upcoming article this week written by someone who played Half-Life 2 for the very first time in 2024, Half-Life 2's approach may have set the stage, but modern players might expect something a little different.

Environments like this were carefully designed to guide the player's eye in subtle ways. Today, many AAA games take a less subtle approach because playtesting with broader audiences shows it's sometimes necessary. Credit: Valve

One thing about the environment design that Half-Life 2 was praised for hasn't been replaced these days, though: a commitment to subtle environmental storytelling. World-building and vibes are perhaps Half-Life 2's greatest achievements.
From BioShock to Dishonored to Cyberpunk 2077, this might be the realm where Half-Life 2's influence is still felt the most today.

A legacy remembered

Looking back 20 years later, Half-Life 2 isn't necessarily remembered for radical new gameplay concepts. Instead, it's known for outstanding execution, and developers everywhere are still applying lessons learned by that development team to try to chase its high standard of quality. Even at the time, critics noted that it wasn't exactly that there was anything in Half-Life 2 that players had never seen before. Rather, it was the combined force of quality, scope, presentation, and refinement that made an impact.

Of course, Valve and Half-Life 2 are also known for multiple memorable cultural moments, some of the industry's most infamous controversies, and playing a big part in introducing digital distribution. We'll explore some of those things as we count down to the "Red Letter Day" that is this Saturday.

Samuel Axon is a senior editor at Ars Technica. He covers Apple, software development, gaming, AI, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.
    What did the snowball Earth look like?
Under ice: Entire continents, even in the tropics, seem to have been under sheets of ice. John Timmer, Nov 13, 2024 12:25 pm

Artist's impression of what a snowball Earth would look like with our continents in their current configuration. Credit: MARK GARLICK/SCIENCE PHOTO LIBRARY

By now, it has been firmly established that the Earth went through a series of global glaciations around 600 million to 700 million years ago, shortly before complex animal life exploded in the Cambrian. Climate models have confirmed that, once enough of a dark ocean is covered by reflective ice, it sets off a cooling feedback that turns the entire planet into an icehouse. And we've found glacial material that was deposited off the coasts in the tropics.

Still, we have an extremely incomplete picture of what these snowball periods looked like, and Antarctic terrain provides different models for what an icehouse continent might look like. But now, researchers have found deposits that they argue were formed beneath a massive ice sheet that was being melted from below by volcanic activity. And although the deposits are currently in Colorado's Front Range, at the time they resided much closer to the equator.

In the icehouse

Glacial deposits can be difficult to identify in deep time. Massive sheets of ice will scour the terrain down to bare rock, leaving behind loosely consolidated bits of rubble that can easily be swept away after the ice is gone. We can spot when that rubble shows up in ocean deposits to confirm there were glaciers along the coast, but rubble can be difficult to find on land.

That's made studying the snowball Earth periods a challenge. We've got the offshore deposits to confirm coastal ice, and we've got climate models that say the continents should be covered in massive ice sheets, but we've got very little direct evidence. Antarctica gives off mixed messages, too. While there are clearly massive ice sheets, there are also dry valleys, where there's barely any precipitation and so little moisture in the air that any ice that makes its way into the valleys sublimates away into water vapor.

All of which raises questions about what the snowball Earth might have looked like in the continental interiors. A team of US-based geologists thinks it has found some glacial deposits in the form of what are called the Tavakaiv sandstones in Colorado. These sandstones are found along the Front Range of the Rockies, including areas just west of Colorado Springs. And, if the authors' interpretations are correct, they formed underneath a massive sheet of glacial ice.

There are lots of ways to form sandstone deposits, and they can be difficult to date because they're aggregates of the remains of much older rocks. But in this case, the Tavakaiv sandstone is interrupted by intrusions of dark-colored rock that contains quartz and large amounts of hematite, a form of iron oxide. These intrusions tell us a remarkable number of things. For one, some process must have exerted enough force to drive material into small faults in the sandstone. Hematite only gets deposited under fairly specific conditions, which tells us a bit more. And, most critically, hematite can trap uranium and the lead it decays into, providing a way of dating when the deposits formed.
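The dating principle at work here is standard uranium-lead geochronology, and the arithmetic behind it is compact enough to sketch. The snippet below is a generic illustration, not the paper's methodology: the decay constant is the accepted value for uranium-238, while the sample ratio is invented so that the result lands in the Sturtian window.

```python
import math

# Accepted decay constant for uranium-238 (per year); half-life ~4.468 billion years.
LAMBDA_U238 = 1.55125e-10

def u_pb_age(pb206_u238_ratio: float) -> float:
    """Age in years of a closed system, from its present-day 206Pb/238U atom ratio.

    Derived from N_Pb / N_U = e^(lambda * t) - 1, solved for t.
    """
    return math.log(1.0 + pb206_u238_ratio) / LAMBDA_U238

# A hypothetical measured ratio, chosen so the age falls within the Sturtian glaciation.
ratio = 0.1136
print(f"{u_pb_age(ratio) / 1e6:.0f} million years")  # ~694 million years
```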
Under the snowball

Depending on which site was being sampled, the hematite produced a range of dates, from as recent as 660 million years ago to as old as 700 million years. That means all of them were formed during what's termed the Sturtian glaciation, which ran from 715 million to 660 million years ago. At the time, the core of what is now North America was in the equatorial region. So the Tavakaiv sandstones can provide a window into what at least one continent experienced during the most severe global glaciation of the Cryogenian Period.

Obviously, a sandstone could be formed from the fine powder that glaciers grind off rock as they flow. The authors argue that the intrusions that led to the hematite are the product of the massive pressure of the ice sheet acting on some liquid water at its base. That, they argue, would be enough to force the water into minor cracks in the deposit, producing the vertical bands of material that interrupt the sandstone.

There are plenty of ways for there to be liquid water at the base of an ice sheet, including local heating due to friction, the draining of surface melt to the base of the glacier (we're seeing a lot of the latter in Greenland at present), or simply hitting the right combination of pressure and temperature. But hematite deposits are typically formed at elevated temperatures (in the area of 220° C), which isn't consistent with any of those processes.

Instead, the researchers argue that the hematite comes from geothermal fluids. There are signs of volcanic activity in Idaho that date from this same period, and the researchers suggest that there may have been sporadic volcanism in Colorado related to it. This would create fluids warm enough to carry the iron oxides that ended up deposited as hematite in these sandstones.

While this provides some evidence that at least one part of the continental interior was covered in ice during the snowball Earth period, that doesn't necessarily apply to all areas of all continents. As Antarctica indicates, dry valleys and massive ice sheets can coexist in close proximity when the conditions are right. But the discovery does provide a window into a key period in the Earth's history that has otherwise been quite difficult to study.

PNAS, 2024. DOI: 10.1073/pnas.2410759121 (About DOIs).

John is Ars Technica's science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.
    Trump says Elon Musk will lead DOGE, a new Department of Government Efficiency
Trump's DOGE man: Musk's Department of Government Efficiency to target "massive waste and fraud." Jon Brodkin, Nov 13, 2024 3:07 pm

An image posted by Elon Musk after President-elect Donald Trump announced he will lead a new Department of Government Efficiency, or "DOGE." Credit: Elon Musk

President-elect Donald Trump today announced that a new Department of Government Efficiency, or "DOGE," will be led by Elon Musk and former Republican presidential candidate Vivek Ramaswamy. Musk and Ramaswamy, who founded pharma company Roivant Sciences, "will pave the way for my Administration to dismantle Government Bureaucracy, slash excess regulations, cut wasteful expenditures, and restructure Federal Agencies," according to the Trump statement on Truth Social.

DOGE apparently will not be an official federal agency, as Trump said it will provide advice "from outside" of government. But Musk, who has frequently criticized government subsidies despite seeking public money and obtaining various subsidies for his own companies, will apparently have significant influence over spending in the Trump administration. Musk has also had numerous legal disputes with regulators at agencies that regulate his companies.

"Republican politicians have dreamed about the objectives of 'DOGE' for a very long time," Trump said. "To drive this kind of drastic change, the Department of Government Efficiency will provide advice and guidance from outside of Government, and will partner with the White House and Office of Management & Budget to drive large scale structural reform, and create an entrepreneurial approach to Government never seen before."

Musk, the CEO of Tesla and SpaceX and owner of X (formerly Twitter), was quoted in Trump's announcement as saying that DOGE "will send shockwaves through the system, and anyone involved in Government waste, which is a lot of people!"

Trump's "perfect gift to America"

Trump's statement said the department, whose name is a reference to the Doge meme, "will drive out the massive waste and fraud which exists throughout our annual $6.5 Trillion Dollars of Government Spending." Trump said DOGE will "liberate our Economy" and that its "work will conclude no later than July 4, 2026" because "a smaller Government, with more efficiency and less bureaucracy, will be the perfect gift to America on the 250th Anniversary of The Declaration of Independence."

"I look forward to Elon and Vivek making changes to the Federal Bureaucracy with an eye on efficiency and, at the same time, making life better for all Americans," Trump said. Today, Musk wrote that the "world is suffering slow strangulation by overregulation" and that "we finally have a mandate to delete the mountain of choking regulations that do not serve the greater good."

Musk has been expected to have influence in Trump's second term after campaigning for him.
Trump previously vowed to have Musk head a government efficiency commission. "That would essentially give the world's richest man and a major government contractor the power to regulate the regulators who hold sway over his companies, amounting to a potentially enormous conflict of interest," said a New York Times article last month. The Wall Street Journal wrote today that "Musk isn't expected to become an official government employee, meaning he likely wouldn't be required to divest from his business empire."

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.
    Amazon ends free ad-supported streaming service after Prime Video with ads debuts
Farewell, Freevee: Selling subscriptions to Prime Video with ads is more lucrative for Amazon. Scharon Harding, Nov 13, 2024 3:56 pm

A shot from the Freevee original series Bosch: Legacy. Credit: Amazon Freevee/YouTube

Amazon is shutting down Freevee, its free ad-supported streaming television (FAST) service, as it heightens its focus on selling ads on its Prime Video subscription service. Amazon, which has owned IMDb since 1998, launched Freevee as IMDb Freedive in 2019. The service let people watch movies and shows, including Freevee originals, on demand without a subscription fee. Amazon's streaming offering was also previously known as IMDb TV and rebranded to Amazon Freevee in 2022.

According to a report from Deadline this week, Freevee is being phased out over the coming weeks, but a firm closing date hasn't been shared publicly. Explaining the move to Deadline, an Amazon spokesperson said:

"To deliver a simpler viewing experience for customers, we have decided to phase out Freevee branding. There will be no change to the content available for Prime members, and a vast offering of free streaming content will still be accessible for non-Prime members, including select Originals from Amazon MGM Studios, a variety of licensed movies and series, and a broad library of FAST Channels, all available on Prime Video."

The shutdown also means that producers can no longer pitch shows to Freevee as Freevee originals, and any pending deals for such projects have been canceled, Deadline reported.

Freevee shows still available for free

Freevee original shows include Jury Duty, with James Marsden; Judy Justice, with Judge Judy Sheindlin; and Bosch: Legacy, a continuation of the Prime Video original series Bosch. The Freevee originals are expected to be available to watch on Prime Video after Freevee closes. People won't need a Prime Video or Prime subscription in order to watch these shows. As of this writing, I was also able to play some Freevee original movies without logging in to a Prime Video or Prime account.

Prime Video has also made some Prime Video originals, like The Lord of the Rings: The Rings of Power, available under a Freevee section in Prime Video, where people can watch for free if they log in to an Amazon account (Prime Video or Prime subscriptions not required). Before this week's announcement, Prime Video and Freevee were already sharing some content. Bloomberg reported this week that some Freevee shows will remain free to watch due to contractual prohibitions. It's unclear what might happen to such Freevee original content after those contractual obligations conclude.

Freevee became redundant, confusing

Amazon's FAST service seemed redundant after Amazon launched the Prime Video ad tier in January. With the subscription tier making money from both subscription fees and ads shown on Prime Video, it made little sense for Amazon to continue with Freevee, which only has the latter revenue stream. Pushing people from Freevee to Prime Video could help Amazon attract people to Prime Video or Amazon's e-commerce site and simplify tracking what people are streaming.
Meanwhile, Amazon plans to show more ads on Prime Video in 2025 than it did this year. Two anonymous sources told advertising trade publication Adweek in February that the two services were confusing subscribers and ad buyers. At the time, the publication claimed that Amazon had been "laying the groundwork" to kill Freevee "for months." It cited two anonymous people familiar with the matter who pointed to moves like Amazon shifting Freevee technical workers to the Prime Video ads infrastructure and laying off Freevee marketing and strategy employees. In February, Amazon denied that it was ending Freevee, saying it was "an important streaming offering providing both Prime and non-Prime customers thousands of hit movies, shows, and originals, all for free," per Adweek.

Freevee's demise comes as streaming providers try navigating a booming market where profits remain elusive and ad businesses are still developing. Some industry stakeholders and analysts are expecting more consolidation in the streaming industry as competition intensifies and streamers get increasingly picky about constantly rising subscription fees. The end of Freevee also marks another product in the Amazon graveyard. Other products that Amazon has killed since 2023 include Astro for Business robots, Amazon Halo fitness trackers, the Alexa Built-in smartphone app, and the Amazon Drive file storage service.

Scharon is Ars Technica's Senior Product Reviewer, writing news, reviews, and analysis on consumer technology, including laptops, mechanical keyboards, and monitors. She's based in Brooklyn.
    IBM boosts the amount of computation you can get done on quantum hardware
Inching toward usefulness: Incremental improvements across the hardware and software stacks add up. John Timmer, Nov 13, 2024 5:42 pm. Credit: IBM

There's a general consensus that we won't be able to consistently perform sophisticated quantum calculations without the development of error-corrected quantum computing, which is unlikely to arrive until the end of the decade. It's still an open question, however, whether we could perform limited but useful calculations at an earlier point. IBM is one of the companies that's betting the answer is yes, and on Wednesday, it announced a series of developments aimed at making that possible.

On their own, none of the changes being announced are revolutionary. But collectively, changes across the hardware and software stacks have produced much more efficient and less error-prone operations. The net result is a system that supports the most complicated calculations yet on IBM's hardware, leaving the company optimistic that its users will find some calculations where quantum hardware provides an advantage.

Better hardware and software

IBM's early efforts in the quantum computing space saw it ramp up the qubit count rapidly, and it was one of the first companies to reach 1,000 qubits. However, each of those qubits had an error rate high enough that any algorithm trying to use all of them in a single calculation would inevitably trigger an error. Since then, the company's focus has been on improving the performance of smaller processors. Wednesday's announcement was based on the introduction of the second version of its Heron processor, which has 133 qubits. That's still beyond the capability of simulations on classical computers, should it be able to operate with sufficiently low errors.

IBM VP Jay Gambetta told Ars that Revision 2 of Heron focused on getting rid of what are called TLS (two-level system) errors. "If you see this sort of defect, which can be a dipole or just some electronic structure that is caught on the surface, that is what we believe is limiting the coherence of our devices," Gambetta said. This happens because the defects can resonate at a frequency that interacts with a nearby qubit, causing the qubit to drop out of the quantum state needed to participate in calculations (called a loss of coherence). By making small adjustments to the frequency that the qubits are operating at, it's possible to avoid these problems. This can be done when the Heron chip is being calibrated, before it's opened for general use.

Separately, the company has done a rewrite of the software that controls the system during operations. "After learning from the community, seeing how to run larger circuits, [we were able to] almost better define what it should be and rewrite the whole stack towards that," Gambetta said. The result is a dramatic speed-up. "Something that took 122 hours now is down to a couple of hours," he told Ars. Since people are paying for time on this hardware, that's good for customers now. However, it could also pay off in the longer run: some errors occur randomly, so less time spent on a calculation can mean fewer errors.

Deeper computations

Despite all those improvements, errors are still likely during any significant calculation. While it continues to work toward developing error-corrected qubits, IBM is focusing on what it calls error mitigation, which it first detailed last year. As we described it then: "The researchers turned to a method where they intentionally amplified and then measured the processor's noise at different levels. These measurements are used to estimate a function that produces similar output to the actual measurements. That function can then have its noise set to zero to produce an estimate of what the processor would do without any noise at all."
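The extrapolation idea in that quote can be sketched without any quantum hardware. Here is a minimal illustration in plain Python with invented measurement numbers; IBM's production version estimates a far more sophisticated noise function (and, as noted below, now runs on GPUs):

```python
import numpy as np

# Hypothetical expectation values of some observable, measured while the
# circuit's noise is deliberately amplified by known factors (1x = native noise).
noise_factors = np.array([1.0, 1.5, 2.0, 3.0])
measured_values = np.array([0.81, 0.74, 0.67, 0.55])

# Model how the signal decays as a function of noise level; a simple
# quadratic fit stands in for the estimated "noise function" here.
coeffs = np.polyfit(noise_factors, measured_values, deg=2)

# Evaluating the fitted function at zero noise estimates the ideal,
# noise-free result.
zero_noise_estimate = np.polyval(coeffs, 0.0)
print(f"Zero-noise extrapolated value: {zero_noise_estimate:.3f}")
```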
The problem here is that using the function is computationally difficult, and the difficulty increases with the qubit count. So, while it's still easier to do error-mitigation calculations than to simulate the quantum computer's behavior on classical hardware, there's still the risk of the process becoming computationally intractable. But IBM has taken the time to optimize that, too. "They've got algorithmic improvements, and the method that uses tensor methods [now] uses the GPU," Gambetta told Ars. "So I think it's a combination of both." That doesn't mean the computational challenge of error mitigation goes away, but it does allow the method to be used with somewhat larger quantum circuits before things become unworkable.

Combining all these techniques, IBM has used this setup to model a simple quantum system called an Ising model. And it produced reasonable results after performing 5,000 individual quantum operations called gates. "I think the official metric is something like if you want to estimate an observable with 10 percent accuracy, we've shown that we can get all the techniques working to 5,000 gates now," Gambetta told Ars.

That's good enough that researchers are starting to use the hardware to simulate the electronic structure of some simple chemicals, such as iron-sulfur compounds. And Gambetta viewed that as an indication that quantum computing is becoming a viable scientific tool. But he was quick to say that this doesn't mean we're at the point where quantum computers can clearly and consistently outperform classical hardware. "The question of advantage, which is when the method of running it with quantum circuits is the best method over all possible classical methods, is a very hard scientific question that we need to get algorithmic researchers and domain experts to answer," Gambetta said. "When quantum's going to replace classical, you've got to beat the best possible classical method with the quantum method, and that [needs] an iteration in science. You try a different quantum method, [then] you advance the classical method. And we're not there yet. I think that will happen in the next couple of years, but that's an iterative process."

John is Ars Technica's science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.
    New secret math benchmark stumps AI models and PhDs alike
Secret math problems dept.: FrontierMath's difficult questions remain unpublished so that AI companies can't train against them. Benj Edwards, Nov 12, 2024 5:49 pm. Credit: Getty Images

On Friday, research organization Epoch AI released FrontierMath, a new mathematics benchmark that has been turning heads in the AI world because it contains hundreds of expert-level problems that leading AI models solve less than 2 percent of the time, according to Epoch AI. The benchmark tests AI language models (such as GPT-4o, which powers ChatGPT) against original mathematics problems that typically require hours or days for specialist mathematicians to complete.

FrontierMath's performance results, revealed in a preprint research paper, paint a stark picture of current AI model limitations. Even with access to Python environments for testing and verification, top models like Claude 3.5 Sonnet, GPT-4o, o1-preview, and Gemini 1.5 Pro scored extremely poorly. This contrasts with their high performance on simpler math benchmarks; many models now score above 90 percent on tests like GSM8K and MATH.

The design of FrontierMath differs from many existing AI benchmarks because the problem set remains private and unpublished to prevent data contamination. Many existing AI models are trained on other test problem datasets, allowing the models to easily solve the problems and appear more generally capable than they actually are. Many experts cite this as evidence that current large language models (LLMs) are poor generalist learners.

Problems spanning multiple disciplines

Epoch AI says it developed FrontierMath through collaboration with over 60 mathematicians from leading institutions. The problems underwent peer review to verify correctness and check for ambiguities. About 1 in 20 problems needed corrections during the review process, a rate comparable to other major machine learning benchmarks.

The problems in the new set span multiple mathematical disciplines, from computational number theory to abstract algebraic geometry. And they are reportedly difficult to solve. Really, really difficult. Epoch AI allowed Fields Medal winners Terence Tao and Timothy Gowers to review portions of the benchmark. "These are extremely challenging," Tao said in feedback provided to Epoch. "I think that in the near term basically the only way to solve them, short of having a real domain expert in the area, is by a combination of a semi-expert like a graduate student in a related field, maybe paired with some combination of a modern AI and lots of other algebra packages."

A chart showing AI models' limited success on the FrontierMath problems, taken from Epoch AI's research paper. Credit: Epoch AI

To aid in the verification of correct answers during testing, the FrontierMath problems must have answers that can be automatically checked through computation, either as exact integers or mathematical objects. The designers made problems "guessproof" by requiring large numerical answers or complex mathematical solutions, with less than a 1 percent chance of correct random guesses.
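Epoch AI hasn't published its grading harness, but checking an exact-integer answer is conceptually simple. Here is a minimal sketch of the idea, with an invented, Project Euler-style stand-in problem; the function names and the problem itself are hypothetical:

```python
# A toy stand-in for a benchmark item: the reference answer is a large exact
# integer computed by a trusted script, so grading is a strict equality check.
def reference_answer() -> int:
    # e.g., "the sum of n^3 for n = 1..10,000" -- large enough to be guessproof.
    return sum(n**3 for n in range(1, 10_001))

def grade(submission: str) -> bool:
    """Accept a submission only if it parses to exactly the reference value."""
    try:
        return int(submission) == reference_answer()
    except ValueError:
        return False

print(grade("2500500025000000"))  # True only for the exact integer
```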
Mathematician Evan Chen, writing on his blog, explained how he thinks FrontierMath differs from traditional math competitions like the International Mathematical Olympiad (IMO). Problems in that competition typically require creative insight while avoiding complex implementation and specialized knowledge, he says. But for FrontierMath, "they keep the first requirement, but outright invert the second and third requirement," Chen wrote. While IMO problems avoid specialized knowledge and complex calculations, FrontierMath embraces them. "Because an AI system has vastly greater computational power, it's actually possible to design problems with easily verifiable solutions using the same idea that IOI or Project Euler does: basically, 'write a proof' is replaced by 'implement an algorithm in code,'" Chen explained.

The organization plans regular evaluations of AI models against the benchmark while expanding its problem set. Epoch AI says it will release additional sample problems in the coming months to help the research community test their systems.

Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a widely-cited tech historian. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.
    This elephant figured out how to use a hose to shower
An elephant never forgets: A younger rival may have learned how to sabotage those showers by disrupting water flow. Jennifer Ouellette, Nov 12, 2024 6:06 pm

An elephant named Mary has been filmed using a hose to shower herself. Credit: Urban et al./Current Biology

An Asian elephant named Mary living at the Berlin Zoo surprised researchers by figuring out how to use a hose to take her morning showers, according to a new paper published in the journal Current Biology. "Elephants are amazing with hoses," said co-author Michael Brecht of the Humboldt University of Berlin. "As it is often the case with elephants, hose tool use behaviors come out very differently from animal to animal; elephant Mary is the queen of showering."

Tool use was once thought to be one of the defining features of humans, but examples of it were eventually observed in primates and other mammals. Dolphins have been observed using sea sponges to protect their beaks while foraging for food, and sea otters will break open shellfish like abalone with rocks. Several species of fish also use tools to hunt and crack open shellfish, as well as to clear a spot for nesting. And the coconut octopus collects coconut shells, stacking them and transporting them before reassembling them as shelter.

Birds have been observed using tools in the wild, too, although this behavior was largely limited to corvids (crows, ravens, and jays); woodpecker finches have also been known to insert twigs into trees to impale passing larvae for food. Parrots, by contrast, have mostly been noted for their linguistic skills, and there has only been limited evidence that they use anything resembling a tool in the wild. Primarily, they seem to use external objects to position nuts while feeding.

And then there's Figaro, a precocious male Goffin's cockatoo kept in captivity and cared for by scientists in the "Goffin lab" at the University of Veterinary Medicine in Vienna. Figaro showed a surprising ability to manipulate single tools to maneuver a tasty nut out of a box. Other cockatoos who repeatedly watched Figaro's performance were also able to do so. Figaro and his cockatoo cronies even learned how to combine tools, a stick and a ball, to play a rudimentary form of "golf."

Shower time

Both captive and wild elephants are known to use and modify branches for fly switching. Brecht's Humboldt colleague, Lina Kaufman, is the one who first observed Mary using a hose to shower at the Berlin Zoo and told Brecht about it. They proceeded to undertake a more formal study of the behavior, not just of Mary but of two other elephants at the zoo, Pang Pha and her daughter Anchali. Mary was born in the wild in Vietnam, while Pang Pha was a gift from Thailand; Anchali was born at the Berlin Zoo, where she was hand-raised by zookeepers. Showering was part of the elephants' morning routine, and all had been trained not to step on the hoses.

Mary's rival Anchali blocking the flow of water. Credit: Urban et al./Current Biology

All the elephants used their trunks to spray themselves with water, but Mary was the only one who also used the hose, picking it up with her trunk.
Her hose showers lasted about seven minutes, and she dropped the hose when the water was turned off. Where she gripped the hose depended on which body part she was showering: she grasped it further from the end when spraying her back than when showering the left side of her body, for instance. This is a form of tool modification that has also been observed in New Caledonian crows. And the hose-showering behavior was "lateralized"; that is, Mary preferred targeting her left body side more than her right. (Yes, Mary is a "left-trunker.") Mary even adapted her showering behavior depending on the diameter of the hose: she preferred showering with a 24-mm hose over a 13-mm hose, and she preferred to use her trunk to shower rather than a 32-mm hose.

It's not known where Mary learned to use a hose, but the authors suggest that elephants might have an intuitive understanding of how hoses work because of the similarity to their trunks. "Bathing and spraying themselves with water, mud, or dust are very common behaviors in elephants and important for body temperature regulation as well as skin care," they wrote. "Mary's behavior fits with other instances of tool use in elephants related to body care."

Perhaps even more intriguing was Anchali's behavior. While Anchali did not use the hose to shower, she nonetheless exhibited complex behavior in manipulating it: lifting it, kinking it, regrasping the kink, and compressing the kink. The latter, in particular, often resulted in reduced water flow while Mary was showering. Anchali eventually figured out how to further disrupt the water flow by placing her trunk on the hose and lowering her body onto it. Control experiments were inconclusive about whether Anchali was deliberately sabotaging Mary's shower; the two elephants had been at odds and behaved aggressively toward each other at shower times. But similarly cognitively complex behavior has been observed in elephants.

"When Anchali came up with a second behavior that disrupted water flow to Mary, I became pretty convinced that she is trying to sabotage Mary," Brecht said. "Do elephants play tricks on each other in the wild? When I saw Anchali's kink and clamp for the first time, I broke out in laughter. So, I wonder, does Anchali also think this is funny, or is she just being mean?"

Current Biology, 2024. DOI: 10.1016/j.cub.2024.10.017 (About DOIs).

Jennifer is a senior reporter at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.
    FTX sues Binance for $1.76B in battle of crypto exchanges founded by convicts
Bankruptcy court: Lawsuit seeks "at least $1.76 billion that was fraudulently transferred" by SBF. Jon Brodkin, Nov 11, 2024 2:14 pm

Former Binance CEO Changpeng Zhao arrives at federal court in Seattle for sentencing on Tuesday, April 30, 2024. Credit: Getty Images | Changpeng Zhao

The bankruptcy estate of collapsed cryptocurrency exchange FTX has sued the company's former rival Binance in an attempt to recover $1.76 billion or more. The lawsuit seeks "at least $1.76 billion that was fraudulently transferred to Binance and its executives at the FTX creditors' expense, as well as compensatory and punitive damages to be determined at trial."

The complaint filed yesterday in US Bankruptcy Court in Delaware names Binance and co-founder and former CEO Changpeng Zhao among the defendants. FTX founder Sam Bankman-Fried sold 20 percent of his crypto exchange to Binance in November 2019, but Binance exited that investment in 2021, the lawsuit said.

"As Zhao would later remark, he decided to exit his position in FTX because of personal grievances he had against Bankman-Fried," the lawsuit said. "In July 2021, the parties negotiated a deal whereby FTX bought back Binance's and its executives' entire stakes in both FTX Trading and [parent company] WRS. Pursuant to that deal, FTX's Alameda Research division directly funded the share repurchase with a combination of FTT (FTX's exchange token), BNB (Binance's exchange token), and BUSD (Binance's dollar-pegged stablecoin). In the aggregate, those tokens had a fair market value of at least $1.76 billion."

Because FTX and Alameda were balance-sheet insolvent by early 2021, the $1.76 billion transfer "was a constructive fraudulent transfer based on a straightforward application" of bankruptcy law, and an intentional fraudulent transfer "because the transfer was made in furtherance of Bankman-Fried's scheme," the lawsuit said. Alameda could not fund the transaction because of its insolvency, the lawsuit said. "Indeed, as Bankman-Fried's second-in-command, Caroline Ellison, would later testify, she contemporaneously told Bankman-Fried 'we don't really have the money for this, we'll have to borrow from FTX to do it,'" the lawsuit said.

The complaint alleges that after the 2021 divestment, Zhao "set out to destroy" FTX, and it accuses Binance and Zhao of fraud, injurious falsehood, intentional misrepresentation, and unjust enrichment. Binance is far from the only entity being sued by FTX. The firm filed 23 lawsuits in the bankruptcy court on Friday "as part of a broader effort to claw back money for creditors of the bankrupt company," Bloomberg reported. Defendants in other suits include Anthony Scaramucci and his hedge fund SkyBridge Capital, Crypto.com, and the Mark Zuckerberg-founded FWD.US.

Lawsuit cites SBF's false statements

Ellison, who was sentenced to two years in prison, testified that Alameda funded the repurchase with about $1 billion of FTX Trading capital received from depositors, the lawsuit said.
It continued:

"Ellison further testified that Bankman-Fried dismissed her concerns about financial resources, telling her that, notwithstanding the need to use customer deposits, the repurchase was 'really important, we have to get it done.' Indeed, as discussed below, one of the reasons Bankman-Fried viewed the transaction as 'really important' was precisely because of his desire to conceal his companies' insolvency and send a false signal of strength to the market. In connection with the share repurchase, Bankman-Fried was asked directly by a reporter whether Alameda funded the entire repurchase using its own assets, expressing surprise that Alameda could have done so given the purchase price and what was publicly known regarding Alameda's financial resources. In response, Bankman-Fried falsely stated: 'The purchase was entirely from Alameda. Yeah, it had a good last year :P' (i.e., an emoji for a tongue sticking out)."

The transaction contributed to FTX's downfall, according to the lawsuit. It "left the platform in an even greater imbalance, which Bankman-Fried attempted to cover up in a pervasive fraud that infected virtually all aspects of FTX's business," FTX's complaint said. Bankman-Fried is serving a 25-year prison sentence. Because FTX Trading was insolvent in July 2021 when the Binance share repurchase was completed, "the FTX Trading shares acquired through the share repurchase were actually worthless based on a proper accounting of FTX Trading's assets and liabilities," the lawsuit said.

Zhao allegedly "set out to destroy" FTX

FTX claims that once Zhao divested himself of the equity stake in FTX, "Zhao then set out to destroy his now-unaffiliated competitor" because FTX was "a clear threat to Binance's market dominance." Zhao resigned from Binance last year after agreeing to plead guilty to money laundering violations and was sentenced to four months in prison. He was released in September.

FTX's lawsuit alleges that "Zhao's succeed-at-all-costs business ethos was not limited to facilitating money laundering. Beginning on November 6, 2022, Zhao sent a series of false, misleading, and fraudulent tweets that were maliciously calculated to destroy his rival FTX, with reckless disregard to the harm that FTX's customers and creditors would suffer. As set forth herein in more detail, Zhao's false tweets triggered a predictable avalanche of withdrawals at FTX, the proverbial run on the bank that Zhao knew would cause FTX to collapse."

Zhao's tweet thread said Binance liquidated its remaining FTT "due to recent revelations." The lawsuit alleges that "contrary to Zhao's denial, Binance's highly publicized apparent liquidation of its FTT was indeed a 'move against a competitor' and was not, as Zhao indicated, 'due to recent revelations.'" "As Ellison testified, 'if [Zhao] really wanted to sell his FTT, he wouldn't preannounce to the market that he was going to sell it. He would just sell it [...] his real aim in that tweet, as I saw it, was not to sell his FTT, but to hurt FTX and Alameda,'" the lawsuit said.

The lawsuit further claims that while FTX was "in freefall, Zhao sent additional false tweets calculated, in part, to prevent FTX from seeking and obtaining alternative financing to cauterize the run on the institution by customers deceived by the tweets.
Collectively and individually, these false public statements destroyed value that would have otherwise been recoverable by FTX's stakeholders."

Binance calls lawsuit meritless

On November 8, 2022, Bankman-Fried and Zhao agreed to a deal in which "Binance would acquire FTX Trading and inject capital sufficient to address FTX's liquidity issues," the lawsuit said. But the next day, Binance published tweets saying it was backing out of the deal "as a result of corporate due diligence." When Zhao agreed to the deal on November 8, he had "already been made aware of the 'mishandled' customer funds during his conversation with Bankman-Fried," the lawsuit said. "This is contrary to Binance's representation in the November 9 Tweets that he learned that fact after entering into the Letter of Intent. In addition, Zhao was also aware that the Debtors were insolvent when he entered into the Letter of Intent." In the 24 hours between the November 8 agreement and the November 9 tweets, "no new material information was provided to Zhao and Binance in the diligence process that would have revealed new issues" causing Binance to exit the deal, according to the lawsuit.

Binance said it will fight FTX's lawsuit. "The claims are meritless, and we will vigorously defend ourselves," a Binance spokesperson said in a statement provided to Ars.

The defendants also include "Does 1-1,000," people who allegedly received fraudulent transfers in 2021 and "whose true names, identities and capacities are presently unknown to the Plaintiffs." FTX is seeking recovery of fraudulent transfers from all defendants. FTX also asked the court to award punitive damages and to find that Binance and Zhao committed fraud, injurious falsehood, intentional misrepresentation, and unjust enrichment.

Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.
There are some things the Crew-8 astronauts aren't ready to talk about
Fullness of time: "I did not say I was uncomfortable talking about it. I said we're not going to talk about it." Stephen Clark, Nov 11, 2024 6:35 pm

NASA astronaut Michael Barratt works with a spacesuit inside the Quest airlock of the International Space Station on May 31. Credit: NASA

The astronauts who came home from the International Space Station last month experienced some drama on the high frontier, and some of it accompanied them back to Earth. In orbit, the astronauts aborted two spacewalks, both under unusual circumstances. Then, on October 25, one of the astronauts was hospitalized due to what NASA called an unspecified "medical issue" after splashdown aboard a SpaceX Crew Dragon capsule that concluded the 235-day mission. After an overnight stay in a hospital in Florida, NASA said the astronaut was released "in good health" and returned to their home base in Houston to resume normal post-flight activities.

The space agency did not identify the astronaut or share any details about their condition, citing medical privacy concerns. The three NASA astronauts on the Dragon spacecraft were commander Matthew Dominick, pilot Michael Barratt, and mission specialist Jeanette Epps. Russian cosmonaut Alexander Grebenkin accompanied the three NASA crew members; Russia's space agency confirmed he was not hospitalized after returning to Earth.

Dominick, Barratt, and Epps answered media questions in a post-flight press conference Friday, but they did not offer more information on the medical issue or say who experienced it. NASA initially sent all four crew members to the hospital in Pensacola, Florida, for evaluation, but Grebenkin and two of the NASA astronauts were quickly released and cleared to return to Houston. One astronaut remained behind until the next day.

"Spaceflight is still something we don't fully understand," said Barratt, a medical doctor and flight surgeon. "We're finding things that we don't expect sometimes. This was one of those times, and we're still piecing things together on this, and so to maintain medical privacy and to let our processes go forward in an orderly manner, this is all we're going to say about that event at this time."

NASA typically makes astronaut health data available to outside researchers, who regularly publish papers while withholding identifying information about crew members. NASA officials often tout gaining knowledge about the human body's response to spaceflight as one of the main purposes of the International Space Station. The agency is subject to federal laws, including the Health Insurance Portability and Accountability Act (HIPAA) of 1996, restricting the release of private medical information.

"I did not say I was uncomfortable talking about it," Barratt said. "I said we're not going to talk about it. I'm a medical doctor. Space medicine is my passion ... and how we adapt, how we experience human spaceflight is something that we all take very seriously."

Maybe some day

Barratt said NASA will release more information about the astronaut's post-flight medical issue "in the fullness of time."
This was Barratt's third trip to space and the first spaceflight for Dominick and Epps. One of the most famous incidents involving hospitalized astronauts was in 1975, before the passage of the HIPAA medical privacy law, when NASA astronauts Thomas Stafford, Deke Slayton, and Vance Brand stayed at a military hospital in Hawaii for nearly two weeks after inhaling toxic propellant fumes that accidentally entered their spacecraft's internal cabin as it descended under parachutes. They were returning to Earth at the end of the Apollo-Soyuz mission, in which they docked their Apollo command module to a Soviet Soyuz spacecraft in orbit.

NASA's view of medical privacy, and perhaps the public's, too, has changed in the nearly 50 years since. On that occasion, NASA disclosed that the astronauts suffered from lung irritation, and officials said Brand briefly passed out from the fumes after splashdown, remaining unconscious until his crewmates fitted an oxygen mask tightly over his face. NASA and the military also made doctors available to answer media questions about their condition.

The medical concern after splashdown last month was not the only part of the Crew-8 mission that remains shrouded in mystery. Dominick and NASA astronaut Tracy Dyson were supposed to go outside the International Space Station for a spacewalk June 13, but NASA called off the excursion, citing a "spacesuit discomfort issue." NASA replaced Dominick with Barratt and rescheduled the spacewalk for June 24 to retrieve a faulty electronics box and collect microbial samples from the exterior of the space station. But that excursion ended after just 31 minutes, when Dyson reported a water leak in the service and cooling umbilical unit of her spacesuit.

While Barratt discussed the water leak in some detail Friday, Dominick declined to answer a question from Ars regarding the suit discomfort issue. "We're still reviewing and trying to figure out all the details," he said.

Aging suits

Regarding the water leak, Barratt said he and Dyson noticed her suit had a "spewing umbilical, which was quite dramatic, actually." The decision to abandon the spacewalk was a "no-brainer," he said. "It was not a trivial leak, and we've got footage," Barratt said. "Anybody who was watching NASA TV at the time could see there was basically a snowstorm, a blizzard, spewing from the airlock because we already had the hatch open. So we were seeing flakes of ice in the airlock, and Tracy was seeing a lot of them on her helmet, on her gloves, and whatnot. Dramatic is the right word, to be real honest."

Dyson, who came back to Earth in September on a Russian Soyuz spacecraft, reconnected the leaking umbilical with her gloves and helmet covered with ice, her vision restricted. "Tracy's actions were nowhere short of heroic," Barratt said. Once the leak stabilized, the astronauts closed the hatch and began repressurizing the airlock. "Getting the airlock closed was kind of me grabbing her legs and using her as an end effector to lever that thing closed, and she just made it happen," Barratt said. "So, yeah, there was this drama. Everything worked out fine. Again, normal processes and procedures saved our bacon."

Barratt said the leak wasn't caused by any procedural error as the astronauts prepared their suits for the spacewalk. "It was definitely a hardware issue," he said. "There was a little poppet valve on the interface that didn't quite seat, so really, the question became why didn't that seat?
We solved that problem by changing out the whole umbilical."

By then, NASA's attention on the space station had turned to other tasks, such as experiments, the arrival of a new cargo ship, and testing of Boeing's Starliner crew capsule docked at the complex before it ultimately departed and left its crew behind. The spacewalk wasn't urgent, so it had to wait. NASA now plans to attempt the spacewalk again as soon as January with a different set of astronauts.

Barratt thinks the spacesuits on the space station are good to go for the next spacewalk. However, the suits are decades old, and their original designs date back more than 40 years, to when NASA developed the units for use on the space shuttle. Efforts to develop a replacement suit for use in low-Earth orbit have stalled. In June, Collins Aerospace dropped out of a NASA contract to build new spacesuits for servicing the International Space Station and future orbiting research outposts. "None of our spacesuits are spring chickens, so we will expect to see some hardware issues with repeated use and not really upgrading," Barratt said.

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world's space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.
    How a stubborn computer scientist accidentally launched the deep learning boom
Deep learning: "You've taken this idea way too far," a mentor told Prof. Fei-Fei Li. Timothy B. Lee, Nov 11, 2024 7:00 am. Credit: Aurich Lawson | Getty Images

During my first semester as a computer science graduate student at Princeton, I took COS 402: Artificial Intelligence. Toward the end of the semester, there was a lecture about neural networks. This was in the fall of 2008, and I got the distinct impression, both from that lecture and the textbook, that neural networks had become a backwater. Neural networks had delivered some impressive results in the late 1980s and early 1990s. But then progress stalled. By 2008, many researchers had moved on to mathematically elegant approaches such as support vector machines.

I didn't know it at the time, but a team at Princeton, in the same computer science building where I was attending lectures, was working on a project that would upend the conventional wisdom and demonstrate the power of neural networks. That team, led by Prof. Fei-Fei Li, wasn't working on a better version of neural networks. They were hardly thinking about neural networks at all. Rather, they were creating a new image dataset that would be far larger than any that had come before: 14 million images, each labeled with one of nearly 22,000 categories.

Li tells the story of ImageNet in her recent memoir, The Worlds I See. As she worked on the project, she faced plenty of skepticism from friends and colleagues. "I think you've taken this idea way too far," a mentor told her a few months into the project in 2007. "The trick is to grow with your field. Not to leap so far ahead of it."

It wasn't just that building such a large dataset was a massive logistical challenge. People doubted that the machine learning algorithms of the day would benefit from such a vast collection of images. "Pre-ImageNet, people did not believe in data," Li said in a September interview at the Computer History Museum. "Everyone was working on completely different paradigms in AI with a tiny bit of data."

Ignoring negative feedback, Li pursued the project for more than two years. It strained her research budget and the patience of her graduate students. When she took a new job at Stanford in 2009, she took several of those students, and the ImageNet project, with her to California. ImageNet received little attention for the first couple of years after its release in 2009. But in 2012, a team from the University of Toronto trained a neural network on the ImageNet dataset, achieving unprecedented performance in image recognition. That groundbreaking AI model, dubbed AlexNet after lead author Alex Krizhevsky, kicked off the deep learning boom that has continued to the present day.

AlexNet would not have succeeded without the ImageNet dataset. AlexNet also would not have been possible without a platform called CUDA, which allowed Nvidia's graphics processing units (GPUs) to be used in non-graphics applications. Many people were skeptical when Nvidia announced CUDA in 2006. So the AI boom of the last 12 years was made possible by three visionaries who pursued unorthodox ideas in the face of widespread criticism. One was Geoffrey Hinton, a University of Toronto computer scientist who spent decades promoting neural networks despite near-universal skepticism.
The second was Jensen Huang, the CEO of Nvidia, who recognized early that GPUs could be useful for more than just graphics. The third was Fei-Fei Li. She created an image dataset that seemed ludicrously large to most of her colleagues. But it turned out to be essential for demonstrating the potential of neural networks trained on GPUs.

Geoffrey Hinton

A neural network is a network of thousands, millions, or even billions of neurons. Each neuron is a mathematical function that produces an output based on a weighted average of its inputs. Suppose you want to create a network that can identify handwritten decimal digits like the number two in the red square above. Such a network would take in an intensity value for each pixel in an image and output a probability distribution over the ten possible digits: 0, 1, 2, and so forth.

To train such a network, you first initialize it with random weights. You then run it on a sequence of example images. For each image, you train the network by strengthening the connections that push the network toward the right answer (in this case, a high-probability value for the "2" output) and weakening connections that push toward a wrong answer (a low probability for "2" and high probabilities for other digits). If trained on enough example images, the model should start to predict a high probability for "2" when shown a two, and not otherwise.

In the late 1950s, scientists started to experiment with basic networks that had a single layer of neurons. However, their initial enthusiasm cooled as they realized that such simple networks lacked the expressive power required for complex computations. Deeper networks, those with multiple layers, had the potential to be more versatile. But in the 1960s, no one knew how to train them efficiently. This was because changing a parameter somewhere in the middle of a multi-layer network could have complex and unpredictable effects on the output.

So by the time Hinton began his career in the 1970s, neural networks had fallen out of favor. Hinton wanted to study them, but he struggled to find an academic home in which to do so. Between 1976 and 1986, Hinton spent time at four different research institutions: Sussex University, the University of California San Diego (UCSD), a branch of the UK Medical Research Council, and finally Carnegie Mellon, where he became a professor in 1982.

Geoffrey Hinton speaking in Toronto in June. Credit: Photo by Mert Alper Dervis/Anadolu via Getty Images

In a landmark 1986 paper, Hinton teamed up with two of his former colleagues at UCSD, David Rumelhart and Ronald Williams, to describe a technique called backpropagation for efficiently training deep neural networks. Their idea was to start with the final layer of the network and work backward. For each connection in the final layer, the algorithm computes a gradient, a mathematical estimate of whether increasing the strength of that connection would push the network toward the right answer. Based on these gradients, the algorithm adjusts each parameter in the model's final layer.

The algorithm then propagates these gradients backward to the second-to-last layer. A key innovation here is a formula, based on the chain rule from high school calculus, for computing the gradients in one layer based on the gradients in the following layer. Using these new gradients, the algorithm updates each parameter in the second-to-last layer of the model. The gradients then get propagated backward to the third-to-last layer, and the whole process repeats once again. The algorithm only makes small changes to the model in each round of training. But as the process is repeated over thousands, millions, billions, or even trillions of training examples, the model gradually becomes more accurate.
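To make that loop concrete, here is a minimal two-layer network trained with backpropagation, written as a plain-Python sketch using NumPy. The data is random stand-in "images" rather than real digits, and the layer sizes are invented for illustration; real digit classifiers train on datasets like MNIST:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 64-pixel "images" and one-hot labels over 10 digit classes.
X = rng.random((200, 64))
y = rng.integers(0, 10, size=200)
Y = np.eye(10)[y]

# Randomly initialized weights for a 64 -> 32 -> 10 network.
W1 = rng.normal(0, 0.1, (64, 32))
W2 = rng.normal(0, 0.1, (32, 10))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

lr = 0.5
for step in range(500):
    # Forward pass: each layer computes weighted sums of its inputs.
    h = np.tanh(X @ W1)    # hidden-layer activations
    p = softmax(h @ W2)    # predicted probability for each digit

    # Backward pass: gradients flow from the final layer toward the input,
    # with the chain rule linking each layer to the one after it.
    d_out = (p - Y) / len(X)                 # gradient at the output layer
    grad_W2 = h.T @ d_out                    # final-layer parameter gradients
    d_hidden = (d_out @ W2.T) * (1 - h**2)   # chain rule through tanh
    grad_W1 = X.T @ d_hidden                 # second-to-last-layer gradients

    # Small parameter updates, repeated over many rounds of training.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
```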
Hinton and his colleagues weren't the first to discover the basic idea of backpropagation. But their paper popularized the method. As people realized it was now possible to train deeper networks, it triggered a new wave of enthusiasm for neural networks.

Hinton moved to the University of Toronto in 1987 and began attracting young researchers who wanted to study neural networks. One of the first was the French computer scientist Yann LeCun, who did a year-long postdoc with Hinton before moving to Bell Labs in 1988.

Hinton's backpropagation algorithm allowed LeCun to train models deep enough to perform well on real-world tasks like handwriting recognition. By the mid-1990s, LeCun's technology was working so well that banks started to use it for processing checks.

"At one point, LeCun's creation read more than 10 percent of all checks deposited in the United States," wrote Cade Metz in his 2022 book Genius Makers.

But when LeCun and other researchers tried to apply neural networks to larger and more complex images, it didn't go well. Neural networks once again fell out of fashion, and some researchers who had focused on neural networks moved on to other projects.

Hinton never stopped believing that neural networks could outperform other machine learning methods. But it would be many years before he'd have access to enough data and computing power to prove his case.

Jensen Huang

Jensen Huang speaking in Denmark in October. Credit: Photo by MADS CLAUS RASMUSSEN/Ritzau Scanpix/AFP via Getty Images

The brain of every personal computer is a central processing unit (CPU). These chips are designed to perform calculations in order, one step at a time. This works fine for conventional software like Windows and Office. But some video games require so many calculations that they strain the capabilities of CPUs. This is especially true of games like Quake, Call of Duty, and Grand Theft Auto, which render three-dimensional worlds many times per second.

So gamers rely on GPUs to accelerate performance. Inside a GPU are many execution units, essentially tiny CPUs, packaged together on a single chip. During gameplay, different execution units draw different areas of the screen. This parallelism enables better image quality and higher frame rates than would be possible with a CPU alone.

Nvidia invented the GPU in 1999 and has dominated the market ever since. By the mid-2000s, Nvidia CEO Jensen Huang suspected that the massive computing power inside a GPU would be useful for applications beyond gaming. He hoped scientists could use it for compute-intensive tasks like weather simulation or oil exploration.

So in 2006, Nvidia announced the CUDA platform. CUDA allows programmers to write kernels, short programs designed to run on a single execution unit. Kernels allow a big computing task to be split up into bite-sized chunks that can be processed in parallel. This allows certain kinds of calculations to be completed far faster than with a CPU alone.
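As a rough illustration of that programming model, here is a sketch using the Numba library's CUDA support in Python (a convenience for this article; production kernels are more commonly written in CUDA C/C++). It requires an Nvidia GPU and the numba package, and the kernel and array names are invented for the example.

```python
import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(a, b, out):
    # Each invocation handles one "bite-sized chunk": a single array
    # element, computed independently by one execution unit.
    i = cuda.grid(1)
    if i < out.size:
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.arange(n, dtype=np.float32)
b = 2 * a
out = np.empty_like(a)

# Launch enough threads that every element gets its own worker.
threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks, threads_per_block](a, b, out)  # runs in parallel on the GPU
```

The kernel body contains no loop over the array: the parallelism comes entirely from launching a million lightweight threads, which is exactly the structure that matrix-heavy workloads like neural network training map onto so well.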
But there was little interest in CUDA when it was first introduced, wrote Steven Witt in The New Yorker last year:

When CUDA was released, in late 2006, Wall Street reacted with dismay. Huang was bringing supercomputing to the masses, but the masses had shown no indication that they wanted such a thing.

"They were spending a fortune on this new chip architecture," Ben Gilbert, the co-host of Acquired, a popular Silicon Valley podcast, said. "They were spending many billions targeting an obscure corner of academic and scientific computing, which was not a large market at the time, certainly less than the billions they were pouring in."

Huang argued that the simple existence of CUDA would enlarge the supercomputing sector. This view was not widely held, and by the end of 2008, Nvidia's stock price had declined by seventy percent. Downloads of CUDA hit a peak in 2009, then declined for three years. Board members worried that Nvidia's depressed stock price would make it a target for corporate raiders.

Huang wasn't specifically thinking about AI or neural networks when he created the CUDA platform. But it turned out that Hinton's backpropagation algorithm could easily be split up into bite-sized chunks. So training neural networks turned out to be a killer app for CUDA.

According to Witt, Hinton was quick to recognize the potential of CUDA:

In 2009, Hinton's research group used Nvidia's CUDA platform to train a neural network to recognize human speech. He was surprised by the quality of the results, which he presented at a conference later that year. He then reached out to Nvidia. "I sent an e-mail saying, 'Look, I just told a thousand machine-learning researchers they should go and buy Nvidia cards. Can you send me a free one?'" Hinton told me. "They said no."

Despite the snub, Hinton and his graduate students, Alex Krizhevsky and Ilya Sutskever, obtained a pair of Nvidia GTX 580 GPUs for the AlexNet project. Each GPU had 512 execution units, allowing Krizhevsky and Sutskever to train a neural network hundreds of times faster than would be possible with a CPU. This speed allowed them to train a larger model and to train it on many more training images. And they would need all that extra computing power to tackle the massive ImageNet dataset.

Fei-Fei Li

Fei-Fei Li at the SXSW conference in 2018. Credit: Photo by Hubert Vestil/Getty Images for SXSW

Fei-Fei Li wasn't thinking about either neural networks or GPUs as she began a new job as a computer science professor at Princeton in January of 2007. While earning her PhD at Caltech, she had built a dataset called Caltech 101 that had 9,000 images across 101 categories.

That experience had taught her that computer vision algorithms tended to perform better with larger and more diverse training datasets. Not only had Li found her own algorithms performed better when trained on Caltech 101, but other researchers also started training their models using Li's dataset and comparing their performance to one another. This turned Caltech 101 into a benchmark for the field of computer vision.

So when she got to Princeton, Li decided to go much bigger. She became obsessed with an estimate by vision scientist Irving Biederman that the average person recognizes roughly 30,000 different kinds of objects. Li started to wonder if it would be possible to build a truly comprehensive image dataset, one that included every kind of object people commonly encounter in the physical world.

A Princeton colleague told Li about WordNet, a massive database that attempted to catalog and organize 140,000 words. Li called her new dataset ImageNet, and she used WordNet as a starting point for choosing categories. She eliminated verbs and adjectives, as well as intangible nouns like "truth." That left a list of 22,000 countable objects ranging from "ambulance" to "zucchini."
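Li's memoir doesn't spell out the exact filtering procedure, but you can approximate its spirit with NLTK's WordNet interface. In this sketch, restricting to hyponyms of "physical entity" is my stand-in for eliminating intangible nouns; the real ImageNet category selection was done by hand and differed in detail.

```python
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

# Rough proxy for "countable objects": noun synsets that fall under
# 'physical entity', which excludes intangibles like 'truth'.
physical = wn.synset("physical_entity.n.01")
candidates = set(physical.closure(lambda s: s.hyponyms()))

print(len(candidates))                            # tens of thousands of categories
print(sorted(s.name() for s in candidates)[:5])   # a few sample synset names
```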
She planned to take the same approach she'd taken with the Caltech 101 dataset: use Google's image search to find candidate images, then have a human being verify them. For the Caltech 101 dataset, Li had done this herself over the course of a few months. This time she would need more help. She planned to hire dozens of Princeton undergraduates to help her choose and label images.

But even after heavily optimizing the labeling process (for example, pre-downloading candidate images so they're instantly available for students to review), Li and her graduate student Jia Deng calculated that it would take more than 18 years to select and label millions of images.

The project was saved when Li learned about Amazon Mechanical Turk, a crowdsourcing platform Amazon had launched a couple of years earlier. Not only was AMT's international workforce more affordable than Princeton undergraduates, but the platform was also far more flexible and scalable. Li's team could hire as many people as they needed, on demand, and pay them only as long as they had work available.

AMT cut the time needed to complete ImageNet down from 18 years to two. Li writes that her lab spent two years "on the knife-edge of our finances" as the team struggled to complete the ImageNet project. But they had enough funds to pay three people to look at each of the 14 million images in the final dataset.

ImageNet was ready for publication in 2009, and Li submitted it to the Conference on Computer Vision and Pattern Recognition, which was held in Miami that year. Their paper was accepted, but it didn't get the kind of recognition Li hoped for.

"ImageNet was relegated to a poster session," Li writes. "This meant that we wouldn't be presenting our work in a lecture hall to an audience at a predetermined time but would instead be given space on the conference floor to prop up a large-format print summarizing the project in hopes that passersby might stop and ask questions... After so many years of effort, this just felt anticlimactic."

To generate public interest, Li turned ImageNet into a competition. Realizing that the full dataset might be too unwieldy to distribute to dozens of contestants, she created a much smaller (but still massive) dataset with 1,000 categories and 1.4 million images.

The first year's competition in 2010 generated a healthy amount of interest, with 11 teams participating. The winning entry was based on support vector machines. Unfortunately, Li writes, it was "only a slight improvement over cutting-edge work found elsewhere in our field."

The second year of the ImageNet competition attracted fewer entries than the first. The winning entry in 2011 was another support vector machine, and it just barely improved on the performance of the 2010 winner. Li started to wonder if the critics had been right. Maybe ImageNet was too much for most algorithms to handle.

"For two years running, well-worn algorithms had exhibited only incremental gains in capabilities, while true progress seemed all but absent," Li writes. "If ImageNet was a bet, it was time to start wondering if we'd lost."

But when Li reluctantly staged the competition a third time in 2012, the results were totally different. Geoff Hinton's team was the first to submit a model based on a deep neural network. And its top-5 accuracy was 85 percent, 10 percentage points better than the 2011 winner.
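Top-5 accuracy counts a prediction as correct if the true label appears anywhere among the model's five highest-scoring guesses, a forgiving standard that makes sense when a model must choose among 1,000 fine-grained categories. A minimal sketch of the metric (illustrative, not the official challenge evaluation code):

```python
import numpy as np

def top5_accuracy(scores, labels):
    """scores: (n_images, n_classes) model outputs; labels: (n_images,) true classes.
    A prediction counts as correct if the true label is among the
    five highest-scoring classes for that image."""
    top5 = np.argsort(scores, axis=1)[:, -5:]      # indices of the 5 best guesses
    hits = (top5 == labels[:, None]).any(axis=1)
    return hits.mean()

# Usage with random scores over the competition's 1,000 categories:
rng = np.random.default_rng(0)
scores = rng.random((100, 1000))
labels = rng.integers(0, 1000, size=100)
print(top5_accuracy(scores, labels))  # random guessing lands near 5/1000 = 0.005
```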
Li's initial reaction was incredulity: "Most of us saw the neural network as a dusty artifact encased in glass and protected by velvet ropes."

"This is proof"

Yann LeCun testifies before the US Senate in September. Credit: Photo by Kevin Dietsch/Getty Images

The ImageNet winners were scheduled to be announced at the European Conference on Computer Vision in Florence, Italy. Li, who had a baby at home in California, was planning to skip the event. But when she saw how well AlexNet had done on her dataset, she realized this moment would be too important to miss: "I settled reluctantly on a twenty-hour slog of sleep deprivation and cramped elbow room."

On an October day in Florence, Alex Krizhevsky presented his results to a standing-room-only crowd of computer vision researchers. Fei-Fei Li was in the audience. So was Yann LeCun.

Cade Metz reports that after the presentation, LeCun stood up and called AlexNet "an unequivocal turning point in the history of computer vision. This is proof."

The success of AlexNet vindicated Hinton's faith in neural networks, but it was arguably an even bigger vindication for LeCun.

AlexNet was a convolutional neural network, a type of neural network that LeCun had developed 20 years earlier to recognize handwritten digits on checks. (For more details on how CNNs work, see the in-depth explainer I wrote for Ars in 2018.) Indeed, there were few architectural differences between AlexNet and LeCun's image recognition networks from the 1990s.

AlexNet was simply far larger. In a 1998 paper, LeCun described a document-recognition network with seven layers and 60,000 trainable parameters. AlexNet had eight layers, but these layers had 60 million trainable parameters.
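Where do figures like 60,000 and 60 million come from? Trainable parameters are simply the weights and biases a network learns, and they can be tallied layer by layer. The sketch below uses shapes loosely modeled on AlexNet's first convolutional layer and first fully connected layer (my approximations, not the paper's exact bookkeeping):

```python
def conv_params(in_channels, out_channels, kernel_size):
    """Each output channel learns one small kernel per input channel, plus a bias."""
    return out_channels * (in_channels * kernel_size * kernel_size + 1)

def dense_params(in_features, out_features):
    """One weight per input-output pair, plus a bias per output."""
    return out_features * (in_features + 1)

# Roughly AlexNet-like shapes:
print(conv_params(3, 96, 11))    # a large first conv layer: ~35,000 parameters
print(dense_params(9216, 4096))  # one big fully connected layer: ~37.7 million
```

The comparison shows why the fully connected layers dominate the count: convolutional layers reuse the same small kernels across the whole image, while dense layers learn a separate weight for every input-output pair.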
LeCun could not have trained a model that large in the early 1990s because there were no computer chips with as much processing power as a 2012-era GPU. Even if LeCun had managed to build a big enough supercomputer, he would not have had enough images to train it properly. Collecting those images would have been hugely expensive in the years before Google and Amazon Mechanical Turk.

And this is why Fei-Fei Li's work on ImageNet was so consequential. She didn't invent convolutional networks or figure out how to make them run efficiently on GPUs. But she provided the training data that large neural networks needed to reach their full potential.

The technology world immediately recognized the importance of AlexNet. Hinton and his students formed a shell company with the goal to be "acquihired" by a big tech company. Within months, Google purchased the company for $44 million. Hinton worked at Google for the next decade while retaining his academic post in Toronto. Ilya Sutskever spent a few years at Google before becoming a cofounder of OpenAI.

AlexNet also made Nvidia GPUs the industry standard for training neural networks. In 2012, the market valued Nvidia at less than $10 billion. Today, Nvidia is one of the most valuable companies in the world, with a market capitalization north of $3 trillion. That high valuation is driven mainly by overwhelming demand for GPUs like the H100 that are optimized for training neural networks.

Sometimes the conventional wisdom is wrong

"That moment was pretty symbolic to the world of AI because three fundamental elements of modern AI converged for the first time," Li said in a September interview at the Computer History Museum. "The first element was neural networks. The second element was big data, using ImageNet. And the third element was GPU computing."

Today, leading AI labs believe the key to progress in AI is to train huge models on vast datasets. Big technology companies are in such a hurry to build the data centers required to train larger models that they've started to lease entire nuclear power plants to provide the necessary power.

You can view this as a straightforward application of the lessons of AlexNet. But I wonder if we ought to draw the opposite lesson from AlexNet: that it's a mistake to become too wedded to conventional wisdom.

Scaling laws have had a remarkable run in the 12 years since AlexNet, and perhaps we'll see another generation or two of impressive results as the leading labs scale up their foundation models even more.

But we should be careful not to let the lessons of AlexNet harden into dogma. I think there's at least a chance that scaling laws will run out of steam in the next few years. And if that happens, we'll need a new generation of stubborn nonconformists to notice that the old approach isn't working and try something different.

Tim Lee was on staff at Ars from 2017 to 2021. Last year, he launched a newsletter, Understanding AI, that explores how AI works and how it's changing our world. You can subscribe here.

Timothy B. Lee, Senior tech policy reporter

Timothy is a senior reporter covering tech policy and the future of transportation. He lives in Washington DC.