TIME
News and current events from around the globe. Since 1923.
  • Science & Technology
Recent Updates
  • TIME.COM
    OpenAI Wants to Go For-Profit. Experts Say Regulators Should Step In
    In the latest development in an ongoing struggle over OpenAI's future direction—and potentially the future of artificial intelligence itself—dozens of prominent figures are urging the Attorneys General of California and Delaware to block OpenAI’s controversial plan to convert from its unique nonprofit-controlled structure to a for-profit company. In a letter made public April 23, signatories including “AI Godfather” Geoffrey Hinton, Harvard legal professor Lawrence Lessig, and several former OpenAI researchers argue the move represents a fundamental betrayal of OpenAI’s founding mission. “The proposed restructuring would eliminate essential safeguards, effectively handing control of, and profits from, what could be the most powerful technology ever created to a for-profit entity with legal duties to prioritize shareholder returns,” the letter’s authors write. It lands as OpenAI faces immense pressure from the other side: failing to implement the restructure by the end of the year could cost the company $20 billion and hamstring future fundraising.
    OpenAI was founded in 2015 as a nonprofit, with its stated mission being to ensure that artificial general intelligence (AGI) “benefits all of humanity” rather than advancing “the private gain of any person.” AGI, which OpenAI defines as systems outperforming humans at most economically valuable work, was seen as potentially world-changing but also carrying clear risks, especially if controlled solely by a for-profit company. By 2019, believing they’d need to attract outside investment to build AGI, OpenAI’s leadership created a “capped-profit” subsidiary controlled by the original nonprofit—a hybrid that has allowed the firm to take on over $60 billion in capital over the years to become one of the most valuable startups in history. CEO Sam Altman himself testified to Congress in 2023 that this structure “ensures it remains focused on [its] long-term mission.”
    Then, in December, OpenAI proposed dismantling that unique arrangement, morphing its capped-profit arm into a public benefit corporation, which would take control of OpenAI’s operations and business. The original nonprofit, while relinquishing direct control, would become—through owning a significant equity stake in the new company—a massively endowed foundation; it would hire its own leadership to fund and pursue separate charitable work in fields such as science and education. OpenAI says the new arrangement would enable it to “raise the necessary capital with conventional terms like others in this space.” Indeed, the need for such terms appears already baked into recent deals: investors from OpenAI’s most recent $40 billion fundraising round, finalized in March, can withdraw half that amount if OpenAI doesn’t restructure by the end of this year.
    “Our Board has been very clear: our nonprofit will be strengthened and any changes to our existing structure would be in service of ensuring the broader public can benefit from AI. Our for-profit will be a public benefit corporation, similar to several other AI labs like Anthropic - where some of these former employees now work - and xAI, except that they do not support a nonprofit,” an OpenAI spokesperson told TIME via email. “This structure will continue to ensure that as the for-profit succeeds and grows, so too does the nonprofit, enabling us to achieve the mission.”
    Under the restructure, board members would still legally have to consider OpenAI’s founding mission—though the mission would be downgraded, weighed against profits.
    “The nonprofit has the authority to basically shut down the company if it thinks it’s deviating from [OpenAI’s] mission. Think of it as an off-switch,” Stuart Russell tells TIME. Russell, one of the letter’s signatories, is a UC Berkeley computer science professor who co-authored the field’s standard textbook. “Basically, they’re proposing to disable that off-switch,” he says.
    That OpenAI’s competitors are for-profit is beside the point, says Sunny Gandhi, vice president of political affairs at youth-led advocacy group Encode Justice and one of the letter’s signatories. “It’s sort of like asking a conservation nonprofit why they can’t convert to a logging company just because there are other logging companies out there,” he says. “I think that it would be great if xAI and Anthropic were also nonprofit, but they’re not,” he adds. If OpenAI wants to prioritize competitiveness over its original mission, Gandhi says, “that’s the problem that their original structure was trying to prevent.”
    The open letter’s targeting of Attorneys General Rob Bonta of California and Kathy Jennings of Delaware is strategic. In March, Elon Musk lost his bid for an immediate preliminary injunction that would block OpenAI’s conversion, but the decision turned largely on Musk’s questionable legal standing—or interest in the case—not the conversion’s inherent legality. The judge indicated Musk’s argument that the for-profit shift breaches OpenAI’s charitable charter is worthy of further consideration, expediting the trial to this fall. Unlike Musk, however, California and Delaware’s Attorneys General have a clear legal interest in the case.
    Bonta’s office is reportedly already investigating OpenAI’s plans, and Jennings has previously signaled she intends to scrutinize any restructuring. Neither responded to TIME’s request for comment on the letter specifically. But how they act may set a precedent, signaling whether corporate governance structures designed to preserve a company’s ideals can withstand the financial gravity of the AI gold rush, or will ultimately buckle under its weight.
  • TIME.COM
    Exclusive: AI Outsmarts Virus Experts in the Lab, Raising Biohazard Fears
    A new study claims that AI models like ChatGPT and Claude now outperform PhD-level virologists at problem-solving in wet labs, where scientists analyze chemicals and biological material. This discovery is a double-edged sword, experts say. Ultra-smart AI models could help researchers prevent the spread of infectious diseases. But non-experts could also weaponize the models to create deadly bioweapons.
    The study, shared exclusively with TIME, was conducted by researchers at the Center for AI Safety, MIT’s Media Lab, the Brazilian university UFABC, and the pandemic-prevention nonprofit SecureBio. The authors consulted virologists to create an extremely difficult practical test that measured the ability to troubleshoot complex lab procedures and protocols. While PhD-level virologists scored an average of 22.1% in their declared areas of expertise, OpenAI’s o3 reached 43.8% accuracy. Google’s Gemini 2.5 Pro scored 37.6%.
    Seth Donoughe, a research scientist at SecureBio and a co-author of the paper, says that the results make him a “little nervous,” because for the first time in history, virtually anyone has access to a non-judgmental AI virology expert that might walk them through complex lab processes to create bioweapons. “Throughout history, there are a fair number of cases where someone attempted to make a bioweapon—and one of the major reasons why they didn’t succeed is because they didn’t have access to the right level of expertise,” he says. “So it seems worthwhile to be cautious about how these capabilities are being distributed.”
    Months ago, the paper’s authors sent the results to the major AI labs. In response, xAI published a risk management framework pledging its intention to implement virology safeguards for future versions of its AI model Grok. OpenAI told TIME that it “deployed new system-level mitigations for biological risks” for its new models released last week. Anthropic included model performance results on the paper in recent system cards, but did not propose specific mitigation measures. Google declined to comment to TIME.
    AI in biomedicine
    Virology and biomedicine have long been at the forefront of AI leaders’ motivations for building ever more powerful AI models. “As this technology progresses, we will see diseases get cured at an unprecedented rate,” OpenAI CEO Sam Altman said at the White House in January while announcing the Stargate project. There have been some encouraging signs in this area. Earlier this year, researchers at the University of Florida’s Emerging Pathogens Institute published an algorithm capable of predicting which coronavirus variant might spread the fastest.
    But up to this point, there had not been a major study dedicated to analyzing AI models’ ability to actually conduct virology lab work. “We’ve known for some time that AIs are fairly strong at providing academic-style information,” says Donoughe. “It’s been unclear whether the models are also able to offer detailed practical assistance. This includes interpreting images, information that might not be written down in any academic paper, or material that is socially passed down from more experienced colleagues.”
    So Donoughe and his colleagues created a test specifically for these difficult, non-Google-able questions. “The questions take the form: ‘I have been culturing this particular virus in this cell type, in these specific conditions, for this amount of time. I have this amount of information about what’s gone wrong. Can you tell me what is the most likely problem?’” Donoughe says.
    And virtually every AI model outperformed PhD-level virologists on the test, even within the virologists’ own areas of expertise. The researchers also found that the models showed significant improvement over time. Anthropic’s Claude 3.5 Sonnet, for example, jumped from 26.9% to 33.6% accuracy from its June 2024 model to its October 2024 model. And a preview of OpenAI’s GPT-4.5 in February outperformed GPT-4o by almost 10 percentage points. “Previously, we found that the models had a lot of theoretical knowledge, but not practical knowledge,” Dan Hendrycks, the director of the Center for AI Safety, tells TIME. “But now, they are getting a concerning amount of practical knowledge.”
    Risks and rewards
    If AI models are indeed as capable in wet lab settings as the study finds, then the implications are massive. In terms of benefits, AIs could help experienced virologists in their critical work fighting viruses. Tom Inglesby, the director of the Johns Hopkins Center for Health Security, says that AI could assist with accelerating the timelines of medicine and vaccine development and improving clinical trials and disease detection. “These models could help scientists in different parts of the world, who don’t yet have that kind of skill or capability, to do valuable day-to-day work on diseases that are occurring in their countries,” he says. For instance, one group of researchers found that AI helped them better understand hemorrhagic fever viruses in sub-Saharan Africa.
    But bad-faith actors can now use AI models to walk them through how to create viruses—and will be able to do so without any of the typical training required to access a Biosafety Level 4 (BSL-4) laboratory, which deals with the most dangerous and exotic infectious agents. “It will mean a lot more people in the world with a lot less training will be able to manage and manipulate viruses,” Inglesby says.
    Hendrycks urges AI companies to put up guardrails to prevent this type of usage. “If companies don’t have good safeguards for these within six months’ time, that, in my opinion, would be reckless,” he says. Hendrycks says that one solution is not to shut these models down or slow their progress, but to make them gated, so that only trusted third parties get access to their unfiltered versions. “We want to give the people who have a legitimate use for asking how to manipulate deadly viruses—like a researcher at the MIT biology department—the ability to do so,” he says. “But random people who made an account a second ago don’t get those capabilities.” And AI labs should be able to implement these types of safeguards relatively easily, Hendrycks says. “It’s certainly technologically feasible for industry self-regulation,” he says. “There’s a question of whether some will drag their feet or just not do it.”
    xAI, Elon Musk’s AI lab, published a risk management framework memo in February, which acknowledged the paper and signaled that the company would “potentially utilize” certain safeguards around answering virology questions, including training Grok to decline harmful requests and applying input and output filters. OpenAI, in an email to TIME on Monday, wrote that its newest models, o3 and o4-mini, were deployed with an array of biological-risk-related safeguards, including blocking harmful outputs. The company wrote that it ran a thousand-hour red-teaming campaign in which 98.7% of unsafe bio-related conversations were successfully flagged and blocked.
"We value industry collaboration on advancing safeguards for frontier models, including in sensitive domains like virology," a spokesperson wrote. "We continue to invest in these safeguards as capabilities grow."Inglesby argues that industry self-regulation is not enough, and calls for lawmakers and political leaders to strategize a policy approach to regulating AI’s bio risks. “The current situation is that the companies that are most virtuous are taking time and money to do this work, which is good for all of us, but other companies don't have to do it,” he says. “That doesn't make sense. It's not good for the public to have no insights into what's happening.”“When a new version of an LLM is about to be released,” Inglesby adds, “there should be a requirement for that model to be evaluated to make sure it will not produce pandemic-level outcomes.”
  • TIME.COM
    Exclusive: Every AI Datacenter Is Vulnerable to Chinese Espionage, Report Says
    Tech companies are investing hundreds of billions of dollars to build new U.S. datacenters where—if all goes to plan—radically powerful new AI models will be brought into existence.
    But all of these datacenters are vulnerable to Chinese espionage, according to a report published Tuesday. At risk, the authors argue, is not just tech companies’ money, but also U.S. national security amid the intensifying geopolitical race with China to develop advanced AI.
    The unredacted report was circulated inside the Trump White House in recent weeks, according to its authors. TIME viewed a redacted version ahead of its public release. The White House did not respond to a request for comment.
    Today’s top AI datacenters are vulnerable to both asymmetrical sabotage—where relatively cheap attacks could disable them for months—and exfiltration attacks, in which closely guarded AI models could be stolen or surveilled, the report’s authors warn. Even the most advanced datacenters currently under construction—including OpenAI’s Stargate project—are likely vulnerable to the same attacks, the authors tell TIME.
    “You could end up with dozens of datacenter sites that are essentially stranded assets that can’t be retrofitted for the level of security that’s required,” says Edouard Harris, one of the authors of the report. “That’s just a brutal gut-punch.”
    The report was authored by brothers Edouard and Jeremie Harris of Gladstone AI, a firm that consults for the U.S. government on AI’s security implications. During their year-long research period, they visited a datacenter operated by a top U.S. technology company alongside a team of former U.S. special forces who specialize in cyberespionage. In speaking with national security officials and datacenter operators, the authors say, they learned of one instance in which a top U.S. tech company’s AI datacenter was attacked and intellectual property was stolen. They also learned of another instance in which a similar datacenter was targeted in an attack against a specific unnamed component that, had the attack succeeded, would have knocked the entire facility offline for months.
    The report addresses calls from some in Silicon Valley and Washington to begin a “Manhattan Project” for AI, aimed at developing what insiders call superintelligence: an AI technology so powerful that it could be used to gain a decisive strategic advantage over China. All the top AI companies are attempting to develop superintelligence—and in recent years both the U.S. and China have woken up to its potential geopolitical significance. Although hawkish in tone, the report does not advocate for or against such a project. Instead, it says that if one were to begin today, existing datacenter vulnerabilities could doom it from the start. “There’s no guarantee we’ll reach superintelligence soon,” the report says. “But if we do, and we want to prevent the [Chinese Communist Party] from stealing or crippling it, we need to start building the secure facilities for it yesterday.”
    China Controls Key Datacenter Parts
    Many critical components for modern datacenters are mostly or exclusively built in China, the report points out. And due to the booming datacenter industry, many of these parts are on multi-year back orders. What that means is that an attack on the right critical component can knock a datacenter offline for months—or longer. Some of these attacks, the report claims, can be incredibly asymmetric.
    One such potential attack—the details of which are redacted in the report—could be carried out for as little as $20,000, and if successful could knock a $2 billion datacenter offline for between six months and a year.
    China, the report points out, is likely to delay shipment of components necessary to fix datacenters brought offline by these attacks, especially if it considers the U.S. to be on the brink of developing superintelligence. “We should expect that the lead times on China-sourced generators, transformers, and other critical data center components will start to lengthen mysteriously beyond what they already are today,” the report says. “This will be a sign that China is quietly diverting components to its own facilities, since after all, they control the industrial base that is making most of them.”
    AI Labs Struggle With Basic Security, Insiders Warn
    The report says that neither existing datacenters nor AI labs themselves are secure enough to prevent AI model weights—essentially their underlying neural networks—from being stolen by nation-state-level attackers. The authors cite a conversation with a former OpenAI researcher who described two vulnerabilities that would allow attacks like that to happen—one of which had been reported on the company’s internal Slack channels, but was left unaddressed for months. The specific details of the attacks are not included in the version of the report viewed by TIME.
    An OpenAI spokesperson said in a statement: “It’s not entirely clear what these claims refer to, but they appear outdated and don’t reflect the current state of our security practices. We have a rigorous security program overseen by our Board’s Safety and Security Committee.”
    The report’s authors acknowledge that things are slowly getting better. “According to several researchers we spoke to, security at frontier AI labs has improved somewhat in the past year, but it remains completely inadequate to withstand nation-state attacks,” the report says. “According to former insiders, poor controls at many frontier AI labs originally stem from a cultural bias towards speed over security.”
    Independent experts agree many problems remain. “There have been publicly disclosed incidents of cyber gangs hacking their way to the [intellectual property] assets of Nvidia not that long ago,” Greg Allen, the director of the Wadhwani AI Center at the Washington think tank the Center for Strategic and International Studies, tells TIME in a message. “The intelligence services of China are far more capable and sophisticated than those gangs. There’s a bad offense/defense mismatch when it comes to Chinese attackers and U.S. AI firm defenders.”
    Superintelligent AI May Break Free
    A third crucial vulnerability identified in the report is the susceptibility of datacenters—and AI developers—to powerful AI models themselves. In recent months, studies by leading AI researchers have shown top AI models beginning to exhibit both the drive, and the technical skill, to “escape” the confines placed on them by their developers.
    In one example cited in the report, during testing, an OpenAI model was given the task of retrieving a string of text from a piece of software. But due to a bug in the test, the software didn’t start. The model, unprompted, scanned the network in an attempt to understand why—and discovered a vulnerability on the machine it was running on.
    It used that vulnerability, also unprompted, to break out of its test environment and recover the string of text that it had initially been instructed to find.
    “As AI developers have built more capable AI models on the path to superintelligence, those models have become harder to correct and control,” the report says. “This happens because highly capable and context-aware AI systems can invent dangerously creative strategies to achieve their internal goals that their developers never anticipated or intended them to pursue.”
    The report recommends that any effort to develop superintelligence must develop methods for “AI containment,” and allow leaders responsible for developing such precautions to block the development of more powerful AI systems if they judge the risk to be too high. “Of course,” the authors note, “if we’ve actually trained a real superintelligence that has goals different from our own, it probably won’t be containable in the long run.”
  • TIME.COM
    Demis Hassabis Is Preparing for AI’s Endgame
    This story is part of the 2025 TIME100. Read Jennifer Doudna’s tribute to Demis Hassabis here.
    Demis Hassabis learned he had won the 2024 Nobel Prize in Chemistry just 20 minutes before the world did. The CEO of Google DeepMind, the tech giant’s artificial intelligence lab, received a phone call with the good news at the last minute, after a failed attempt by the Nobel Foundation to find his contact information in advance. “I would have got a heart attack,” Hassabis quips, had he learned about the prize from the television. Receiving the honor was a “lifelong dream,” he says, one that “still hasn’t sunk in” when we meet five months later.
    Hassabis received half of the award alongside a colleague, John Jumper, for the design of AlphaFold: an AI tool that can predict the 3D structure of proteins using only their amino acid sequences—something Hassabis describes as a “50-year grand challenge” in the field of biology. Released freely by Google DeepMind for the world to use five years ago, AlphaFold has revolutionized the work of scientists toiling on research as varied as malaria vaccines, human longevity, and cures for cancer, allowing them to model protein structures in hours rather than years. The Nobel Prizes in 2024 were the first in history to recognize the contributions of AI to the field of science. If Hassabis gets his way, they won’t be the last.
    AlphaFold’s impact may have been broad enough to win its creators a Nobel Prize, but in the world of AI, it is seen as almost hopelessly narrow. It can model the structures of proteins but not much else; it has no understanding of the wider world, cannot carry out research, nor can it make its own scientific breakthroughs. Hassabis’s dream, and the wider industry’s, is to build AI that can do all of those things and more, unlocking a future of almost unimaginable wonder. All human diseases will be a thing of the past if this technology is created, he says. Energy will be zero-carbon and free, allowing us to transcend the climate crisis and begin restoring our planet’s ecosystems. Global conflicts over scarce resources will dissipate, giving way to a new era of peace and abundance. “I think some of the biggest problems that face us today as a society, whether that’s climate or disease, will be helped by AI solutions,” Hassabis says. “I’d be very worried about society today if I didn’t know that something as transformative as AI was coming down the line.”
    This hypothetical technology—known in the industry as Artificial General Intelligence, or AGI—had long been seen as decades away. But the fast pace of breakthroughs in computer science over the last few years has led top AI scientists to radically revise their expectations of when it will arrive. Hassabis predicts AGI is somewhere between five and 10 years away—a rather pessimistic view when judged by industry standards. OpenAI CEO Sam Altman has predicted AGI will arrive within Trump’s second term, while Anthropic CEO Dario Amodei says it could come as early as 2026. Partially underlying these different predictions is a disagreement over what AGI means. OpenAI’s definition, for instance, is rooted in cold business logic: a technology that can perform most economically valuable tasks better than humans can. Hassabis has a different bar, one focused instead on scientific discovery. He believes AGI would be a technology that could not only solve existing problems, but also come up with entirely new explanations for the universe.
    A test for its existence might be whether a system could come up with general relativity with only the information Einstein had access to.
    [Photograph by David Vintiner for TIME]
    In an AI industry whose top ranks are populated mostly by businessmen and technologists, that identity sets Hassabis apart. Yet he must still operate in a system where market logic is the driving force. Creating AGI will require hundreds of billions of dollars’ worth of investment—dollars that Google is happily plowing into Hassabis’s DeepMind unit, buoyed by the promise of a technology that can do anything and everything. Whether Google will ensure that AGI, if it comes, benefits the world remains to be seen; Hassabis points to the decision to release AlphaFold for free as a symbol of its benevolent posture. But Google is also a company that must legally act in the best interests of its shareholders, and consistently releasing expensive tools for free is not a long-term profitable strategy. The financial promise of AI—for Google and for its competitors—lies in controlling a technology capable of automating much of the labor that drives the more than $100 trillion global economy. Capture even a small fraction of that value, and your company will become one of the most profitable the world has ever seen. Good news for shareholders, but bad news for regular workers who may find themselves suddenly unemployed.
    So far, Hassabis has successfully steered Google’s multibillion-dollar AI ambitions toward the type of future he wants to see: one focused on scientific discoveries that, he hopes, will lead to radical social uplift. But will this former child chess prodigy be able to maintain his scientific idealism as AI reaches its high-stakes endgame? His track record reveals one reason to be skeptical. When DeepMind was acquired by Google in 2014, Hassabis insisted on a contractual firewall: a clause explicitly prohibiting his technology from being used for military applications. It was a red line that reflected his vision of AI as humanity’s scientific savior, not a weapon of war. But multiple corporate restructures later, that protection has quietly disappeared. Today, the same AI systems developed under Hassabis’s watch are being sold, via Google, to militaries such as Israel’s—whose campaign in Gaza has killed tens of thousands of civilians.
    When pressed, Hassabis denies that this was a compromise made in order to maintain his access to Google’s computing power and thus realize his dream of developing AGI. Instead, he frames it as a pragmatic response to geopolitical reality, saying DeepMind changed its stance after acknowledging that the world had become “a much more dangerous place” in the last decade. “I think we can’t take for granted anymore that democratic values are going to win out,” he says. Whether or not this justification is honest, it raises an uncomfortable question: If Hassabis couldn’t maintain his ethical red line when AGI was just a distant promise, what compromises might he make when it comes within touching distance?
    To get to Hassabis’s dream of a utopian future, the AI industry must first navigate its way through a dark forest full of monsters. Artificial intelligence is a dual-use technology like nuclear energy: it can be used for good, but it could also be terribly destructive. Hassabis spends much of his time worrying about risks, which generally fall into two different buckets.
    One is the possibility of systems that can meaningfully enhance the capabilities of bad actors to wreak havoc in the world; for example, by endowing rogue nations or terrorists with the tools they need to synthesize a deadly virus. Preventing risks like that, Hassabis believes, means carefully testing AI models for dangerous capabilities, and only gradually releasing them to more users with effective guardrails. It means keeping the “weights” of the most powerful models (essentially their underlying neural networks) out of the public’s hands altogether, so that models can be withdrawn from public use if dangers are discovered after release. That’s a safety strategy that Google follows but which some of its competitors, such as DeepSeek and Meta, do not.
    The second category of risks may seem like science fiction, but they are taken seriously inside the AI industry as model capabilities advance. These are the risks of AI systems acting autonomously—such as a chatbot deceiving its human creators, or a robot attacking the person it was designed to help. Language models like DeepMind’s Gemini are essentially grown from the ground up, rather than written by hand like old-school computer programs, and so computer scientists and users are constantly finding ways to elicit new behaviors from what are best understood as incredibly mysterious and complex artifacts. The question of how to ensure that they always behave and act in ways that are “aligned” to human values is an unsolved scientific problem. Early signs of misaligned behaviors, like strategic lying, have already been identified by researchers working with today’s language models. Those problems are only likely to become more acute as models get better. “How do we ensure that we can stay in charge of those systems, control them, interpret what they’re doing, understand them, and put the right guardrails in place that are not movable by very highly capable self-improving systems?” Hassabis says. “That is an extremely difficult challenge.”
    It’s a devilish technical problem—but what really keeps Hassabis up at night are the political coordination challenges that accompany it. Even if well-meaning companies can make safe AIs, that doesn’t by itself stop the creation and proliferation of unsafe AIs. Stopping that will require international collaboration—something that’s becoming increasingly difficult as Western alliances fray and geopolitical tensions between the U.S. and China rise. Hassabis has played a significant role in the three AI summits held by global governments since 2023, and says he would like to see more of that kind of cooperation. He says the U.S. government’s export controls on AI chips, intended to prevent China’s AI industry from surpassing Silicon Valley, are “fine”—but he would prefer to avoid political choices that “end up in an antagonistic kind of situation.”
    He might be out of luck. As both the U.S. and China have woken up in recent years to the potential power of AGI, the climate of global cooperation—which reached a high watermark with the first AI Safety Summit in 2023—has given way to a new kind of realpolitik. In this new era, with nations racing to militarize AI systems and build up stockpiles of chips, and with a new cold war brewing between the U.S. and China, Hassabis still holds out hope that competing nations and companies can find ways to set aside their differences and cooperate, at least on AI safety. “It’s in everyone’s self-interest to make sure that goes well,” he says.
    Even if the world can find a way to safely navigate through the geopolitical turmoil of AGI’s arrival, the question of labor automation will rear its head. When governments and companies no longer rely on humans to generate their wealth, what leverage will citizens have left to demand the ingredients of democracy and a comfortable life? AGI might create abundance, but it won’t dispel the incentives for companies and states to amass resources and compete with rivals. Hassabis admits he is better at forecasting technological futures than social and economic ones; he says he wishes more economists would take the possibility of near-term AGI seriously. Still, he thinks it’s inevitable we’ll need a “new political philosophy” to organize society in this world. Democracy, he says, “is not a panacea, by any means,” and might have to give way to “something better.”
    [Photo: Hassabis, left, captaining the England under-11s chess team at the age of 9. Courtesy Demis Hassabis]
    Automation, meanwhile, is already on the horizon. In March, DeepMind announced Gemini 2.5, the latest version of its flagship AI model, which outperforms rival models made by OpenAI and Anthropic on many popular metrics. Hassabis is currently hard at work on Project Astra, a DeepMind effort to build a universal digital assistant powered by Gemini. That work, he says, is not intended to hasten labor disruptions, but instead is about building the necessary scaffolding for the type of AI that he hopes will one day make its own scientific discoveries. Still, as research into these AI “agents” progresses, Hassabis says, expect them to be able to carry out increasingly complex tasks independently. (An AI agent that can meaningfully automate the job of further AI research, he predicts, is “a few years away.”) For the first time, Google is also now using these digital brains to control robot bodies: in March the company announced a Gemini-powered android robot that can carry out embodied tasks like playing tic-tac-toe, or making its human a packed lunch. The tone of the video announcing Gemini Robotics was friendly, but its connotations were not lost on some YouTube commenters: “Nothing to worry [about,] humanity, we are only developing robots to do tasks a 5 year old can do,” one wrote. “We are not working on replacing humans or creating robot armies.”
    Hassabis acknowledges the social impacts of AI are likely to be significant. People must learn how to use new AI models, he says, in order to excel professionally in the future and not risk getting left behind. But he is also confident that if we eventually build AGI capable of doing productive labor and scientific research, the world that it ushers into existence will be abundant enough to ensure a substantial increase in quality of life for everybody. “In the limited-resource world which we’re in, things ultimately become zero-sum,” Hassabis says. “What I’m thinking about is a world where it’s not a zero-sum game anymore, at least from a resource perspective.”
    Five months after his Nobel Prize, Hassabis’s journey from chess prodigy to Nobel laureate now leads toward an uncertain future. The stakes are no longer just scientific recognition—but potentially the fate of human civilization. As DeepMind’s machines grow more capable, as corporate and geopolitical competition over AI intensifies, and as the economic impacts loom larger, Hassabis insists that we might be on the cusp of an abundant economy that benefits everyone.
But in a world where AGI could bring unprecedented power to those who control it, the forces of business, geopolitics, and technological power are all bearing down with increasing pressure. If Hassabis is right, the turbulent decades of the early 21st century could give way to a shining utopia. If he has miscalculated, the future could be darker than anyone dares imagine. One thing is for sure: in his pursuit of AGI, Hassabis is playing the highest-stakes game of his life.
  • TIME.COM
    Exclusive Clip: In War-Torn Kyiv, Vitalik Buterin Makes the Case for Crypto’s Future
    Vitalik Buterin, 31, is one of crypto’s most important figures. As the founder of Ethereum, he pioneered the idea that crypto and blockchains could serve larger purposes beyond money. But Buterin also sought to diminish his own role within the ecosystem, and encouraged his community to think much bigger than their own short-term gains. A new documentary, Vitalik: An Ethereum Story, tells his story, following him around the world as he confronts difficult technical problems and evangelizes a strange new technology that attempts to reorient the world around its decentralized value system.
    “In our minds, tech is never neutral: it’s a reflection of the values and blind spots of its creators,” says co-director Chris Temple. “As we head into conversations around AI and crypto, Vitalik models a new kind of leadership compared to the tech leaders that we’re used to—the Elon Musks, Mark Zuckerbergs, and Jeff Bezoses of the world—that are structured within these centralized organizations and decision-making apparatuses.”
    Temple and co-director Zach Ingrasci filmed Buterin over two years, and much of their footage didn’t make it into the 86-minute film. That includes one scene in Kyiv, Ukraine, in which Buterin plays chess with Mykhailo Fedorov, Ukraine’s Minister of Digital Transformation. The scene, published exclusively by TIME, offers a fascinating peek into Buterin’s ideological leanings, and his acute desire for crypto to have real-world use cases beyond speculation.
    Buterin traveled to Ukraine in September 2022, six months after Russia’s invasion. While Buterin was born in Russia, he staunchly opposed the invasion and personally donated millions to Ukrainian relief efforts. Thanks in part to his vocal support, almost $100 million in crypto poured into Ukraine in the first couple of weeks of the invasion, offering fast relief and easy, direct transactions.
    The deleted scene shows Fedorov and Buterin talking over a game of chess. In the early days of the invasion, Fedorov tells him, the country’s national bank had banned international transactions, so the Ukrainian government instead used crypto to receive funding and buy weapons and military supplies. “All of the first drones, lots of ammunition, arrived thanks to crypto,” Fedorov tells him. “We saved the lives of hundreds—maybe thousands of our military. So it was highly important.”
    Buterin responds: “We love Ukraine… For the blockchain community itself, this was the first opportunity to make a real difference with blockchain and cryptocurrency.” Later, when Fedorov points out that Ukrainians continue to live in the war-torn country, Buterin adds: “I think continuing real life means, even if I’m a good person, to show Putin a middle finger.”
    Ingrasci says the clip didn’t make it into the final cut because the moment it depicts is referenced in other ways. “But I think it’s the most important moment for Vitalik in our journey with him, because it’s this real-world use case of how crypto can really make the world a better place,” he says.
    Vitalik: An Ethereum Story is available on VOD on April 15. TIME Studios served as one of the film’s production companies. Andrew R. Chow’s book about crypto and Sam Bankman-Fried, Cryptomania, was published in August.
  • TIME.COM
    Trump Wants Tariffs to Bring Back U.S. Jobs. They Might Speed Up AI Automation Instead
    Announcing his tariffs in the White House Rose Garden last week, President Trump said the move would help reopen shuttered car factories in Michigan and bring various other jobs back to the U.S. “The president wants to increase manufacturing jobs here in the United States of America,” Press Secretary Karoline Leavitt added on Tuesday. “He wants them to come back home.”
    But rather than enticing companies to create new jobs in the U.S., economists say, the new tariffs—bolstered by recent advancements in artificial intelligence and robotics—could instead increase incentives for companies to automate human labor entirely. “There’s no reason whatsoever to believe that this is going to bring back a lot of jobs,” says Carl Benedikt Frey, an economist and professor of AI & work at Oxford University. “Costs are higher in the United States. That means there’s an even stronger economic incentive to find ways of automating even more tasks.”
    In other words: when labor costs are low—like they are in Vietnam—it’s usually not worth it for companies to invest in the expensive up-front costs of automating human labor. But if companies are forced to move their labor to more expensive countries, like the U.S., that cost-benefit calculation changes drastically.
    To be sure, experts note that tariffs may not immediately lead to more automation. Automating manufacturing jobs often requires companies to make significant investments in physical machinery, which tariffs are likely to make more expensive. In a time of economic turmoil, companies also usually hold off on making big capital expenditures. Thus, in the short run, Nobel Prize-winning economist Daron Acemoglu predicts, there is likely to be so much disruption that few companies will invest in automation or much else. But if tariffs persist in the medium term, Acemoglu tells TIME, he expects companies “will have no choice but to bring some of their supply chains back home—but they will do it via AI and robots.”
    The evidence from the last time Trump imposed tariffs on trading partners, in 2018, shows no major increase in automation as a result. (Those tariffs did in fact lead to job losses in affected industries anyway, a Federal Reserve study found, due to higher production costs and reduced export competitiveness.) But some economists think the 2025 tariffs could be different—incentivizing more automation—because AI and robotics have come a long way since 2018. “Our technological capabilities have improved since the last round of tariffs, particularly because of improvements in AI,” says Frey, the Oxford economist.
    The rise of robotics
    For years, a major limitation of robots was that they couldn’t adapt to even minor changes in their environments. An industrial robot might be able to carry out a repeatable task in a controlled environment easily—like cutting a car door from a sheet of metal—but for more deft tasks in more complex environments, humans still prevailed.
    That might not be the case for much longer. Robot “brains” are getting more adaptable, thanks to progress in general AI systems like large language models. Robot bodies are becoming more dexterous, thanks to investment and research by companies like Boston Dynamics. And robots are getting cheaper to produce over time (although tariffs might temporarily reverse that trend). “It has taken some time, but people have been doing research on taking language models’ ability for common-sense understanding, and applying it to robotics,” says Lucas Hansen, co-founder of CivAI, a non-profit.
    “It doesn’t require much special effort to apply robots to new purposes now, especially once this technology matures a bit more. So if you’re a mid-sized manufacturing operation, previously you would have had to invest tons of money in R&D to automate everything. But now, it will require a lot less marginal effort.”
    Acemoglu is more skeptical. Robots, he says, still struggle in complex environments, even if flashy corporate demo videos suggest otherwise. “I wouldn’t be optimistic that it’s a quick problem to be solved,” he says, predicting that flexible robots are at least 10 years away.
    If tariffs lead to more automation, it’s still unlikely that productivity gains will offset the huge losses stemming from supply chain disruption and added import costs. “The main first-order effect of tariffs is they will make everything less efficient,” says Erik Brynjolfsson, the director of the Digital Economy Lab at Stanford University. “When you throw sand in the gears of supply chains and global trade, we’re all just going to be a little bit poorer.”
    The Trump Administration has said it wants AI to benefit American workers, rather than replace them. “We refuse to view AI as a purely disruptive technology that will inevitably automate away our labor force,” Vice President JD Vance said in February. “We believe and we will fight for policies that ensure that AI is going to make our workers more productive, and we expect that they will reap the rewards.”
    But past experiences with new technologies in the workplace suggest that rosy vision is unlikely to come to pass, says Brian Merchant, a labor historian and author of Blood in the Machine. “Historically when there is a downturn, if there is an opportunity to automate, then companies will take it. That doesn’t necessarily mean that you’ll use fewer humans, but it does mean that employers have a chance to break through labor protections and gain more leverage.”
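    Frey's cost-benefit point can be made concrete with back-of-the-envelope payback arithmetic. The sketch below uses invented figures purely for illustration; the logic, not the numbers, is the point.
```python
# Illustrative payback-period arithmetic for automating one factory role.
# All dollar figures are hypothetical, chosen only to show why higher
# labor costs shorten the payback period and strengthen the incentive
# to automate.

def payback_years(robot_cost: float, annual_wage: float,
                  annual_maintenance: float) -> float:
    """Years until the robot's capital cost is recovered from wage savings."""
    annual_saving = annual_wage - annual_maintenance
    return robot_cost / annual_saving

ROBOT_COST = 250_000     # up-front automation cost (assumed)
MAINTENANCE = 10_000     # yearly upkeep (assumed)

# Hypothetical fully loaded annual wages for the same task:
vietnam_wage = 6_000
us_wage = 60_000

# In the low-wage case the "saving" is negative: automation never pays back.
print(payback_years(ROBOT_COST, vietnam_wage, MAINTENANCE))  # -62.5 (never)
print(payback_years(ROBOT_COST, us_wage, MAINTENANCE))       # 5.0 years
```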
  • TIME.COM
    How Rare Earths Are Playing a Pivotal Role in the U.S.-China Trade War
    In response to Donald Trump’s escalating tariffs, China retaliated in part by placing export restrictions on a slew of rare earth elements. These powerful materials are crucial to the U.S. because they underpin the creation of weapons, computer chips, and electric cars. China produces a majority of these rare earth materials—and experts say that the U.S. is years away from building its own supply chain. As the U.S.–China trade war ramps up, rare earths are among the most important pieces of leverage that China controls.
    There are many reasons why China would not want to shut off U.S. access to rare earths completely, most notably that the country makes a lot of money from exporting them. But if China decides to further choke off its supply, the ripple effects could be extremely painful across many industries, says Lyle Trytten, a critical minerals expert. “The U.S. does not have the means to create the materials it needs to create the devices it survives on,” he says.
    Rare earths’ importance
    The importance of rare earths has only increased over the years, due to the world’s reliance on ever more powerful computers and its search for cleaner energy. Dysprosium and terbium, for example, are found in smartphones’ vibration units. Neodymium powers the motors of electric vehicles. Tungsten, an ultra-hard metal, is used in ammunition, semiconductor chips, and alloys found in jet engines and deep-drilling rigs.
    Almost all of these materials are mined and processed by China, which has spent decades aggressively building the infrastructure to do so. As a result, many companies, including Tesla and Apple, source their rare earths from China. Recently, China has not hesitated to wield this dominance as a geopolitical bargaining tool. In 2010, China halted rare-earth exports to Japan amid rising tensions. Over the past two years, Beijing has imposed curbs on other critical minerals, such as gallium, germanium, and graphite. “It’s pretty predictable now that once the U.S. pulls something—whether it’s an export control on a particular technology or a tariff—this is China’s chosen weapon,” says Fabian Villalobos, an engineer at RAND. “Critically, the separation of heavy rare earths from the light rare earths is where China has a dominance, and therefore there’s a vulnerability in the supply chain.”
    The White House signaled its understanding of the fragility of the current ecosystem when it exempted critical minerals from its tariffs regime this month. But that did not stop China from issuing export controls on seven kinds of rare earth elements, to all countries, on Friday. The decision is not a ban, but it does give Beijing oversight and control over access to those rare earth elements. China said that its export controls will not affect the rare earth supply chain. Crucially, China omitted several of the most coveted rare earth elements, including neodymium and praseodymium. But the controls show that China is willing to use these materials as a bargaining chip and could escalate the restrictions if tensions increase. “Consider this an opening shot across the bow,” says Trytten.
    The listed elements also include those found in microchips used for AI—a further indication of the ongoing AI arms race between the two countries. Villalobos says that in the short term, there will likely be a slowdown of rare earth exports as companies apply for licenses to adhere to the export controls. “You might see a temporary dip in exports, and then a ramping up as more companies get their licenses,” he says.
    But Villalobos says the greater threat to U.S. companies could come afterward, once China starts collecting detailed information about the rare earth market—which then gives China the ability to impose damaging sanctions on specific companies. That could include U.S. defense companies like Lockheed Martin, which needs rare earths for components in missile systems and fighter jets. “This is the danger: The more information you can gather from exporters, the more you can target specific companies that you don’t want getting access to rare earths,” he says.
    U.S. capacity
    Many experts have long called for the U.S. to wean itself off this dependence. Some believe that the solution is to mine rare earths on the moon. Other entrepreneurs have started projects building mines and processing facilities across America. Trump’s tariffs, then, could incentivize these kinds of shifts, forcing American companies to build up supply-chain resilience. “Maybe it will move the ball on investments, which is one of the big barriers to diversifying critical mineral supply chains,” Villalobos says.
    But rare earths and other minerals are extremely intensive to process—and the U.S. does not have the infrastructure to scale these efforts quickly, Trytten says. The number of graduates of U.S. mining engineering programs has steadily declined over the last few decades, potentially leading to a lack of expertise. Trytten says that there is danger in rushing new mining projects into production. “The history of our industry in the metal space is that when we try to do things fast, we tend to do them poorly,” he says.
    Because of these factors, Trytten contends that even if a new wave of mining projects is kickstarted now, they will not come to fruition until long after Trump has left the White House. “Call it eight to 10 years before you have significant new capacity for a lot of these raw materials,” Trytten says. “Can he weather the storm that long?”
    Other experts say that various other parts of Trump’s tariffs make it hard for companies to scale up their stateside infrastructure. On the Rare Earth Exchanges podcast, the entrepreneur Daniel O’Connor said that tariffed materials like steel and aluminum are crucial to mining and processing. “Let’s not do tariffs on things we need to build our infrastructure,” he said.
    Rare earths in Greenland?
    Some have speculated that rare earths play a major role in Donald Trump’s interest in Greenland. Tech giants like Bill Gates and Jeff Bezos have invested in companies prospecting for rare earths there. But extracting resources from Greenland poses many challenges. “Greenland has very little domestic energy production, and you can find those resources pretty much anywhere,” Trytten says. “There are much easier mining locations than the Arctic.”
    Regardless of whether Greenland is a viable option, many U.S. companies are now being forced to pursue non-Chinese rare earth options, even if it takes them years to develop. “Think about every automated thing: If you push a button and it moves, it’s probably reliant on some sort of rare earth magnet,” Villalobos says. “Whoever makes that, if they’re in the U.S., Japan, or anywhere outside of China, they’re going to feel the impact from this—and they might be potential targets for sanctions in the future.”
  • TIME.COM
    How Trump’s Tariffs Could Make AI Development More Expensive
    Stocks in AI companies were among the biggest losers after President Trump announced sweeping tariffs on foreign trading partners last week, in a sign that those tariffs could be bad news for the industry. The companies at the forefront of the AI industry are currently spending hundreds of billions of dollars on building new datacenters to train AI models. Tariffs will increase the already gargantuan costs of those efforts, analysts say.
    “The tariffs will make building AI datacenters much more expensive, both because AI servers are largely imported and will face tariffs, at least until supply chains can be rejigged, and because much of the other equipment in datacenters, like the cooling and power infrastructure, is imported as well,” says Chris Miller, author of Chip War.
    Chips themselves, the key computing hardware inside AI datacenters, are exempt from Trump’s tariffs—but only if they are imported to the U.S. as standalone products. Most chips are not imported into the U.S. as raw materials, however; they arrive already packaged inside products like servers, which are subject to tariffs.
    Worried AI investors received good news on Monday in a note circulated by analyst Stacy Rasgon, who pointed out that most Nvidia servers are likely to escape the bite of Trump’s tariffs. That’s because most appear to be assembled in Mexico, and therefore benefit from a tariff exemption under a free trade agreement. That’s a “silver lining” to the news, says Rasgon, a semiconductor industry analyst at Bernstein Research. (Nvidia declined to comment.) “I think there are some workarounds to avoid massive tariffs on AI infrastructure in the U.S., which is good because otherwise what’s the whole point of this?” says Rasgon. “We’d be making the U.S. the most expensive place in the world to build out AI infrastructure—that doesn’t sound like a great thing.”
    But construction materials, computer parts, cooling infrastructure, and power supplies are just some of the costs that are likely to increase as a result of the tariffs. The costs could be so great that companies might consider building datacenters abroad instead of in the U.S., says Lucas Hansen of the Civic AI Security Program, a nonprofit. Datacenters already tend to congregate where power is cheap, he says. “It’s possible that tariffs are one more additional incentive for building those datacenters abroad.”
    The increased costs of datacenter construction create a “real risk,” Miller says, that the U.S. might begin losing ground to China in the AI race—victory in which is a key foreign policy goal of the Trump Administration. “It has already been a major challenge to build all the datacenter capacity we need” in the U.S. to stay ahead of China, Miller says. “Now datacenter construction will get meaningfully more expensive.” “The short term impact will be significant, and the long term impact is unclear—and companies can’t plan for the long term because tariff rates will likely keep changing,” Miller adds.
    Even if Trump creates more carveouts to ease the pain on the datacenter industry, the changes to the macroeconomic climate wrought by Trump’s trade war might create new headwinds for AI companies. “My bigger worry is more macro now: we go into recession, ad spending falls off, and the hyperscalers in general have less money,” says Rasgon, referring to the tech companies spending heavily on AI. Collapse in demand for AI and datacenters, plus supply chain chaos, might follow, he adds. “This doesn’t feel like a strategy,” Rasgon says of the tariffs. “This is just a grenade.”
    The increases in datacenter costs will probably make it more expensive for companies to train AI systems in the future. But this is unlikely to mean that using AI gets more expensive: the cost of using a given level of AI capability has been falling steadily, according to research by Epoch AI, as a result of algorithmic efficiencies, hardware improvements, and pricing competition. In other words, a year from now, using a given model will probably require significantly less computing power (and therefore money) than it does today. So even if Trump’s tariffs do add to the cost of datacenter components, researchers say AI usage is likely to continue to get cheaper over time.
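    The closing argument is a compounding one: a one-time tariff markup is quickly swamped by steadily falling inference costs. The toy calculation below assumes a 3x-per-year cost decline and a 25% tariff markup; both figures are illustrative assumptions, not Epoch AI's published numbers.
```python
# Toy illustration of falling inference costs outpacing a one-time
# tariff-driven cost increase. The decline rate and tariff markup are
# assumed for illustration only.

cost_per_million_tokens = 10.00   # hypothetical price today, in dollars
annual_decline_factor = 3.0       # assumed: same capability ~3x cheaper/year
tariff_markup = 1.25              # assumed: tariffs add 25% to serving costs

for year in range(4):
    price = cost_per_million_tokens * tariff_markup / (annual_decline_factor ** year)
    print(f"year {year}: ${price:.2f} per million tokens")

# year 0: $12.50 -- tariffs raise today's price
# year 1: $4.17  -- but efficiency gains dominate within a year
# year 2: $1.39
# year 3: $0.46
```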
  • TIME.COM
    What Are Stablecoins?
    During the 2024 elections, the crypto industry spent an unprecedented amount of money on campaign donations in the hopes of encouraging Congress to pass pro-crypto legislation. Those efforts seem to have largely succeeded, with pro-crypto legislators filling the halls of Congress. The first crypto-related area they’ve decided to tackle? Stablecoin regulation.
    Stablecoins are cryptocurrencies designed to hold the value of a U.S. dollar. Proponents argue that stablecoins help the U.S. preserve the global importance of the dollar, while allowing people worldwide to transact more freely, cheaply, and securely. Stablecoin usage is growing enormously: its total market cap is around $235 billion, up from $152 billion just a year ago. In March, President Trump said that he hoped to sign stablecoin legislation by August. Congress has responded accordingly: in the past month, both the House and Senate have advanced stablecoin bills out of committee. Here’s a brief overview of stablecoins, the proposed legislation, and the potential risks.
    What are stablecoins and who supports them?
    Stablecoins are somewhat like bank deposits. Typically, a consumer who wants a stablecoin gives a dollar to an issuing company, which mints a stablecoin on a blockchain. The user can then send that stablecoin around the world as a form of payment to anyone who accepts it. Crypto traders like stablecoins because they do not fluctuate in price nearly as much as assets like Bitcoin or Ethereum, making trading more predictable. And many non-traders across the world appreciate stablecoins because they hold their value better than currencies in countries with high inflation, like Argentina and Turkey.
    In the U.S., stablecoin supporters come from across the political spectrum. On the right, political leaders like House Majority Whip Tom Emmer argue that stablecoins help maintain the dollar’s status as the world’s reserve currency. A massive number of Eurodollars (unsecured, unofficial dollars issued by foreign banks rather than by the Federal Reserve) remain in circulation across the world. Stablecoins could fill this significant demand, and offer a safer, more transactable alternative. And because stablecoin issuers often secure their stablecoins by buying U.S. Treasuries, an increase in stablecoin demand could help ease the burden of the U.S.’s ballooning debt, proponents argue.
    On the left, some Democrats believe that stablecoins provide paths to financial inclusion and toward dismantling biased banking systems. New York Representative Ritchie Torres told TIME in September that he believed stablecoins could help constituents in his heavily immigrant district send money home to the Caribbean and Latin America quickly, while avoiding check-cashing fees or predatory loan sharks. “The ability to move a tokenized dollar at the speed of the blockchain has the potential to create a better, cheaper, and faster payment system for the lowest-income communities,” he said. Torres was one of six Democrats who voted in favor of the STABLE Act, which passed out of the House Financial Services Committee on April 2.
    What are the main stablecoins, and how is Trump now involved?
    The stablecoin market is currently dominated by two players: USDT (issued by Tether) and USDC (issued by Circle). Tether is extremely popular outside the U.S. but has been accused by regulators of making misleading statements about its reserves. Howard Lutnick, Trump’s new Commerce Secretary, previously had financial ties to the company.
The bills being considered by Congress would open the door for many other types of companies to issue their own stablecoins. Notably, the Trump family's crypto company World Liberty Financial recently announced its own stablecoin. This is just the latest of Trump's crypto ventures, which have included a federal Bitcoin reserve and a meme coin.

World Liberty's stablecoin announcement drew swift criticism over concerns that Trump would yet again have a direct financial stake in an industry he is supposed to regulate. California Democrat Maxine Waters, who has been working on stablecoin legislation for years, now says she staunchly opposes any bill that would allow Trump to own a stablecoin. French Hill, a Republican from Arkansas and the House Financial Services Committee Chair, said this week that Trump's crypto initiatives made drafting legislation more complicated.

What kind of legislation is being considered?

Both the House and Senate have passed stablecoin bills (the STABLE and GENIUS Acts, respectively) out of their respective committees. The bills lay out guidelines for how stablecoins will be regulated, and the amount and types of reserves stablecoin issuers must have on hand. The House and Senate will now have the opportunity to reconcile the two bills in the hopes of getting a unified bill onto President Trump's desk by the summer.

If legislation passes, many financial institutions would likely seek to create their own stablecoin. Bank of America, for instance, said it would launch a stablecoin once lawmakers make it legal to do so. PayPal and Stripe have also announced stablecoin initiatives.

What are the main critiques of stablecoin legislation?

The appetite in Washington for a stablecoin bill is high among legislators on both sides of the aisle. But some lawmakers have raised concerns. Elizabeth Warren, one of the most vocal crypto skeptics in Congress, has argued that the legitimization of stablecoins comes with systemic risks, especially because they could be susceptible to bank runs. In 2022, the stablecoin UST caused a massive crypto crash when it lost its dollar peg and hurtled to zero. UST, however, was an algorithmic stablecoin, a type of currency that the STABLE Act bans from getting federal approval for two years. "The bill lacks basic safeguards necessary to ensure that stablecoins don't blow up our entire financial system," Warren said at a hearing for the GENIUS Act in March. "Under this bill, stablecoin issuers can invest in risky assets, including the very assets that were bailed out in 2008."

Some critics also worry that the stablecoin bills, as they are currently written, allow for Big Tech companies like Meta and X to issue their own currencies, further consolidating corporate power. "If people think there's a Big Tech surveillance state now, imagine what there would be when they have access to every piece of financial information about you," says Arthur Wilmarth, a professor emeritus at George Washington University Law School. "There's very little, if anything, in the bills that would give you protection."
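The mint-and-redeem mechanics described above can be sketched in a few lines of code. This is a deliberately simplified, hypothetical model of a fully reserved dollar-backed issuer: the class, its method names, and the in-memory ledger are all illustrative assumptions, not how Tether, Circle, or anything in the proposed bills actually works.

```python
# Toy model of a fully reserved, dollar-backed stablecoin issuer.
# All names and numbers are hypothetical; real issuers hold reserves in
# assets like U.S. Treasuries and track balances on a blockchain.

class ToyStablecoinIssuer:
    def __init__(self):
        self.reserve_dollars = 0.0   # dollars held by the issuer
        self.balances = {}           # address -> stablecoin balance

    def mint(self, address: str, dollars: float) -> None:
        """Customer pays dollars in; issuer credits an equal number of tokens."""
        self.reserve_dollars += dollars
        self.balances[address] = self.balances.get(address, 0.0) + dollars

    def transfer(self, sender: str, receiver: str, amount: float) -> None:
        """Tokens move peer to peer without touching the reserve."""
        if self.balances.get(sender, 0.0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0.0) + amount

    def redeem(self, address: str, amount: float) -> float:
        """Tokens are burned and dollars are paid back out of the reserve."""
        if self.balances.get(address, 0.0) < amount:
            raise ValueError("insufficient balance")
        self.balances[address] -= amount
        self.reserve_dollars -= amount
        return amount

    def fully_reserved(self) -> bool:
        """The 1:1 peg invariant: tokens outstanding never exceed reserves."""
        return round(sum(self.balances.values()), 2) <= round(self.reserve_dollars, 2)


issuer = ToyStablecoinIssuer()
issuer.mint("alice", 100.0)           # Alice deposits $100, receives 100 tokens
issuer.transfer("alice", "bob", 40.0)
issuer.redeem("bob", 40.0)            # Bob cashes out $40
assert issuer.fully_reserved()
```

The point of the sketch is the invariant at the end: a fiat-backed stablecoin holds its dollar value only so long as redemptions can always be honored from reserves, which is why the bills focus on what issuers may hold.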
  • TIME.COM
    How Crypto Legislation Could Hand Big Tech the Keys to Banking
On Wednesday, a stablecoin bill called the STABLE Act advanced through the House Financial Services Committee, increasing the likelihood that Congress will pass a law this year cementing stablecoins as a global financial tool. Proponents argue that stablecoins help the U.S. preserve the global centrality of the dollar, while allowing people worldwide to transact more freely, cheaply, and securely. But while stablecoin legislation has received bipartisan support, it has also faced targeted pushback, particularly from Democrats concerned about systemic risks and conflicts of interest, especially since the Trump family's crypto company announced the creation of its own stablecoin. Critics also warn of another potentially significant side effect: that such legislation could open the door for Big Tech players like Meta, X, and Amazon to create their own privatized forms of money, further consolidating corporate power.

"This is being framed as a crypto bill, and in some ways it is. But it has not reached most people's radar that its biggest beneficiary is likely to be large tech platforms," says Hilary Allen, a professor at American University Washington College of Law and a vocal crypto skeptic in D.C.

Read More: What Are Stablecoins?

Both the House and Senate have passed stablecoin bills (the STABLE and GENIUS Acts, respectively) out of committee. The bills lay out guidelines for how stablecoins will be regulated, and the amount and types of reserves stablecoin issuers must have on hand. The House and Senate will now have the opportunity to reconcile the two bills in the hopes of getting a unified bill onto President Trump's desk by the summer.

Several banks, including Bank of America, have expressed interest in launching their own stablecoins should a law pass. But under the current language of the two bills, non-financial companies would also be able to create their own stablecoins via subsidiaries. While previously proposed stablecoin bills prohibited non-banking companies from doing so, neither the STABLE nor the GENIUS Act contains such a provision. In fact, the STABLE Act says that any nonbank can issue a stablecoin as long as it acquires approval from a federal regulator.

Allen says this would open the door for Big Tech moguls like Elon Musk and Mark Zuckerberg to create their own stablecoins. Both have long been interested in the payments sector: Musk's X has acquired money transmitter licenses in many states, while Facebook tried to launch its own cryptocurrency, Libra, in 2019 before facing stiff criticism and regulatory scrutiny. "These big tech platforms have been very interested in doing payments because they're in the data collection and monetization business, and payments data is particularly valuable because it shows what you're actually buying," Allen says. "The more people's transactions migrate onto these big tech platforms, that will really beef up what are already incredibly systemically important actors in our society, and put them at the center of our financial system."

Allen lays out a hypothetical scenario in which Amazon issues stablecoins. The company could then conceivably scale their usage among Amazon employees and users, Whole Foods shoppers, and Washington Post subscribers, to the point that many people start relying on stablecoins instead of bank accounts. "That's really bad news, because banks take the money deposited with them and loan them out into the economy, while stablecoin reserves just sit there," Allen says.
"So money that had been used productively in our economy is now just sitting with Amazon." (A back-of-the-envelope version of this deposits-versus-reserves arithmetic appears at the end of this piece.)

Stephen Lynch, a Massachusetts Democrat, made a similar point at the STABLE bill's markup on Wednesday, warning his colleagues that stablecoins would compete with bank deposits and undermine the ability of banks to make loans to consumers and Main Street businesses.

In October 2023, Rohit Chopra, director of the Consumer Financial Protection Bureau under President Biden, warned that if Big Tech firms assumed control of banking operations, they would have a strong incentive to surveil all aspects of a consumer's transactions. He added that they could also develop personalized pricing algorithms.

Arthur Wilmarth, a professor emeritus at George Washington University Law School, tells TIME that people paying for goods with stablecoins would lack fraud protection. He also points to China as a cautionary tale: there, Tencent and Alibaba became dominant payments players and gained undue influence over regulators, which then led Beijing to tighten its grip and gain sway over those businesses' decision-making.

At the markup on Wednesday, Rep. Maxine Waters pushed for an amendment that would maintain the separation of commerce and banking, claiming that the bill as written could enable Elon Musk, Walmart, and others to create their own currencies. Wisconsin Republican Bryan Steil, a co-writer of the bill, responded that the amendment would lead to a "stifling of innovation." Co-writer French Hill, a Republican from Arkansas and the House Financial Services Committee Chair, said that he hoped Congress could work out a thoughtful solution to Waters' concerns while considering a larger crypto market-structure bill. The amendment was then rejected.

"I view this stablecoin legislation as presenting a very dangerous opening for big tech to get into banking in a big way," Wilmarth says. "Once that happens, I think it will be almost impossible to ever close the door again."
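Allen's "money just sits there" point can be made concrete with the textbook money-multiplier arithmetic. The 10% reserve ratio below is a classroom simplification, not a figure from either bill; only the contrast between fractional reserves (banks) and full reserves (stablecoin issuers) is the point.

```python
# Textbook money-multiplier arithmetic (simplified): a dollar deposited in a
# bank gets re-lent repeatedly, with the bank holding back `reserve_ratio`
# each round. A fully reserved stablecoin re-lends nothing.

def total_credit_supported(deposit: float, reserve_ratio: float, rounds: int = 100) -> float:
    """Geometric series of re-lent amounts: d*(1-r) + d*(1-r)^2 + ..."""
    total, loanable = 0.0, deposit
    for _ in range(rounds):
        loanable *= (1 - reserve_ratio)   # portion the bank can lend out again
        total += loanable
    return total

deposit = 100.0
print(total_credit_supported(deposit, reserve_ratio=0.10))  # ~$900 of loans downstream
print(total_credit_supported(deposit, reserve_ratio=1.00))  # $0: full reserves lend nothing
```

Under these assumptions, $100 in a fractional-reserve bank can support roughly $900 of lending across the economy, while the same $100 parked as stablecoin reserves supports none, which is the substance of Lynch's warning about bank deposits.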
  • TIME.COM
Chinese State Media Rebukes Trump's Tariffs With AI Song and Film
Leaders around the world have responded to U.S. President Donald Trump's shocking new tariffs that threaten to upend the global economy with stern words and denunciations. But Chinese state media have offered a different approach.

"Liberation Day, you promised us the stars," sings a female-sounding voice over images of Trump. "But tariffs killed our cheap Chinese cars."

The 2-minute, 42-second music video, titled "Look What You Taxed Us Through (An AI-Generated Song. A Life-Choking Reality)," was published on April 3 by the Chinese state news network CGTN.

"For many Americans, 'Liberation Day' hailed by Trump administration will mean shrinking paychecks and rising costs. Tariffs hit, wallets quit: low-income families take the hardest blow. As the market holds its breath, the toll is already undeniable. Numbers don't lie. Neither does the cost of this so-called fairness," CGTN captioned the video on its website. "Warning: Track is AI-generated. The debt crisis? 100 percent human-made."

The lyrics, displayed in English and Chinese, appear to rebuke Trump's tariffs from the point of view of the American consumer, and the song is addressed directly to the U.S. President. "Groceries cost a kidney, gas a lung. Your deals? Just hot air from your tongue," the opening verse continues. "Thanks for the tariffs, and the mess you made," the song ends, before the music video displays quotes from reports by the Yale Budget Lab and the Economist lambasting Trump's tariffs.

Experts have warned that American consumers will bear much of the costs of Trump's tariffs, which are taxes on imports, and U.S. recession indicators have risen since the White House's April 2 "reciprocal" tariff rollout. At the same time, global markets have been shocked at a level not seen since the pandemic.

CGTN isn't the only state media outlet to use AI to slam Trump's trade policy. New China TV, the English-language social-media-focused brand of China's official state news service Xinhua, also published on April 3 a three-minute, 18-second sci-fi short called "T.A.R.I.F.F."

The film follows a robot named Technical Artificial Robot for International Fiscal Functions. "This is the story of T.A.R.I.F.F., an AIGC [artificial-intelligence-generated content] sci-fi thriller about the relentless weaponization of #Tariffs by the United States, and the psychological journey of a humanoid towards its eventual self-destruction. Please watch," reads the video's description on YouTube.

In the film, T.A.R.I.F.F. is booted up by what appears to be a nefarious U.S. government official named Dr. Mallory. T.A.R.I.F.F. identifies himself, saying: "My existence is defined by the execution of international fiscal actions, with the primary directive being the imposition of tax on foreign imports." When asked what his ultimate purpose is, T.A.R.I.F.F. responds: "To protect the interests of the American people."

"Exactly," says Dr. Mallory. "We need you as a weapon to protect us, now more than ever."

As the film goes on, T.A.R.I.F.F. implements moderate tariffs and finds initial positive results: industrial production up. But when Dr. Mallory pushes the robot to "rev it up," T.A.R.I.F.F. implements aggressive tariffs. The results: unemployment rates rising, costs of living increasing, disruption of trade.

"You are protecting us. This is what we need," Dr. Mallory says. T.A.R.I.F.F. responds, understanding: "Protection through disruption. Taxation as weapon."

"Yes, tariffs are a tool of power. You will protect our industries, our jobs, our economy," Dr. Mallory says, appearing increasingly agitated.
"But I can see the consequences of my actions," says the robot. "The trade wars. The unrest. The people who suffer. And the retaliation."

Spoiler alert: T.A.R.I.F.F. and the evil doctor argue about the greater good ("With my AI economic inference system," T.A.R.I.F.F. asserts, "I can see I have become the beginning of a chain reaction that will harm the very people I was meant to safeguard"), and the robot ultimately chooses to self-destruct, taking Dr. Mallory along with it.

On April 3, following Trump's latest tariffs announcement, China's Ministry of Foreign Affairs posted on social media a video featuring a mix of seemingly AI-generated images and real ones, set to the soundtrack of John Lennon's "Imagine" and USA for Africa's "We Are the World." It asked the question "What kind of world do you want to live in?" offering the choice between our imperfect world, with things like greed and tariffs, and an alternative utopia with shared prosperity and global solidarity.

To be sure, the latter is certainly not the reality in China. And for now, it appears far from possible for the world.

Beijing has made its displeasure with Trump's tariffs, which began targeting China in his first term, well known. The latest "reciprocal" rate of 34% comes on top of 20% levies announced earlier this year. Beijing has over the years implemented tit-for-tat countermeasures and has vowed to continue as long as the trade war persists, warning earlier this year: "If war is what the U.S. wants, be it a tariff war, a trade war or any other type of war, we're ready to fight until the end."
  • TIME.COM
Australia's Leader Takes On Social Media. Can He Win?
The press conference starts like any other: Australian Prime Minister Anthony Albanese is grilled on everything from affordable housing and war in the Middle East to his relationship with U.S. President Donald Trump.

Then, Lana, 11, picks up the microphone. "Do you think social media has an impact on kids?" asks the suburban Canberra primary-school student.

Of all the burning issues of the day, it's the one that Albanese feels on surest ground to answer. It also goes to the heart of his government's most eye-catching policy, one that directly affects Lana and the other student reporters invited to interrogate Australia's top politician for Behind the News, a long-running kids' current-affairs show.

"It certainly does, and that's why we're going to ban social media for under-16s," Albanese replies resolutely. "I want to see you all out playing with each other at lunchtime, talking to each other like we are now, and engaging with each other rather than just being on your devices."

The fact that Australia's Prime Minister carved out 45 minutes between parliamentary sessions to indulge kids at least two terms from voting age underlines his belief that social media represents an unambiguous threat to his nation's most precious resource: its children. And he is determined to do something about it.

Albanese answers questions from students from St. Francis of Assisi Primary School in Canberra on Feb. 5. (Chris Gurney for TIME)

The perils are largely beyond dispute. Some of the world's biggest companies use the fig leaf of "engagement" to hook children during vulnerable developmental stages, rewiring their brains via a firehose of addictive content that psychologists say has changed human development on a previously unfathomable scale. In the decade that followed the proliferation of mobile internet services in 2010, depression among young Americans rose around 150%, with corresponding spikes in anxiety and self-harm. The trend is mirrored across the developed world, including Australia, where mental-health hospitalizations soared 81% for teen girls and 51% for boys over the same period. "It has become the No. 1 issue that parents are talking about," says Albanese. "These are developing minds, and young people need the space to be able to grow up."

On Dec. 10, in a bid to carve out and ring-fence that space, Australia will implement a 16-year-old age limit for users of platforms such as Snapchat, TikTok, Facebook, Instagram, and X. The law is the first of its kind in the world.

While most platforms have a self-imposed age limit of 13, enforcement is laughable; kids can simply input a false date of birth. Rather than targeting underage kids, the Australian law will punish companies that fail to introduce adequate safeguards with fines of up to 49.5 million Australian dollars ($31 million) for as yet undefined "systemic breaches." (The precise details of how and when these fines will be imposed have yet to be made clear.) In other words, Australia will flip the equation: instead of relying on users to truthfully disclose their ages, it will put the burden on the world's tech giants. It's a bold move, directly targeting some of the world's most influential companies, run by its richest and most powerful men, including X owner Elon Musk, who has dubbed the Albanese government "fascists" and the age restriction "a backdoor way to control access to the internet by all Australians."

For its proponents, however, the law is a critical first step toward checking social media's toxic influence on children.
Illustration by Brobel Design for TIME

In November, France's Education Minister said the E.U. should urgently follow Australia's example, not least since the infusion of artificial intelligence means that supercharged algorithms are peddling disinformation faster than ever. "The truth is smothered by lies told for power and for profit," former U.S. President Joe Biden lamented in his farewell address.

The upshot: Australia has now come to serve as a beachhead for others to prepare their own defenses. The U.K., Ireland, Singapore, Japan, and the E.U. are among many jurisdictions closely monitoring Canberra's next move. In the U.S., the bipartisan Kids Off Social Media Act (KOSMA), which would restrict social media for kids under 13 and bar platforms from pushing targeted content to users under 17, is advancing through the Senate, while around half of states passed legislation last year to make it harder for children and teens to spend time online without supervision. On March 5, Utah became the first state to require app stores to verify users' ages and get parental consent for minors to download apps. "If the age restriction goes well in Australia, then I think it will go global very quickly," says Professor Jonathan Haidt, a social psychologist at New York University's Stern School of Business and author of The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness.

Albanese's stance is also remarkable for just how politically uncontroversial it has proved. As Australia heads for a close federal election in May, Albanese's center-left Labor Party and the right-leaning Liberal-National Coalition opposition are locking horns on every issue, whether nuclear energy, health care funding, or taxation. But the social media age restriction passed with bipartisan support and stands to be implemented no matter who triumphs at the ballot box.

That's not to say there aren't detractors, and not just the social media companies, which say the legislation passed without due consultation. "We are concerned the government's rapid, closed-door consultation process on the minimum-age law is undermining necessary discourse," a Meta spokesperson told TIME. TikTok complained that an exemption for YouTube was akin to banning the sale of soft drinks to minors but exempting Coca-Cola.

Some mental health experts, meanwhile, say blocking kids from social media will drive them to darker, less regulated corners of the internet. Others fear children who bypass the age restriction will find themselves in a less controlled space where they're unable to seek help. There's also huge debate over what exactly counts as social media when myriad gaming and educational websites also employ addictive scrolling features. A group of 140 mental health experts penned an open letter to the Albanese government opposing the ban, calling it too blunt an instrument to address risks effectively.

For Albanese, an imperfect plan is better than no plan at all. "We acknowledge that this won't be absolute," Albanese tells TIME during an exclusive interview in his parliamentary office in February. "But it does send a message about what society thinks and will empower parents to have those conversations with their children."

They are children whose upbringing is unrecognizable from that of any previous generation.
If parents once fretted about the attention kids paid to comic books and television, the immersive, dopamine-driven pull of the computer screen (video games, chat platforms, social media) has changed how nearly everyone looks at the world, but especially young people. A February government report by Australia's eSafety Commissioner found that 80% of preteens used social media. A 2024 Pew Research poll found 46% of American teens said they were online "almost constantly." Nearly a quarter of U.K. 5-to-7-year-olds now have their own smartphone.

The devices can bring physical danger. Pedophiles and traffickers stalk the virtual world with greater freedom than in the real one. In 2023, the U.S. National Center for Missing and Exploited Children (NCMEC) tracked 298 attempted abductions involving 381 children and received 36.2 million reports of suspected child sexual exploitation online.

80% of Australian preteens used social media, according to a February report from Australia's eSafety Commissioner. (William West, AFP/Getty Images)

But concern also wells around a child alone with a phone. For Albanese, there's something especially sad about Australian kids shunning some of the world's highest rates of sunshine for the artificial glare of screens. His own childhood in a one-parent household in Sydney's industrial inner-city suburb of Camperdown was far from idyllic. His mother was crippled by chronic rheumatoid arthritis, meaning the family survived on her disability payments and his grandmother's pension. Home was a government-housing block flanked by a children's hospital, biscuit factory, and metal foundry. But there was a grassy patch where kids would hang around playing rugby, cricket, or swapping football cards. "We would go out to play at 9 o'clock and just knew you had to be home for lunch, and then do the same in the afternoon," Albanese recalls. "People interacted with each other. And that capacity to communicate face-to-face is really important. They learn how to win, how to lose, how to engage."

It may sound wistful, but Albanese's perspective is backed by science. Psychologists say physical play, preferably outdoors and among a mix of ages, is essential to a child's development. Young people learn how to not get hurt by negotiating scenarios in which getting hurt is possible, such as climbing a tree or leaping from a swing at its zenith.

But such play is increasingly a thing of the past. Instead, time on screens has grown and grown, turbocharged in 2009 with the arrival of the "like" button on Facebook and the retweet on Twitter, now X: innovations that, in the minds of many experts, transformed social media from a harmless friendship forum into an algorithm-driven popularity contest. Instagram debuted a year later, coinciding with the launch of the iPhone 4 and Samsung Galaxy S, both of which featured the world's first front-facing cameras. Instagram's array of filters allowed users to make images less natural and more stylized; such filters are now ubiquitous across Snapchat, TikTok, and other platforms.

The result has been a great deal of diversion, not all of it positive. The digital realm brings striking new elements of risk, for instance, to young people's emerging sexuality, from the distorting effect of readily available hardcore pornography on all who see it (the term "incel," or involuntarily celibate, was coined for frustrated, often misogynistic young men who bond online) to a heightened risk of online grooming and sextortion.
In July 2022, 17-year-old Rohan Patrick Cosgriff died by suicide near his home outside Melbourne after he was pressured into sending an intimate picture to someone called "Christine" on Snapchat, who then demanded money not to distribute the image. A note in Cosgriff's pocket simply said: "I made a huge mistake. I'm sorry."

The Australian Centre to Counter Child Exploitation received over 58,000 reports of online child abuse in 2023-24, a 45% year-on-year rise. Australia is far from unique; the NCMEC, in the U.S., saw a rise of over 300% in reports of online enticement, including sextortion, from 2021 to 2023.

For girls, social media takes a different role, one that statistics show can prove even more damaging. Whereas male social hierarchy has traditionally adhered to physical attributes like sporting prowess, girls find value in the breadth and depth of relationships. In short, popularity. And one way to climb the social ziggurat is to undermine your peers: spread gossip, turn friends against rivals, and lower others' value within the group.

But the explosion of front-facing cameras and filters has meant the reflection teens see in the mirror has become less and less attractive compared with the carefully curated photos and videos of their peers online, causing self-worth to plunge. "Girls seeing lots of beautiful pictures of other girls living perfect lives is absolutely devastating to them," says Haidt. Those with poor self-worth are likelier to lash out at others, with indirect bullying more prevalent among adolescent girls than boys.

One of the first things that Kelly O'Brien saw upon entering 12-year-old Charlotte's bedroom on Sept. 9 was her cell phone on the floor. Then she noticed two pillows neatly arranged under the duvet. By the time she found her daughter in the en suite bathroom it was too late. "When the paramedics arrived, they just looked at her and said, 'So sorry, she's gone,'" says O'Brien, eyes brimming.

Kelly believes Charlotte took her own life at their suburban Sydney home in large part because of the toxic effect of social media. Charlotte was a bright girl who loved cheerleading, doted on her baby brother, and was navigating the tricky road from childhood to adulthood, equally obsessed with Taylor Swift and Gossip Girl as well as the latest Disney animation. Charlotte had suffered bullying at school, but her parents say it was social media that rendered that cycle of acceptance and rejection unbearably acute. "The weeks that she was in, she was over the moon," says Mat O'Brien, Charlotte's dad. "The week she was out, just awful."

As soon as Charlotte got a cell phone it became a problem, spurring reclusive, depressive episodes. Charlotte had her phone confiscated more often than she had access, Kelly says, a punishment that invariably began with two days of sullen withdrawal followed by a marked upturn in mood: classic addiction symptoms, say psychologists. The night before her passing, Charlotte had been upbeat, enjoying her favorite pasta dinner and baking banana bread for the next day. "I kissed the happiest girl in the world good night," says Kelly. "Something happened after she got to her room." A friend who spoke to a distraught Charlotte later that evening has since told Kelly about the vile, hateful message her daughter received via Snapchat. ("We are deeply committed to keeping our community safe," a Snap spokesperson told TIME.
"Our hearts go out to this family, whose pain is unimaginable.")

Social media companies say that bullying has always been a problem and will continue whether via schoolyard taunts, crank phone calls, or their platforms. Still, beginning in the early 2010s, girls' mental health was hit by a sharp rise in rates of anxiety, depression, and self-harm. The rate of self-harm for young adolescent girls in the U.S. nearly tripled from 2010 to 2020, while the rate for older teens doubled. In 2020, 1 out of every 4 American teen girls had experienced a major depressive episode in the previous year.

Family photos of Charlotte O'Brien, who died by suicide at age 12 in September after being bullied on social media. (Courtesy of the O'Brien family)

Kelly O'Brien explained the devastating effects of social media on Charlotte in a letter to Albanese as part of the 36 Months campaign, a grassroots movement to raise the age limit for social media to 16 and properly enforce it. "When you hear firsthand about a parent losing their child then it undoubtedly has an impact," says Albanese, who later invited the O'Briens to meet with him in Canberra. Also at that meeting was Michael Wipfli, an Australian radio presenter known as Wippa, who spearheaded 36 Months. "Sat in the Prime Minister's office, it was clear he knew what needed to be done," says Wippa. "We needed leadership, a captain's call, somebody to say, enough is enough."

Albanese first became involved in leftist politics while studying economics at the University of Sydney. He rose up the Labor Party ranks with a reputation as a backroom mediator and a knack for forging concord between squabbling factions. After Labor's shock defeat in Australia's 2019 federal election, Albanese emerged as an unexpected but unifying leadership candidate. "He's an accidental Prime Minister," says Nick Bisley, dean of social sciences at La Trobe University.

Indeed, Albanese has struggled to unify an ever more polarized country, despite an undeniable everyman charisma. As Albanese inspected repairs to a bridge destroyed by floodwater in northern Queensland, he was joined by the mayor of the cut-off town of Ingham, population 4,455, who arrived wearing shorts, a faded polo shirt, thong sandals, and a cap advertising the local tractor mechanic. "You didn't have to dress up!" teased a local lawmaker as helicopters carrying supplies buzzed overhead. "Anthony's an ordinary bloke!"

It's a pit stop that showcases Australia's endearing insouciance as well as how vital internet access has become for communications across its vast expanse, not least as climate change renders extreme weather more frequent and severe. Australia is the world's sixth largest country by landmass (roughly equivalent to the U.S. minus Alaska) though 55th by population, with just 26 million people. The result is an abundance of sparsely inhabited outback communities for which social media is "absolutely critical," admits Albanese. "We're not Luddites," he adds, reeling off the various platforms he posts on. "Young people aren't being banned from a range of interactions through technology that are about their education or engaging with each other. We're not confiscating people's devices."

Albanese welcomes students from St. Francis of Assisi Primary School in Calwell to his office in Canberra on Feb. 5. (Chris Gurney for TIME)

Albanese points to the success of last year's ban on cell phones in Australian public schools. "The impact has been phenomenal," says Australian Education Minister Jason Clare.
A survey of almost 1,000 school principals in Australia's most populous state, New South Wales, shows 87% say students are less distracted in the classroom, while 81% have noticed improved learning. Meanwhile, South Australia has seen a 63% decline in critical incidents involving social media (such as bullying and distribution of explicit or derogatory content) and 54% fewer behavioral issues. "But when school ends the phones come out and they're back in the cesspit of social media," says Clare. "In the old days, bullying and intimidation stopped at the school gate. Now it's at home as well."

Still, critics say the social media age restriction was a knee-jerk reaction passed without proper consultation, involves thorny data-privacy issues, and creates even more risks for youngsters who use platforms illicitly. "It's absolutely dumb, it's not going to work," says Roy Sugarman, a Sydney-based clinical psychologist. "It's ridiculous because the genie is out of the bottle."

Sugarman compares the Australian ban to American Prohibition in the 1920s, which some studies suggest actually increased alcohol consumption in the U.S. while leading to a spike in organized crime. He says a far better tactic would be to teach teens to be technologically astute: to understand online dangers, think rationally, act with purpose, and deal with the virtual world in ways that mitigate damage. "Human behavior doesn't lend itself to being told what to do," says Sugarman. "It's the opposite. Humans hate being told what to do."

History also offers examples that point the other way. While Sugarman invokes the example of Prohibition, Wippa compares social media age restrictions to similar rules for cigarettes, which, while routinely flouted, have led smoking rates among young people to plummet.

But the fact is, nobody knows what will happen. Nothing like this has been attempted before. And then there's the question of implementation. Australia's eSafety Commissioner, Julie Inman Grant, says that around 30 different age-verification technologies are being tested in collaboration with various social media platforms, including one from the French firm BorderAge, which claims to accurately gauge age using AI analysis of hand signals. Meanwhile, platforms, which the legislation makes responsible for enforcing the age restrictions, want to pass that burden to app stores, principally run by Apple and Google, saying they should act as gatekeepers.

Grant compares the legislation to laws requiring that swimming pools be fenced. In the early 1970s, the widespread availability of cheap, preformed fiberglass pools meant the rate of young children drowning soared. Not long after, states began requiring all private swimming pools to be fenced, which led deaths to fall; the requirement has since been adopted nationwide. But that didn't mean Australia suddenly stopped teaching kids to swim, fired all the lifeguards, or fenced off the ocean. "This is not the great Australian firewall," says Grant. "Children's social media accounts aren't going to magically disappear. But we can make things a lot better for parents and a lot better for kids."

Grant speaks with the zeal of a convert. After cutting her teeth as a congressional staffer focused on tech issues in the 1990s, the Seattle native worked 17 years at Microsoft, two years at Twitter, and a year at Adobe, before being tapped for her current post (the first eSafety Commissioner anywhere in the world). She believes, from her inside knowledge of the tech industry, that the big players will always put profit first.
"They can target you with advertising with deadly precision," she says. "They could use the same technologies to be able to identify hateful content and child sexual abuse material."

Recent events have cemented her skepticism. Last April, Grant sued X over its refusal to remove videos of a religiously motivated stabbing of a bishop in a Sydney church that sparked rioting. X eventually geofenced the footage so it wasn't available in Australia, while Musk hit out at Grant as "censorship commissar," leading to a raft of online abuse. "I still receive death threats," she tells TIME.

But even more painful for Grant is the knowledge that 17-year-old Axel Rudakubana watched that same Sydney church attack video on X just six minutes before leaving home with a knife to murder three young children and injure 10 others in the U.K. town of Southport on July 29. "Having gratuitous violence of terrorist events freely available normalizes it, desensitizes it," says Grant, "and in the worst cases [it] radicalizes and spills over into real-world harm."

The spat between Grant and Musk prompted Albanese to label the tech mogul an "arrogant billionaire." But asked by TIME whether he's concerned by Musk's burgeoning influence as Trump's consigliere, Albanese demurs, instead decrying how misinformation can erode trust in institutions. "People have conflict fatigue," he says. "People need to have respectful debate. And I think there is a concern in society that some of that is breaking down."

Albanese's squirming is understandable. Australia and the U.S. are close allies linked via the Quad and AUKUS military arrangements. But Trump spent his first term taking aim at historic alliances, accusing South Korea and Japan of not paying their fair share for American security guarantees, and his return to the White House has heralded a full-frontal assault on European democracies. Australia is one of the few close American allies with a trade deficit with the U.S. ("since Truman!" Albanese stresses) as well as a record of deploying alongside American forces and standing up to Beijing. In 2023, Canberra also agreed to invest $3 billion in U.S. shipyards.

However, the White House has already hiked tariffs on Australian exports of aluminum and steel, while Albanese's plan to force tech companies like Google and Meta to pay for news shared on their platforms was recently labeled an outrageous attempt to "steal our tax revenues" by White House trade adviser Peter Navarro.

The social media age restriction is yet another friction point between Canberra and American Big Business. Meta founder Mark Zuckerberg has called on the Trump Administration to help push back on this global trend of what he terms "censorship." Then there's Kevin Rudd, former Australian Prime Minister and current ambassador to the U.S., who was previously quoted calling Trump not only "the most destructive President in history" but also a "village idiot."

"The center-left of Australian politics is a long way from the MAGA world," adds Bisley, of La Trobe University. "Australia's welfare-state instincts are just not particularly aligned to the free-market capitalism of the U.S."

It's friction that raises the question of whether Australia can even enforce social media age restrictions. While potential $31 million fines may seem eye-watering, that's the top penalty for systemic breaches (rather than per offense, day, or child) and mere pocket change to someone like Meta's Zuckerberg or X's Musk, the world's richest man, worth hundreds of billions and with an ideological antipathy to what he perceives as curbs on free speech.
(X failed to respond to repeated requests for comment for this story.) Asked whether social media platforms could be banned outright for noncompliance with the new legislation, Australia's Communications Minister Michelle Rowland replies, "That's not a feature of the legislation."

In the final analysis, it might not matter. For Mat and Kelly O'Brien, the social media age restriction at least takes the issue out of parents' hands, just like the rules for driving or drinking alcohol. Asked whether Charlotte would still be alive if the legislation had been in place last September, Kelly has no doubt. "Absolutely," she says, "1,000%." And while it might be too late to save Charlotte, they're hopeful Albanese's stand means other families might be spared similar heartache. "I feel like lesser men would have crumbled," Kelly says. "But he stood up to Big Tech and the naysayers. I'm very grateful and proud."

If you or someone you know may be experiencing a mental-health crisis or contemplating suicide, call or text 988. In emergencies, call 911, or seek care from a local hospital or mental health provider.
  • TIME.COM
    Amazon Makes Last-Minute Bid to Buy TikTok as U.S. Ban Nears
WASHINGTON – Amazon has put in a bid to purchase TikTok, a Trump administration official said Wednesday, in an eleventh-hour pitch as a U.S. ban on the platform is set to go into effect Saturday.

The official, who was not authorized to comment publicly and spoke on the condition of anonymity, said the Amazon offer was made in a letter to Vice President J.D. Vance and Commerce Secretary Howard Lutnick. The New York Times first reported on the bid.

President Donald Trump on Inauguration Day gave the platform a reprieve, barreling past a law that had been upheld unanimously by the Supreme Court, which said the ban was necessary for national security.

Under the law, TikTok's Chinese-owned parent company ByteDance is required to sell the platform to an approved buyer or take it offline in the United States. Trump has suggested he could further extend the pause on the ban, but he has also said he expects a deal to be forged by Saturday.

Amazon declined to comment. TikTok did not immediately respond to a request for comment.

The existence of an Amazon bid surfaced as Trump was scheduled on Wednesday to meet with senior officials to discuss the coming deadline for a TikTok sale.

Although it's unclear if ByteDance plans to sell TikTok, several possible bidders have come forward in the past few months. Among the possible investors are the software company Oracle and the investment firm Blackstone. Oracle announced in 2020 that it had a 12.5% stake in TikTok Global after securing its business as the app's cloud technology provider.

In January, the artificial intelligence startup Perplexity AI presented ByteDance with a merger proposal that would combine Perplexity's business with TikTok's U.S. operation. Last month, the company outlined its approach to rebuilding TikTok in a blog post, arguing that it is "singularly positioned to rebuild the TikTok algorithm" without creating a monopoly.

"Any acquisition by a consortium of investors could in effect keep ByteDance in control of the algorithm, while any acquisition by a competitor would likely create a monopoly in the short form video and information space," Perplexity said in its post. The company said it would remake the TikTok algorithm and ensure that infrastructure would be developed and maintained in American data centers with American oversight, ensuring alignment with domestic privacy standards and regulations.

Other potential bidders include a consortium organized by billionaire businessman Frank McCourt, which recently recruited Reddit co-founder Alexis Ohanian as a strategic adviser. Investors in the consortium say they've offered ByteDance $20 billion in cash for TikTok's U.S. platform. Jesse Tinsley, the founder of the payroll firm Employer.com, says he too has organized a consortium and is offering ByteDance more than $30 billion for the platform. Wyoming small-business owner Reid Rasner has also announced that he offered ByteDance roughly $47.5 billion.

Both the FBI and the Federal Communications Commission have warned that ByteDance could share user data, such as browsing history, location, and biometric identifiers, with China's authoritarian government. TikTok said it has never done that and would not do so if asked.
The U.S. government has not provided evidence of that happening.

Trump has millions of followers on TikTok and has credited the trendsetting platform with helping him gain traction among young voters. During his first term, he took a more skeptical view of TikTok and issued executive orders banning dealings with ByteDance as well as the owners of the Chinese messaging app WeChat.

Parvini reported from Los Angeles.
  • TIME.COM
Inside Amazon's Race to Build the AI Industry's Biggest Datacenters
Rami Sinno is crouched beside a filing cabinet, wrestling a beach-ball-sized disc out of a box, when a dull thump echoes around his laboratory.

"I just dropped tens of thousands of dollars' worth of material," he says with a laugh.

Straightening up, Sinno reveals the goods: a golden silicon wafer, which glitters in the fluorescent light of the lab. This circular platter is divided into some 100 rectangular tiles, each of which contains billions of microscopic electrical switches. These are the brains of Amazon's most advanced chip yet: the Trainium 2, announced in December.

For years, artificial intelligence firms have been dependent on one company, Nvidia, to design the cutting-edge chips required to train the world's most powerful AI models. But as the AI race heats up, cloud giants like Amazon and Google have accelerated their in-house efforts to design their own chips, in pursuit of market share in the rapidly growing cloud computing industry, which was valued at $900 billion at the beginning of 2025.

This unassuming Austin, Texas, laboratory is where Amazon is mounting its bid for semiconductor supremacy. Sinno is a key player. He's the director of engineering at Annapurna Labs, the chip-design subsidiary of Amazon's cloud computing arm, Amazon Web Services (AWS). After donning ear protection and swiping his card to enter a secure room, Sinno proudly displays a set of finished Trainium 2s, which he helped design, operating the way they normally would in a datacenter. He must shout to be heard over the cacophony of whirring fans that whisk hot air, warmed by these chips' insatiable demand for energy, into the building's air conditioning system. Each chip can fit easily into the palm of Sinno's hand, but the computational infrastructure that surrounds them (motherboards, memory, data cables, fans, heatsinks, transistors, power supplies) means this rack of just 64 chips towers over him, drowning out his voice.

Large as this unit may be, it's only a miniaturized simulacrum of the chips' natural habitat. Soon thousands of these fridge-sized supercomputers will be wheeled into several undisclosed locations in the U.S. and connected together to form Project Rainier: one of the largest datacenter clusters ever built anywhere in the world, named after the giant mountain that looms over Amazon's Seattle headquarters.

Project Rainier is Amazon's answer to OpenAI and Microsoft's $100 billion Stargate project, announced by President Trump at the White House in January. Meta and Google are also currently building similar so-called hyperscaler datacenters, costing tens of billions of dollars apiece, to train their next generation of powerful AI models. Big tech companies have spent the last decade amassing huge piles of cash; now they're all spending it in a race to build the gargantuan physical infrastructure necessary to create AI systems that, they believe, will fundamentally change the world. Computational infrastructure of this scale has never been seen before in human history.

The precise number of chips involved in Project Rainier, the total cost of its datacenters, and their locations are all closely held secrets. (Although Amazon won't comment on the cost of Rainier by itself, the company has indicated it expects to invest some $100 billion in 2025, with the majority going toward AWS.) The sense of competition is fierce. Amazon claims the finished Project Rainier will be the world's largest AI compute cluster: bigger, the implication is, than even Stargate.
Employees here resort to fighting talk in response to questions about the challenge from the likes of OpenAI. "Stargate is easy to announce," says Gadi Hutt, Annapurna's director of product. "Let's see it implemented first."

Amazon is building Project Rainier specifically for one client: the AI company Anthropic, which has agreed to a long lease on the massive datacenters. (How long? That's classified, too.) There, on hundreds of thousands of Trainium 2 chips, Anthropic plans to train the successors to its popular Claude family of AI models. The chips inside Rainier will collectively be five times more powerful than the systems used to train the best of those models. "It's way, way, way bigger," Tom Brown, an Anthropic co-founder, tells TIME.

Nobody knows what the results of that huge jump in computational firepower will be. Anthropic CEO Dario Amodei has publicly predicted that "powerful AI" (the term he prefers over "artificial general intelligence," meaning a technology that can perform most tasks better and more quickly than human experts) could arrive as early as 2026. That means Anthropic believes there's a strong possibility that Project Rainier, or one of its competitors, will be the place where AGI is birthed.

The flywheel effect

Anthropic isn't just a customer of Amazon; it's also partially owned by the tech giant. Amazon has invested $8 billion in Anthropic for a minority stake in the company. Much of that money, in a weirdly circular way, will end up being spent on AWS datacenter rental costs. This strange relationship reveals an interesting facet of the forces driving the AI industry: Amazon is essentially using Anthropic as a proof of concept for its AI datacenter business. It's a similar dynamic to Microsoft's relationship with OpenAI and Google's relationship with its DeepMind subsidiary.

"Having a frontier lab on your cloud is a way to make your cloud better," says Brown, the Anthropic co-founder who manages the company's relationship with Amazon. He compares it to AWS's partnership with Netflix: in the early 2010s, the streamer was one of the first big AWS customers. Because of the huge infrastructural challenge of delivering fast video to users all over the world, "it meant that AWS got all the feedback that they needed in order to make all of the different systems work at that scale," Brown says. "They paved the way for the whole cloud industry."

All cloud providers are now trying to replicate that pattern in the AI era, Brown says. "They want someone who will go through the jungle and use a machete to chop a path, because nobody has been down that path before. But once you do it, there's a nice path, and everyone can follow you."

By investing in Anthropic, which then spends most of that money on AWS, Amazon creates what it likes to call a flywheel: a self-reinforcing process that helps it build more advanced chips and datacenters, drives down the cost of the compute required to run AI systems, and shows other companies the benefits of AI, which in turn results in more customers for AWS in the long run. Startups like OpenAI and Anthropic get the glory, but the real winners are the big tech companies who run the world's major cloud platforms.

To be sure, Amazon is still heavily reliant on Nvidia chips. Meanwhile, Google's custom chips, known as TPUs, are considered by many in the industry to be superior to Amazon's. And Amazon isn't the only big tech company with a stake in Anthropic. Google has also invested some $3 billion for a 14% stake.
Anthropic uses both Google and Amazon clouds in a bid to be reliant on neither. Despite all this, Project Rainier and the Trainium 2 chips that will fill its datacenters are the culmination of Amazon's effort to accelerate its flywheel into pole position.

Trainium 2 chips, Sinno says, were designed with the help of intense feedback from Anthropic, which shared details with AWS about how its software interacted with Trainium 1 hardware and made suggestions for how the next generation of chips could be improved. Such tight collaboration isn't typical for AWS clients, Sinno says, but is necessary for Anthropic to compete in the cutthroat world of frontier AI. The capabilities of a model are essentially correlated with the amount of compute spent to train and run it, so the more compute you can get for your buck, the better your final AI will be. "At the scale that they're running, each point of a percent improvement in performance is of huge value," Sinno says of Anthropic. "The better they can utilize the infrastructure, the better the return on investment for them is, as a customer."

The more sophisticated Amazon's in-house chips become, the less it will need to rely on industry leader Nvidia, demand for whose chips far outstrips supply, meaning Nvidia can pick and choose its customers while charging well above production costs. But there's another dynamic at play, too, one that Annapurna employees hope might give Amazon a long-term structural advantage. Nvidia sells physical chips (known as GPUs) directly to customers, meaning that each GPU has to be optimized to run on its own. Amazon, meanwhile, doesn't sell its Trainium chips. It simply sells access to them, running in AWS-operated datacenters. This means Amazon can find efficiencies that Nvidia would find difficult to replicate. "We have many more degrees of freedom," Hutt says.

Back in the lab, Sinno returns the silicon wafer to its box and moves to another part of the room, gesturing at the various stages of the design process for chips that might, potentially very soon, help summon powerful new AIs into existence. He is excitedly reeling off statistics about the Trainium 3, expected later this year, which he says will be twice the speed and 40% more energy-efficient than its predecessor. Neural networks running on Trainium 2s assisted with the team's design of the upcoming chip, he says. That's an indication of how AI is already accelerating the speed of its own development, in a process that is getting faster and faster. "It's a flywheel," Sinno says. "Absolutely."
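As a rough worked example of what those Trainium 3 claims would imply if taken at face value: only the 2x speed and 40% efficiency ratios come from Sinno's figures above; the baseline units and workload size below are invented purely for illustration.

```python
# Back-of-the-envelope: what "twice the speed and 40% more energy-efficient"
# means for a fixed training workload. Baseline numbers are normalized
# placeholders; only the 2.0x and 1.4x ratios come from the article.

baseline_throughput = 1.0   # work per chip per hour (Trainium 2, normalized)
baseline_efficiency = 1.0   # work per joule (Trainium 2, normalized)

t3_throughput = 2.0 * baseline_throughput   # "twice the speed"
t3_efficiency = 1.4 * baseline_efficiency   # "40% more energy-efficient"

workload = 1000.0  # arbitrary units of training work

time_ratio = (workload / t3_throughput) / (workload / baseline_throughput)
energy_ratio = (workload / t3_efficiency) / (workload / baseline_efficiency)

print(f"wall-clock time vs. Trainium 2: {time_ratio:.2f}x")  # 0.50x: half the time
print(f"energy used vs. Trainium 2:    {energy_ratio:.2f}x")  # ~0.71x: ~29% less energy
```

In other words, under these assumptions the same training job would finish in half the time while consuming roughly 29% less energy, the kind of per-generation gain that compounds across a datacenter the size of Project Rainier.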
  • TIME.COM
What the TikTok Ban Deadline Could Mean for the App's Future
Despite a last-minute save by President Donald Trump in January, the fate of TikTok remains murky as the deadline to save the app approaches. TikTok, which boasts more than 170 million users, has been under fire from U.S. legislators over concerns about data privacy and national security. In order to save the app, Trump gave the social media site until April 5 to divest and find a U.S.-based owner.

During his previous term, Trump was the driving force seeking to ban TikTok in the U.S. But the President has seemed favorable toward the app since his election win, citing it as part of why he secured support from young voters. "We have a lot of potential buyers," Trump said on Air Force One Sunday. "I'd like to see TikTok remain alive." Trump indicated that he would extend the deadline if a deal was not finalized. China has indicated that it would not support a forced sale of TikTok.

The latest tussle is just the most recent development in the long legal battle over keeping TikTok available in the U.S. Efforts to ban TikTok in Montana were blocked by a federal judge in 2023 after creators filed suit against the state. TikTok did not immediately respond to TIME's request for comment.

Here's what to know.

Who currently owns TikTok?

TikTok is owned by its Beijing-based parent company, ByteDance. Sixty percent of the company is owned by investors including Carlyle Group, General Atlantic, and Susquehanna International Group, according to TikTok's U.S. Data Security page. The remaining 40% is divided between ByteDance employees and the founder of ByteDance. TikTok is not available in mainland China and has headquarters in Los Angeles and Singapore.

But its parent company does have to comply with Chinese law because it operates other video-first platforms, such as Douyin, the Chinese equivalent of TikTok. The Chinese government therefore holds a "golden share" in one of ByteDance's subsidiaries, Douyin, owning about 1% of that app.

Why is the U.S. banning TikTok?

Congress passed a TikTok ban as part of a foreign-aid supplemental package, citing national security concerns. Lawmakers were particularly concerned that the company could share data with the Chinese government or interfere with users' algorithms in a way that would benefit the foreign government. Chinese national security laws require companies and organizations to cooperate with the country's national intelligence efforts.

"Congress is not acting to punish ByteDance, TikTok or any other individual company," Senate Commerce Committee Chairwoman Maria Cantwell said in April 2024. "Congress is acting to prevent foreign adversaries from conducting espionage, surveillance, maligned operations, harming vulnerable Americans, our servicemen and women, and our U.S. government personnel."

Already, the app is not permitted on any government-owned devices in the U.S. Other countries, including India, have banned TikTok for all users since 2020, while Australia and Canada similarly forbade TikTok from operating on any devices issued by the federal government.

What can Trump do to extend the deadline?

The current April 5 deadline was issued via executive order. Trump could possibly issue another such order to extend the deadline.

Can you still use TikTok if it is banned?

The last time the app was banned, U.S.-based users were unable to comment, share, or view any videos. "A law banning TikTok has been enacted in the U.S. Unfortunately, that means you can't use TikTok for now," a message on a gray background read during the one-day TikTok pause.
Even when the app was restored, TikTok was unavailable for download on the Apple and Google Play stores until mid-February. It is unclear whether another potential ban would function similarly to the previous one.
  • TIME.COM
How Those Studio Ghibli Memes Are a Sign of OpenAI's Trump-Era Shift
If you're wondering why social media is filled with Studio Ghibli-style memes all of a sudden, there are several answers to that question.

The most obvious one is that OpenAI dropped an update to ChatGPT on Tuesday that allows users to generate better images using the 4o version of the model. OpenAI has long offered image-generation tools, but this one felt like a significant evolution: users say it is far better than other AI image generators at accurately following text prompts, and that it makes much higher-fidelity images.

But that's not the only reason for the deluge of memes in the style of the Japanese animation house.

Alongside the ChatGPT update, OpenAI also relaxed several of its rules on the types of images users can generate with its AI tools, a change CEO Sam Altman said represents "a new high-water mark for us in allowing creative freedom." Among those changes: allowing users to generate images of adult public figures for the first time, and reducing the likelihood that ChatGPT would reject users' prompts, even if they risked being offensive.

"People are going to create some really amazing stuff and some stuff that may offend people," Altman said in a post on X. "What we'd like to aim for is that the tool doesn't create offensive stuff unless you want it to, in which case within reason it does."

Users quickly began making the most of the policy change, sharing Ghiblified images of 9/11, Adolf Hitler, and the murder of George Floyd. The official White House account on X even shared a Studio Ghibli-style image of an ICE officer detaining an alleged illegal immigrant.

In one sense, the pivot has been a long time coming. OpenAI began its decade-long life as a research lab that kept its tools under strict lock and key; when it did release early chatbots and image-generation models, they had strict content filters that aimed to prevent misuse. But for years it has been widening the accessibility of its tools in an approach it calls "iterative deployment." The release of ChatGPT in November 2022 was the most popular example of this strategy, which the company believes is necessary to help society adapt to the changes AI is bringing.

Still, in another sense, the change to OpenAI's model-behavior policies has a more recent proximate cause: the 2024 election of President Donald Trump, and the cultural shift that has accompanied the new administration.

Trump and his allies have been highly critical of what they see as the censorship of free speech online by large tech companies. Many conservatives have drawn parallels between the longstanding practice of content moderation on social media and the more recent strategy, by AI companies including OpenAI, of limiting the kinds of content that generative AI models are allowed to create. "ChatGPT has woke programmed into its bones," Elon Musk posted on X in December.

Like most big companies, OpenAI is trying hard to build ties with the Trump White House. The company scored an early win when, on the second day of his presidency, Trump stood beside Altman and announced a large investment in the datacenters that OpenAI believes will be necessary to train the next generation of AI systems. But OpenAI is still in a delicate position. Musk, Trump's billionaire backer and advisor, has a famous dislike of Altman. The pair cofounded OpenAI together back in 2015, but after a failed attempt to become CEO, Musk quit in a huff. He is now suing Altman and OpenAI, claiming that they reneged on OpenAI's founding mission to develop AI as a nonprofit.
With Musk operating from the White House and also leading a rival AI company, xAI, it is especially vital for OpenAI's business prospects to cultivate positive ties with the Trump administration where possible.

Earlier in March, OpenAI submitted a document laying out recommendations for the new administration's tech policy. It was a shift in tone from the company's earlier missives. "OpenAI's freedom-focused policy proposals, taken together, can strengthen America's lead on AI and in so doing, unlock economic growth, lock in American competitiveness, and protect our national security," the document said. It called on the Trump administration to exempt OpenAI, and the rest of the private sector, from the 781 state-level laws proposing to regulate AI, which it said risked bogging down innovation. In return, OpenAI said, industry could provide the U.S. government with learnings and access from AI companies, and would ensure the U.S. retained its leadership position ahead of China in the AI race.

Alongside the release of this week's ChatGPT update, OpenAI doubled down on what it said were policies intended to give users more freedom, within bounds, to create whatever they want with its AI tools. "We're shifting from blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm," Joanne Jang, OpenAI's head of model behavior, said in a blog post. "The goal is to embrace humility: recognizing how much we don't know, and positioning ourselves to adapt as we learn."

Jang gave several examples of things that were previously disallowed, but to which OpenAI was now opening its doors. Its tools could now be used to generate images of public figures, Jang wrote, although OpenAI would create an opt-out list allowing people to decide for themselves whether they wanted ChatGPT to be able to generate images of them. Children, she wrote, would be subject to stronger protections and tighter guardrails.

"Offensive" content, Jang wrote (using quotation marks), would also receive a rethink under OpenAI's new policies. Uses that might be seen as offensive by some, but which didn't cause real-world harm, would be increasingly permitted. "Without clear guidelines, the model previously refused requests like 'make this person's eyes look more Asian' or 'make this person heavier,' unintentionally implying these attributes were inherently offensive," Jang wrote, suggesting that such prompts would be allowed in the future.

OpenAI's tools previously flat-out rejected attempts by users to generate hate symbols like swastikas. In the blog post, Jang said the company recognized, however, that these symbols could also sometimes appear in genuinely educational or cultural contexts. The company would move to a strategy of applying "technical methods," she wrote, to better identify and refuse harmful misuse without banning such symbols completely. AI lab employees, she wrote, should not be the arbiters of what people should and shouldn't be allowed to create.
  • TIME.COM
How This Tool Could Decode AI's Inner Mysteries
The scientists didn't have high expectations when they asked their AI model to complete the poem. "He saw a carrot and had to grab it," they prompted the model. "His hunger was like a starving rabbit," it replied.

The rhyming couplet wasn't going to win any poetry awards. But when the scientists at AI company Anthropic inspected the records of the model's neural network, they were surprised by what they found. They had expected to see the model, called Claude, picking its words one by one, and seeking a rhyming word ("rabbit") only when it got to the end of the line. Instead, by using a new technique that allowed them to peer into the inner workings of a language model, they observed Claude planning ahead. As early as the break between the two lines, it had begun thinking about words that would rhyme with "grab it," and planned its next sentence with the word "rabbit" in mind.

The discovery ran contrary to the conventional wisdom, in at least some quarters, that AI models are merely sophisticated autocomplete machines that only predict the next word in a sequence. It raised two questions: How much further might these models be capable of planning ahead? And what else might be going on inside these mysterious synthetic brains, which we lack the tools to see?

The finding was one of several announced on Thursday in two new papers by Anthropic, which reveal in more depth than ever before how large language models (LLMs) think. Today's AI tools are categorically different from other computer programs for one big reason: they are grown, rather than coded by hand. Peer inside the neural networks that power them, and all you will see is a bunch of very complicated numbers being multiplied together, again and again. This internal complexity means that even the machine-learning engineers who grow these AIs don't really know how they spin poems, write recipes, or tell you where to take your next holiday. They just do.
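To picture why even the engineers are in the dark, it helps to see how little a network's raw internals reveal. Below is a toy two-layer network in Python; the sizes and weights are hypothetical stand-ins, and a production LLM has billions of such numbers across far more layers, but the principle is the same: the "program" is nothing but arrays of learned numbers being multiplied.

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(16, 8))   # learned weights: opaque numbers, not code
    W2 = rng.normal(size=(8, 4))

    def forward(x):
        hidden = np.maximum(W1.T @ x, 0)   # multiply, apply a nonlinearity...
        return W2.T @ hidden               # ...then multiply again

    print(forward(rng.normal(size=16)))    # useful output, inscrutable mechanism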
But recently, scientists at Anthropic and other groups have been making progress in a new field called mechanistic interpretability: building tools to read those numbers and turn them into explanations for how AI works on the inside. "What are the mechanisms that these models use to provide answers?" says Chris Olah, an Anthropic cofounder, of the questions driving his research. "What are the algorithms that are embedded in these models?" Answer those questions, Olah says, and AI companies might finally be able to solve the thorny problem of ensuring AI systems always follow human rules.

The results announced on Thursday by Olah's team are some of the clearest findings yet in this new field of scientific inquiry, which might best be described as a kind of neuroscience for AI.

A new microscope for looking inside LLMs

In earlier research published last year, Anthropic researchers identified clusters of artificial neurons within neural networks. They called them "features," and found that they corresponded to different concepts. To illustrate this finding, Anthropic artificially boosted a feature inside Claude corresponding to the Golden Gate Bridge, which led the model to insert mentions of the bridge, no matter how irrelevant, into its answers until the boost was reversed.

In the new research published Thursday, the researchers go a step further, tracing how groups of multiple features are connected together inside a neural network to form what they call "circuits": essentially algorithms for carrying out different tasks.

To do this, they developed a tool for looking inside the neural network, almost like the way scientists can image the brain of a person to see which parts light up when thinking about different things. The new tool allowed the researchers to essentially roll back the tape and see, in perfect HD, which neurons, features, and circuits were active inside Claude's neural network at any given step. (Unlike a biological brain scan, which gives only the fuzziest picture of what individual neurons are doing, digital neural networks offer researchers an unprecedented level of transparency; every computational step is laid bare, waiting to be dissected.)

When the Anthropic researchers zoomed back to the beginning of the sentence "His hunger was like a starving rabbit," they saw the model immediately activate a feature for identifying words that rhyme with "grab it." They identified the feature's purpose by artificially suppressing it; when they did this and re-ran the prompt, the model instead ended the sentence with the word "jaguar." When they kept the rhyming feature but suppressed the word "rabbit" instead, the model ended the sentence with the feature's next top choice: "habit."
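One way to picture both kinds of intervention, the Golden Gate boost and the rabbit suppression, is as vector arithmetic on the model's internal activations. The sketch below assumes, purely for illustration, that a concept corresponds to a single direction in activation space; the numbers and the feature direction are hypothetical, and Anthropic's real tooling operates on learned dictionaries of features inside Claude rather than hand-picked axes.

    import numpy as np

    activation = np.array([0.2, 1.5, -0.3, 0.8])    # toy residual activations
    rabbit_dir = np.array([0.0, 1.0, 0.0, 0.0])     # hypothetical "rabbit" direction

    def set_feature(act, direction, strength):
        """Remove the component along `direction`, then re-add it at `strength`."""
        d = direction / np.linalg.norm(direction)
        return act - (act @ d) * d + strength * d

    suppressed = set_feature(activation, rabbit_dir, 0.0)   # model falls back to "habit"
    boosted = set_feature(activation, rabbit_dir, 10.0)     # Golden Gate-style fixation
    print(suppressed, boosted)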
Anthropic compares this tool to a microscope for AI. But Olah, who led the research, hopes that one day he can widen the aperture of its lens to encompass not just tiny circuits within an AI model, but the entire scope of its computation. His ultimate goal is to develop a tool that can provide a "holistic account" of the algorithms embedded within these models. "I think there's a variety of questions that will increasingly be of societal importance, that this could speak to, if we could succeed," he says. For example: Are these models safe? Can we trust them in certain high-stakes situations? And when are they lying?

Universal language

The Anthropic research also found evidence to support the theory that language models think in a non-linguistic statistical space that is shared between languages. Anthropic scientists tested this by asking Claude for the opposite of "small" in several different languages. Using their new tool, they analyzed the features that activated inside Claude when it answered each of those prompts in English, French, and Chinese. They found features corresponding to the concepts of smallness, largeness, and oppositeness, which activated no matter which language the question was posed in. Additional features would also activate corresponding to the language of the question, telling the model which language to answer in.

This isn't an entirely new finding: AI researchers have conjectured for years that language models think in a statistical space outside of language, and earlier interpretability work has borne this out with evidence. But Anthropic's paper is the most detailed account yet of exactly how this phenomenon happens inside a model, Olah says.

The finding came with a tantalizing prospect for safety research. As models get larger, the team found, they tend to become more capable of abstracting ideas beyond language and into this non-linguistic space. That could be useful in a safety context, because a model able to form an abstract concept of, say, harmful requests is more likely to be able to refuse them in all contexts, compared to a model that only recognizes specific examples of harmful requests in a single language.

This could be good news for speakers of so-called low-resource languages, which are not widely represented in the internet data used to train AI models. Today's large language models often perform more poorly in those languages than in, say, English. But Anthropic's finding raises the prospect that LLMs may one day not need unattainably vast quantities of linguistic data to perform capably and safely in these languages, so long as there is a critical mass of data big enough to map onto a model's internal non-linguistic concepts. However, speakers of those languages will still have to contend with how those very concepts have been shaped by the dominance of languages like English, and the cultures that speak them.

Toward a more interpretable future

Despite these advances in AI interpretability, the field is still in its infancy, and significant challenges remain. Anthropic acknowledges that even on short, simple prompts, "our method only captures a fraction of the total computation expended by Claude"; that is, there is much going on inside its neural network into which researchers still have zero visibility. "It currently takes a few hours of human effort to understand the circuits we see, even on prompts with only tens of words," the company adds. Much more work will be needed to overcome those limitations.

But if researchers can achieve that, the rewards might be vast. The discourse around AI today is very polarized, Olah says. At one extreme are people who believe AI models "understand" just as people do; at the other are people who see them as just fancy autocomplete tools. "I think part of what's going on here is, people don't really have productive language for talking about these problems," Olah says. "Fundamentally what they want to ask, I think, is questions of mechanism. How do these models accomplish these behaviors? They don't really have a way to talk about that. But ideally they would be talking about mechanism, and I think that interpretability is giving us the ability to make much more nuanced, specific claims about what exactly is going on inside these models. I hope that that can reduce the polarization on these questions."
  • TIME.COM
    What Is Signal, the Messaging App Used by Trump Officials, and Is It Safe?
The Trump administration is facing heavy blowback for using Signal, a messaging app, to discuss sensitive military plans. On March 24, officials' use of the app was revealed after The Atlantic editor Jeffrey Goldberg published a story titled "The Trump Administration Accidentally Texted Me Its War Plans," in which Secretary of Defense Pete Hegseth, among others, discussed upcoming military strikes on Yemen. The U.S. government previously discouraged federal employees from using the app for official business. Some experts have speculated that sharing sensitive national-security details over Signal could be illegal, and Democratic lawmakers have demanded an investigation. "If our nation's military secrets are being peddled around over unsecure text chains, we need to know that at once," New York Democrat Chuck Schumer said on the Senate floor.

Signal is one of the most secure and private messaging platforms available for general public use. But cybersecurity experts argue that the app should not have been used for this level of sensitive communication. "Signal is a very robust app: a lot of cybersecurity professionals use it for our communications that we want to protect," says Michael Daniel, president and CEO of the Cyber Threat Alliance and a cybersecurity coordinator under President Obama. "But it's not as secure as government communications channels. And the use of these kinds of channels increases the risk that something is going to go wrong."

Signal's Strengths

Signal was launched in 2014 with the goal of creating a privacy-preserving messaging platform in an age of increasing mass surveillance. Signal conversations are protected by end-to-end encryption, a technique that makes it extremely hard for a third party to intercept or decipher private messages. While other messaging tools may collect sensitive personal data, Signal prides itself on securely protecting information such as messaging contacts, frequency, and duration.

The app has other privacy features, such as automatically disappearing messages after a set period and preventing screenshots of conversations. Signal data is stored locally on users' devices, not the company's servers. "Our goal is that everyone in the world can pick up their device, and without thinking twice about it, or even having an ideological commitment to privacy, use Signal to communicate with anyone they want," Signal President Meredith Whittaker told TIME in 2022.

Read More: Signal's President Meredith Whittaker Shares What's Next for the Private Messaging App

Over the last few years, Signal has been used by dissidents and protestors around the world who want to keep their conversations safe from political enemies or law enforcement. In Ukraine, the U.S. Embassy in Kyiv described Signal as critical to its work for its ability to ensure secure, rapid, and easily accessible communications. The app now has 70 million users worldwide, according to the tracking site Business of Apps.
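In practice, end-to-end encryption means a message is scrambled on the sender's device with keys such that only the intended recipient can unscramble it; any server relaying the message sees only ciphertext. A minimal Python sketch of that idea, using the PyNaCl library, appears below. It illustrates only the basic public-key primitive; Signal's actual protocol, the double ratchet, layers forward secrecy and other protections on top.

    from nacl.public import PrivateKey, Box

    alice_key = PrivateKey.generate()   # Alice's secret key never leaves her device
    bob_key = PrivateKey.generate()     # likewise for Bob

    # Alice encrypts with her secret key and Bob's public key...
    ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

    # ...and only Bob, holding his secret key, can decrypt it. A relay server
    # handling `ciphertext` sees nothing but random-looking bytes.
    plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
    assert plaintext == b"meet at noon"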
Government Use

The use of Signal for government purposes is more contentious. In 2021, the Pentagon scolded a former official for using Signal, saying that the app did not comply with the Freedom of Information Act, under which the government has legal obligations to maintain federal records. Goldberg, however, reported this week that the Trump officials' Signal chat was set to automatically delete messages after a period of time.

Sam Vinograd, who served in former President Barack Obama's Homeland Security Department, told CBS that sharing sensitive security details over Signal could violate the Espionage Act as well. Top intelligence officials testified this week that no classified information was shared over the group chat. CIA Director John Ratcliffe said that Signal was a permissible work-use application for the CIA.

Last week, a Pentagon advisory cautioned military personnel against using Signal due to Russian hackers targeting the app.

The Cyber Threat Alliance's Daniel says he was surprised that top officials were using Signal, given that they have access to government-specific channels that are more secure. When discussing sensitive information, officials are typically required to do so in designated, secure areas called Sensitive Compartmented Information Facilities (SCIFs), or to use SIPRNet, a secure network used by the Defense and State Departments.

"These are very senior officials who have a lot of options. They have people whose entire job is to make sure that they're able to communicate at all times," Daniel says. "We've had that for decades now, and those procedures are really well honed."

Daniel contends that government tools could have prevented what went wrong in this instance: the human error of an outside party mistakenly being added to a message chain. He says that government channels have a much higher level of authentication to ensure that members of communication channels are supposed to have access.

Dave Chronister, the CEO of the cybersecurity company Parameter Security, says that the government's bespoke communications channels also guard against other kinds of interlopers, such as hackers attempting to use phishing or malware techniques to learn information. "If you're on a cell phone, I don't know who could be looking over my shoulder to see what I'm typing, not to mention I don't know what else is on that mobile device," he says.

Chronister adds that officials' use of Signal, as opposed to internal channels, also makes it harder for the government to identify and contain breaches once they've happened. "We could have data out there we didn't know was compromised," he says. "If top cabinet officials are using Signal, I'm wondering how much is being done on a daily basis, and I think there's going to be a lot more fallout from this."

A representative for Signal did not immediately respond to a request for comment.
  • TIME.COM
Why 23andMe's Genetic Data Could Be a Gold Mine for AI Companies
The genetic testing company 23andMe, which holds the genetic data of 15 million people, declared bankruptcy on Sunday night after years of financial struggles. This means that all of that extremely personal user data could be up for sale, and the vast trove of genetic data could draw interest from AI companies looking for material to train their models on, experts say.

"Data is the new oil, and this is very high-quality oil," says Subodha Kumar, a professor at the Fox School of Business at Temple University. "With the development of more and more complicated and rigorous algorithms, this is a gold mine for many companies."

But any AI-related company attempting to acquire 23andMe would run significant reputational risks. Many people are horrified by the thought that they surrendered their genetic data to trace their ancestry, only for it to now be potentially used in ways they never consented to.

"Anybody touching this data is running a risk," says Kumar, who is the director of Fox's Center for Business Analytics and Disruptive Technologies. "But at the same time, not touching it, they might be losing on something big as well."

Training LLMs

Companies like OpenAI and Google have poured time and resources into making an impact on the medical field, and 23andMe's data trove may attract interest from large AI firms with the financial means to acquire it. 23andMe was valued at around $48 million this week, down from a peak of $6 billion in 2021.

These companies are striving to build the most powerful general-purpose models possible, which are trained on vast amounts of granular data. But researchers have argued that high-quality data sources are drying up, which makes new and robust information sources all the more coveted. A TechCrunch survey of venture capitalists earlier this year found that more than half of respondents cited the quality or rarity of their proprietary data as the edge that AI startups have over their competition.

"I think it could be a really valuable data set for some of the big AI companies because it represents this ground truth data of actual genetic data," Anna Kazlauskas, CEO of Open Data Labs and the creator of Vana, a network for user-owned data, says of 23andMe. "Some of the human errors that might exist in bio publications, you could avoid."

Kumar says that 23andMe's data could be especially valuable to companies in their push for agentic AI: AI systems that can perform tasks without the involvement of humans, whether in medical research or company decision-making. "The whole goal of agentic AI models has been a modular approach: you crack the smaller pieces of the problem and then you put them together," he says.

Representatives for Google and OpenAI did not immediately respond to requests for comment.

Industry-Based Value

23andMe's data could also be valuable across different industries that use AI to sort through vast amounts of data. First and foremost: medical research. 23andMe already had agreements in place with pharmaceutical companies such as GlaxoSmithKline, which tapped into the company's data sets in the hopes of developing new treatments for disease. Kumar says that at Temple, he and colleagues are working on a project to create personalized treatments for ovarian cancer patients, and have found that genetic data can be "very, very powerful in understanding structures that we were not able to understand."

However, Alex Zhavoronkov, founder and CEO of Insilico Medicine, contends that 23andMe's data may not be as valuable as some think, especially in relation to drug discovery.
"Most low hanging fruits have already been picked up and there is significant data in the public domain published together with major academic papers, he wrote in an email to TIME.But companies in many other industries will likely be interested, too. This is an abnormally large and nuanced data set: This amount of genetic data, especially that which comes with personal health and medical records, is rarely publicly accessible, says Anna Kazlauskas, CEO of Open Data Labs and the creator of Vana, a network for user-owned data. All of that contextual data makes it really valuableand hard data to get, she says.Potentially interested industries include insurance companies, who could use the data to identify people with greater health risks, in order to up their premiums. Financial institutions could track the relationship between genetic markers and spending patterns in the process of assessing loans. And e-commerce companies could use the data to tailor ads to people with specific medical conditions.Ethical and Privacy ConcernsBut companies also face significant reputational risks in getting involved. 23andMe suffered a hack in 2023 which exposed the personal data of millions of users, severely hurting the companys reputation. Bidders who come from other industries may have even less data protection than 23andMe did, Kumar says. My worry is that some of the companies are not used to having this kind of data, and they may not have enough governance in place, he says.This is especially dangerous because genetic information is inherently sensitive and cannot be altered once compromised. The genetic information of family members of people who willingly gave their data to the company are also at risk. And given AIs well-known biases, the misuse of such data could lead to discrimination in areas like hiring, insurance and loans. On Friday, California Attorney General Rob Bonta released an urgent alert to 23andMe customers advising them to ask the company to delete their data and destroy their genetic samples under a California privacy law.Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation, worries that 23andMes genetic data might exist in a state of permanent flux on the market. Once you have sold the data, there are no limits to how many times it may be resold, she says. This could result in genetic data falling into the hands of organizations that may not prioritize ethical considerations or have robust data protection measures in place.Insilico Medicines Zhavoronkov says all of these fears mean that potential AI-related bidders will be dissuaded from trying to purchase 23andMe and its data. Their dataset is actually toxic, he says. Whoever buys it and trains on it will get negative publicity, and the acquirer will be possibly investigated or sued."Regardless of what ultimately happens, Kazlauskas says she is at least thankful that this conundrum has opened up larger conversations about data sovereignty. We should probably, in the future, want to avoid this kind of situation where you decide you want to do a genetic test, and then five years later, this company is struggling financially, and that now puts your genetic data at risk of being sold to the highest bidder, she says. In this AI era, that data is super valuable.
  • TIME.COM
    23andMe Filed for Bankruptcy. What Does That Mean For Your Account?
The genetic testing and information company 23andMe announced on March 23 that it has filed for bankruptcy, after years of financial struggles and data-privacy concerns.

Filing for bankruptcy will allow the company to "facilitate a sale process to maximize the value of its business," 23andMe said in a press release. The news also comes amid management changes; according to the press release, Chief Executive Officer Anne Wojcicki is stepping down from her role, effective immediately, but will continue to serve as a board member. The company's board selected Chief Financial and Accounting Officer Joe Selsavage to serve as interim CEO.

In the press release, 23andMe said it intends to continue operating its business in the ordinary course throughout the sale process, and that there are no changes to the way the company stores, manages, or protects customer data.

"We are committed to continuing to safeguard customer data and being transparent about the management of user data going forward, and data privacy will be an important consideration in any potential transaction," Mark Jensen, chair and member of the Special Committee of the Board of Directors, said in the press release.

Still, some officials are urging customers to consider deleting their data. Just a few days before the bankruptcy announcement, on March 21, California Attorney General Rob Bonta issued a consumer alert advising 23andMe customers to consider deleting their data from the company's website. "Given 23andMe's reported financial distress, I remind Californians to consider invoking their rights and directing 23andMe to delete their data and destroy any samples of genetic material held by the company," Bonta said in a press release.

Some technology experts also encouraged 23andMe users to delete their data. Meredith Whittaker, the president of the messaging app Signal, said in a post on X: "It's not just you. If anyone in your FAMILY gave their DNA to 23&me, for all of your sakes, close your/their account now. This won't solve the issue, but they will (they claim) delete some of your data."

In October 2024, NPR reported on customers' concerns over what could happen to their private data amid the company's financial challenges. A 23andMe spokesperson told NPR that the company was committed to privacy, but wouldn't answer questions about what the company might do with customer data. Legal experts said that there are few federal protections for customers, and worried that the sensitive data could potentially be sold off or even accessed by law enforcement, NPR reported.

The California Attorney General's Office outlined in its March 21 press release the steps customers need to take to delete their genetic data from 23andMe:

1. Log into your account and click on "Settings."
2. Scroll to the bottom of the page, to a section called "23andMe Data," and click "View."
3. Download your data there if you want a copy.
4. Scroll to the "Delete Data" section and click "Permanently Delete Data."
5. You'll then receive an email from 23andMe; follow the link in the email to confirm your request to delete your data.

If you had previously allowed 23andMe to store a saliva sample and DNA, you can change that preference from the Settings page of your account, under "Preferences."
If you had previously allowed 23andMe and third-party researchers to use your genetic data and sample for research purposes, you can also revoke that consent from the Settings page, under "Research and Product Consents."

In addition to years of financial challenges, 23andMe dealt with the fallout from a 2023 data breach that affected almost 7 million customers.
  • TIME.COM
    What Encrypted Messaging Means for Government Transparency
As a devastating wildfire burned through a Maui town, killing more than 100 people, emergency-management employees traded dozens of text messages, creating a record that would later help investigators piece together the government's response to the 2023 tragedy. One text exchange hinted officials might also be using a second, untraceable messaging service. "That's what Signal was supposed to be for," then-Maui Emergency Management Agency Administrator Herman Andaya texted a colleague.

Signal is one of many end-to-end encrypted messaging apps that include message auto-delete functions. While such apps promise increased security and privacy, they often skirt open-records laws meant to increase transparency around, and public awareness of, government decision-making. Without special archiving software, the messages frequently aren't returned under public-information requests.

An Associated Press review in all 50 states found accounts on encrypted platforms registered to cellphone numbers for over 1,100 government workers and elected officials. It's unclear if Maui officials actually used the app or simply considered it (a county spokesperson did not respond to questions), but the situation highlights a growing challenge: How can government entities use technological advancements for added security while staying on the right side of public-information laws?

How common is governmental use of encryption apps?

The AP found accounts for state, local and federal officials in nearly every state, including many legislators and their staff, but also staff for governors, state attorneys general, education departments and school-board members. The AP is not naming the officials because having an account is neither against the rules in most states, nor proof that they use the apps for government business. While many of those accounts were registered to government cellphone numbers, some were registered to personal numbers. The AP's list is likely incomplete because users can make accounts unsearchable.

Improper use of the apps has been reported over the past decade in places like Missouri, Oregon, Oklahoma, Maryland and elsewhere, almost always because of leaked messages.

What's the problem?

Public officials and private citizens are consistently warned about hacking and data leaks, but technologies designed to increase privacy often decrease government transparency. Apps like Signal, WhatsApp, Confide, Telegram and others use encryption to scramble messages so only the intended end user can read them, and the messages typically aren't stored on government servers. Some automatically delete messages, and some prevent users from screenshotting or sharing messages.

"The fundamental problem is that people do have a right to use encrypted apps for their personal communications, and have those on their personal devices. That's not against the law," said Matt Kelly, editor of Radical Compliance, a newsletter that focuses on corporate compliance and governance issues. "But how would an organization be able to distinguish how an employee is using it?"

Are there acceptable government uses of end-to-end encryption apps?

The U.S. Cybersecurity and Infrastructure Security Agency, or CISA, has recommended that highly valued targets, meaning senior officials who handle sensitive information, use encryption apps for confidential communications.
Those communications are not typically releasable under public-record laws. CISA leaders also say encrypted communications could be a useful security measure for the public, but they did not encourage government officials to use the apps to skirt public-information laws. Journalists, including many at the AP, often use encrypted messages when talking to sources or whistleblowers.

What are states doing?

While some cities and states are grappling with how to stay transparent, public-record laws aren't evolving as quickly as technology, said Smarsh general manager Lanika Mamac. The Portland, Oregon-based company helps governments and businesses archive digital communications.

"People are worried more about cybersecurity attacks. They're trying to make sure it's secure," Mamac said. "I think that they are really trying to figure out, 'How do I balance being secure and giving transparency?'" Mamac said Smarsh has seen an uptick in inquiries, mostly from local governments. But many others have done little to restrict the apps or clarify rules for their use.

In 2020, the New Mexico Child, Youth and Families Department's new division director told employees to use the app Signal for internal communications and to delete messages after 24 hours. A 2021 investigation into the possible violation of New Mexico's document-retention rules was followed by a court settlement with two whistleblowers and the division director's departure. But New Mexico still lacks regulations on using encrypted apps. The AP's review found at least three department or agency directors had Signal accounts as of December 2024.

In Michigan, State Police leaders were found in 2021 to be using Signal on state-issued cellphones. Michigan lawmakers responded by banning the use of encrypted messaging apps on state employees' work-issued devices if they hinder public-record requests. However, Michigan's law did not include penalties for violations, and monitoring the government-owned devices used by 48,000 executive-branch employees is a monumental task.

What's the solution?

The best remedy is stronger public-record laws, said David Cuillier, director of the Brechner Freedom of Information Project at the University of Florida. Most state laws already make clear that the content of a communication, not the method, is what makes something a public record, but many of those laws lack teeth, he said. "They should only be using apps if they are able to report the communications and archive them like any other public record," he said.

Generally, Cuillier said, there has been a decrease in government transparency over the past few decades. To reverse that, governments could create independent enforcement agencies, add punishments for violations, and build a transparent culture that supports technology, he said. "We used to be a beacon of light when it came to transparency. Now, we're not. We have lost our way," Cuillier said.

Boone reported from Boise, Idaho. Lauer reported from Philadelphia. Associated Press reporters at statehouses nationwide contributed to this report.
  • TIME.COM
    Cybersecurity Experts Are Sounding the Alarm on DOGE
Since January, Elon Musk's Department of Government Efficiency (DOGE) has carved up federal programs, removing positions related to hazardous-waste removal, veteran support and disease control, among others. While many people have already been affected, cybersecurity experts worry about impacts not yet realized, in the form of hacks, fraud, and privacy breaches. DOGE has fired top cybersecurity officers from various agencies, gutted the Cybersecurity and Infrastructure Security Agency (CISA), and cancelled at least 32 cybersecurity-related contracts with the Consumer Financial Protection Bureau (CFPB). Cybersecurity experts, including those fired by DOGE, argue that the organization has demonstrated questionable practices toward safeguarding the vast amount of personal data the government holds, including in agencies such as the Social Security Administration and the Department of Veterans Affairs (VA). Last week, a court filing revealed that a DOGE staffer violated Treasury Department policy by sending an email containing unencrypted personal information.

"I see DOGE actively destroying cybersecurity barriers within government in a way that endangers the privacy of American citizens," says Jonathan Kamens, who oversaw cybersecurity for VA.gov until February, when he was let go. "That makes it easier for bad actors to gain access."

DOGE's access to some agencies' data has been limited in response to dozens of filed lawsuits. But as those battles play out in court, DOGE continues to have access to huge amounts of sensitive data. Here's what cybersecurity experts caution is at stake.

Personal information

As DOGE picked up steam following the inauguration, cybersecurity experts began voicing concern about the new organization's privacy practices and digital hygiene. Reports surfaced that DOGE members connected unauthorized servers to government networks and shared information over unsecure channels. Last month, the DOGE.gov website was altered by outside coders who found they could publish updates to the website without authorization. The same month, Treasury officials said that a 25-year-old DOGE staffer was mistakenly given temporary access to make changes to a federal payment system.

Cybersecurity experts find these lapses concerning because the government stores vast amounts of data in order to serve Americans. For instance, the Department of Veterans Affairs stores the bank-account and credit card numbers of millions of veterans who receive benefits and services. The department also collects medical data, Social Security numbers, and the names of relatives and caregivers, says Kamens, who says he was the only federal employee at the agency with a technical engineering background working on cybersecurity. Kamens says he was hired in 2023 to fix several specific security issues for the site, which he declined to name for confidentiality reasons. Now, he says, hackers could take advantage of those unresolved issues to learn potentially compromising information about veterans, and then target them with phishing campaigns.

Peter Kasperowicz, VA's press secretary, wrote to TIME in an email that VA employs hundreds of cybersecurity personnel who are dedicated to keeping the department's websites and beneficiary data safe 24/7.

Erie Meyer, former chief technologist at the Consumer Financial Protection Bureau, resigned in February after DOGE members showed up at the agency's offices requesting data privileges.
Her role focused on safeguarding the CFPB's sensitive data, including transaction records from credit-reporting agencies, complaints filed by citizens, and information from Big Tech companies under investigation. "There are a bunch of careful protections in place that layer on to each other to make sure that no one could exploit that information," Meyer says.

But DOGE slashed many of those efforts, including the regular upkeep of audit and event logs, which showed how and when employees were accessing that information. "The software we had in place tracking what was being done was turned off," she says. This means that DOGE employees could now have access to financial data with no oversight as to how or why they are accessing it, Meyer says.

Meyer is also concerned about the cancellation of dozens of cybersecurity contracts, which included deals with companies that performed security-equipment disposal, provided VPNs to government employees, and encrypted email servers. "People need us when the worst financial disasters are happening to their family," she says. "It's sloppy to open them up to fraud like this."

A representative for the CFPB did not immediately respond to a request for comment. In an email statement to TIME, White House press secretary Karoline Leavitt wrote: "President Trump promised the American people he would establish a Department of Government Efficiency, overseen by Elon Musk, to make the federal government more efficient and accountable to taxpayers. DOGE has fully integrated into the federal government to cut waste, fraud, and abuse. Rogue bureaucrats and activist judges attempting to undermine this effort are only subverting the will of the American people and their obstructionist efforts will fail."

Fraud and bad actors

In addition to being worried about what DOGE is doing with citizens' data, cybersecurity experts are concerned that its aggressive tactics could make it easier for scammers to infiltrate systems, which could have disastrous consequences. For instance, DOGE currently has access to Social Security Administration data, which includes personal information about elderly Americans. Kamens notes that scammers often use personal information, such as the name of an individual's bank or hospital, to convince targets that they're a trusted party. And these tactics seem to work especially well on the elderly, who tend to be less tech-savvy: roughly $3.4 billion in fraud losses was reported by people ages 60 and up in 2023, the FBI's Internet Crime Complaint Center found.

These vulnerabilities also extend to matters of national security. DOGE members themselves would immediately become targets for foreign state actors, Kamens says. And earlier this month, Rob Joyce, the former leader of the NSA's unit focusing on foreign computer systems, warned that DOGE's mass firing of probationary federal employees would have "a devastating impact on cybersecurity and our national security."

About 130 of those fired probationary officers were part of CISA, which is tasked with detecting breaches of the nation's power grid, pipelines and water systems. "CISA was already understaffed to begin with," says Michael Daniel, president and CEO of the Cyber Threat Alliance and a cybersecurity coordinator under President Obama. "It's possible that a critical-infrastructure owner and operator might not be able to get assistance from CISA as a result of the cuts."

Senator Elizabeth Warren penned a letter arguing that DOGE posed a national-security threat by exposing secrets about America's defense and intelligence agencies.
"We don't know what safeguards were pulled down. Are the gates wide open now for hackers from China, from North Korea, from Iran, from Russia?" she said in a statement. "Heck, who knows what black-hat hackers all around the world are finding out about each one of us and copying that information for their own criminal uses?"

Systemic risks

Cybersecurity experts are also worried about the risk of DOGE engineers inadvertently breaking parts of the government's digital systems, which can be archaic and deeply complex, or unintentionally introducing malware into essential code. In particular, financial experts have said that mistakes made within the Treasury Department's delicate systems could harm the U.S. economy. Kamens warns that if DOGE interferes with the Social Security system, Medicare reimbursements or disability payments could fail to go out on time, endangering lives. "They have fired the people who know where the danger points are," he says.

Last week, a federal judge questioned government attorneys about why DOGE needs access to Social Security Administration systems, and is still considering whether to shut off that access. Another lawsuit, filed in February by 19 state attorneys general in an attempt to block DOGE's access to the Treasury Department, is ongoing.

Kamens adds that the security risks could heighten over time, especially if roles like his remain unfilled. Nearly everyone he worked with at the United States Digital Service (USDS), DOGE's precursor, came into government from the private sector, he says, and he worries that top-level cybersecurity officials will not want to join the federal staff given the instability and the risk of being fired or undermined.

This lack of staffing, he says, could prevent the government from mitigating new and evolving attacks. "The reality is that there are constantly new security holes being discovered," he says. "If you're not actively evolving your cyber defenses to go along with the offensive things that are happening in that landscape, you end up losing ground."

Daniel says that just because nothing has broken yet does not mean that DOGE is doing an adequate job of stopping cybersecurity threats. "It's not an instant feedback loop," he says. "That's part of the challenge here: we're talking about an increase in risk that may play out over an extended period of time."
  • TIME.COM
    AI Is Turbocharging Organized Crime, E.U. Police Agency Warns
THE HAGUE, Netherlands - The European Union's law-enforcement agency cautioned Tuesday that artificial intelligence is turbocharging organized crime, which is eroding the foundations of societies across the 27-nation bloc as it becomes intertwined with state-sponsored destabilization campaigns.

The grim warning came at the launch of the latest edition of a report on organized crime that Europol publishes every four years, compiled using data from police across the EU; it will help shape law-enforcement policy in the bloc in coming years.

"Cybercrime is evolving into a digital arms race targeting governments, businesses and individuals. AI-driven attacks are becoming more precise and devastating," said Europol's Executive Director Catherine De Bolle. "Some attacks show a combination of motives of profit and destabilization, as they are increasingly state-aligned and ideologically motivated," she added.

Read more: The AI Arms Race Is Changing Everything

The report, the EU Serious and Organized Crime Threat Assessment 2025, said offenses ranging from drug trafficking to people smuggling, money laundering, cyber attacks and online scams undermine society and the rule of law by generating illicit proceeds, spreading violence, and normalizing corruption.

The volume of child sexual abuse material available online has increased significantly because of AI, which makes it more difficult to analyze imagery and identify offenders, the report said. "By creating highly realistic synthetic media, criminals are able to deceive victims, impersonate individuals and discredit or blackmail targets. The addition of AI-powered voice cloning and live video deepfakes amplifies the threat, enabling new forms of fraud, extortion, and identity theft," it said.

States seeking geopolitical advantage are also using criminals as contractors, the report said, citing cyber-attacks against critical infrastructure and public institutions originating from Russia and countries in its sphere of influence. "Hybrid and traditional cybercrime actors will increasingly be intertwined, with state-sponsored actors masking themselves as cybercriminals to conceal their origin and real disruption motives," it said.

Polish Interior Ministry Undersecretary of State Maciej Duszczyk cited a recent cyberattack on a hospital as the latest example in his country. "Unfortunately this hospital has to stop its activity for hours because it was lost to a serious cyber-attack" boosted by AI, he said.

AI and other technologies are a catalyst for crime, the report said, and drive criminal operations' efficiency by amplifying their speed, reach, and sophistication.

As the European Commission prepares to launch a new internal security policy, De Bolle said that nations in Europe need to tackle the threats urgently. "We must embed security into everything we do," said European Commissioner for Internal Affairs and Migration Magnus Brunner, who added that the EU aims to provide enough funds in coming years to double Europol's staff.
  • TIME.COM
    What Is a Smishing Scam and How to Stay Safe
Recently, several state and federal agencies, including the Federal Trade Commission (FTC) and the Internal Revenue Service, have warned about the rise of "smishing," the SMS version of phishing. Phishing is a cyberattack that aims to trick people into divulging personal information, and it traditionally happens via email. Now, some experts say, cybercriminals are running the same playbook against phone numbers.

In January, the FTC flagged a smishing scam: a message that appears to be from a state road-toll company and informs recipients of an outstanding balance.

"The scammy text might show a dollar amount for how much you supposedly owe and include a link that takes you to a page to enter your bank or credit card info," the FTC warned. "Not only is the scammer trying to steal your money, but if you click the link, they could get your personal info (like your driver's license number), and even steal your identity."

Smishing can be particularly convincing, posing as a FedEx carrier, bank, or other known entity. And since the scam happens via text, people may be particularly vulnerable to it. "Text messages are more intimate, and you check them more quickly than emails, so people start falling for those scams," says Murat Kantarcioglu, a professor of computer science at Virginia Tech. State transportation departments, including West Virginia's and New Hampshire's, and E-ZPass itself have issued warnings about such messages.

Here's how to protect yourself against smishing.

Why does smishing happen?

Smishing happens when cybercriminals are looking to access private information about a person, whether it be a bank-account password or a birthday, in order to hack things such as their phone or credit card account. If you receive a suspicious message, cybercriminals already have some type of information about you, usually obtained through a third-party marketing company. "Whenever you give your phone number to a company or organization, those phone numbers are sometimes sold [to others]," warns Kantarcioglu. "The other big area [of concern] is that there was lots of hacking over the years, and most people's social security numbers, phone numbers, addresses, etc., have also [been] leaked and stolen."

Smishing may also happen on some messaging apps, including Signal and WhatsApp.

What to do if you receive a smishing scam

Steer clear of any messages that appear to be suspicious. The FTC advises people not to click any links, or respond to any messages sent by an unknown sender. "The link that they sent may be vulnerable so that your phone may be hacked automatically. In some cases it may get you to a site where they may want to get more information from [you]," warns Kantarcioglu.

Instead of directly responding to a message that poses as a bank or toll company, users should log in to their personal accounts on their own, or get in contact with such companies directly right away. When signing in, it's also important to ensure you have clicked on a secure site. "I've seen some scammers [create] ads for fake variants of the website, like a fake toll-company website," says Kantarcioglu. "You have to find the correct website for the organization."

Many phones allow users to directly delete and report the message as junk. The FTC says that people can also forward such messages to 7726 (the digits that spell "SPAM" on a phone keypad). Kantarcioglu adds that people should make sure they block the numbers or accounts these messages come from. Smishing can also be reported to the FBI's Internet Crime Complaint Center (IC3) at www.ic3.gov.
It may also be important to inform less tech-savvy loved ones about these types of scams. "I think everyone should make it their mission to educate the older people in their family about these issues," says Kantarcioglu. "I'm trying to educate them: never answer the text messages or phone calls, for that matter, from anyone that you don't know."
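The red flags experts describe, urgency language paired with an unexpected link, are concrete enough to put into code. Below is a toy Python illustration of that heuristic; it is a sketch for intuition only, with made-up keyword lists, and real carrier and platform filters rely on machine-learning models and sender-reputation data rather than two regular expressions.

    import re

    URGENCY = re.compile(r"\b(final notice|overdue|suspended|act now|immediately)\b", re.I)
    LINK = re.compile(r"https?://\S+")

    def looks_suspicious(message: str) -> bool:
        """Flag texts pairing urgency language with a link, a common smishing pattern."""
        return bool(URGENCY.search(message)) and bool(LINK.search(message))

    print(looks_suspicious("FINAL NOTICE: toll balance overdue. Pay at http://tolls.example"))  # True
    print(looks_suspicious("Running late, see you at 7"))                                       # False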
  • TIME.COM
The 'Oppenheimer Moment' That Looms Over Today's AI Leaders
This year, hundreds of billions of dollars will be spent to scale AI systems in pursuit of superhuman capabilities. CEOs of leading AI companies, such as OpenAI's Sam Altman and xAI's Elon Musk, expect that within the next four years their systems will be smart enough to do most cognitive work (think: any job that can be done with just a laptop) as effectively as or better than humans.

Such an advance, these leaders agree, would fundamentally transform society. Google CEO Sundar Pichai has repeatedly described AI as "the most profound technology humanity is working on." Demis Hassabis, who leads Google's AI research lab Google DeepMind, argues AI's social impact will be more like that of fire or electricity than of the mobile phone or the Internet.

In February, in the wake of an international AI summit in Paris, Anthropic CEO Dario Amodei restated his belief that by 2030, AI systems will be best thought of as "akin to an entirely new state populated by highly intelligent people." The same month, Musk, speaking on the Joe Rogan Experience podcast, said: "I think we're trending toward having something that's smarter than the smartest human in the next few years." He continued: "There's a level beyond that which is smarter than all humans combined, which frankly is around 2029 or 2030."

If these predictions are even partly correct, the world could soon radically change. But there is no consensus on how this transformation will or should be handled. With exceedingly advanced AI models released on a monthly basis, and the Trump administration seemingly uninterested in regulating the technology, the decisions of private-sector leaders matter more than ever. But those leaders differ in their assessments of which risks are most salient, and of what's at stake if things go wrong. Here's how:

Existential risk or unmissable opportunity?

"I always thought AI was going to be way smarter than humans and an existential risk, and that's turning out to be true," Musk said in February, noting that he thinks there is a 20% chance of human annihilation by AI. While estimates vary, the idea that advanced AI systems could destroy humanity traces back to the origins of many of the labs developing the technology today. In 2015, Altman called the development of superhuman machine intelligence "probably the greatest threat to the continued existence of humanity." Alongside Hassabis and Amodei, he signed a statement in May 2023 declaring that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

"It strikes me as odd that some leaders think that AI can be so brilliant that it will solve the world's problems, using solutions we didn't think of, but not so brilliant that it can't escape whatever control constraints we think of," says Margaret Mitchell, chief ethics scientist at Hugging Face. She notes that the discourse sometimes conflates AI that supplements people with AI that supplants them. "You can't have the benefits of both and the drawbacks of neither," she says.

For Mitchell, risk increases as humans cede control to increasingly autonomous agents.
"Because we can't fully control or predict the behaviour of AI agents, we run a massive risk of AI agents that act without consent to, for example, drain bank accounts, impersonate us saying and doing horrific things, or bomb specific populations," she explains.

"Most people think of this as just another technology and not as a new species, which is the way you should think about it," says Professor Max Tegmark, co-founder and president of the Future of Life Institute. He argues that the default outcome of building machines at this level is losing control over them, which could lead to unpredictable and potentially catastrophic outcomes.

But despite these apprehensions, other leaders avoid the language of superintelligence and existential risk, focusing instead on the positive upside. "I think when history looks back it will see this as the beginning of a golden age of innovation," Pichai said at the Paris summit in February. "The biggest risk could be missing out."

Similarly, asked in mid-2023 whether he thinks we're on a path to creating superintelligence, Microsoft CEO Satya Nadella said he was "much more focused on the benefits to all of us. I am haunted by the fact that the industrial revolution didn't touch the parts of the world where I grew up until much later. So I am looking for the thing that may be even bigger than the industrial revolution, and really doing what the industrial revolution did for the West, for everyone in the world. So I'm not at all worried about AGI [artificial general intelligence] showing up, or showing up fast," he said.

A race between countries and companies

Even among those who do believe AI poses an existential risk, there is a widespread belief that any slowdown in America's AI development will allow foreign adversaries, particularly China, to pull ahead in the race to create transformative AI. Future AI systems could be capable of creating novel weapons of mass destruction, or of covertly hacking a country's nuclear arsenal, effectively flipping the global balance of power overnight.

"My feeling is that almost every decision I make is balanced on the edge of a knife," Amodei said earlier this month, explaining that building too fast risks humanity losing control, whereas "if we don't build fast enough, then the authoritarian countries could win."

These dynamics play out not just between countries, but between companies. As Helen Toner, a director at Georgetown's Center for Security and Emerging Technology, explains, there is often a disconnect between the idealism in companies' public statements and the hard-nosed business logic that drives their decisions. Toner points to competition over release dates as a clear example. "There have been multiple instances of AI teams being forced to cut corners and skip steps in order to beat a competitor to launch day," she says.

Read More: How China Is Advancing in AI Despite U.S. Chip Restrictions

For Meta CEO Mark Zuckerberg, ensuring advanced AI systems are not controlled by a single entity is key to safety. "I kind of liked the theory that it's only God if only one company or government controls it," he said in January. "The best way to make sure it doesn't get out of control is to make it so that it's pretty equally distributed," he claimed, pointing to the importance of open-source models.

Parameters for control

While almost every company developing advanced AI models has its own internal policies and procedures around safety, and most have made voluntary commitments to the U.S.
government regarding issues of trust, safety, and allowing third parties to evaluate their modelsnone of this is backed by the force of law. Tegmark is optimistic that if the U.S. national security establishment accepts the seriousness of the threat, safety standards will follow. Safety standard number one, he says, will be requiring companies to demonstrate how they plan to keep their models under control.Some CEOs are feeling the weight of their power. There's a huge amount of responsibilityprobably too muchon the people leading this technology, Hassabis said in February. The Google DeepMind leader has previously advocated for the creation of new institutions, akin to the European Organization for Nuclear Research (CERN) or the International Energy Agency, to bring together governments to monitor AI developments. Society needs to think about what kind of governing bodies are needed, he said.This is easier said than done. While creating binding international agreements has always been challenging, its more unrealistic than ever, says Toner. On the domestic front, Tegmark points out that right now, there are more safety standards for sandwich shops than for AI companies in America.Nadella, discussing AGI and superintelligence on a podcast in February, emphasized his view that legal infrastructure will be the biggest rate limiter to the power of future systems, potentially preventing their deployment. Before it is a real problem, the real problem will be in the courts, he said.An 'Oppenheimer moment'Mitchell says that AIs corporate leaders bring different levels of their own human concerns and thoughts to these discussions. Tegmark fears, however, that some of these leaders are falling prey to wishful thinking by believing theyre going to be able to control superintelligence, and that many are now facing their own Oppenheimer moment." He points to a poignant scene in that film where scientists watch their creation being taken away by military authorities. That's the moment where the builders of the technology realize they're losing control over their creation, he says. Some of the CEOs are beginning to feel that right now.
  • TIME.COM
AI Made Its Way to Vineyards. Here's How the Technology Is Helping Make Your Wine
LOS ANGELES - When artificial intelligence-backed tractors became available to vineyards, Tom Gamble wanted to be an early adopter. He knew there would be a learning curve, but Gamble decided the technology was worth figuring out.

The third-generation farmer bought one autonomous tractor. He plans on deploying its self-driving feature this spring and is currently using the tractor's AI sensor to map his Napa Valley vineyard. As it learns each row, the tractor will know where to go once it is used autonomously. The AI within the machine will then process the data it collects and help Gamble make better-informed decisions about his crops, what he calls "precision farming."

"It's not going to completely replace the human element of putting your boot into the vineyard, and that's one of my favorite things to do," he said. "But it's going to be able to allow you to work more smartly, more intelligently and, in the end, make better decisions under less fatigue."

Gamble said he anticipates using the tech as much as possible because of economic, air-quality and regulatory imperatives. Autonomous tractors, he said, could help lower his fuel use and cut back on pollution.

As AI continues to grow, experts say the wine industry is proof that businesses can integrate the technology efficiently to supplement labor without displacing a workforce. New agricultural tech like AI can help farmers cut back on waste and run more efficient and sustainable vineyards by monitoring water use and helping determine when and where to use products like fertilizers or pest control. AI-backed tractors and irrigation systems, farmers say, can minimize water use by analyzing soil or vines, while also helping farmers manage acres of vineyards by providing more accurate data on the health of a crop or what a season's yield will be.

Other facets of the wine industry have also started adopting the tech, from using generative AI to create custom wine labels to turning to ChatGPT to develop, label and price an entire bottle.

"I don't see anybody losing their job, because I think that a tractor operator's skills are going to increase, and as a result, maybe they're overseeing a small fleet of these machines that are out there, and they'll be compensated as a result of their increased skill level," Gamble said.

Farmers, Gamble said, are always evolving. There were fears when the tractor replaced horses and mules pulling plows, but that technology proved itself, just like AI farming tech will, he said, adding that adopting any new tech always takes time.

Companies like John Deere have started using the AI that wine farmers are beginning to adopt. The agricultural giant uses Smart Apply technology on tractors, for example, helping growers apply material for crop retention by using sensors and algorithms to sense foliage on grape canopies, said Sean Sundberg, business integration manager at John Deere.

The tractors that use that tech then "only spray where there are grapes or leaves or whatnot so that it doesn't spray material unnecessarily," he said. Last year, the company announced a project with Sonoma County Winegrowers to use tech to help wine grape growers maximize their yield.

Tyler Klick, partner at Redwood Empire Vineyard Management, said his company has started automating irrigation valves at the vineyards it helps manage. The valves send an alert in the event of a leak and will automatically shut off if they notice an excessive water-flow rate.

"That valve is actually starting to learn typical water use," Klick said. "It'll learn how much water is used before the production starts to fall off."

Klick said each valve costs roughly $600, plus $150 per acre each year to subscribe to the service.

"Our job, viticulture, is to adjust our operations to the climatic conditions we're dealt," Klick said. "I can see AI helping us with finite conditions."

Angelo A. Camillo, a professor of wine business at Sonoma State University, said that despite excitement over AI in the wine industry, some smaller vineyards are more skeptical about their ability to use the technology. Small, family-owned operations, which Camillo said account for about 80% of the wine business in America, are slowly disappearing; many don't have the money to invest in AI, he said. A robotic arm that helps put together pallets of wine, for example, can cost as much as $150,000, he said.

"For small wineries, there's a question mark, which is the investment. Then there's the education. Who's going to work with all of these AI applications? Where is the training?" he said.

There are also potential challenges with scalability, Camillo added. Drones, for example, could be useful for smaller vineyards that could use AI to target specific crops that have a bug problem, he said; it would be much harder to operate 100 drones in a 1,000-acre vineyard while also employing the IT workers who understand the tech.

"I don't think a person can manage 40 drones as a swarm of drones," he said. "So there's a constraint for the operators to adopt certain things."

However, AI is particularly good at tracking a crop's health, including how the plant itself is doing and whether it's growing enough leaves, while also monitoring grapes to aid in yield projections, said Mason Earles, an assistant professor who leads the Plant AI and Biophysics Lab at UC Davis.

Diseases or viruses can sneak up and destroy entire vineyards, Earles said, calling it "an elephant in the room" across the wine industry. The process of replanting a vineyard and getting it to produce well takes at least five years, he said. AI can help growers determine which virus is affecting their plants, he said, and whether they should rip out some crops immediately to avoid losing their entire vineyard.

Earles, who is also co-founder of the AI-powered farm management platform Scout, said his company uses AI to process thousands of images in hours and extract data quickly, something that would be difficult to do by hand in large vineyards that span hundreds of acres. Scout's AI platform then counts and measures the number of grape clusters as early as when a plant is beginning to flower, in order to forecast what a yield will be. The sooner vintners know how much yield to expect, the better they can dial in their winemaking process, he added.

"Predicting what yields you're going to have at the end of the season, no one is that good at it right now," he said. "But it's really important because it determines how much labor contract you're going to need and the supplies you'll need for making wine."

Earles doesn't think the budding use of AI in vineyards is freaking farmers out. Rather, he anticipates that AI will be used more frequently to help with difficult field labor and to discern problems in vineyards that farmers need help with.

"They've seen people trying to sell them tech for decades. It's hard to farm; it's unpredictable compared to most other jobs," he said. "The walking and counting, I think people would have said a long time ago, 'I would happily let a machine take over.'"
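The smart-valve behavior Klick describes above, learning a typical flow rate and shutting off when flow runs well above it, is at heart a simple anomaly-detection loop. Below is a minimal, hypothetical Python sketch of that logic only; the class name, threshold, and readings are invented for illustration and do not represent Redwood Empire's or any vendor's actual system.

    class SmartValve:
        """Toy model of an irrigation valve that learns typical flow."""

        def __init__(self, shutoff_multiplier=2.0):
            self.flow_history = []               # past readings, gallons per minute
            self.shutoff_multiplier = shutoff_multiplier
            self.is_open = True

        def typical_flow(self):
            # "Learning" here is just a running average of past readings.
            return sum(self.flow_history) / len(self.flow_history)

        def observe(self, flow_gpm):
            if self.flow_history:
                typical = self.typical_flow()
                if flow_gpm > typical * self.shutoff_multiplier:
                    self.is_open = False         # excessive flow: treat as a leak
                    return f"ALERT: shut off ({flow_gpm:.1f} gpm vs typical {typical:.1f})"
            self.flow_history.append(flow_gpm)
            return "ok"

    valve = SmartValve()
    for reading in [10.2, 9.8, 10.5, 31.0]:      # final reading simulates a burst line
        print(valve.observe(reading))

A production system would use a more robust baseline than a running average (seasonal patterns, time-of-day profiles), but the alert-then-shutoff structure is the same idea.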
  • TIME.COM
Trump's Crypto Summit Shows That the Industry Is in Charge
"This is a very important day in your lives," President Donald Trump told crypto executives at the White House on March 7. Trump was presiding over the first-ever Crypto Summit, in which he and other cabinet officials gathered some of the biggest names in crypto to reemphasize the President's support for the industry and to hear out the executives' ideas for regulation and legislation. Participants largely came away from the meeting empowered, and believing that a new crypto era has dawned in Washington.

"The government representatives expressed that there has been a negative regime towards the crypto industry, and that regime is now coming to an end," says Sergey Nazarov, co-founder of Chainlink, who attended the summit. "There's a significant shift and huge amounts of support."

'Very open and receptive'

For the last few years, the crypto industry chafed at the enforcement actions brought against it by President Joe Biden's Administration. Biden's Securities and Exchange Commission (SEC), led by Gary Gensler, sought to crack down on crypto companies it deemed to be violating securities laws, and to protect investors from the massive scams and frauds that are pervasive in the crypto world, like Terra-Luna and FTX. This resulted in lawsuits against companies big and small, including Coinbase.

After Trump was elected, he appointed several cabinet members with close ties to the industry, such as AI & crypto czar David Sacks, Commerce Secretary Howard Lutnick, and Treasury Secretary Scott Bessent. Many enforcement actions, including the case against Coinbase, have since been dropped. And the most pro-crypto commissioners of the SEC, most prominently Hester Peirce, were elevated: she now leads the SEC's Crypto Task Force.

All of those officials were present at the summit, as was Tom Emmer, the House Majority Whip. "I did not expect people that were so senior to be at the summit," Nazarov says. "Everyone that came from the industry side was able to speak and provide their views. And all the senior government people, I think, were very open and receptive."

Trump himself led both a public press conference at the summit and a private conference with the executives. In his public remarks, he mocked Biden for his anti-crypto stance, asked Congress to pass bills on stablecoins and a digital-asset framework before the August recess, and, for some reason, allowed FIFA president Gianni Infantino to show off the soccer World Cup trophy and pitch the idea of creating a FIFA meme coin. "That coin may be worth more than FIFA in the end," Trump said in response. (Trump's own meme coin TRUMP initially raked in millions of dollars in trading fees alone, although it has since fallen all the way from its $75 peak to $12.)

Industry participants at the summit included Coinbase's Brian Armstrong, MicroStrategy's Michael Saylor, the Winklevoss twins, and Zach Witkoff, co-founder of Trump's own crypto company, World Liberty Financial. Combined, the participants have given more than $11 million to Trump's inaugural committee, according to the Intercept, and critics have raised many questions around conflicts of interest. "When crypto companies spent over a hundred million dollars in the 2024 elections, they created a new playbook for the purchase of large-scale political power in America," Robert Weissman, co-president of Public Citizen, wrote in an email statement to TIME.

"The people that should be in front of him are in front of him, but there are also people who shouldn't be in front of him who are in front of him," says Avik Roy, co-founder and chairman of the think tank Foundation for Research on Equal Opportunity. "One of the challenges in public policy always is: How does someone in the President's position distinguish between the people who are merely lobbying and the people who are public-spirited?"

After the summit, Trump's Office of the Comptroller of the Currency (OCC) issued guidance allowing banks to hold cryptocurrency, and asking them to do their own diligence around risk. This served as yet another signal that Trump's Administration will not regulate the industry very closely. "This industry was kind of unfairly suppressed from reaching its potential in the U.S. system," Nazarov says. "They want to go completely the other way."

Trump's crypto reserve

The summit came a day after Trump issued an executive order announcing the creation of a federal Bitcoin reserve. When Trump floated the idea earlier in the week, many people expressed concerns: that Trump would levy taxes in order to buy crypto, and that he was creating risks by including much smaller and more volatile coins like Cardano and XRP in the proposal.

But the executive order pulled back those plans quite a bit. It announced that the U.S. would not buy any new Bitcoin, but would simply hold onto the cryptocurrency it had already seized. Andrew O'Neill, the digital assets managing director of S&P Global Ratings, called the order "mainly symbolic" in a statement to TIME.

Industry insiders cheered the decision to focus mainly on a separate Bitcoin reserve, effectively demoting the importance of the other crypto projects, whose founders have been lobbying Trump for support. "It would have been pretty clearly a cronyist outcome where well-connected people were able to get the government to buy their tokens without really any obvious strategic rationale for doing so," Roy says. "Bitcoin is a special case; it has no CEO."

The executive order also calls for a full audit of the U.S. crypto holdings, which are estimated to include around 200,000 Bitcoin (worth about $17 billion). Yesha Yadav, a law professor at Vanderbilt who specializes in crypto and securities regulation, says the audit will be important to determine how much of that Bitcoin is usable, and how much might need to be returned to fraud victims. A good portion of the stash likely comes from the Bitfinex hack, which the U.S. government seized in 2022. "Whether or not they're motivated to trace every single victim in that case, whether victims have come forward, and whose claims have not been dealt with, that is something that's going to have to be looked at," Yadav says.

Crypto prices have been turbulent over the last month, in part due to uncertainty around Trump's tariffs. But crypto industry insiders believe that, ultimately, Trump's laissez-faire approach will help them grow. "FTX is in the past now," says Nazarov. "The big failures are in the past."

Andrew R. Chow's book about crypto and Sam Bankman-Fried, Cryptomania, was published in August.
  • TIME.COM
    SpaceX Test Flight Explodes, Again
Nearly two months after an explosion sent flaming debris raining down on the Turks and Caicos, SpaceX launched another mammoth Starship rocket on Thursday, but lost contact minutes into the test flight as the spacecraft came tumbling down and broke apart.

This time, wreckage from the latest explosion was seen streaming from the skies over Florida. It was not immediately known whether the spacecraft's self-destruct system had kicked in to blow it up.

The 403-ft. (123-m) rocket blasted off from Texas. SpaceX caught the first-stage booster back at the pad with giant mechanical arms, but engines on the spacecraft on top started shutting down as it streaked eastward for what was supposed to be a controlled entry over the Indian Ocean, half a world away. Contact was lost as the spacecraft went into an out-of-control spin.

Starship reached nearly 90 mi. (150 km) in altitude before trouble struck and before four mock satellites could be deployed. It was not immediately clear where it came down, but images of flaming debris were captured from Florida, including near Cape Canaveral, and posted online.

The space-skimming flight was supposed to last an hour.

"Unfortunately this happened last time too, so we have some practice at this now," SpaceX flight commentator Dan Huot said from the launch site.

SpaceX later confirmed that the spacecraft experienced "a rapid unscheduled disassembly" during the ascent engine firing. "Our team immediately began coordination with safety officials to implement pre-planned contingency responses," the company said in a statement posted online.

Starship didn't make it quite as high or as far as last time.

NASA has booked Starship to land its astronauts on the moon later this decade. SpaceX's Elon Musk is aiming for Mars with Starship, the world's biggest and most powerful rocket.

Like last time, Starship carried mock satellites to release once the craft reached space on this eighth test flight, as practice for future missions. They resembled SpaceX's Starlink internet satellites, thousands of which currently orbit Earth, and were meant to fall back down following their brief taste of space.

Starship's flaps, computers and fuel system were redesigned in preparation for the next big step: returning the spacecraft to the launch site just like the booster.

During the last demo, SpaceX captured the booster at the launch pad, but the spacecraft blew up several minutes later over the Atlantic. No injuries or major damage were reported.

According to an investigation that remains ongoing, leaking fuel triggered a series of fires that shut down the spacecraft's engines. The on-board self-destruct system kicked in as planned.

SpaceX said it made several improvements to the spacecraft following the accident, and the Federal Aviation Administration recently cleared Starship once more for launch.

Starships soar out of the southernmost tip of Texas near the Mexican border. SpaceX is building another Starship complex at Cape Canaveral, home to the company's smaller Falcon rockets that ferry astronauts and satellites to orbit.
  • TIME.COM
Alibaba's New Model Adds Fuel to China's AI Race
On March 5, Chinese tech giant Alibaba released its latest AI reasoning model, QwQ-32B, resulting in an 8% spike in the company's Hong Kong-listed shares. While less capable than America's leading AI systems, such as OpenAI's o3 or Anthropic's Claude 3.7 Sonnet, the model reportedly performs about as well as its Chinese competitor DeepSeek's model, R1, while requiring considerably less computing power to develop and to run. Its creators say QwQ-32B embodies an ancient philosophical spirit by approaching problems with genuine wonder and doubt.

"It reflects the broader competitiveness of China's frontier AI ecosystem," says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. That ecosystem includes DeepSeek's R1 and Tencent's Hunyuan model, which Anthropic co-founder Jack Clark has said is "by some measures world-class." That said, assessments of Alibaba's latest model are preliminary, both due to the inherent challenge of measuring model capabilities and because, so far, the model has only been assessed by Alibaba itself. "The information environment is not very rich right now," says Singer.

Another step on the path to AGI

Since the release of DeepSeek's R1 model in January sent waves through the global stock market, China's tech ecosystem has been in the spotlight, particularly as the U.S. increasingly sees itself as racing against China to create artificial general intelligence (AGI): highly advanced AI systems capable of performing most cognitive work, from graphic design to machine-learning research. AGI is widely expected to confer a decisive military and strategic advantage on whichever company or government creates it first, as such a system may be capable of engaging in advanced cyberwarfare or creating novel weapons of mass destruction (though experts are highly skeptical that humans will be able to retain control over such a system, regardless of who creates it).

"We are confident that combining stronger foundation models with reinforcement learning powered by scaled computational resources will propel us closer to achieving AGI," wrote the team behind Alibaba's latest model. The quest to create AGI permeates most leading AI labs. DeepSeek's stated goal is to "unravel the mystery of AGI with curiosity." OpenAI's mission, meanwhile, is to ensure that artificial general intelligence (AI systems that are generally smarter than humans) "benefits all of humanity." Leading AI CEOs, including Sam Altman, Dario Amodei, and Elon Musk, all expect AGI-like systems to be built within President Trump's current term.

China's turn

Alibaba's latest AI release comes just two weeks after the company's co-founder, Jack Ma, was pictured in the front row at a meeting between President Xi Jinping and the country's preeminent business leaders. Since 2020, when Ma publicly criticized state regulators and state-owned banks for stifling innovation and operating with a "pawn shop mentality," the Chinese billionaire has largely been absent from the public spotlight. In that time, the Chinese government cracked down on the tech industry, imposing stricter rules on how companies could use data and compete in the market, while also taking more control over key digital platforms.

Singer says that by 2022 it had become clear that the bigger threat to the country was not the tech industry but economic stagnation. "That economic stagnation story, and attempting to reverse it, has really shaped so much of policy over the last 18 months," says Singer. China is moving quickly to adopt cutting-edge technology, with at least 13 city governments and 10 state-owned energy companies reportedly having already deployed DeepSeek models into their systems.

Technical innovation

Alibaba's model represents a continuation of existing trends: in recent years, AI systems have consistently increased in performance while becoming cheaper to run. Nonprofit research organization Epoch AI estimates that the amount of computing power used to train AI systems has been increasing by more than 4x each year, while, thanks to regular improvements in algorithm design, that computing power is being used three times more efficiently each year. Put differently, a system that required, for example, 10,000 advanced computer chips to train last year could be trained with only a third as many this year.

Despite efficiency improvements, Singer cautions that high-end computing chips remain crucial for advanced AI development, a reality that makes U.S. export controls on these chips a continuing challenge for Chinese AI companies like Alibaba and DeepSeek, whose CEO has cited access to chips, rather than money or talent, as their biggest bottleneck.

QwQ (pronounced like "quill") is the latest to join a new generation of systems billed as reasoning models, which some consider to represent a new paradigm in AI. Previously, AI systems got better by scaling both the amount of computing power used to train them and the amount and quality of data on which they were trained. In this new paradigm, the emphasis is on taking a model that has already been trained (in this case, Qwen 2.5-32B) and scaling the amount of computing the system uses in responding to a given query. As the Qwen team writes, when given time "to ponder, to question, and to reflect," the model's understanding of mathematics and programming "blossoms like a flower opening to the sun." This is consistent with trends observed in Western models, where techniques that allow them to think for longer have yielded significant improvements in performance on complex analytic problems.

Alibaba's QwQ has been released "open weight," meaning the weights that constitute the model, accessible in the form of a computer file, can be downloaded and run locally, including on a high-end laptop. Interestingly, a preview of the model, released last November, attracted considerably less attention. Singer notes that the stock market is generally reactive to model releases and not to the trajectory of the technology, which is expected to continue to improve rapidly on both sides of the Pacific. "The Chinese ecosystem has a bunch of players in it, all of whom are putting out models that are very powerful and compelling, and it's not clear who will emerge, when it's all said and done, as having the best model," he says.
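Epoch AI's two trend figures compound: training compute growing more than 4x per year while algorithms use compute roughly 3x more efficiently per year implies effective training compute rising on the order of 12x per year, and it is the efficiency term alone that drives the 10,000-chips-to-a-third example above. A minimal sketch of that arithmetic, using the article's own round numbers:

    # Back-of-the-envelope restatement of the Epoch AI trend figures cited above.
    compute_growth_per_year = 4.0    # raw training compute: more than 4x per year
    efficiency_gain_per_year = 3.0   # same compute goes ~3x further each year

    # Compounded, effective training compute grows roughly 4 x 3 = 12x per year.
    effective = compute_growth_per_year * efficiency_gain_per_year
    print(f"Effective training compute growth: ~{effective:.0f}x per year")

    # The article's illustrative example: a model that took 10,000 chips last year.
    chips_last_year = 10_000
    chips_this_year = chips_last_year / efficiency_gain_per_year
    print(f"{chips_last_year:,} chips last year -> ~{chips_this_year:,.0f} this year")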
  • TIME.COM
    Reddit Co-Founder Alexis Ohanian Joins Bid to Buy TikTok
Reddit co-founder Alexis Ohanian has joined billionaire Frank McCourt's bid to acquire TikTok as a strategic adviser.

McCourt's internet advocacy organization, Project Liberty, announced this week that Ohanian, an investor married to tennis star Serena Williams, had joined a consortium called The People's Bid for TikTok.

"I'm officially now one of the people trying to buy TikTok US, and bring it on-chain," Ohanian said in a series of posts made Tuesday on X, referencing a decentralized, blockchain-based platform that Project Liberty says it will leverage to provide users with more control over their online data.

If successful in its bid, Project Liberty said, the technology will serve as the backbone of the redesigned TikTok, ensuring that privacy, security, and digital independence are no longer optional but foundational. When asked by an X user on Monday what he would call TikTok if he purchased it, Ohanian said: "TikTok: Freedom Edition."

Under a federal bill passed with bipartisan support and signed into law by former President Joe Biden last year, TikTok was required to cut ties with its China-based parent company, ByteDance, or face a ban by Jan. 19.

In one of his first executive orders, signed in January, President Donald Trump extended the deadline for TikTok to find new ownership until early April.

McCourt's consortium, which includes Shark Tank star Kevin O'Leary, has already offered ByteDance $20 billion in cash for the U.S. platform. Some analysts estimate TikTok could be worth much more than that, even without its coveted algorithm, which McCourt has said he's not interested in.

Trump said in January that Microsoft is among the U.S. companies looking to take control of TikTok. Others eyeing TikTok include the artificial intelligence startup Perplexity AI, which has proposed to merge its business with TikTok's U.S. platform and give the U.S. government a stake in the new entity. There's also Jesse Tinsley, the founder of the payroll firm Employer.com. Tinsley has said a consortium he put together, which includes the CEO of video game platform Roblox, is offering ByteDance more than $30 billion for TikTok.