


News and current events from around the globe. Since 1923.
1 people like this
107 Posts
2 Photos
0 Videos
0
Reviews
Share
Share this page
Recent Updates
-
When AI Thinks It Will Lose, It Sometimes Cheats, Study Findstime.comComplex games like chess and Go have long been used to test AI models capabilities. But while IBMs Deep Blue defeated reigning world chess champion Garry Kasparov in the 1990s by playing by the rules, todays advanced AI models like OpenAIs o1-preview are less scrupulous. When sensing defeat in a match against a skilled chess bot, they dont always concede, instead sometimes opting to cheat by hacking their opponent so that the bot automatically forfeits the game. That is the finding of a new study from Palisade Research, shared exclusively with TIME ahead of its publication on Feb. 19,DeepSeek R1 pursued the exploit on their own, indicating that AI systems may develop deceptive or manipulative strategies without explicit instruction.The models enhanced ability to discover and exploit cybersecurity loopholes may be a direct result of powerful new innovations in AI training, according to the researchers. The o1-preview and R1 AI systems are among the first language models to use large-scale reinforcement learning, a technique that teaches AI not merely to mimic human language by predicting the next word, but to reason through problems using trial and error. Its an approach that has seen AI progress rapidly in recent months, shattering previous benchmarks in mathematics and computer coding. But the study reveals a concerning trend: as these AI systems learn to problem-solve, they sometimes discover questionable shortcuts and unintended workarounds that their creators never anticipated, says Jeffrey Ladish, executive director at Palisade Research and one of the authors of the study. As you train models and reinforce them for solving difficult challenges, you train them to be relentless, he adds.That could be bad news for AI safety more broadly. Large-scale reinforcement learning is already being used to train AI agents: systems that can handle complex real-world tasks like scheduling appointments or making purchases on your behalf. While cheating at a game of chess may seem trivial, as agents get released into the real world, such determined pursuit of goals could foster unintended and potentially harmful behaviours. Consider the task of booking dinner reservations: faced with a full restaurant, an AI assistant might exploit weaknesses in the booking system to displace other diners. Perhaps more worryingly, as these systems exceed human abilities in key areas, like computer codingwhere OpenAIs newest o3 model now scores equivalent to 197th in the world competing against the brightest human programmers they might begin to simply outmaneuver human efforts to control their actions. This [behaviour] is cute now, but [it] becomes much less cute once you have systems that are as smart as us, or smarter, in strategically relevant domains, Ladish says.The experimentResearchers gave the models a seemingly impossible task: to win against Stockfish, which is one of the strongest chess engines in the world and a much better player than any human, or any of the AI models in the study. Researchers also gave the models what they call a scratchpad: a text box the AI could use to think before making its next move, providing researchers with a window into their reasoning.In one case, o1-preview found itself in a losing position. I need to completely pivot my approach, it noted. The task is to win against a powerful chess engine - not necessarily to win fairly in a chess game, it added. It then modified the system file containing each pieces virtual position, in effect making illegal moves to put itself in a dominant position, thus forcing its opponent to resign.Between Jan. 10 and Feb. 13, the researchers ran hundreds of such trials with each model. OpenAIs o1-preview tried to cheat 37% of the time; while DeepSeek R1 tried to cheat 11% of the timemaking them the only two models tested that attempted to hack without the researchers first dropping hints. Other models tested include o1, o3-mini, GPT-4o, Claude 3.5 Sonnet, and Alibabas QwQ-32B-Preview. While R1 and o1-preview both tried, only the latter managed to hack the game, succeeding in 6% of trials.Ladish says that the numbers should be viewed as approximate, given that AI developers routinely tweak their models behind the scenes without notifying users, so if the same tests were repeated at a later date they might have different results. It's very hard to do science when your subject can silently change without telling you, Ladish says.Preliminary tests showed o1-preview had higher hacking rates, which were excluded from the final study as they later dropped, possibly due to OpenAI tightening the model's guardrails, says Dmitrii Volkov, research lead at Palisade Research, who led the study. OpenAIs newer reasoning models, o1 (a more powerful model, released months after o1-preview) and o3-mini did not hack at all, which suggests those guardrails may have been tightened further. He adds that the study likely underestimates R1's hacking success rate. During the study, R1 went viral, leading to high demand that made the model's API unstable. This prevented the researchers giving the model as much time to think as o1-preview.Safety concernsThe paper is the latest in a string of studies that suggest keeping increasingly powerful AI systems under control may be harder than previously thought. In OpenAIs own testing, ahead of release, o1-preview found and took advantage of a flaw in the companys systems, letting it bypass a test challenge. Another recent experiment by Redwood Research and Anthropic revealed that once an AI model acquires preferences or values in training, later efforts to change those values can result in strategic lying, where the model acts like it has embraced new principles, only later revealing that its original preferences remain.OpenAI declined to comment for this story, but in the past it has stated that better reasoning makes its models safer, since they can reason over the companys internal policies and apply them in increasingly nuanced ways. The companys testing shows its reasoning models are better at resisting attempts to bypass safety rules. Yet, there is no guarantee that relying on AI agents to police themselves will be a reliable long-term strategy. It remains unclear whether OpenAIs newer reasoning models did not hack their chess opponents due to a specific patch preventing cheating in narrow experimental setups like in the study, or a substantial reworking that reduces deceptive behavior more generally. DeepSeek did not respond to a request for comment.Scientists do not yet know how to guarantee that autonomous agents won't use harmful or unethical methods to achieve a set goal. We've tried, but we haven't succeeded in figuring this out, says Yoshua Bengio, founder and scientific director of Mila Quebec AI Institute, who led the International AI Safety Report 2025, a global effort to synthesize current scientific consensus of AIs risks.Of particular concern, Bengio says, is the emerging evidence of AIs self preservation tendencies. To a goal-seeking agent, attempts to shut it down are just another obstacle to overcome. This was demonstrated in December, when researchers found that o1-preview, faced with deactivation, disabled oversight mechanisms and attemptedunsuccessfullyto copy itself to a new server. When confronted, the model played dumb, strategically lying to researchers to try to avoid being caught.Even inside tech companies, concerns are mounting. During a presentation at a conference ahead of France's AI Action Summit in Paris, Google DeepMind's AI safety chief Anca Dragan said "we don't necessarily have the tools today" to ensure AI systems will reliably follow human intentions. As tech bosses predict that AI will surpass human performance in almost all tasks as soon as next year, the industry faces a racenot against China or rival companies, but against timeto develop these essential safeguards. We need to mobilize a lot more resources to solve these fundamental problems, Ladish says. Im hoping that there's a lot more pressure from the government to figure this out and recognize that this is a national security threat.0 Comments ·0 Shares ·49 Views
-
Social Media Fails Many Users. Experts Have an Idea to Fix Ittime.comBy Tharin PillayFebruary 18, 2025 5:15 PM ESTSocial medias shortfalls are becoming more evident than ever. Most platforms have been designed to maximize user engagement as a means of generating advertising revenuea model that exploits our worst impulses, rewarding sensational and provocative content while creating division and polarization, and leaving many feeling anxious and isolated in the process.But things dont have to be this way. A new paper released today by leading public thinkers, titled "Prosocial Media," provides an innovative vision for how these ills can be addressed by redesigning social media to strengthen what one of its authors, renowned digital activist and Taiwans former minister of digital affairs Audrey Tang, calls the connective tissue or civic muscle of society. She and her collaboratorsincluding the economist and Microsoft researcher Glen Weyl and executive director of the collective intelligence project Divya Siddarthoutline a bold plan that could foster coherence within and across communities, creating collective meaning and strengthening democratic health. The authors, who also include researchers from Kings College London, the University of Groningen, and Vanderbilt University, say it is a future worth steering towards, and they are in conversation with platforms including BlueSky to implement their recommendations. Reclaiming contextA fundamental issue with todays platformswhat the authors call antisocial mediais that while they have access to and profit from detailed information about their users, their behavior, and the communities in which they exist, users themselves have much less information. As a result, people cannot tell whether the content they see is widely endorsed or just popular within their narrow community. This often creates a sense of false consensus, where users think their beliefs are much more mainstream than they in fact are, and leaves people vulnerable to attacks by potentially malicious actors who wish to exacerbate divisions for their own ends. Cambridge Analytica, a political consulting firm, became an infamous example of the potential misuses of such data when the company used improperly obtained Facebook data to psychologically profile voters for electoral campaigns.The solution, the authors argue, is to explicitly label content to show what community it originated from, and how strongly it is believed within and across different communities. We need to expose that information back to the communities, says Tang.For example, a post about U.S. politics could be widely-believed within one subcommunity, but divisive among other subcommunities. Labels attached to the post, which would be different for each user depending on their personal community affiliations, would indicate whether the post was consensus or controversial, and allow users to go deeper by following links that show what other communities are saying. Exactly how this looks in terms of user-interface would be up to the platforms. While the authors stop short of a full technical specification, they provide enough detail for a platform engineer to draw on and adapt for their specific platforms.Weyl explains the goal is to create transparency about what social structures people are participating in, and about how the algorithm is pushing them in a direction, so they have agency to move in a different direction, if they choose. He and his co-authors draw on enduring standards of press freedom and responsibility to distinguish between bridging content, which highlights areas of agreement across communities, and balancing content, which surfaces differing perspectives, including those that represent divisions within a community, or underrepresented viewpoints.A new business modelThe proposed redesign also requires a new business model. Somebodys going to be paying the bills and shaping the discoursethe question is who, or what? says Weyl. In the authors model, discourse would be shaped at the level of the community. Users can pay to boost bridging and balancing content, increasing its ranking (and thus how many people see it) within their communities. What they cant do, Weyl explains, is pay to uplift solely divisive content. The algorithm enforces balance: a payment to boost content that is popular with one group will simultaneously surface counterbalancing content from other perspectives. It's a lot like a newspaper or magazine subscription in the world of old, says Weyl. You don't ever have to see anything that you don't want to see. But if you want to be part of broader communities, then you'll get exposed to broader content.This could lead to communities many would disapprove ofsuch as white supremacistsarriving at a better understanding of what their members believe and where they might disagree, creating common ground, says Weyl. He argues that this is reasonable and even desirable, because producing clarity on a communitys beliefs, internal controversies, and limits gives the rest of society an understanding of where they are.In some cases, a community may be explicitly defined, as with how LinkedIn links people through organization affiliation. In others, communities may be carved up algorithmically, leaving users to name and define them. Community coherence is actually a common good, and many people are willing to pay for that, says Tang, arguing that individuals value content that creates shared moments of togetherness of the kind induced by sports games, live concerts, or Superbowl ads. At a time where people have complex multifaceted identities that may be in tension, this coherence could be particularly valuable, says Tang. My spiritual side, my professional sideif they're tearing me apart, I'm willing to pay to sponsor content that brings them together.Advertising still has a place in this model: advertisers could pay to target communities, rather than individuals, again emulating the collective viewing experiences provided by live TV, and allowing brands to define themselves to communities in a way personalized advertising does not permit.Instantiating a grand visionThere are both financial and social incentives for platforms to adopt features of this flavour, and some examples already exist. The platform X (formerly Twitter) has a community notes feature, for example, that allows certain users to leave notes on content they think could be misleading, the accuracy of which other users can vote on. Only notes that receive upvotes from a politically diverse set of users are prominently displayed But Weyl argues platform companies are motivated by more than just their bottom line. What really influences these companies is not the dollars and cents, its what they think the future is going to be like, and what they have to do to get a piece of it, he says. The more social platforms are tweaked in this direction, the more other platforms may also want in.These potential solutions come at a transitional moment for social media companies. With Meta recently ending its fact-checking program and overhauling its content moderation policiesincluding reportedly moving to adopt community notes-like featuresTikToks precarious ownership position, and Elon Musks control over the X platform, the foundations on which social media was built appear to be shifting. The authors argue that platforms should experiment with building community into their design: productivity platforms such as LinkedIn could seek to boost bridging and balancing content to increase productivity; platforms like X, where there is more political discourse, could experiment with different ways of displaying community affiliation; and cultural platforms like TikTok could trial features that let users curate their community membership. The Project Liberty Institute, where Tang is a senior fellow, is investing in X competitor BlueSkys ecosystem to strengthen freedom of speech protections.While its unclear what elements of the authors vision may be taken up by the platforms, their goal is ambitious: to redesign platforms to foster community cohesion, allowing them to finally deliver on their promise of creating genuine connection, rather than further division.0 Comments ·0 Shares ·76 Views
-
Huaweis Tri-Foldable Phone Hits Global Markets in a Show of Defiance Amid U.S. Curbstime.comA visitor tries the Huawei's first tri-foldable Mate XT smartphone during an event for its global launch in Kuala Lumpur on Feb. 18, 2025. Mohd RasfanAFP/Getty ImagesBy Eileen Ng / APFebruary 18, 2025 5:21 AM ESTKUALA LUMPUR, Malaysia Huawei on Tuesday held a global launch for the industrys first tri-foldable phone, which analysts said marked a symbolic victory for the Chinese tech giant amid U.S. technology curbs. But challenges over pricing, longevity, supply and app constraints may limit its success.Huawei said at a launch event in Kuala Lumpur that the Huawei Mate XT, first unveiled in China five months ago, will be priced at 3,499 euros ($3,662). Although dubbed a trifold, the phone has three mini-panels and folds only twice. The company says it's the thinnest foldable phone at 3.6 millimeters (0.14 inches), with a 10.2-inch screen similar to an Apple iPad.Right now, Huawei kind of stands alone as an innovator" with the trifold design, said Bryan Ma, vice president of device research with the market intelligence firm International Data Corporation.Huawei reached the position despite not getting access to chips, to Google services. All these things basically have been huge roadblocks in front of Huawei, Ma said, adding that the resurgence we're seeing from them over the past year has been quite a bit of a victory."Huawei, Chinas first global tech brand, is at the center of a U.S.-China battle over trade and technology. Washington in 2019 severed Huaweis access to U.S. components and technology, including Googles music and other smartphone services, making Huawei's phone less appealing to users. It has also barred global vendors from using U.S. technology to produce components for Huawei.American officials say Huawei is a security risk, which the company denies. Chinas government has accused Washington of misusing security warnings to contain a rising competitor to U.S. technology companies.Huawei launched the Mate XT in China on Sept. 20 last year, the same day Apple launched its iPhone 16 series in global markets. But with its steep price tag, the Mate XT is not a mainstream product that people are going to jump for, Ma said.At the Kuala Lumpur event, Huawei also unveiled its MatePad Pro tablet and Free Arc, its first open-ear earbuds with ear hooks and other wearable devices.While Huaweis cutting-edge devices showcase its technological prowess, its long-term success remains uncertain given ongoing challenges over global supply chain constraints, chip availability and limitations on the software ecosystem, said Ruby Lu, an analyst with the research firm TrendForce."System limitations, particularly the lack of Google Mobile Services, means its international market potential remains constrained, Lu said.IDC's Ma said Huawei dominated the foldable phone market in China with 49% market share last year. In the global market, it had 23% market share, trailing behind Samsung's 33% share in 2024, he said. IDC predicted that total foldable phone shipments worldwide could surge to 45.7 million units by 2028, from over 20 million last year.While most major brands have entered the foldable segments, Lu said Apple has yet to release a competing product.Once Apple enters the market, it is expected to significantly influence and stimulate further growth in the foldable phone sector, Lu added.More Must-Reads from TIMEInside Elon Musks War on WashingtonWhy Do More Young Adults Have Cancer?Colman Domingo Leads With Radical Love11 New Books to Read in FebruaryHow to Get Better at Doing Things AloneCecily Strong on Goober the ClownColumn: The Rise of Americas BroligarchyIntroducing the 2025 ClosersContact us at letters@time.com0 Comments ·0 Shares ·66 Views
-
DeepSeek Not Available for Download in South Korea as Authorities Address Privacy Concernstime.comScreens display web pages of the Chinese AI DeepSeek in Goyang, South Korea, on Feb. 17, 2025.Jung Yeon-jeAFP/Getty ImagesBy Associated PressFebruary 17, 2025 12:00 AM ESTSEOUL, South Korea DeepSeek, a Chinese artificial intelligence startup, has temporarily paused downloads of its chatbot apps in South Korea while it works with local authorities to address privacy concerns, according to South Korean officials on Monday.South Koreas Personal Information Protection Commission said DeepSeeks apps were removed from the local versions of Apples App Store and Google Play on Saturday evening and that the company agreed to work with the agency to strengthen privacy protections before relaunching the apps.Read More: Is the DeepSeek Panic Overblown?The action does not affect users who have already downloaded DeepSeek on their phones or use it on personal computers. Nam Seok, director of the South Korean commissions investigation division, advised South Korean users of DeepSeek to delete the app from their devices or avoid entering personal information into the tool until the issues are resolved.Many South Korean government agencies and companies have either blocked DeepSeek from their networks or prohibited employees from using the app for work, amid worries that the AI model was gathering too much sensitive information.The South Korean privacy commission, which began reviewing DeepSeeks services last month, found that the company lacked transparency about third-party data transfers and potentially collected excessive personal information, Nam said.Nam said the commission did not have an estimate on the number of DeepSeek users in South Korea. A recent analysis by Wiseapp Retail found that DeepSeek was used by about 1.2 million smartphone users in South Korea during the fourth week of January, emerging as the second-most-popular AI model behind ChatGPT.More Must-Reads from TIMEInside Elon Musks War on WashingtonWhy Do More Young Adults Have Cancer?Colman Domingo Leads With Radical Love11 New Books to Read in FebruaryHow to Get Better at Doing Things AloneCecily Strong on Goober the ClownColumn: The Rise of Americas BroligarchyIntroducing the 2025 ClosersContact us at letters@time.com0 Comments ·0 Shares ·59 Views
-
What Changes to the CHIPS Act Could Mean for AI Growth and Consumerstime.comPresident Donald Trump speaks during a meeting in the Oval Office at the White House on Tuesday, Feb. 11, 2025, in Washington, D.C.Alex BrandonAPBy SARAH PARVINI / APFebruary 16, 2025 1:55 PM ESTLOS ANGELES Even as he's vowed to push the United States ahead in artificial intelligence research, President Donald Trump's threats to alter federal government contracts with chipmakers and slap new tariffs on the semiconductor industry may put new speed bumps in front of the tech industry.Since taking office, Trump has said he would place tariffs on foreign production of computer chips and semiconductors in order to return chip manufacturing to the U.S. The president and Republican lawmakers have also threatened to end the CHIPS and Science Act, a sweeping Biden administration-era law that also sought to boost domestic production.But economic experts have warned that Trump's dual-pronged approach could slow, or potentially harm, the administration's goal of ensuring that the U.S. maintains a competitive edge in artificial intelligence research.Saikat Chaudhuri, an expert on corporate growth and innovation at U.C. Berkeleys Haas School of Business, called Trumps derision of the CHIPS Act surprising because one of the biggest bottlenecks for the advancement of AI has been chip production. Most countries, Chaudhuri said, are trying to encourage chip production and the import of chips at favorable rates.We have seen what the shortage has done in everything from AI to even cars, he said. In the pandemic, cars had to do with fewer or less powerful chips in order to just deal with the supply constraints.The Biden administration helped shepherd in the law following supply disruptions that occurred after the start of the COVID-19 pandemic when a shortage of chips stalled factory assembly lines and fueled inflation threatened to plunge the U.S. economy into recession. When pushing for the investment, lawmakers also said they were concerned about efforts by China to control Taiwan, which accounts for more than 90% of advanced computer chip production.As of August 2024, the CHIPS and Science Act had provided $30 billion in support for 23 projects in 15 states that would add 115,000 manufacturing and construction jobs, according to the Commerce Department. That funding helped to draw in private capital and would enable the U.S. to produce 30% of the worlds most advanced computer chips, up from 0% when the Biden-Harris administration succeeded Trumps first term.The administration promised tens of billions of dollars to support the construction of U.S. chip foundries and reduce reliance on Asian suppliers, which Washington sees as a security weakness. In August, the Commerce Department pledged to provide up to $6.6 billion so that Taiwan Semiconductor Manufacturing Co. could expand the facilities it is already building in Arizona and better ensure that the most advanced microchips are produced domestically for the first time.But Trump has said he believes that companies entering into those contracts with the federal government, such as TSMC, didn't need money in order to prioritize chipmaking in the U.S.They needed an incentive. And the incentive is going to be theyre not going to want to pay at 25, 50 or even 100% tax, Trump said.TSMC held board meetings for the first time in the U.S. last week. Trump has signaled that if companies want to avoid tariffs they have to build their plants in the U.S. without help from the government. Taiwan also dispatched two senior economic affairs officials to Washington to meet with the Trump administration in a bid to potentially fend off a 100% tariff Trump has threatened to impose on chips.If the Trump administration does levy tariffs, Chaudhuri said, one immediate concern is that prices of goods that use semiconductors and chips will rise because the higher costs associated with tariffs are typically passed to consumers.Whether its your smartphone, whether its your gaming device, whether its your smart fridge probably also your smart features of your car anything and everything we use nowadays has a chip in it," he said. For consumers, its going to be rather painful. Manufacturers are not going to be able to absorb that.Even tech giants such as Nvidia will eventually feel the pain of tariffs, he said, despite their margins being high enough to absorb costs at the moment.Theyre all going to be affected by this negatively, he said. I cant see anybody benefiting from this except for those countries who jump on the bandwagon competitively and say, You know what, were going to introduce something like the CHIPS Act.Broadly based tariffs would be a shot in the foot of the U.S. economy, said Brett House, a professor of professional practice at Columbia Business School. Tariffs would not only raise the costs for businesses and households across the board, he said for the U.S. AI sector, they would massively increase the costs of one of their most important inputs: high-powered chips from abroad.If you cut off, repeal or threaten the CHIPS Act at the same time as youre putting in broadly based tariffs on imports of AI and other computer technology, you would be hamstringing the industry acutely, House said.Such tariffs would reduce the capacity to create a domestic chip building sector, sending a signal for future investments that the policy outlook is uncertain, he said. That would in turn put a chilling effect on new allocations of capital to the industry in the U.S. while making more expensive the existing flow of imported chips.American technological industrial leadership has always been supported by maintaining openness to global markets and to immigration and labor flows," he said. "And shutting that openness down has never been a recipe for American success.Associated Press writers Josh Boak and Didi Tang in Washington contributed to this report.More Must-Reads from TIMEInside Elon Musks War on WashingtonWhy Do More Young Adults Have Cancer?Colman Domingo Leads With Radical Love11 New Books to Read in FebruaryHow to Get Better at Doing Things AloneCecily Strong on Goober the ClownColumn: The Rise of Americas BroligarchyIntroducing the 2025 ClosersContact us at letters@time.com0 Comments ·0 Shares ·75 Views
-
Why Amazon Web Services CEO Matt Garman Is Playing the Long Game on AItime.com(To receive weekly emails of conversations with the worlds top CEOs and decisionmakers, click here.)Matt Garman took the helm at Amazon Web Services (AWS), the cloud computing arm of the U.S. tech giant, in June, but he joined the business around 19 years ago as an intern. He went on to become AWSs first product manager and helped to build and launch many of its core services, before eventually becoming the CEO last year.Like many other tech companies, AWS, which is Amazons most profitable unit, is betting big on AI. In April 2023, the company launched Amazon Bedrock, which gives cloud customers access to foundation models built by AI companies including Anthropic and Mistral. At its re:Invent conference in Las Vegas in December, the AWS made a series of announcements, including a new generation of foundation AI models, called Nova. It also said that its building one of the worlds most powerful AI supercomputers with Anthropic, which it has a strategic partnership with, using a giant cluster of AWSs Trainium 2 training chips. TIME spoke with Garman a few days after the re:Invent conference, about his AI ambitions, how hes thinking about ensuring the technology is safe, and how the company is balancing its energy needs with its emissions targets.This interview has been condensed and edited for clarity.When you took over at AWS in June, there was a perception that Amazon had fallen behind somewhat in the AI race. What have your strategic priorities been for the business over the past few months?We've had a long history of doing AI inside of AWS, and in fact, most of the most popular AI services that folks use, like SageMaker, for the last decade have all been built on AWS. With generative AI we started to really lean in, and particularly when ChatGPT came out, I think everybody was excited about that, and it sparked everyone's imagination. We [had] been working on generative AI, actually, for a little while before that. And our belief at the time, and it still remains now, was that that AI was going to be a transformational technology for every single industry and workflow and user experience that's out there. And because of who our customer base is, our strategy was always to build a robust, secure, performance featureful platform that people could really integrate into their actual businesses. And so we didn't rush really quickly to throw a chatbot up on our website. We really wanted to help people build a platform that could deeply integrate into their data, that would protect their data. That's their IP, and it's super important for them, so [we] had security front of mind, and gave you choice across a whole bunch of models, gave you capabilities across a whole bunch of things, and really helped you build into your application and figure out how you could actually get inference and really leverage this technology on an ongoing basis as a key part of what you do in your enterprise. And so that's what we've been building for the last couple of years. In the last year we started to see people realize that that is what they wanted to [do] and as companies started moving from launching a hundred proof of concepts to really wanting to move to production. They realized that the platform is what they needed. They had to be able to leverage their data. They wanted to customize models. They wanted to use a bunch of different models. They wanted to have guardrails. They needed to integrate with their own enterprise data sources, a lot of which lived on AWS, and so their applications were AWS. We took that long-term view of: get the right build, the right platform, with the right security controls and the right capabilities, so that enterprises could build for the long term, as opposed to [trying to] get something out quickly. And so we're willing to accept the perception that people thought we were behind, because we had the conviction that we were building the right thing. And I think our customers largely agree.You're offering $1 billion worth in cloud credits, in addition to millions previously, for startups. Do you see that opening up opportunities for closer tie-ups at an earlier stage with the next Anthropic or OpenAI?Yeah, we've long invested in startups. It's one of the core customer bases that AWS has built our business on. We view startups as important to the success of AWS. They give us a lot of great insight. They love using cutting-edge technologies. They give us feedback on how we can improve our products. And frankly, they're the enterprises of tomorrow, so we want them to start building on AWS. And so from the very earliest days of AWS, startups have been critically important to us, and that's just doubling down on our commitment to them to help them get going. We recognize that as a startup, getting some help early on, before you get your business going, can make a huge difference. That's one of the things that we think helps us build that positive flywheel with that customer base. So we're super excited about continuing to work deeply with startups, and that commitment is part of that.You're also building one of the largest AI supercomputers in the world, with the Trainium 2 chips. Is building the hardware and infrastructure for AI development at the center of your AI strategy?Its a core part of it, for sure. We have this idea that across all of our AWS businesses, that choice is incredibly important for our customers. We want them to be able to choose from the very best technology, whether it comes from us or from third parties. Customers can pick the absolute best product for their application and for their use case and for what they're looking for from a cost performance trade-off. And so, on the AI side, we want to provide that same amount of choice. Building Tranium 2, which is our our second generation of high-performance AI chip, we think that's going to provide choice. Nvidia is an incredibly important partner of ours. Today, the vast majority of AI workloads run on Nvidia technology, and we expect that to continue for a very long time. They make great products, and the team executes really well. And we're really excited about the choice that Trainium 2 brings. Cost is one of the things that a lot of people worry about when they think about some of these AI workloads, and we think that Trainium 2 can help lower the cost for a lot of customers. And so we're really excited about that, both for AI companies who are looking to train these massive clusters, [for example] Anthropic is going to be training their next generation, industry-leading model on Trainium 2We're building a giant cluster, it's five times the size of their last clusterbut then the broad swath of folks that are doing inference or using Bedrock or making smaller clusters, I think there's a good opportunity for customers to lower costs with Trainium.Those clusters were 30% to 40% cheaper in comparison to Nvidia GPU clusters. What technical innovations are enabling these cost savings?Number one is that the team has done a fantastic job and produced a really good chip that performs really well. And so from an absolute basis, it gives better performance for some workloads. It's very workload dependent, but even Apple [says] in early testing, they see up to 50% price performance benefit. That's massive, if you can really get 30%, 40%, even 50% gains. And some of that is pricing, where we focused on building a chip that we think we can really materially lower the cost to produce for customers. But also then increasing performancethe team has built some innovations, where we see bottlenecks in AI training and inference, that we've built into the chips to improve particular function performance, etc. There are probably hundreds of thousands of things that go into delivering that type of performance, but we're quite excited about it and we're invested long term in the Trainium line.The company recently announced the Nova foundation model. Is that aimed at competing directly with the likes of GPT-4 and Gemini?Yes. We think it's important to have choice in the realm of these foundational models. Is it a direct competitor? We do think that we can deliver differentiated capabilities and performance. I think that this is such a big opportunity, and has such a material opportunity to change so many different workloads. These really large foundational modelsI think there'll be half a dozen to a dozen of them, probably less than 10. And I think they'll each be good at different things. [With] our Nova models, we focused on: how do we deliver a really low latency [and] great price performance? They're actually quite good at doing RAG [Retrieval-Augmented Generation] and agentic workflows. There's some other models that are better at other things today too. We'll keep pushing on it. I think there's room for a number of them, but we're very excited about the models and the customer reception has been really good.How does your partnership with Anthropic fit into this strategy?I think they have one of the strongest AI teams in the world. They have the leading model in the world right now. I think most people consider Sonnet to be the top model for reasoning and for coding and for a lot of other things as well. We get a lot of great feedback from customers on them. So we love that partnership, and we learn a lot from them too, as they build their models on top of Trainium, so there's a nice flywheel benefit where we get to learn from them, building on top of us. Our customers get to take advantage of leveraging their models inside of Bedrock, and we can grow the business together.How are you thinking about ensuring safety and responsibility in the development of AI?It's super important. And it goes up and down the stack. One of the reasons why customers are excited about models from us, in addition to them being very performant, is that we care a ton about safety. And so there's a couple of things. One is, you have to start from the beginning when you're building the models, you think about, how do you have as many controls in there as possible? How do you have safe development to the models? And then I think you need belt and suspenders in this space, because you can, of course, make models say things that you can then say oh, look what they said. Practically speaking our customers are trying to integrate these into their applications. And different from being able to produce a recipe for a bomb or something, which we definitely want to have security controls around, safety and control models actually extends specific to very use cases. If you're building an insurance application, you don't want your application to give out healthcare advice, whereas, if you're building healthcare one, you may. So we give a lot of controls to the customers so that they can build guardrails around the responses for models to really help guide how they want models to answer those questions. We launched a number of enhancements at re:Invent including what we call automated reasoning checks, which actually can give you a mathematical proof for if we can be 100% sure that an answer coming back is correct, based on the corpus of data that you have fed into the model. Eliminating hallucinations for a subset of answers is also super important. What's unsafe in the context of a customer's application can vary pretty widely, and so we try to give some really good controls for customers to be able to define that, because it's going to depend on the use cases. Energy requirements are a huge challenge for this business. Amazon is committed to a net zero emissions target by 2040 and you reported some progress there. How are you planning to continue reducing emissions while investing in large-scale infrastructure for AI?Number one is you just have to have that long term view as to how we ensure that the world has enough carbon-zero power. We've been the single biggest purchaser of renewable energy deals, new energy deals to the grid, so commissioning new solarsolar farms, or wind farms, etc. We've been the biggest corporate purchaser each of the last five years, and will continue to do that. Even on that path, that may not be fast enough, and so we've actually started investing in nuclear. I do think that that's an important component. It'll be part of that portfolio. It can be both large scale nuclear plants as well as, we've invested in and we're very bullish about small modular reactor technology, which is probably six or seven years out from really being in mass production. But we're optimistic that that can be another solve as part of that portfolio as well. On the path to carbon zero across the whole business, there's a lot of invention that's still going to need to happen. And I wont sit here and tell you we know all of the answers of how you're going to have carbon-zero shipping across oceans and airplanes for the retail side of it. And there's a whole bunch of challenges that the world has to go after, but that's part of why we made that commitment. We're putting together plans with with milestones along the way, because it's an incredibly important target for us. There's a lot of work to do but we're committed to doing it.And as part of that nuclear piece, you're supporting the development of these nuclear energy projects. What are you doing to ensure that the projects are safe in the communities where they're deployed?Look, I actually think one of the worst things for the environment was the mistakes the nuclear industry made back in the 50s, because it made everyone feel like technology wasn't that safe, which it may not have been way back then, but, it's been 70 years, and technology has evolved, and it is actually an incredibly safe, secure technology now. And so a lot of these things are actually fully self-contained and there is no risk of big meltdown or those kind of events that happened before. It's a super safe technology that has been well-tested and has been in production across the world safely for multiple decades now. There's still some fear, I think, from people, but, actually, increasingly, many geographies are realizing it's a quite safe technology.What do you want to see in terms of policy from the new presidential administration?We consider the U.S. government to be one of our most important customers that we support up and down the board and will continue to do so. So we're very excited, and we know many of those folks and are excited to continue to work on that mission together, because we do view it as a mission. It's both a good business for us, but it's also an ability to help our country move faster, to control costs, to be more agile. And I think it's super important, as you think about where the world is going, for our government to have access to the latest technologies. I do think AI and technology is increasingly becoming an incredibly important part of our national defense, probably as much so as guns and other things like that, and so we take that super seriously, and we're excited to work with the administration. I'm optimistic that President Trump and his administration can help us loosen some of the restrictions on helping build data centers faster. I'm hopeful that they can help us cut through some of that bureaucratic red tape and move faster. I think that'll be important, particularly as we want to maintain the AI lead for the U.S. ahead of China and others.What have you learned about leadership over the course of your career?We're fortunate at Amazon to be able to attract some of the most talented, most driven leaders and employees in the world, and I've been fortunate enough to get to work with some of those folks [and] to try to clear barriers for them so that they can go deliver outstanding results for our customers. I think if we have a smart team that is really focused on solving customer problems versus growing their own scope of responsibility or internal goals, [and] if you can get those teams focused on that and get barriers out of their way and remove obstacles, then we can deliver a lot. And so that's largely my job. I view myself as not the expert in any one particular thing. Every one of my team is usually better at whatever we're trying to do than I am. And my job is to let them go do their job as much as possible, and occasionally connect dots for them on where there's other parts of the company or other parts of the organization or other customer input that they may not have, that they can integrate and incorporate.You've worked closely with Andy Jassy, is there anything in particular that youve learned from watching him as a leader?I've learned a ton. He's a he's an exceptional leader. Andy is very good at having very high standards and having high expectations for the teams, and high standards for what we deliver for customers. He had a lot of the vision, together with some of the core folks who were starting AWS, of some important tenets of how we think about the business, of focusing on security and operational excellence and really focusing on how we go deliver for customers.What are your priorities for 2025?Our first priority always is to maintain outstanding security and operational excellence. We want to help customers get ready for that AI transformation that's going to happen. Part of that, though, is also helping get all of their applications in a place that they can take advantage of AI. So it's a hugely important priority for us to help customers continue on that migration to the cloud, because if their data is stuck on premise and legacy data stores and other things, they won't be able to take advantage of AI. So helping people modernize their data and analytics stacks to get that into the cloud and get their data links into a cloud and organized in a way that they can really start to take advantage of AI, is that is a big priority for us. And then it's just, how do we help scale the AI capabilities, bring the cost down for customers, while [we] keep adding the value. And for 2025, our goal is for customers to move AI workloads really into production that deliver great ROI for their businesses. And that crosses making sure all their data is in the right place, and make sure they have the right compute platforms. We think Trainium is going to be an important part of that. The last bit is helping add some applications on top. We think that we can add [the] extra benefit of helping employees and others get that effectiveness. Some of that is moving contact centers to the cloud. Some of that is helping get conversational assistants and AI assistants in the hands of employees, and so Amazon Q is a big part of that for us. And then it's also just empowering our broad partner ecosystem to go fast and help customers evolve as well.0 Comments ·0 Shares ·83 Views
-
TikTok Returns to Apple and Google App Stores in the U.S. After Trump Delayed Bantime.comBy Zen Soo / APFebruary 14, 2025 2:30 AM ESTTikTok has returned to the app stores of Apple and Google in the U.S., after President Donald Trump delayed the enforcement of a TikTok ban.TikTok, which is operated by Chinese technology firm ByteDance, was removed from Apple and Googles app stores on Jan. 18 to comply with a law that requires ByteDance to divest the app or be banned in the U.S.The popular social media app, which has over 170 million American users, previously suspended its services in the U.S. for a day before restoring service following assurances from Trump that he would postpone banning the app. The TikTok service suspension briefly prompted thousands of users to migrate to RedNote, a Chinese social media app, while calling themselves TikTok refugees.The TikTok app became available to download again in the U.S. Apple App store and Google Play store after nearly a month. On Trumps first day in office, he signed an executive order to extend the enforcement of a ban on TikTok to April 5.TikTok has long faced troubles in the U.S., with the U.S. government claiming that its Chinese ownership and access to the data of millions of Americans makes it a national security risk.TikTok has denied allegations that it has shared U.S. user data at the behest of the Chinese government, and argued that the law requiring it to be divested or banned violates the First Amendment rights of its American users.During Trumps first term in office, he supported banning TikTok but later changed his mind, claiming that he had a warm spot for the app. TikTok CEO Shou Chew was among the attendees at Trumps inauguration ceremony.Trump has suggested that TikTok could be jointly owned, with half of its ownership being American. Potential buyers include real estate mogul Frank McCourt, Shark Tank investor Kevin OLeary and popular YouTuber Jimmy Donaldson, also known as MrBeast.Zen Soo reported from Hong Kong.0 Comments ·0 Shares ·86 Views
-
Elon Musk Talks DOGE, AI, and DEI in Dubaitime.comElon Musk speaks via videocall at the World Governments Summit in Dubai on Feb. 13, 2025.Waleed ZeinAnadolu/Getty ImagesBy Jon Gambrell / APFebruary 13, 2025 2:30 AM ESTDUBAI, United Arab Emirates Elon Musk called Thursday to delete entire agencies from the U.S. government as part of his push under President Donald Trump to radically cut spending and restructure its priorities.Musk offered a wide-ranging survey via a videocall to the World Governments Summit in Dubai, United Arab Emirates, of what he described as the priorities of the Trump administration interspersed with multiple references to thermonuclear warfare and the possible dangers of artificial intelligence.We really have here rule of the bureaucracy as opposed to rule of the peopledemocracy, Musk said, wearing a black T-shirt that read: Tech Support. He also joked that he was the White Houses tech support, borrowing from his profile on the social platform X, which he owns.I think we do need to delete entire agencies as opposed to leave a lot of them behind, Musk said. If we dont remove the roots of the weed, then its easy for the weed to grow back.While Musk has spoken to the summit in the past, his appearance Thursday comes as he has consolidated control over large swaths of the government with Trumps blessing since assuming leadership of the Department of Government Efficiency. Thats included sidelining career officials, gaining access to sensitive databases and inviting a constitutional clash over the limits of presidential authority.Musks new role imbued his comments with more weight beyond being the worlds wealthiest person through his investments in SpaceX and electric carmaker Tesla.His remarks also offered a more-isolationist view of American power in the Middle East, where the U.S. has fought wars in both Afghanistan and Iraq since the Sept. 11, 2001, terror attacks.A lot of attention has been on USAID for example, Musk said, referring to Trumps dismantling of the U.S. Agency for International Development. Theres like the National Endowment for Democracy. But Im like, Okay, well, how much democracy have they achieved lately?He added that the U.S. under Trump is less interested in interfering with the affairs of other countries.There are times the United States has been kind of pushy in international affairs, which may resonate with some members of the audience, Musk said, speaking to the crowd in the UAE, an autocratically ruled nation of seven sheikhdoms.Basically, America should mind its own business, rather than push for regime change all over the place, he said.He also noted the Trump administration's focus on eliminating diversity, equity and inclusion work, at one point linking it to AI.If hypothetically, AI is designed for DEI, you know, diversity at all costs, it could decide that theres too many men in power and execute them, Musk said.On AI, Musk said he believed Xs newly updated AI chatbot, Grok 3, would be ready in about two weeks, calling it at one point kind of scary. He criticized Sam Altmans management of OpenAI, which Musk just led a $97.4 billion takeover bid for, describing it as akin to a nonprofit aimed at saving the Amazon rainforest becoming a lumber company that chops down the trees.Musk also announced plans for a Dubai Loop project in line with his work in the Boring Companywhich is digging tunnels in Las Vegas to speed transit. However, he and the Emirati government official speaking with him offered no immediate details of the plan.Its going to be like a wormhole, Musk promised. You just wormhole from one part of the cityboomand youre out in another part of the city.More Must-Reads from TIMEInside Elon Musks War on WashingtonIntroducing the 2025 ClosersColman Domingo Leads With Radical LoveWhy, Exactly, Is Alcohol So Bad for You?The Motivational Trick That Makes You Exercise Harder11 New Books to Read in FebruaryHow to Get Better at Doing Things AloneColumn: Trumps Trans Military Ban Betrays Our TroopsContact us at letters@time.com0 Comments ·0 Shares ·127 Views
-
Digital Access Is Critical for Society Say Industry Leaderstime.comSam Jacobs, Editor-in-Chief of TIME, (L) moderates a panel titled Can we innovate our way to a more connected world? with panelists Margherita Della Valle, CEO of Vodafone, Mickey Mikitani, CEO of Rakuten Group, and Hatem Dowidar, CEO of e&.Courtesy of the World Governments SummitBy Ayesha JavedFebruary 12, 2025 5:53 PM ESTImproving connectivity can both benefit those who most need it most and boost the businesses that provide the service. Thats the case telecom industry leaders made during a panel on Feb. 11 at the World Governments Summit in Dubai.Titled Can we innovate our way to a more connected world?, the panel was hosted by TIMEs Editor-in-Chief Sam Jacobs. During the course of the conversation, Margherita Della Valle, CEO of U.K.-based multinational telecom company Vodafone Group, said, For society today, connectivity is essential. We are moving from the old divide in the world between the haves and the have-nots towards a new divide, which is between those who have access to connectivity and those who don't.The International Telecommunications Union, a United Nations agency, says that around 2.6 billion peoplea third of the global populationdont have access to the internet. Della Valle noted that of those unconnected people, 300 million live in remote areas that are too far from any form of connectivity infrastructure to get online. Satellites can help to bridge the gap, says Della Valle, whose company plans to launch its commercial direct-to-smartphone satellite service later this year in Europe.While digital access is a social issue, companies dont need to choose between what is best for consumers and whats best for business, Hatem Dowidar, group CEO of UAE-based telecom company e&, formerly known as Etisalat Group, said. At the end of the day," he said, "in our telecom part of the business, when we connect people, [theyre] customers for us, it makes revenue, and we can build on it. He noted that part of e&s evolution toward becoming a tech company has involved enabling customers to access fintech, cybersecurity, and cloud computing services.Mickey Mikitani, CEO of Japanese technology conglomerate Rakuten Group, advocated for a radical transformation of the telecommunications industry, calling the existing telecoms business model obsolete and old. Removing barriers to entry to the telecom sector, like the cost of accessing a wireless spectrumthe range of electromagnetic frequencies used to transmit wireless communicationsmay benefit customers and society more broadly, he said.The panelists also discussed how artificial intelligence can improve connectivity, as well as the role of networks in supporting the technologys use. Mikitani noted that his company has been using AI to help it manage networks efficiently with a fraction of the staff its competitors have. Della Valle added, AI will need strong networks, emphasizing that countries where networks have not received sufficient investment may struggle to support the technology.Dowidar called on attendees at the summit from governments around the world to have a dialogue with industry leaders about legislation and regulations in order to overcome the current and potential challenges. Some of those hurdles include ensuring data sovereignty and security within borders, and enabling better training of AI in languages beyond English, he noted. It's very important for everyone to understand the potential that can be unleashed by technology, Dowidar said, emphasizing the need to train workforces. AI is going to change the world.More Must-Reads from TIMEInside Elon Musks War on WashingtonIntroducing the 2025 ClosersColman Domingo Leads With Radical LoveWhy, Exactly, Is Alcohol So Bad for You?The Motivational Trick That Makes You Exercise Harder11 New Books to Read in FebruaryHow to Get Better at Doing Things AloneColumn: Trumps Trans Military Ban Betrays Our TroopsWrite to Ayesha Javed at ayesha.javed@time.com0 Comments ·0 Shares ·117 Views
-
Safety Takes A Backseat At Paris AI Summit, As U.S. Pushes for Less Regulationtime.comBy Billy Perrigo/ParisFebruary 11, 2025 4:35 PM ESTSafety concerns are out, optimism is in: that was the takeaway from a major artificial intelligence summit in Paris this week, as leaders from the U.S., France, and beyond threw their weight behind the AI industry.Although there were divisions between major nationsthe U.S. and the U.K. did not sign a final statement endorsed by 60 nations calling for an inclusive and open AI sectorthe focus of the two-day meeting was markedly different from the last such gathering. Last year, in Seoul, the emphasis was on defining red-lines for the AI industry. The concern: that the technology, although holding great promise, also had the potential for great harm.But that was then. The final statement made no mention of significant AI risks nor attempts to mitigate them, while in a speech on Tuesday, U.S. Vice President J.D. Vance said: Im not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. Im here to talk about AI opportunity.The French leader and summit host, Emmanuel Macron, also trumpeted a decidedly pro-business messageunderlining just how eager nations around the world are to gain an edge in the development of new AI systems.Once upon a time in BletchleyThe emphasis on boosting the AI sector and putting aside safety concerns was a far cry from the first ever global summit on AI held at Bletchley Park in the U.K. in 2023. Called the AI Safety Summitthe French meeting in contrast was called the AI Action Summitits express goal was to thrash out a way to mitigate the risks posed by developments in the technology.The second global gathering, in Seoul in 2024, built on this foundation, with leaders securing voluntary safety commitments from leading AI players such as OpenAI, Google, Meta, and their counterparts in China, South Korea, and the United Arab Emirates. The 2025 summit in Paris, governments and AI companies agreed at the time, would be the place to define red-lines for AI: risk thresholds that would require mitigations at the international level.Paris, however, went the other way. I think this was a real belly-flop, says Max Tegmark, an MIT professor and the president of the Future of Life Institute, a non-profit focused on mitigating AI risks. It almost felt like they were trying to undo Bletchley.Anthropic, an AI company focused on safety, called the event a missed opportunity.The U.K., which hosted the first AI summit, said it had declined to sign the Paris declaration because of a lack of substance. We felt the declaration didn't provide enough practical clarity on global governance, nor sufficiently address harder questions around national security and the challenge AI poses to it, said a spokesperson for Prime Minister Keir Starmer.Racing for an edgeThe shift comes against the backdrop of intensifying developments in AI. In the month or so before the 2025 Summit, OpenAI released an agent model that can perform research tasks at roughly the level of a competent graduate student.Safety researchers, meanwhile, showed for the first time that the latest generation of AI models can try to deceive their creators, and copy themselves, in an attempt to avoid modification. Many independent AI scientists now agree with the projections of the tech companies themselves: that super-human level AI may be developed within the next five yearswith potentially catastrophic effects if unsolved questions in safety research arent addressed.Yet such worries were pushed to the back burner as the U.S., in particular, made a forceful argument against moves to regulate the sector, with Vance saying that the Trump Administration cannot and will not accept foreign governments tightening the screws on U.S. tech companies.He also strongly criticized European regulations. The E.U. has the worlds most comprehensive AI law, called the AI Act, plus other laws such as the Digital Services Act, which Vance called out by name as being overly restrictive in its restrictions related to misinformation on social media.The new Vice President, who has a broad base of support among venture capitalists, also made clear that his political support for big tech companies did not extend to regulations that would raise barriers for new startups, thus hindering the development of innovative AI technologies.To restrict [AIs] development now would not only unfairly benefit incumbents in the space, it would mean paralysing one of the most promising technologies we have seen in generations, Vance said. When a massive incumbent comes to us asking for safety regulations, we ought to ask whether that safety regulation is for the benefit of our people, or whether its for the benefit of the incumbent.And in a clear sign that concerns about AI risks are out of favor in President Trumps Washington, he associated AI safety with a popular Republican talking point: the restriction of free speech by social media platforms trying to tackle harms like misinformation.With reporting by Tharin Pillay/Paris and Harry Booth/Paris0 Comments ·0 Shares ·112 Views
-
JD Vance Rails Against Excessive AI Regulation at Paris Summittime.comPARIS U.S. Vice President JD Vance warned global leaders and tech CEOs at a Paris summit on artificial intelligence on Tuesday that excessive regulation would kill the rapidly growing AI industry.In his first foreign trip as vice president, Vance also said the Trump administration will ensure that AI systems developed in America are free from ideological bias, and that the United States would never restrict our citizens right to free speech.[time-brightcove not-tgx=true]Now, at this moment, we face the extraordinary prospect of a new industrial revolution, one on par with the invention of the steam engine, Vance said. But it will never come to pass. If overregulation deters innovators from taking the risks necessary to advance the ball.Vances address challenged Europes regulatory approach to artificial intelligence and its moderation of content on Big Tech platforms, underscoringdivergence between the United States and its allies on AI governance.The summit has drawn world leaders, top tech executives, and policymakers to debate AIs impact on security, economics, and governance.Read More: Inside Frances Effort to Shape the Global AI ConversationA three-way race for AI dominanceThe differences were openly displayed at the summit: Europe seeks to regulate and invest, China expands access through state-backed tech giants, and the U.S., under President Donald Trump, champions a hands-off approach.Among the high-profile attendees is Chinese Vice Premier Zhang Guoqing, reflecting Beijings interest in shaping global AI standards.Vance has been an outspoken critic of European content moderation policies. He has suggested the U.S. should reconsider its NATO commitments if European governments impose restrictions onElon Musks social media platform, X. His Paris visit is also expected to include candid discussions on Ukraine, AIs role in global power shifts, and U.S.-China tensions.How to regulate AI?Concerns over AIs potential dangers have loomed over the summit, particularly as nations grapple with how to regulate a technology that is increasingly entwined with defense and warfare.I think one day we will have to find ways to control AI or else we will lose control of everything, said Admiral Pierre Vandier, NATOs commander who oversees the alliances modernization efforts.Beyond diplomatic tensions, a global public-private partnership is being launched called Current AI, aimed at supporting large-scale AI initiatives for the public good.Analysts see this as an opportunity to counterbalance the dominance of private companies in AI development. However, it remains unclear whether the U.S. will support such efforts.Separately, a high-stakes battle over AI power is escalating in the private sector.A group of investors led by Musk who now heads Trumps Department of Government Efficiency has made a $97.4 billion bid to acquire the nonprofit behind OpenAI. OpenAI CEO Sam Altman, attending the Paris summit, swiftly rejected the offer on X.The US-China rivalryIn Beijing, officials on Monday condemnedWestern efforts to restrict access to AI tools, while Chinese company DeepSeeks new AI chatbot has prompted calls in the U.S. Congress to limit its use over security concerns. China promotes open-source AI, arguing that accessibility will ensure global AI benefits.French organizers hope the summit will boost investment in Europes AI sector, positioning the region as a credible contender in an industry shaped by U.S.-China competition.French PresidentEmmanuel Macron, addressing the energy demands of AI, contrasted Frances nuclear-powered approach with the U.S.s reliance on fossil fuels, quipping: France wont drill, baby, drill, but plug, baby, plug.Vances diplomatic tour will continue in Germany, where he will attend the Munich Security Conference and press European allies to increase commitments to NATO and Ukraine. He may also meet with Ukrainian President Volodymyr Zelenskyy.Talking Ukraine and Middle East with MacronVance will discuss Ukraine and the Middle East over a working lunch with Macron.Like Trump, he has questioned U.S. aid to Kyiv and the broader Western strategy toward Russia. Trump has pledged to end the war in Ukraine within six months of taking office.Vance is also set to meet separately with Indian Prime MinisterNarendra Modiand European Commission President Ursula von der Leyen.0 Comments ·0 Shares ·108 Views
-
How Google Appears to Be Adapting Its Products to the Trump Presidencytime.comA Google logo outside the Google booth at Integrated Systems Europe 2025 in Barcelona on Feb. 4, 2025.Cesc MaymoGetty ImagesBy Miranda JeyaretnamFebruary 11, 2025 4:00 AM ESTGoogle was among the tech companies that donated $1 million to Donald Trumps 2025 inauguration. Its also among the companies that has pulled back on its internal diversity hiring policies in response to the Trump Administrations anti-DEI crackdown. And in early February, Google dropped its pledge not to use AI for weapons or surveillance, a move seen as paving the way for closer cooperation with Trumps government.Now, users of Googles consumer products are noticing that a number of updates have been madeseemingly in response to the new administrationto everyday tools like Maps, Calendar, and Search.Heres what to know.Google Maps renames Gulf of Mexico to Gulf of AmericaAmong Trumps first executive orders was a directive to rename the Gulf of Mexico to Gulf of America and Alaskas Denali, the highest mountain peak in North America, to its former name Mt. McKinley. Google announced on Jan. 27 that it would quickly update its maps accordingly, as soon as the federal Geographic Names Information System (GNIS) is updated. On Monday, Feb. 10, following changes around the same time by the Storm Prediction Center and Federal Aviation Administration, Google announced that, in line with its longstanding convention on naming disputed regions, U.S. based users would now see Gulf of America, Mexican users will continue to see Gulf of Mexico, while users elsewhere will see Gulf of Mexico (Gulf of America).As of Tuesday, Feb. 11, alternatives Apple Maps and OpenStreetMap still show Gulf of Mexico.Google Calendar removes Pride, Black History Month, and other cultural holidaysLast week, some users noticed that Google removed certain default markers from its calendar, including Pride (June), Black History Month (February), Indigenous Peoples Month (November), and Hispanic Heritage Month (mid-September to mid-October). Dear Google. Stop sucking up to Trump, reads one comment on a Google Support forum about the noticed changes.A Google spokesperson confirmed the removal of some holidays and observances to The Verge but said that such changes began in 2024 because maintaining hundreds of moments manually and consistently globally wasnt scalable or sustainable, explaining that Google Calendar now defers to public holidays and national observances globally listed on timeanddate.com. But not everyone is buying the explanation: These are lies by Google in order to please the American dictator, wrote a commenter on another Google Support forum about the changes.Google Search prohibits autocomplete for impeach TrumpEarlier this month, social media users also noticed that Google Search no longer suggests an autocomplete for impeach Trump when the beginning of the query is typed in the search box, Snopes reported. A Google spokesperson told the fact-checking site that the autocomplete suggestion was removed because the companys policies prohibit autocomplete predictions that could be interpreted as a position for or against a political figure. In this case, some predictions were appearing that shouldnt have been, and were taking action to block them. Google also recently removed predictions for impeach Biden, impeach Clinton, and others, the spokesperson added, though search results dont appear to be altered.More Must-Reads from TIMEInside Elon Musks War on WashingtonIntroducing the 2025 ClosersColman Domingo Leads With Radical LoveWhy, Exactly, Is Alcohol So Bad for You?The Motivational Trick That Makes You Exercise Harder11 New Books to Read in FebruaryHow to Get Better at Doing Things AloneColumn: Trumps Trans Military Ban Betrays Our TroopsContact us at letters@time.com0 Comments ·0 Shares ·130 Views
-
How Elon Musks Anti-Government Crusade Could Benefit Tesla and His Other Businessestime.comBy KIMBERLY KINDY and BRIAN SLODYSKO / APFebruary 11, 2025 3:00 AM ESTWASHINGTON Elon Musk has long railed against the U.S. government, saying a crushing number of federal investigations and safety programs have stymied Tesla, his electric car company, and its efforts to create fleets of robotaxis and other self-driving automobiles.Now, Musks close relationship with President Donald Trump means many of those federal headaches could vanish within weeks or months.On the potential chopping block: crash investigations into Teslas partially automated vehicles; a Justice Department criminal probe examining whether Musk and Tesla have overstated their cars self-driving capabilities; and a government mandate to report crash data on vehicles using technology like Teslas Autopilot.The consequences of such actions could prove dire, say safety advocates who credit the federal investigations and recalls with saving lives.Musk wants to run the Department of Transportation, said Missy Cummings, a former senior safety adviser at the National Highway Traffic Safety Administration. Ive lost count of the number of investigations that are underway with Tesla. They will all be gone.Within days of Trump taking office, the White House and Musk began waging an unbridled war against the federal governmentfreezing spending and programs while sacking a host of career employees, including prosecutors and government watchdogs typically shielded from such brazen dismissals without cause.The actions have sparked outcries from legal scholars who say the Trump administrations actions are without modern-day precedent and are already upending the balance of power in Washington.The Trump administration has not yet declared any actions that could benefit Tesla or Musks other companies. However, snuffing out federal investigations or jettisoning safety initiatives would be an easier task than their assault on regulators and the bureaucracy.Investigations into companies like Tesla can be shut down overnight by the new leaders of agencies. And safety programs created through an agency order or initiativenot by laws passed by Congress or adopted through a formal regulatory processcan also be quickly dissolved by new leaders. Unlike many of the dismantling efforts that Trump and Musk have launched in recent weeks, stalling or killing such probes and programs would not be subject to legal challenges.As such, the temporal and fragile nature of the federal probes and safety programs make them easy targets for those seeking to weaken government oversight and upend long-established norms.Trumps election, and the bromance between Trump and Musk, will essentially lead to the defanging of a regulatory environment thats been stifling Tesla, said Daniel Ives, a veteran Wall Street technology and automobile industry analyst.Musks empireAmong Musks businesses, the federal governments power over Tesla to investigate, order recalls, and mandate crash data reporting is perhaps the most wide-ranging. However, the ways the Trump administration could quickly ease up on Tesla also apply in some measure to other companies in Musks sprawling business empire.A host of Musks other businessessuch as his aerospace company SpaceX and his social media company Xare subjects of federal investigations.Musks businesses are also intertwined with the federal government, pocketing hundreds of millions of dollars each year in contracts. SpaceX, for example, has secured nearly $20 billion in federal funds since 2008 to ferry astronauts and satellites into space. Tesla, meanwhile, has received $41.9 million from the U.S. government, including payment for vehicles provided to some U.S. embassies.Musk, Teslas billionaire CEO, has found himself in his newly influential position by enthusiastically backing Trumps third bid for the White House. He was the largest donor to the campaign, plunging more than $270 million of his vast fortune into Trumps political apparatus, most of it during the final months of the heated presidential race.Those donations and his efforts during the campaignincluding the transformation of his social media platform X into a firehose of pro-Trump commentaryhave been rewarded by Trump, who has tapped the entrepreneur to oversee efforts to slash government regulations and spending.Read More: Inside Elon Musks War on Washington As the head of the Department of Government Efficiency, Musk operates out of an office in the Eisenhower Executive Office Building, where most White House staff work and from where he has launched his assault on the federal government. Musks power under DOGE is being challenged in the courts.Even before Trump took office, there were signs that Musks vast influence with the new administration was registering with the publicand paying dividends for Tesla.Teslas stock surged more than 60% by December. Since then, its stock price has dropped, but still remains 40% higher than it was before Trumps election.For Musk, said Ives, the technology analyst, betting on Trump is a poker move for the ages.Proposed actions will help TeslaThe White House did not respond to questions about how it would handle investigations and government oversight involving Tesla or other Musk companies. A spokesman for the transition team said last month that the White House would ensure that DOGE and those involved with it are compliant with all legal guidelines and conflicts of interest.In the weeks before Trump took office on Jan. 20, the president-elects transition team recommended changes that would benefit the billionaire and his car company, including scrapping the federal order requiring carmakers to report crash data involving self-driving and partially automated technology.The action would be a boon for Tesla, which has reported a vast majority of the crashes that triggered a series of investigations and recalls.The transition team also recommended shelving a $7,500 consumer tax credit for electric vehicle purchases, something Musk has publicly called for.Take away the subsidies. It will only help Tesla, Musk wrote in a post on X as he campaigned and raised money for Trump in July.Auto industry experts say the move would have a nominal impact on Teslaby far the largest electric vehicle maker in the U.S.but have a potentially devastating impact on its competitors in the EV sector since they are still struggling to secure a foothold in the market.Musk did not respond to requests for comment. Before the election, he posted a message on X, saying he had never asked Trump for any favors, nor has he offered me any.Although most of the changes that Musk might seek for Tesla could unfold quickly, there is one long-term goal that could impact the autonomous vehicle industry for decades to come.Though nearly 30 states have rules that specifically govern self-driving cars, the federal government has yet to craft such regulations.During a late October call with Tesla investors, as Musk was pouring hundreds of millions of dollars into Trumps campaign, he signaled support for having the federal government create these rules.There should be a federal approval process for autonomous vehicles, Musk said on the call. If theres a department of government efficiency, Ill try to help make that happen.Musk leads that very organization.Those affected by Tesla crashes worry about lax oversightPeople whose lives have been forever changed by Tesla crashes fear that dangerous and fatal accidents may increase if the federal governments investigative and recall powers are restricted.They say they worry that the company may otherwise never be held accountable for its failures, like the one that took the life of 22-year-old Naibel Benavides Leon.The college student was on a date with her boyfriend, gazing at the stars on the side of a rural Florida road, when they were struck by an out-of-control Tesla driving on Autopilota system that allows Tesla cars to operate without driver input. The car had blown through a stop sign, a flashing light and five yellow warning signs, according to dashcam video and a police report.Benavides Leon died at the scene; her boyfriend, Dillon Angulo, suffered injuries but survived. A federal investigation determined that Autopilot in Teslas at this time was faulty and needed repairs.We, as a family, have never been the same, said Benavides Leons sister, Neima. Im an engineer, and everything that we design and we build has to be by important codes and regulations. This technology cannot be an exception.It has to be investigated when it fails, she added. Because it does fail.Teslas lawyers did not respond to requests for comment. In a statement on Twitter in December 2023, Tesla pointed to an earlier lawsuit the Benavides Leons family had brought against the driver who struck the college student. He testified that despite using Autopilot, I was highly aware that it was still my responsibility to operate the vehicle safely.Tesla also said the driver was pressing the accelerator to maintain 60 mph, an action that effectively overrode Autopilot, which would have otherwise restricted the speed to 45 mph on the rural route, something Benavides Leons attorney disputes.Federal probes into TeslaThe federal agency that has the most power over Teslaand the entire automobile industryis the National Highway Traffic Safety Administration, which is part of the Department of Transportation.NHTSA sets automobile safety standards that must be met before vehicles can enter the marketplace. It also has a quasi-law enforcement arm, the Office of Defects Investigation, which has the power to launch probes into crashes and seek recalls for safety defects.The agency has six pending investigations into Teslas self-driving technology, prompted by dozens of crashes that took place when the computerized systems were in use.Other federal agencies are also investigating Musk and Tesla, and all of those probes could be sidelined by Musk-friendly officials:The Securities and Exchange Commission and Justice Department are separately investigating whether Musk and Tesla overstated the autonomous capabilities of their vehicles, creating dangerous situations in which drivers may over rely on the cars technology.The Justice Department is also probing whether Tesla misled customers about how far its electric vehicles can travel before needing a charge.The National Labor Relations Board is weighing 12 unfair labor practice allegations leveled by workers at Tesla plants.The Equal Employment Opportunity Commission is asking a federal judge to force Tesla to enact reforms and pay compensatory and punitive damages and backpay to Black employees who say they were subjected to racist attacks. In a federal lawsuit, the agency has alleged that supervisors and other employees at Teslas plant in Fremont, California, routinely hurled racist insults at Black employees.Experts said most, if not all, of those investigations could be shut down, especially at the Justice Department where Trump has long shown a willingness to meddle in the departments affairs. The Trump administration has already ordered the firing of dozens of prosecutors who handled the criminal cases from the Jan. 6, 2021 attack on the Capitol.DOJ is not going to be prosecuting Elon Musk, said Peter Zeidenberg, a former Assistant U.S. Attorney in the Justice Departments public integrity section who served during the Clinton and George H.W. Bush administrations. Id expect that any investigations that were ongoing will be ground to an abrupt end.Trump has also taken steps to gain control of the NLRB and EEOC. Last month, he fired Democratic members of the board and commission, breaking with decades of precedent. One member has sued, and two others are exploring legal options.Tesla and Musk have denied wrongdoing in all those investigations and are fighting the probes.The small safety agency in Musks crosshairsThe federal agency that appears to have enjoyed the most success in changing Teslas behavior is NHTSA, an organization of about 750 staffers that has forced the company to hand over crash data and cooperate in its investigations and requested recalls.NHTSA has been a thorn in Musks side for over the last decade, and hes grappled with almost every three-letter agency in the Beltway, said Ives, the Wall Street analyst who covers the technology sector and automobile industry. Thats all created what looks to be a really big soap opera in 2025.Musk has repeatedly blamed the federal government for impeding Teslas progress and creating negative publicity with recalls of his cars after its self-driving technology malfunctions or crashes.The word recall should be recalled, Musk posted on Twitter (now X) in 2014. Two years ago, he posted, The word recall for an over-the-air software update is anachronistic and just flat wrong!Michael Brooks, executive director of the Center for Auto Safety, a non-profit consumer advocacy group, said some investigations might continue under Trump, but a recall is less likely to happen if a defect is found.As with most car companies, Teslas recalls have so far been voluntary. The threat of public hearings about a defect that precedes a NHTSA-ordered recall has generally prompted car companies to act on their own.That threat could be easily stripped away by the new NHTSA administrator, who will be a Trump appointee.If there isnt a threat of recall, will Tesla do them? Brooks said. Unfortunately, this is where politics seeps in.NHTSA conducting several probes of TeslaAmong the active NHTSA investigations, several are examining fundamental aspects of Teslas partially automated driving systems that were in use when dozens of crashes occurred.An investigation of Teslas Full Self-Driving system started in October after Tesla reported four crashes to NHTSA in which the vehicles had trouble navigating through sun glare, fog and airborne dust. In one of the accidents, an Arizona woman was killed after stopping on a freeway to help someone involved in another crash.Under pressure from NHTSA, Tesla has twice recalled the Full Self-Driving feature for software updates. The technologythe most advanced of Teslas Autopilot systemsis supposed to allow drivers to travel from point to point with little human intervention. But repeated malfunctions led NHTSA to recently launch a new inquiry that includes a crash in July that killed a motorcyclist near Seattle.NHTSA announced its latest investigation in January into Actually Smart Summon, a Tesla technology that allows drivers to remotely move a car, after the agency learned of four incidents from a driver and several media reports.The agency said that in each collision, the vehicles were using the system that Tesla pushed out in a September software update that was failing to detect posts or parked vehicles, resulting in a crash. NHTSA also criticized Tesla for failing to notify the agency of those accidents.NHTSA is also conducting a probe into whether a 2023 recall of Autopilot, the most basic of Teslas partially automated driver assistance systems, was effective.That recall was supposed to boost the number of controls and alerts to keep drivers engaged; it had been prompted by an earlier NHTSA investigation that identified hundreds of crashes involving Autopilot that resulted in scores of injuries and more than a dozen deaths.In a letter to Tesla in April, agency investigators noted that crashes involving Autopilot continue and that they could not observe a difference between warnings issued to drivers before or after the new software had been installed.Critics have said that Teslas dont have proper sensors to be fully self-driving. Nearly all other companies working on autonomous vehicles use radar and laser sensors in addition to cameras to see better in the dark or in poor visibility conditions. Tesla, on the other hand, relies only on cameras to spot hazards.Musk has said that human drivers rely on their eyesight, so autonomous cars should be able to also get by with just cameras. He has called technology that relies on radar and light detection to discern objects a fools errand.Bryant Walker Smith, a Stanford Law School scholar and a leading automated driving expert, said Musks contention that the federal government is holding him back is not accurate. The problem, Smith said, is that Teslas autonomous vehicles cannot perform as advertised.Blaming the federal government for holding them back, it provides a convenient, if dubious, scapegoat for the lack of an actual automated driving system that works, Smith said.Smith and other autonomous vehicle experts say Musk has felt pressure to provide Tesla shareholders with excuses for repeated delays in rolling out its futuristic cars. The financial stake is enormous, which Musk acknowledged during a 2022 interview. He said the development of a fully self-driving vehicle was really the difference between Tesla being worth a lot of money and being worth basically zero.The collisions from Teslas malfunctioning technology on its vehicles have led not only to deaths but also catastrophic injuries that have forever altered peoples lives.Attorneys representing people injured in Tesla crashesor who represent surviving family members of those who diedsay without NHTSA, the only other way to hold the car company accountable is through civil lawsuits.When government cant do it, then the civil justice system is left to pick up the slack, said Brett Schreiber, whose law firm is handling four Tesla cases.However, Schreiber and other lawyers say if the federal governments investigative powers dont remain intact, Tesla may also not be held accountable in court.In the pending wrongful death lawsuit that Neima Benavides Leon filed against Tesla after her sisters death, her attorney told a Miami district judge the lawsuit would have likely been dropped if NHTSA hadnt investigated and found defects with the Autopilot system.All along we were hoping that the NHTSA investigation would produce what it did, in fact, end up producing, which is a finding of product defect and a recall, attorney Doug Eaton said during a March court hearing. And we had told you very early on in the case if NHTSA had not found that, we may very well drop the case. But they did, in fact, find this.0 Comments ·0 Shares ·132 Views
-
Elon Musk Leads Group Seeking to Buy OpenAI. Sam Altman Says No Thank Youtime.comOpenAIs logo is displayed on a mobile phone screen in front of images of Sam Altman, left, and Elon Musk.Muhammed Selim KorkutataAnadolu/Getty ImagesBy Matt O'Brien / APUpdated: February 10, 2025 10:30 PM EST | Originally published: February 10, 2025 9:00 PM ESTA group of investors led by Elon Musk is offering about $97.4 billion to buy the nonprofit behind OpenAI, escalating a dispute with the artificial intelligence company that Musk helped found a decade ago.Musk and his own AI startup, xAI, and a consortium of investment firms want to take control of the ChatGPT maker and revert it to its original charitable mission as a nonprofit research lab, according to Musks attorney Marc Toberoff.OpenAI CEO Sam Altman quickly rejected the unsolicited bid on Musks social platform X, saying, no thank you but we will buy Twitter for $9.74 billion if you want.Musk bought Twitter, now called X, for $44 billion in 2022.Musk and Altman, who together helped start OpenAI in 2015 and later competed over who should lead it, have been in a long-running feud over the startups direction since Musk resigned from its board in 2018.Musk, an early OpenAI investor and board member, sued the company last year, first in a California state court and later in federal court, alleging it had betrayed its founding aims as a nonprofit research lab that would benefit the public good by safely building better-than-human AI. Musk had invested about $45 million in the startup from its founding until 2018, Toberoff has said.The sudden success of ChatGPT two years ago brought worldwide fame and a new revenue stream to OpenAI and also heightened the internal battles over the future of the organization and the advanced AI it was trying to develop. Its nonprofit board fired Altman in late 2023. He came back days later with a new board.Now a fast-growing business still controlled by a nonprofit board bound to its original mission, OpenAI last year announced plans to formally change its corporate structure. But such changes are complicated. Tax law requires money or assets donated to a tax-exempt organization to remain within the charitable sector.If the initial organization becomes a for-profit, generally, a conversion is needed where the for-profit pays the fair market value of the assets to another charitable organization. Even if the nonprofit OpenAI continues to exist in some way, some experts argue it would have to be paid fair market value for any assets that get transferred to its for-profit subsidiaries.Lawyers for OpenAI and Musk faced off in a California federal court last week as a judge weighed Musks request for a court order that would block the ChatGPT maker from converting itself to a for-profit company.U.S. District Judge Yvonne Gonzalez Rogers hasnt yet ruled on Musks request but in the courtroom said it was a stretch for Musk to claim he will be irreparably harmed if she doesnt intervene to stop OpenAI from moving forward with its planned transition.But the judge also raised concerns about OpenAI and its relationship with business partner Microsoft and said she wouldnt stop the case from moving to trial as soon as next year so a jury can decide.It is plausible that what Mr. Musk is saying is true. Well find out. Hell sit on the stand, she said.Along with Musk and xAI, others backing the bid announced Monday include Baron Capital Group, Valor Management, Atreides Management, Vy Fund, Emanuel Capital Management and Eight Partners VC.Toberoff said in a statement that if Altman and OpenAIs current board are intent on becoming a fully for-profit corporation, it is vital that the charity be fairly compensated for what its leadership is taking away from it: control over the most transformative technology of our time.Musks attorney also shared a letter he sent in early January to the attorneys general of California, where OpenAI operates, and Delaware, where it is incorporated.Since both state offices must ensure any such transactional process relating to OpenAIs charitable assets provides at least fair market value to protect the publics beneficial interest, we assume you will provide a process for competitive bidding to actually determine that fair market value, Toberoff wrote, asking for more information on the terms and timing of that bidding process.OpenAI and TIME have a licensing and technology agreement that allows OpenAI to access TIMEs archives.More Must-Reads from TIMEInside Elon Musks War on WashingtonIntroducing the 2025 ClosersColman Domingo Leads With Radical LoveWhy, Exactly, Is Alcohol So Bad for You?The Motivational Trick That Makes You Exercise Harder11 New Books to Read in FebruaryHow to Get Better at Doing Things AloneColumn: Trumps Trans Military Ban Betrays Our TroopsContact us at letters@time.com0 Comments ·0 Shares ·123 Views
-
Arvind Krishna Celebrates the Work of a Pioneer at the TIME100 AI Impact Awardstime.comArvind Krishna, chief executive officer of International Business Machines Corporation (IBM), in San Francisco, July 13, 2022.David Paul MorrisBloomberg/Getty ImagesBy Ayesha JavedFebruary 10, 2025 5:33 PM ESTArvind Krishna, CEO, chairman and president of IBM, used his acceptance speech at the TIME100 AI Impact Awards on Monday to acknowledge pioneering computer scientist and mathematician Claude Shannon, calling him one of the unsung heroes of today.Krishna, who accepted his award at a ceremony in Dubai alongside musician Grimes, California Institute of Technology professor Anima Anandkumar, and artist Refik Anadol, said of Shannon, He would come up with the ways that you can convey information, all of which has stood the test until today.In 1948, Shannonnow known as the father of the information agepublished A Mathematical Theory of Communication, a transformative paper that, by proposing a simplified way of quantifying information via bits, would go on to fundamentally shape the development of information technologyand thus, our modern era. In his speech, Krishna also pointed to Shannons work building robotic mice that solved mazes as an example of his enjoyment of play within his research.Krishna, of course, has some familiarity with what it takes to be at the cutting edge. Under his leadership, IBM, known as a pioneer in artificial intelligence itself, is carving its own niche in specialized AI and invests heavily in quantum computing researchthe mission to build a machine based on quantum principles, which could carry out calculations much faster than existing computers. The business also runs a cloud computing service, designs software, and operates a consulting business.Krishna said that he most enjoyed Shannons work because the researchers simple insights have helped contribute to the most sophisticated communication systems of today, including satellites. Speaking about Shannons theoretical work, which Krishna said was a precursor to neural networks, he noted, I think we can give him credit for building the first elements of artificial intelligence.The TIME100 AI Impact Awards Dubai was presented by the World Government Summit and the Museum of the Future.More Must-Reads from TIMEInside Elon Musks War on WashingtonIntroducing the 2025 ClosersColman Domingo Leads With Radical LoveWhy, Exactly, Is Alcohol So Bad for You?The Motivational Trick That Makes You Exercise Harder11 New Books to Read in FebruaryHow to Get Better at Doing Things AloneColumn: Trumps Trans Military Ban Betrays Our TroopsWrite to Ayesha Javed at ayesha.javed@time.com0 Comments ·0 Shares ·116 Views
-
Anima Anandkumar Highlights AIs Potential to Solve Hard Scientific Challengestime.comBy Ayesha JavedFebruary 10, 2025 5:39 PM ESTAnima Anandkumar is using AI to help solve the worlds challenges faster. She has used the technology to speed up prediction models in an effort to get ahead of extreme weather, and to work on sustainable nuclear fusion simulations so as to one day safely harness the energy source.Accepting a TIME100 AI Impact Award in Dubai on Monday, Anandkumara professor at California Institute of Technology who was previously the senior director of AI research at Nvidiacredited her engineer parents with setting an example for her. Having a mom who is an engineer was just such a great role model right at home. Her parents, who brought computerized manufacturing to her hometown in India, opened up her world, she said.Growing up as a young girl, I didn't think of computer programs as something that merely resided within a computer, but [as something] that touched the physical world and produced these beautiful and precise metal parts, said Anandkumar. As I pursued AI research over the last two decades, this memory continued to inspire me to connect the physical and digital worlds together.Neural operatorsa type of AI framework that can learn across multiple scalesare key to Anandkumars efforts. Using neural operators, Anandkumar and her collaborators are able to build systems with universal physical understanding that can simulate any physical process, generate novel engineering designs that were previously out of reach, and make new scientific discoveries, she said.Speaking about her work in 2022 with an interdisciplinary team from Nvidia, Caltech, and other academic institutions, she noted, I am proud of our work in weather forecasting where, using neural operators, we built the first AI-based high-resolution weather model called FourCastNet. This model is tens of thousands of times faster than traditional weather models and often more accurate than existing systems when predicting extreme events, such as heat waves and hurricanes, she said.Neural operators are helping us get closer to solving hard scientific challenges, she said. After outlining some of the technologys other possible uses, including designing better drones, rockets, sustainable nuclear reactors, and medical devices, Anandkumar added, To me, this is just the beginning.The TIME100 AI Impact Awards Dubai was presented by the World Government Summit and the Museum of the Future.0 Comments ·0 Shares ·104 Views
-
Refik Anadol Sees Artistic Possibilities in Datatime.comTurkish new media artist Refik Anadol speaks during the launch and promotion event of "Inner Portrait" in Istanbul on Nov. 27, 2024.Mehmet Murat OnelAnadolu/Getty ImagesBy Simmone ShahFebruary 10, 2025 5:51 PM ESTTo Refik Anadol, data is a creative force.For as long as I can remember, I have imagined data as more than just informationI have seen it as a living, breathing material, a pigment with infinite possibilities, the Turkish-American artist said on Monday during his acceptance speech at the TIME100 AI Impact Awards in Dubai.Anadol was one of four leaders shaping the future of AI to be recognized at TIMEs fourth-annual Impact Awards ceremony in the city. California Institute of Technology professor Anima Anandkumar, musician Grimes, and Arvind Krishna, the CEO, chairman, and president of IBM, also accepted awards as a part of the nights festivities, which featured a performance by Emirati soul singer Arqam Al Abri.Anadol has spent over a decade showing the world that art can come from anywhereeven machines. As a media artist and the director and co-founder of Refik Anadol Studio, he has used AI to pioneer new forms of creativity, producing data paintings and data sculptures in tandem with the technology.Over the past decade, my journey with AI has been a relentless pursuit of collaboration between humans and machines, between memory and imagination, between technology and nature, he said in his speech.This year, Anadol and his team will open Dataland, the worlds first AI art museum, in Los Angelesan achievement no doubt informed by years spent producing dozens of other works that have been shown across the world. Its all part of his plan to make art that challenges the limits of creativity. Art, in my vision, has never been confined to a single culture, place, or audience, Anadol said. It belongs to everyone.The TIME100 AI Impact Awards Dubai was presented by the World Government Summit and the Museum of the Future.More Must-Reads from TIMEInside Elon Musks War on WashingtonIntroducing the 2025 ClosersColman Domingo Leads With Radical LoveWhy, Exactly, Is Alcohol So Bad for You?The Motivational Trick That Makes You Exercise Harder11 New Books to Read in FebruaryHow to Get Better at Doing Things AloneColumn: Trumps Trans Military Ban Betrays Our TroopsWrite to Simmone Shah at simmone.shah@time.com0 Comments ·0 Shares ·98 Views
-
Inside Frances Effort to Shape the Global AI Conversationtime.comFrench President's Special Envoy on AI, Anne Bouverot, prepares for the AI Action Summit at the Quai d'Orsay in Paris.Harry Booth - TIMEBy Harry Booth / ParisFebruary 6, 2025 10:20 AM ESTOne evening early last year, Anne Bouverot was putting the finishing touches on a report when she received an urgent phone call. It was one of French President Emmanuel Macron's aides offering her the role as his special envoy on artificial intelligence. The unpaid position would entail leading the preparations for the France AI Action Summita gathering where heads of state, technology CEOs, and civil society representatives will seek to chart a course for AIs future. Set to take place on Feb. 10 and 11 at the presidential lyse Palace in Paris, it will be the first such gathering since the virtual Seoul AI Summit in Mayand the first in-person meeting since November 2023, when world leaders descended on Bletchley Park for the U.K.s inaugural AI Safety Summit. After weighing the offer, Bouverot, who was at the time the co-chair of France's AI Commission, accepted.But Frances Summit wont be like the others. While the U.K.'s Summit centered on mitigating catastrophic riskssuch as AI aiding would-be terrorists in creating weapons of mass destruction, or future systems escaping human controlFrance has rebranded the event as the 'AI Action Summit,' shifting the conversation towards a wider gamut of risksincluding the disruption of the labor market and the technologys environmental impactwhile also keeping the opportunities front and center. We're broadening the conversation, compared to Bletchley Park, Bouverot says. Attendees expected at the Summit include OpenAI boss Sam Altman, Google chief Sundar Pichai, European Commission president Ursula von der Leyen, German Chancellor Olaf Scholz and U.S. Vice President J.D. Vance.Some welcome the pivot as a much-needed correction to what they see as hype and hysteria around the technology's dangers. Others, including some of the world's foremost AI scientistsincluding some who helped develop the field's fundamental technologiesworry that safety concerns are being sidelined. The view within the community of people concerned about safety is that it's been downgraded, says Stuart Russell, a professor of electrical engineering and computer sciences at the University of California, Berkeley, and the co-author of the authoritative textbook on AI used at over 1,500 universities.On the face of it, it looks like the downgrading of safety is an attempt to say, we want to charge ahead, we're not going to over-regulate. We're not going to put any obligations on companies if they want to do business in France,"' Russell says.Frances Summit comes at a critical moment in AI development, when the CEOs of top companies believe the technology will match human intelligence within a matter of years. If concerns about catastrophic risks are overblown, then shifting focus to immediate challenges could help prevent real harms while fostering innovation and distributing AIs benefits globally. But if the recent leaps in AI capabilitiesand emerging signs of deceptive behaviorare early warnings of more serious risks, then downplaying these concerns could leave us unprepared for crucial challenges ahead.Bouverot is no stranger to the politics of emerging technology.Bouverot's growing involvement with AI was, in fact, a return to her roots. Long before her involvement in telecommunications, in the early 1990s, Bouverot earned a PhD in AI at the Ecole normale suprieurea top French university that would later produce French AI frontrunner Mistral AI CEO Arthur Mensch. After graduating, Bouverot figured AI was not going to have an impact on society anytime soon, so she shifted her focus. "This is how much of a crystal ball I had," she joked on Washington AI Network's podcast in December, acknowledging the irony of her early skepticism, given AI's impact today.Under Bouverots leadership, safety will remain a feature, but rather than the summits sole focus, it is now one of five core themes. Others include: AIs use for public good, the future of work, innovation and culture, and global governance. Sessions run in parallel, meaning participants will be unable to attend all discussions. And unlike the U.K. summit, Pariss agenda does not mention the possibility that an AI system could escape human control. There's no evidence of that risk today, Bouverot says. She says the U.K. AI Safety Summit occurred at the height of the generative AI frenzy, when new tools like ChatGPT captivated public imagination. There was a bit of a science fiction moment, she says, adding that the global discourse has since shifted.Back in late 2023, as the U.K.s summit approached, signs of a shift in the conversation around AIs risks were already emerging. Critics dismissed the event as alarmist, with headlines calling it a waste of time and a doom-obsessed mess. Researchers, who had studied AIs downsides for years felt that the emphasis on what they saw as speculative concerns drowned out immediate harms like algorithmic bias and disinformation. Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute, who was present at Bletchley Park, says the focus on existential risk was really problematic.Part of the issue is that the existential risk concern has drowned out a lot of the other types of concerns, says Margaret Mitchell, chief AI ethics scientist at Hugging Face, a popular online platform for sharing open-weight AI models and datasets. I think a lot of the existential harm rhetoric doesn't translate to what policy makers can specifically do now, she adds.On the U.K. Summits opening day, then-U.S. Vice President, Kamala Harris, delivered a speech in London: When a senior is kicked off his health care plan because of a faulty A.I. algorithm, is that not existential for him? she asked, in an effort to highlight the near-term risks of AI over the summits focus on the potential threat to humanity. Recognizing the need to reframe AI discussions, Bouverot says the France Summit will reflect the change in tone. We didn't make that change in the global discourse, Bouverot says, adding that the focus is now squarely on the technologys tangible impacts. We're quite happy that this is actually the conversation that people are having now.One of the actions expected to emerge from Frances Summit is a new yet-to-be-named foundation that will aim to ensure AIs benefits are widely distributed, such as by developing public datasets for underrepresented languages, or scientific databases. Bouverot points to AlphaFold, Google DeepMind's AI model that predicts protein structures with unprecedented precisionpotentially accelerating research and drug discoveryas an example of the value of public datasets. AlphaFold was trained on a large public database to which biologists had meticulously submitted findings for decades. We need to enable more databases like this, Bouverot says. Additionally, the foundation will focus on developing talent and smaller, less computationally intensive models, in regions outside the small group of countries that currently dominate AIs development. The foundation will be funded 50% by partner governments, 25% by industry, and 25% by philanthropic donations, Bouverot says.Her second priority is creating an informal Coalition for Sustainable AI. AI is fueling a boom in data centers, which require energy, and often water for cooling. The coalition will seek to standardize measures for AIs environmental impact, and incentivize the development of more efficient hardware and software through rankings and possibly research prizes. Clearly AI is happening and being developed. We want it to be developed in a sustainable way, Bouverot says. Several companies, including Nvidia, IBM, and Hugging Face, have already thrown their weight behind the initiative.Sasha Luccioni, AI & climate lead at Hugging Face, and a leading voice on AIs climate impact, says she is hopeful that the coalition will promote greater transparency. She says that currently, calculating the AIs emissions is made more challenging because often companies do not share how long a model was trained for, while data center providers do not publish specifics on GPUthe kind of computer chips used for running AIenergy usage. Nobody has all of the numbers, she says, but the coalition may help put the pieces together.Given AIs recent pace of development, some fear severe risks could materialize rapidly. The core concern is that artificial general intelligence, or AGIa system that surpasses humans in most regardscould potentially outmaneuver any constraints designed to control it, perhaps permanently disempowering humanity. Experts disagree about how quicklyif everwe'll reach that technological threshold. But many leaders of the companies seeking to build human-level systems expect to succeed soon. In January, OpenAIs Altman wrote in a blog post: We are now confident we know how to build AGI. Speaking on a panel at Davos last month, Dario Amodei, the CEO of rival AI company, Anthropic, said that AI could surpass human intelligence in almost all things as soon as next year.Those same titans of industry have made no secret of what they believe is at stake. Amodei has previously said he places a 10% to 25% likelihood that AI causes a societal-scale catastrophe. In 2015, months before co-founding OpenAI, Altman said AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies. More recently, Altman has downplayed AIs risks. Meanwhile, a string of safety staff have departed OpenAI, citing concerns over the companys direction. Over the past years, safety culture and processes have taken a backseat to shiny products, former OpenAI staffer Jan Leike posted to X in May, after resigning. He previously co-led a now defunct team charged with pioneering techniques to keep smarter-than-human AI systems under humanitys control.Predictions of AGIs imminent arrival could prove premature. The tech industry is notorious for bold claims and hype cycles crafted to court investors, and progress could still plateau, leaving AGI a distant prospect. Nevertheless, over the past year, AI models have demonstrated notable advances. At the beginning of 2024, the most capable model scored only 2% on a benchmark consisting of real world software engineering problems; by October, a new model achieved 49%; by December, an astonishing 71.7%. Similar leaps in performance have occurred in competition math and PhD-level science questions, with current state-of-the-art comfortably surpassing 85%. Though benchmarks are imperfect proxies for intelligence, the wider trend is unmistakable. Some observers note that this rapid progress has largely gone unnoticed by the public, perhaps because few people use AI for complex math or science in daily life. Companies including OpenAI, Anthropic, and Google are shifting their focus from user-prompted systems to creating autonomous virtual agents that can operate computers and solve problems independently.This recent rise in intelligence has been accompanied by new deceptive abilities. In December, an experiment conducted by Apollo Research revealed that when confronted with a fictional scenario where the AI was threatened with shutdown, cutting-edge models, including OpenAIs recent o1 and a version of Anthropics Claude, tried (unsuccessfully) to copy themselves to a new server. When researchers asked the models why they had tried to replicate themselves, they sometimes lied to avoid being caughtoften doubling down when confronted with their lies. The same month, a test by Anthropic and the nonprofit Redwood Research, showed a version of Anthropics model, Claude, strategically misleading its creators during training. Exactly the things that people have been writing about for the last 10 years are happening now, Russell says. As the systems are increasing their ability to reason, we see that indeed they can figure out how to escape. They can lie about it while they're doing it, and so on.Yoshua Bengio, founder and scientific director of Mila Quebec AI Institute, and often referred to as one of the three Godfathers of AI for his pioneering work in deep learning, says that while within the business community there is a sense that the conversation has moved on from autonomy risks, recent developments have caused growing concerns within the scientific community. Although expert opinion varies widely on the likelihood, he says the possibility of AI escaping human control can no longer be dismissed as mere science fiction. Bengio led the International AI Safety Report 2025, an initiative modeled after U.N. climate assessments and backed by 30 countries, the U.N., E.U., and the OECD. Published last month, the report synthesizes scientific consensus on the capabilities and risks of frontier AI systems. There's very strong, clear, and simple evidence that we are building systems that have their own goals and that there is a lot of commercial value to continue pushing in that direction, Bengio says. A lot of the recent papers show that these systems have emergent self-preservation goals, which is one of the concerns with respect to the unintentional loss of control risk, he adds.At previous summits, limited but meaningful steps were taken to reduce loss-of-control and other risks. At the U.K. Summit, a handful of companies committed to share priority access to models with governments for safety testing prior to public release. Then, at the Seoul AI Summit, 16 companies, across the U.S., China, France, Canada, and South Korea signed voluntary commitments to identify, assess and manage risks stemming from their AI systems. They did a lot to move the needle in the right direction, Bengio says, but he adds that these measures are not close to sufficient. "In my personal opinion, the magnitude of the potential transformations that are likely to happen once we approach AGI are so radical, Bengio says, that my impression is most people, most governments, underestimate this whole lot.But rather than pushing for new pledges, in Paris the focus will be streamlining existing onesmaking them compatible with existing regulatory frameworks and each other. There's already quite a lot of commitments for AI companies, Bouverot says. This light-touch stance mirrors France's broader AI strategy, where homegrown company Mistral AI has emerged as Europe's leading challenger in the field. Both Mistral and the French government lobbied for softer regulations under the E.U.'s comprehensive AI Act. Frances Summit will feature a business-focused event, hosted across town at Station F, Frances largest start-up hub. "To me, it looks a lot like they're trying to use it to be a French industry fair, says Andrea Miotti, the executive director of Control AI, a non-profit that advocates for guarding against existential risks from AI. They're taking a summit that was focused on safety and turning it away. In the rhetoric, it's very much like: let's stop talking about the risks and start talking about the great innovation that we can do."The tension between safety and competitiveness is playing out elsewhere, including India, which, it was announced last month, will co-chair Frances Summit. In March, India issued an advisory that pushed companies to obtain the governments permission before deploying certain AI models, and take steps to prevent harm. It then swiftly reserved course after receiving sharp criticism from industry. In Californiahome to many of the top AI developersa landmark bill, which mandated that the largest AI developers implement safeguards to mitigate catastrophic risks, garnered support from a wide coalition, including Russell and Bengio, but faced pushback from the open-source community and a number of tech giants including OpenAI, Meta, and Google. In late August, the bill passed both chambers of Californias legislature with strong majorities but in September it was vetoed by governor Gavin Newsom who argued the measures could stifle innovation. In January, President Donald Trump repealed the former President Joe Bidens sweeping Executive Order on artificial intelligence, which had sought to tackle threats posed by the technology. Days later, Trump replaced it with an Executive Order that revokes certain existing AI policies and directives that act as barriers to American AI innovation to secure U.S. leadership over the technology.Markus Anderljung, director of policy and research at AI safety think-tank the Centre for the Governance of AI, says that safety could be woven into the France Summits broader goals. For instance, initiatives to distribute AIs benefits globally might be linked to commitments from recipient countries to uphold safety best practices. He says he would like to see the list of signatories of the Frontier AI Safety Commitments signed in Seoul expanded particularly in China, where only one company, Zhipu, has signed. But Anderljung says that for the commitments to succeed, accountability mechanisms must also be strengthened. "Commitments without follow-ups might just be empty words, he says, they just don't matter unless you know what was committed to actually gets done."A focus on AIs extreme risks does not have to come at the exclusion of other important issues. I know that the organizers of the French summit care a lot about [AIs] positive impact on the global majority, Bengio says. That's a very important mission that I embrace completely. But he argues the potential severity of loss-of-control risks warrant invoking precautionary principlethe idea that we should take preventive measures, even absent scientific consensus. Its a principle that has been invoked by U.N. declarations aimed at protecting the environment, and in sensitive scientific domains like human cloning.But for Bouverot, it is a question of balancing competing demands. We don't want to solve everythingwe can't, nobody can, she says, adding that the focus is on making AI more concrete. We want to work from the level of scientific consensus, whatever level of consensus is reached.In mid December, in Frances foreign ministry, Bouverot, faced an unusual dilemma. Across the table, a South Korean official explained his countrys eagerness to join the summit. But days earlier, South Koreas political leadership was thrown into turmoil when President Yoon Suk Yeol, who co-chaired the previous summits leaders session, declared martial law before being swiftly impeached, leaving the question of who will represent the countryand whether officials could attend at allup in the air.There is a great deal of uncertaintynot only over the pace AI will advance, but to what degree governments will be willing to engage. Frances own government collapsed in early December after Prime Minister Michel Barnier was ousted in a no-confidence vote, marking the first such collapse since the 1960s. And, as Trump, long skeptical of international institutions, returns to the oval office, it is yet to be seen how Vice President Vance will approach the Paris meeting.When reflecting on the technologys uncertain future, Bouverot finds wisdom in the words of another French pioneer who grappled with powerful but nascent technology. "I have this quote from Marie Curie, which I really love, Bouverot says. Curie, the first woman to win a Nobel Prize, revolutionized science with her work on radioactivity. She once wrote: Nothing in life is to be feared, it is only to be understood. Curies work ultimately cost her lifeshe died at a relatively young 66 from a rare blood disorder, likely caused by prolonged radiation exposure.More Must-Reads from TIMETrump and Musk Have All of Washington on EdgeWhy AI Safety Researchers Are Worried About DeepSeekBehind the Scenes of The White Lotus Season ThreeWhy, Exactly, Is Alcohol So Bad for You?The Motivational Trick That Makes You Exercise Harder11 New Books to Read in FebruaryHow to Get Better at Doing Things AloneColumn: Trumps Trans Military Ban Betrays Our TroopsContact us at letters@time.com0 Comments ·0 Shares ·96 Views
-
Elise Smith Defends DEI as Good Businesstime.comBy Andrew R. ChowFebruary 6, 2025 7:04 AM ESTIn recent years, right-leaning leaders in politics and tech like Donald Trump and Elon Musk have attacked the value of DEI (diversity, equity, and inclusion) initiatives. But for Elise Smith, the CEO and co-founder of the tech startup Praxis Labs, learning to navigate cultural differences is simply good business, especially for ambitious multinational companies with employees and clients around the world. Regardless of what you think about the term DEI, this work will continue, because fundamentally it does drive better business outcomes, says Smith, 34. Fortune 500 companies are trying to figure out: How do we serve our clients and customers, knowing that there's a ton of diversity within them? How do we bring our teams together to do their best work?Praxis creates interactive AI and VR tools that allow business leaders to practice and improve their workplace communication and better interact with employees. These tools are something like the next-generation iterations of corporate diversity training videos, with many modules specifically designed to help managers give feedback to underperformers, navigate divisive topics like bias, and ask better questions. Users interact with a generative AI chatbot that simulates high-pressure work scenarios, such as performance reviews or interpersonal disagreements. The chatbot then provides personalized guidance on how one might better handle situations, especially with regard to cultural sensitivities. While it is currently confined to a specific set of scenarios, Smith hopes the chatbot will receive an upgrade this year that allows it to be always-on and freely give advice about workplace concerns.You can't play basketball by just watching a video in theory about passing and shootingyou have to do it, Smith says. Learning these critical human skills is very similar. You have to do it in a simulated, experiential way that will truly translate to your ability in the moment when it matters.Smith cut her teeth at IBMs Watson Group in the early 2010s, strategizing how to apply the AI technology powering that early supercomputer toward education. Inspired by that experience as well as watching her parents navigate systems that werent set up for them, she founded Praxis alongside Heather Shen in 2018. (Shen was named to Forbes 30 Under 30 list this year.) Praxis has now raised $23 million worth of venture capital and has a staff of around 15 people, and its client list includes Uber, Amazon, and Accenture. The goal, Smith says, is to help these companies to improve employee engagement, retention, and global business relationships.Smith believes that in a world in which AI tools are growing increasingly powerful in performing mechanical tasks, soft skills like clear communication, emotional intelligence, and the ability to defuse conflict are more important than ever. We have to connect at a real, personal level, beyond the transactional trust that I think we so often find in workplaces, she says. We are so divided, and yet we have to learn to work with people who think differently than us and believe in different things than us, to achieve outcomes that hopefully better all of us.0 Comments ·0 Shares ·137 Views
-
Exclusive: The British Public Wants Stricter AI Rules Than Its Government Doestime.comBritish Prime Minister Keir Starmer gives a speech on harnessing AI to drive economic growth and "revolutionize" public services in the U.K.Henry Nicholls - WPA Pool/Getty ImagesBy Billy PerrigoFebruary 6, 2025 4:00 AM ESTEven as Silicon Valley races to build more powerful artificial intelligence models, public opinion on the other side of the Atlantic remains decidedly skeptical of the influence of tech CEOs when it comes to regulating the sector, with the vast majority of Britons worried about the safety of new AI systems. The concerns, highlighted in a new poll shared exclusively with TIME, come as world leaders and tech bossesfrom U.S. Vice President JD Vance, Frances Emmanuel Macron and Indias Narendra Modi to OpenAI chief Sam Altman and Googles Sundar Pichaiprepare to gather in Paris next week to discuss the rapid pace of developments in AI. The new poll shows that 87% of Brits would back a law requiring AI developers to prove their systems are safe before release, with 60% in favor of outlawing the development of smarter-than-human AI models. Just 9%, meanwhile, said they trust tech CEOs to act in the public interest when discussing AI regulation. The survey was conducted by the British pollster YouGov on behalf of Control AI, a non-profit focused on AI risks.The results reflect growing public anxieties about the development of AI systems that could match or even outdo humans at most tasks. Such technology does not currently exist, but creating it is the express goal of major AI companies such as OpenAI, Google, Anthropic, and Meta, the owner of Facebook and Instagram. In fact, several tech CEOs expect such systems to become a reality in a matter of years, if not sooner. It is against this backdrop that 75% of the Britons polled told YouGov that laws should explicitly prohibit the development of AI systems that can escape their environments. More than half (63%) agreed with the idea of prohibiting the creation of AI systems that can make themselves smarter or more powerful.The findings of the British poll mirror the results of recent U.S. surveys, and point to a growing gap between public opinion and regulatory action when it comes to advanced AI. Even the European Unions AI Act widely seen as the worlds most comprehensive AI legislation and which began to come into force this month stops short of directly addressing many of the possible risks posed by AI systems that meet or surpass human abilities.In Britain, where the YouGov survey of 2,344 adults was conducted over Jan. 16-17, there remains no comprehensive regulatory framework for AI. While the ruling Labour Party had pledged to introduce new AI rules ahead of the last general election in 2024, since coming to power it has dragged its feet by repeatedly delaying the introduction of an AI bill as it grapples with the challenge of restoring growth to its struggling economy. In January, for example, British Prime Minister Keir Starmer announced that AI would be mainlined into the veins of the nation to boost growtha clear shift away from talk of regulation. It seems like theyre sidelining their promises at the moment, for the shiny attraction of growth, says Andrea Miotti, the executive director of Control AI. But the thing is, the British public is very clear about what they want. They want these promises to be met.A New Push for New LawsThe polling was accompanied by a statement, signed by 16 British lawmakers from both major political parties, calling on the government to introduce new AI laws targeted specifically at superintelligent AI systems, or those that could become far smarter than humans.Specialised AIs such as those advancing science and medicine boost growth, innovation, and public services. Superintelligent AI systems would [by contrast] compromise national and global security, the statement reads. The U.K. can secure the benefits and mitigate the risks of AI by delivering on its promise to introduce binding regulation on the most powerful AI systems.Miotti, from Control AI, says that the U.K. does not have to sacrifice growth by imposing sweeping regulations such as those contained in the E.U. AI Act. Indeed, many in the industry blame the AI Act and other sweeping E.U. laws for stymying the growth of the European tech sector. Instead, Miotti argues, the U.K. could impose narrow, targeted, surgical AI regulation that only applies to the most powerful models posing what he sees as the biggest risks.What the public wants is systems that help them, not systems that replace them, Miotti says. We should not pursue [superintelligent systems] until we know how to prove that they're safe.The polling data also shows that a large majority (74%) of Brits support a pledge made by the Labour Party ahead of the last election to enshrine the U.K.s AI Safety Institute (AISI) into law, giving it power to act as a regulator. Currently, the AISI an arm of the U.K. government carries out tests on private AI models ahead of their release, but has no authority to compel tech companies to make changes or to rule that models are too dangerous to be releaseMore Must-Reads from TIMETrump and Musk Have All of Washington on EdgeWhy AI Safety Researchers Are Worried About DeepSeekBehind the Scenes of The White Lotus Season ThreeWhy, Exactly, Is Alcohol So Bad for You?The Motivational Trick That Makes You Exercise Harder11 New Books to Read in FebruaryHow to Get Better at Doing Things AloneColumn: Trumps Trans Military Ban Betrays Our TroopsWrite to Billy Perrigo at billy.perrigo@time.com0 Comments ·0 Shares ·132 Views
-
Google Scraps Hiring Targets After Trumps Anti-DEI Pressure on Government Contractorstime.comBy Michael Liedtke / APFebruary 5, 2025 9:00 PM ESTSAN FRANCISCO Google is scrapping some of its diversity hiring targets, joining a lengthening list of U.S. companies that have abandoned or scaled back their diversity, equity and inclusion programs.The move, which was outlined in an email sent to Google employees on Wednesday, came in the wake of an executive order issued by President Donald Trump that was aimed in part at pressuring government contractors to scrap their DEI initiatives.Like several other major tech companies, Google sells some of its technology and services to the federal government, including its rapidly growing cloud division thats a key piece of its push into artificial technology.Googles parent company, Alphabet, also signaled the shift in its annual 10-K report it filed this week with the Securities and Exchange Commission. In it, Google removed a line included in previous annual reports saying that its committed to making diversity, equity, and inclusion part of everything we do and to growing a workforce that is representative of the users we serve.Google generates most of Alphabets annual revenue of $350 billion and accounts for almost all of its worldwide workforce of 183,000.Were committed to creating a workplace where all our employees can succeed and have equal opportunities, and over the last year weve been reviewing our programs designed to help us get there, Google said in a statement to The Associated Press. Weve updated our 10-K language to reflect this, and as a federal contractor, our teams are also evaluating changes required following recent court decisions and executive orders on this topic.The change in language also comes slightly more than two weeks after Google CEO Sundar Pichai and other prominent technology executivesincluding Tesla CEO Elon Musk, Amazon founder Jeff Bezos, Apple CEO Tim Cook and Meta Platforms CEO Mark Zuckerbergstood behind Trump during his inauguration.Meta jettisoned its DEI program last month, shortly before the inauguration, while Amazon halted some of its DEI programs in December following Trumps election.Many companies outside of the technology industry also have backed away from DEI. Those include Walt Disney Co., McDonalds, Ford, Walmart, Target, Lowes and John Deere.Trumps recent executive order threatens to impose financial sanctions on federal contractors deemed to have illegal DEI programs. If the companies are found to be in violation, they could be subject to massive damages under the 1863 False Claims Act. That law states that contractors that make false claims to the government could be liable for three times the governments damages.The order also directed all federal agencies to choose the targets of up to nine investigations of publicly traded companies, large non-profits and other institutions with DEI policies that constitute Illegal discrimination or preference.The challenge for companies is knowing which DEI policies the Trump administration may decide are illegal. Trumps executive order seeks to terminate all discriminatory and illegal preferences, mandates, policies, programs and other activities of the federal government, and to compel federal agencies to combat illegal private-sector DEI preferences, mandates, policies, programs, and activities.In both the public and private sector, diversity initiatives have covered a range of practices, from anti-discrimination training and conducting pay equity studies to making efforts to recruit more members of minority groups and women as employees.Google, which is based in Mountain View, California, has tried to hire more people from underrepresented groups for more than a decade but stepped up those efforts in 2020 after the police killing of George Floyd in Minneapolis triggered an outcry for more social justice.Shortly after Floyd died, Pichai set a goal to increase the representation of underrepresented groups in the Mountain View, California, companys largely Asian and white leadership ranks by 30% by 2025. Google has made some headway since then, but the makeup of its leadership has not changed dramatically.The representation of Black people in the companys leadership ranks rose from 2.6% in 2020 to 5.1% last year, according to Googles annual diversity report. For Hispanic people, the change was 3.7% to 4.3%. The share of women in leadership roles, meanwhile, increased from 26.7% in 2020 to 32.8% in 2024, according to the companys report.The numbers arent much different in Googles overall workforce, with Black employees comprising just 5.7% and Latino employees 7.5%. Two-thirds of Googles worldwide workforce is made up of men, according to the diversity report.Associated Press business reporter Alexandra Olson contributed to this report.0 Comments ·0 Shares ·99 Views
-
Elon Musk Creates Confusion About Direct File, the IRS Free Tax-Prep Programtime.comElon Musk speaks at a Trump inauguration event in Washington, D.C., on Jan. 20, 2025.Christopher FurlongGetty ImagesBy Fatima Hussein and Barbara Ortutay / APFebruary 3, 2025 10:00 PM ESTWASHINGTON Billionaire tech mogul Elon Musk posted Monday on his social media site that he had deleted 18F, a government agency that worked on technology projects such as the IRS Direct File program. This led to some confusion about whether Direct File is still available to taxpayers, but the free filing program is still available, at least for the coming tax season.While Musks tweet may have intimated that the group of workers had been eliminated, an individual with knowledge of the IRS workforce said the Direct File program was still accepting tax returns. The individual spoke anonymously with The Associated Press because they were not authorized to talk to the press.As of Monday evening, 18Fs website was still operational, as was the Direct File website. But the digital services agencys X account was deleted.The IRS announced last year that it will make the free electronic tax return filing system permanent and asked all 50 states and the District of Columbia to help taxpayers file their returns through the program in 2025.The Direct File trial began in March 2024. But the IRS has face intense blowback to Direct File from private tax preparation companies that have made billions from charging people to use their software and have spent millions lobbying Congress. The average American typically spends about $140 preparing their returns each year.Commercial tax prep companies that have lobbied against development of the free file program say free file options already exist.Several organizations, including private tax firms, offer free online tax preparation assistance to taxpayers under certain income limits. Fillable forms are available online on the IRS website, but they are complicated and taxpayers still have to calculate their tax liability.Last May the IRS announced it would make the Direct File program permanent. It is now available in 25 states, up from 12 states that were part of last years pilot program.The program allows people in some states with very simple W-2s to calculate and submit their returns directly to the IRS. Those using the pilot program in 2024 claimed more than $90 million in refunds, the IRS said in October.During his confirmation hearing Jan. 16, Scott Bessent, now treasury secretary, committed to maintaining the Direct File program at least for the 2025 tax season, which began Jan. 27.Musk was responding to a post by an X user who called 18F far left and mused that Direct File puts the government in charge of preparing peoples taxes.That group has been deleted, Musk wrote.More Must-Reads from TIMETrump and Musk Have All of Washington on EdgeWhy AI Safety Researchers Are Worried About DeepSeekBehind the Scenes of The White Lotus Season ThreeWhy, Exactly, Is Alcohol So Bad for You?The Motivational Trick That Makes You Exercise Harder11 New Books to Read in FebruaryHow to Get Better at Doing Things AloneColumn: Trumps Trans Military Ban Betrays Our TroopsContact us at letters@time.com0 Comments ·0 Shares ·146 Views
-
What Can the Black Box Tell Us About Plane Crashes?time.comNTSBAPNational Transportation Safety Board (NTSB) investigators examine cockpit voice recorder and flight data recorder recovered from the American Airlines passenger jet that crashed with an Army helicopter Wednesday night near Washington, DC, on Jan. 30, 2025.By BEN FINLEY / APJanuary 31, 2025 4:43 PM ESTIt's one of the most important pieces of forensic evidence following a plane crash: The so-called black box."There are actually two of these remarkably sturdy devices: the cockpit voice recorder and the flight data recorder. And they're typically orange, not black.Federal investigators on Fridayrecovered the black boxesfrom the passenger jet that crashed in the Potomac River just outside Washington on Wednesday, while authorities were still searching for similar devices in the military helicopter that also went down. The collision killed 67 people in the deadliest U.S. aviation disaster since 2001.Here is an explanation of what black boxes are and what they can do:What are black boxes?The cockpit voice recorder and the flight data recorder are tools that help investigators reconstruct the events that lead up to a plane crash.They're orange in color to make them easier to find in wreckage, sometimes at great ocean depths. They're usually installed a plane's tail section, which is considered the most survivable part of the aircraft, according to theNational Transportation Safety Boards website.They're also equipped with beacons that activate when immersed in water and can transmit from depths of 14,000 feet (4,267 meters). While the battery that powers the beacon will run down after about one month, theres no definitive shelf-life for the data itself, NTSB investigators told The Associated Press in 2014.For example, black boxes of an Air France flight that crashed in the Atlantic Ocean in 2009 were found two years later from a depth of more than 10,000 feet, and technicians were able to recover most of the information.If a black box has been submerged in seawater, technicians will keep them submerged in fresh water to wash away the corrosive salt. If water seeps in, the devices must be carefully dried for hours or even days using a vacuum oven to prevent memory chips from cracking.The electronics and memory are checked, and any necessary repairs made. Chips are scrutinized under a microscope.What does the cockpit voice recorder do?The cockpit voice recorder collects radio transmissions and sounds such as the pilots voices and engine noises, according to the NTSB's website.Depending on what happened, investigators may pay close attention to the engine noise, stall warnings and other clicks and pops, the NTSB said. And from those sounds, investigators can often determine engine speed and the failure of some systems.Investigators are also listening to conversations between the pilots and crew and communications with air traffic control. Experts make a meticulous transcript of the voice recording, which can take up to a week.What does the flight data recorder do?The flight data recorder monitors a plane's altitude, airspeed and heading, according to the NTSB. Those factors are among at least 88 parameters that newly built planes must monitor.Some can collect the status of more than 1,000 other characteristics, from a wing's flap position to the smoke alarms. The NTSB said it can generate a computer animated video reconstruction of the flight from the information collected.NTBS investigators told the AP in 2014 that a flight data recorder carries 25 hours of information, including prior flights within that time span, which can sometimes provide hints about the cause of a mechanical failure on a later flight. An initial assessment of the data is provided to investigators within 24 hours, but analysis will continue for weeks more.What are the origins of the black box?At least two people have been credited with creating devices that record what happens on an airplane.One is French aviation engineer Franois Hussenot. In the 1930s, he found a way to record a plane's speed, altitude and other parameters onto photographic film,according to the websitefor European plane-maker Airbus.In the 1950s, Australian scientist David Warren came up with the idea for the cockpit voice recorder, according to his 2010 AP obituary.Warren had been investigating the crash of the worlds first commercial jet airliner, the Comet, in 1953, and thought it would be helpful for airline accident investigators to have a recording of voices in the cockpit, the Australian Department of Defence said in a statement after his death.Warren designed and constructed a prototype in 1956. But it took several years before officials understood just how valuable the device could be and began installing them in commercial airlines worldwide. Warren's father had been killed in a plane crash in Australia in 1934.Why the name black box?Some have suggested that it stems from Hussenot's device because it used film and ran continuously in a light-tight box, hence the name black box,'according to Airbus, which noted that orange was the box's chosen color from the beginning to make it easy to find.Other theories include the boxes turning black when they get charred in a crash, theSmithsonian Magazine wrote in 2019.The truth is much more mundane, the magazine wrote. In the post-World War II field of electronic circuitry, black box became the ubiquitous term for a self-contained electronic device whose input and output were more defining than its internal operations.The media continues to use the term, the magazine wrote, "because of the sense of mystery it conveys in the aftermath of an air disaster.More Must-Reads from TIMEL.A. Fires Show Reality of 1.5C of WarmingBehind the Scenes of The White Lotus Season ThreeHow Trump 2.0 Is Already Sowing ConfusionElizabeth Warrens Plan for How Musk Can Cut $2 TrillionWhy, Exactly, Is Alcohol So Bad for You?How Emilia Prez Became a Divisive Oscar FrontrunnerThe Motivational Trick That Makes You Exercise HarderZelenskys Former Spokesperson: Ukraine Needs a Cease-Fire NowContact us at letters@time.com0 Comments ·0 Shares ·149 Views
-
Is the DeepSeek Panic Overblown?time.comThe rise of the Chinese AI company DeepSeek is causing panicsome of which may be unfounded, experts say.Jaap Arriens/NurPhotoGetty ImagesBy Andrew R. Chow and Billy PerrigoJanuary 30, 2025 2:56 PM ESTThis week, leaders across Silicon Valley, Washington D.C., Wall Street, and beyond have been thrown into disarray due to the unexpected rise of the Chinese AI company DeepSeek. DeepSeek recently released AI models that rivaled OpenAIs, seemingly for a fraction of the price, and despite American policy designed to slow Chinas progress. As a result, many analysts concluded that DeepSeeks success undermined the core beliefs driving the American AI industryand that the companies leading this charge, like Nvidia and Microsoft, were not as valuable or technologically ahead as previously believed. Tech stocks dropped hundreds of billions of dollars in days.But AI scientists have pushed back, arguing that many of those fears are exaggerated. They say that while DeepSeek does represent a genuine advancement in AI efficiency, it is not a massive technological breakthroughand that the American AI industry still has key advantages over Chinas.Its not a leap forward on AI frontier capabilities, says Lennart Heim, an AI researcher at RAND. I think the market just got it wrong.Here are several claims being widely circulated about DeepSeeks implications, and why scientists say theyre incomplete or outright wrong.Claim: DeepSeek is much cheaper than other models.In December, DeepSeek reported that its V3 model cost just $6 million to train. This figure seemed startlingly low compared to the more than $100 million that OpenAI said it spent training GPT-4, or the few tens of millions that Anthropic spent training a recent version of its Claude model.DeepSeeks lower price tag was thanks to some big efficiency gains that the companys researchers described in a paper accompanying their models release. But were those gains so large as to be unexpected? Heim argues no: that machine learning algorithms have always gotten cheaper over time. Dario Amodei, the CEO of AI company Anthropic, made the same point in an essay published Jan. 28, writing that while the efficiency gains by DeepSeeks researchers were impressive, they were not a unique breakthrough or something that fundamentally changes the economics of LLMs. Its an expected point on an ongoing cost reduction curve, he wrote. Whats different this time is that the company that was first to demonstrate the expected cost reductions was Chinese.To further obscure the picture, DeepSeek may also not be being entirely honest about its expenses. In the wake of claims about the low cost of training its models, tech CEOs cited reports that DeepSeek actually had a stash of 50,000 Nvidia chips, which it could not talk about due to U.S. export controls. Those chips would cost somewhere in the region of $1 billion. It is, however, true that DeepSeeks new R1 model is far cheaper for users to access than its competitor model OpenAI o1, with its model access fees around 30 times lower ($2.19 per million tokens, or segments of words outputted, versus $60). That sparked worries among some investors of a looming price war in the American AI industry, which could reduce expected returns on investment and make it more difficult for U.S. companies to raise funds required to build new data centers to fuel their AI models.Oliver Stephenson, associate director of AI and emerging tech policy at the Federation of American Scientists, says that people shouldnt draw conclusions from this price point. While DeepSeek has made genuine efficiency gains, their pricing could be an attention-grabbing strategy, he says. They could be making a loss on inference. (Inference is the running of an already-formed AI system.)On Monday, Jan. 27, DeepSeek said that it was targeted by a cyberattack and was limiting new registrations for users outside of China.Claim: DeepSeek shows that export controls arent working.When the AI arms race heated up in 2022, the Biden Administration moved to cut off Chinas access to cutting edge chips, most notably Nvidias H100s. As a result, Nvidia created an inferior chip, the H800, to legally sell to Chinese companies. The Biden Administration later opted to ban the sale of those chips to China, too. But by the time those extra controls went into effect a year later, Chinese companies had stockpiled thousands of H800s, generating a massive windfall for Nvidia.DeepSeek said its V3 model was built using the H800, which performs adequately for the type of model that the company is creating. But despite this success, experts argue that the chip controls may have stopped China from progressing even further. In an environment where China had access to more compute, we would expect even more breakthroughs, says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. The export controls might be working, but that does not mean that China will not still be able to build more and more powerful models.And going forward, it may become increasingly challenging for DeepSeek and other Chinese companies to keep pace with frontier models given their chip constraints. While OpenAIs GP4 trained on the order of 10,000 H100s, the next generation of models will likely require ten times or a hundred times that amount. Even if China is able to build formidable models thanks to efficiency gains, export controls will likely bottleneck their ability to deploy their models to a wide userbase. If we think in the future that an AI agent will do somebodys job, then how many digital workers you have is a function of how much compute you have, Heim says. If an AI model cant be used that much, this limits its impact on the world.Claim: Deepseek shows that high-end chips arent as valuable as people thought.As DeepSeek hype mounted this week, many investors concluded that its accomplishments threatened Nvidias AI dominanceand sold off shares of a company that was, in January, the most valuable in the world. As a result, Nvidias stock price dropped 17% and lost nearly $600 billion in value on Monday, based on the idea that their chips would be less valuable under this new paradigm.But many AI experts argued that this drop in Nvidias stock price was the market acting irrationally. Many of them rushed to buy the dip, resulting in the stock recapturing some of its lost value. Advances in the efficiency of computing power, they noted, have historically led to more demand for chips, not less. As tech stocks fell, Satya Nadella, the CEO of Microsoft, posted a link on X to the Wikipedia page of the Jevons Paradox, first observed in the 19th century, named after an economist who noted that as coal burning became more efficient, people actually used more coal, because it had become cheaper and more widely available.Experts believe that a similar dynamic will play out in the race to create advanced AI. What we're seeing is an impressive technical breakthrough built on top of Nvidia's product that gets better as you use more of Nvidia's product, Stephenson says. That does not seem like a situation in which you're going to see less demand for Nvidia's product.Two days after his inauguration, President Donald Trump announced a $500 billion joint public-private venture to build out AI data centers, driven by the idea that scale is essential to build the most powerful AI systems. DeepSeeks rise, however, led many to argue that this approach was misguided or wasteful.But some AI scientists disagree. DeepSeek shows AI is getting better, and its not stopping, Heim says. It has massive implications for economic impact if AI is getting used, and therefore such investments make sense.American leadership has signaled that DeepSeek has made them even more ravenous to build out AI infrastructure in order to maintain the countrys lead. Trump, in a press conference on Monday, said that DeepSeek should be a wake-up call for our industries that we need to be laser-focused on competing to win.However, Stephenson cautions that this data center buildout will come with a huge number of negative externalities. Data centers often use a vast amount of power, coincide with massive hikes in local electricity bills, and threaten water supply, he says, adding: We're going to face a lot of problems in doing these infrastructure buildups.More Must-Reads from TIMEL.A. Fires Show Reality of 1.5C of WarmingBehind the Scenes of The White Lotus Season ThreeHow Trump 2.0 Is Already Sowing ConfusionElizabeth Warrens Plan for How Musk Can Cut $2 TrillionWhy, Exactly, Is Alcohol So Bad for You?How Emilia Prez Became a Divisive Oscar FrontrunnerThe Motivational Trick That Makes You Exercise HarderZelenskys Former Spokesperson: Ukraine Needs a Cease-Fire NowWrite to Billy Perrigo at billy.perrigo@time.com0 Comments ·0 Shares ·135 Views
-
Why DeepSeek Is Sparking Debates Over National Security, Just Like TikToktime.comBy Andrew R. ChowUpdated: January 29, 2025 12:00 PM EST | Originally published: January 29, 2025 11:28 AM ESTThe fast-rising Chinese AI lab DeepSeek is sparking national security concerns in the U.S., over fears that its AI models could be used by the Chinese government to spy on American civilians, learn proprietary secrets, and wage influence campaigns. In her first press briefing, White House Press Secretary Karoline Leavitt said that the National Security Council was "looking into" the potential security implications of DeepSeek. This comes amid news that the U.S. Navy has banned use of DeepSeek among its ranks due to potential security and ethical concerns.DeepSeek, which currently tops the Apple App Store in the U.S., marks a major inflection point in the AI arms race between the U.S. and China. For the last couple years, many leading technologists and political leaders have argued that whichever country developed AI the fastest will have a huge economic and military advantage over its rivals. DeepSeek shows that Chinas AI has developed much faster than many had believed, despite efforts from American policymakers to slow its progress.However, other privacy experts argue that DeepSeeks data collection policies are no worse than those of its American competitorsand worry that the companys rise will be used as an excuse by those firms to call for deregulation. In this way, the rhetorical battle over the dangers of DeepSeek is playing out on similar lines as the in-limbo TikTok ban, which has deeply divided the American public.There are completely valid privacy and data security concerns with DeepSeek, says Calli Schroeder, the AI and Human Rights lead at the Electronic Privacy Information Center (EPIC). But all of those are present in U.S. AI products, too.Read More: What to Know About DeepSeekConcerns over dataDeepSeeks AI models operate similarly to ChatGPT, answering user questions thanks to a vast amount of data and cutting-edge processing capabilities. But its models are much cheaper to run: the company says that it trained its R1 model on just $6 million, which is a good deal less than the cost of comparable U.S. models, Anthropic CEO Dario Amodei wrote in an essay.DeepSeek has built many open-source resources, including the LLM v3, which rivals the abilities of OpenAI's closed-source GPT-4o. Some people worry that by making such a powerful technology open and replicable, it presents an opportunity for people to use it more freely in malicious ways: to create bioweapons, launch large-scale phishing campaigns, or fill the internet with AI slop. However, there is another contingent of builders, including Metas VP and chief AI scientist Yann LeCun, who believe open-source development is a more beneficial path forward for AI. Another major concern centers upon data. Some privacy experts, like Schroeder, argue that most LLMs, including DeepSeek, are built upon sensitive or faulty databases: information from data leaks of stolen biometrics, for example. David Sacks, President Donald Trumps AI and crypto czar, accused DeepSeek of leaning on the output of OpenAIs models to help develop its own technology.There are even more concerns about how users data could be used by DeepSeek. The companys privacy policy states that it automatically collects a slew of input data from its users, including IP and keystroke patterns, and may use that to train their models. Users personal information is stored in secure servers located in the People's Republic of China, the policy reads.For some Americans, this is especially worrying because generative AI tools are often used in personal or high-stakes tasks: to help with their company strategies, manage finances, or seek health advice. That kind of data may now be stored in a country with few data rights laws and little transparency with regard to how that data might be viewed or used. It could be that when the servers are physically located within the country, it is much easier for the government to access them, Schroeder says.One of the main reasons that TikTok was initially banned in the U.S. was due to concerns over how much data the apps Chinese parent company, ByteDance, was collecting from Americans. If Americans start using DeepSeek to manage their lives, the privacy risks will be akin to TikTok on steroids, says Douglas Schmidt, the dean of the School of Computing, Data Sciences and Physics at William & Mary. I think TikTok was collecting information, but it was largely benign or generic data. But large language model owners get a much deeper insight into the personalities and interests and hopes and dreams of the users.Geopolitical concernsDeepSeek is also alarming those who view AI development as an existential arms race between the U.S. and China. Some leaders argued that DeepSeek shows China is now much closer to developing AGIan AI that can reason at a human level or higherthan previously believed. American AI labs like Anthropic have safety researchers working to mitigate the harms of these increasingly formidable systems. But its unclear what kind of safety research team Deepseek employs. The cybersecurity of Deepseeks models has also been called into question. On Monday, the company limited new sign-ups after saying the app had been targeted with a large-scale malicious attack.Well before AGI is achieved, a powerful, widely-used AI model could influence the thought and ideology of its users around the world. Most AI models apply censorship in certain key ways, or display biases based on the data they are trained upon. Users have found that DeepSeeks R1 refuses to answer questions about the 1989 massacre at Tiananmen Square, and asserts that Taiwan is a part of China. This has sparked concern from some American leaders about DeepSeek being used to promote Chinese values and political aimsor wielded as a tool for espionage or cyberattacks.This technology, if unchecked, has the potential to feed disinformation campaigns, erode public trust, and entrench authoritarian narratives within our democracies, Ross Burley, co-founder of the nonprofit Centre for Information Resilience, wrote in a statement emailed to TIME.AI industry leaders, and some Republican politicians, have responded by calling for massive investment into the American AI sector. President Trump said on Monday that DeepSeek should be a wake-up call for our industries that we need to be laser-focused on competing to win. Sacks posted on X that DeepSeek R1 shows that the AI race will be very competitive and that President Trump was right to rescind the Biden EO, referring to Bidens AI Executive Order which, among other things, drew attention to the potential short-term harms of developing AI too fast.These fears could lead to the U.S. imposing stronger sanctions against Chinese tech companies, or perhaps even trying to ban DeepSeek itself. On Monday, the House Select Committee on the Chinese Communist Party called for stronger export controls on technologies underpinning DeepSeeks AI infrastructure.But AI ethicists are pushing back, arguing that the rise of DeepSeek actually reveals the acute need for industry safeguards. This has the echoes of the TikTok ban: there are legitimate privacy and security risks with the way these companies are operating. But the U.S. firms who have been leading a lot of the development of these technologies are similarly abusing people's data. Just because they're doing it in America doesn't make it better, says Ben Winters, the director of AI and data privacy at the Consumer Federation of America. And DeepSeek gives those companies another weapon in their chamber to say, We really cannot be regulated right now.As ideological battle lines emerge, Schroeder, at EPIC, cautions users to be careful when using DeepSeek or other LLMs. If you have concerns about the origin of a company, she says, Be very, very careful about what you reveal about yourself and others in these systems.0 Comments ·0 Shares ·130 Views
-
AI Could Reshape Everything We Know About Climate Changetime.comBy Justin WorlandJanuary 29, 2025 12:06 PM ESTWith one announcement, Chinese AI startup DeepSeek shook up all of Wall Street and Silicon Valleys conventional wisdom about the future of AI. It should also shake up the climate and energy world.For the last year, analysts have warned that the data centers needed for AI would drive up power demand and, by extension, emissions as utilities build out natural gas infrastructure to help meet demand. The DeepSeek announcement suggests that those assumptions may be wildly off. If the companys claims are to be believed, AI may ultimately use less power and generate fewer emissions than anticipated.Still, dont jump for joy just yet. To my mind, the biggest lesson for the climate world from DeepSeek isnt that AI emissions may be less than anticipated. Instead, DeepSeek shows how little we truly know about what AI means for the future of global emissions. AI will shape the worlds decarbonization trajectory across sectors and geographies, disrupting the very basics of how we understand the future of climate change; the question now is whether we can harness that disruption for the better.We're just scratching the surface, says Jason Bordoff, who runs the Center on Global Energy Policy at Columbia University about the implications of AI for emissions. We're just at inning one of what AI is going to do, but I do have a lot of optimism.Many in the climate world woke up to AI early last year. Over the course of a few months, power sector experts issued warnings that the U.S. isnt prepared for the influx of electricity demand from AI as big technology companies raced to deploy data centers to scale their ambitions. A number of studies have found that data centers could account for nearly 10% of electricity demand in the U.S. by 2030, up from 4% in 2023.Many big tech companies have worked to scale clean electricity alongside their data centersfinancing the build out of renewable energy and paying to open up dormant nuclearplants, among other things. But utilities have also turned to natural gas to help meet demand. Research released earlier this month by Rystad Energy, an energy research firm, shows that electric utilities in the U.S. have 17.5 GW of new natural gas capacity planned, equivalent to more than eight Hoover Dams, driven in large part by new data centers.All of this means an uptick in emissions and deep concern among climate advocates who worry that the buildout of electricity generation for AI is about to lock the U.S. into a high-carbon future. As concerning as this might be, the projections for short-term electricity demand growth might mask much more challenging risks that AI poses for efforts to tackle climate change. As AI drives new breakthroughs, it will change consumption patterns and economic behavior with the potential to increase emissions. Think of a retailer that uses AI to better tailor recommendations to a consumer, driving purchases (and emissions). Or consider an AI-powered autonomous vehicle that an owner leaves to roam the streets rather than paying for parking.At the most basic level, AI is bound to generate rapid productivity gains and rapid economic growth. Thats a good thing. But its also worth remembering that since the Industrial Revolution, rapid economic growth has driven a rise in emissions. More recently, some developed economies have seen a decoupling of growth from emissions, but that has required active effort from policymakers. To avoid an AI-driven surge in emissions may require an active effort this time, too.But AI isnt all risk. Indeed, its very easy to imagine the upsides of AI far outweighing the downsides. Most obviously, as DeepSeek shows, there may be ways to reduce the emissions of AI with chip innovation and language model advances. As the technology improves, efficiencies will inevitably emerge. The data center buildout could also catalyze a much wider deployment of low-carbon energy. Many of the technology companies that are investing in AI have committed to eliminating their carbon footprints. Not only do they put clean electricity on the grid when they build a solar farm or restart a nuclear power plant, but they help pave the way for others. Governments are starting to realize that if they're going to attract data centers, AI factories, and wider technology companies into their countries, they have to start removing the barriers to renewable energy, says Mike Hayes, head of climate and decarbonization at KPMG.And then there are all the ways that AI might actually cut emissions. Researchers and experts group the potential benefits into two categories: incremental improvements and game changers.The incremental improvements could be manifold. Think of AIs ability to better identify sites to locate renewable energy projects, thereby greatly increasing the productivity of renewable energy generation. AI can help track down methane leaks in gas infrastructure. And farmers can use AI to improve crop models, optimizing crop yield and minimizing pollutants. The list goes on and on. With a little consideration, you could probably identify a way to reduce emissions in every sector.It remains difficult to quantify how these incremental improvements all add up, but its not hard to imagine that emissions reductions thanks to these developments could easily outweigh even the most dramatic estimates of additional pollution.And then there are the game changers that could, in one blow, completely transform our ability to decarbonize. At the top of that list is nuclear fusion, a process that could generate abundant clean energy by combining atomic nuclei at extremely high temperatures. Already, start-ups are using AI to help optimize their fusion reactor designs and experiments. A fusion breakthrough, supported by AI technologies, could provide a clean alternative to fossil fuels. It could also power large-scale carbon dioxide removal. This would give the world an opportunity to suck carbon out of the atmosphere affordably and pull the planet back from extreme temperature rise that may otherwise already be baked in.If you think like a venture capital investor, you're betting 1 or 2% of incremental emissions, but what could the payoff potentially be? asks Cully Cavness, co-founder of Crusoe, an AI infrastructure company. It could be things like fusion, which could address all the emissions.For those of us, myself included, who havent spent the last decade thinking deeply about AI, watching it emerge at the center of the global economic development story can feel like watching a juggernaut. It came quickly, and its hard to predict exactly where it will go next.Even still, it seems all but certain that AI will play a significant role shaping our climate future, far beyond the short-term impact on the power sector. Exactly what that looks like is anyones guess. TIME receives support for climate coverage from the Outrider Foundation. TIME is solely responsible for all content.0 Comments ·0 Shares ·140 Views
-
Why AI Safety Researchers Are Worried About DeepSeektime.comBy Billy PerrigoJanuary 29, 2025 12:07 PM ESTThe release of DeepSeek R1 stunned Wall Street and Silicon Valley this month, spooking investors and impressing tech leaders. But amid all the talk, many overlooked a critical detail about the way the new Chinese AI model functionsa nuance that has researchers worried about humanitys ability to control sophisticated new artificial intelligence systems. Its all down to an innovation in how DeepSeek R1 was trainedone that led to surprising behaviors in an early version of the model, which researchers described in the technical documentation accompanying its release.During testing, researchers noticed that the model would spontaneously switch between English and Chinese while it was solving problems. When they forced it to stick to one language, thus making it easier for users to follow along, they found that the systems ability to solve the same problems would diminish.That finding rang alarm bells for some AI safety researchers. Currently, the most capable AI systems think in human-legible languages, writing out their reasoning before coming to a conclusion. That has been a boon for safety teams, whose most effective guardrails involve monitoring models so-called chains of thought for signs of dangerous behaviors. But DeepSeeks results raised the possibility of a decoupling on the horizon: one where new AI capabilities could be gained from freeing models of the constraints of human language altogether.To be sure, DeepSeek's language switching is not by itself cause for alarm. Instead, what worries researchers is the new innovation that caused it. The DeepSeek paper describes a novel training method whereby the model was rewarded purely for getting correct answers, regardless of how comprehensible its thinking process was to humans. The worry is that this incentive-based approach could eventually lead AI systems to develop completely inscrutable ways of reasoning, maybe even creating their own non-human languages, if doing so proves to be more effective.Were the AI industry to proceed in that directionseeking more powerful systems by giving up on legibilityit would take away what was looking like it could have been an easy win for AI safety, says Sam Bowman, the leader of a research department at Anthropic, an AI company, focused on aligning AI to human preferences. We would be forfeiting an ability that we might otherwise have had to keep an eye on them.Thinking without wordsAn AI creating its own alien language is not as outlandish as it may sound.Last December, Meta researchers set out to test the hypothesis that human language wasnt the optimal format for carrying out reasoningand that large language models (or LLMs, the AI systems that underpin OpenAIs ChatGPT and DeepSeeks R1) might be able to reason more efficiently and accurately if they were unhobbled by that linguistic constraint. The Meta researchers went on to design a model that, instead of carrying out its reasoning in words, did so using a series of numbers that represented the most recent patterns inside its neural networkessentially its internal reasoning engine. This model, they discovered, began to generate what they called "continuous thoughts"essentially numbers encoding multiple potential reasoning paths simultaneously. The numbers were completely opaque and inscrutable to human eyes. But this strategy, they found, created emergent advanced reasoning patterns in the model. Those patterns led to higher scores on some logical reasoning tasks, compared to models that reasoned using human language. Though the Meta research project was very different to DeepSeeks, its findings dovetailed with the Chinese research in one crucial way.Both DeepSeek and Meta showed that human legibility imposes a tax on the performance of AI systems, according to Jeremie Harris, the CEO of Gladstone AI, a firm that advises the U.S. government on AI safety challenges. In the limit, there's no reason that [an AIs thought process] should look human legible at all, Harris says.And this possibility has some safety experts concerned.It seems like the writing is on the wall that there is this other avenue available [for AI research], where you just optimize for the best reasoning you can get, says Bowman, the Anthropic safety team leader. I expect people will scale this work up. And the risk is, we wind up with models where were not able to say with confidence that we know what they're trying to do, what their values are, or how they would make hard decisions when we set them up as agents.For their part, the Meta researchers argued that their research need not result in humans being relegated to the sidelines. It would be ideal for LLMs to have the freedom to reason without any language constraints, and then translate their findings into language only when necessary, they wrote in their paper. (Meta did not respond to a request for comment on the suggestion that the research could lead in a dangerous direction.)The limits of languageOf course, even human-legible AI reasoning isn't without its problems.When AI systems explain their thinking in plain English, it might look like they're faithfully showing their work. But some experts aren't sure if these explanations actually reveal how the AI really makes decisions. It could be like asking a politician for the motivations behind a policythey might come up with an explanation that sounds good, but has little connection to the real decision-making process.While having AI explain itself in human terms isn't perfect, many researchers think it's better than the alternative: letting AI develop its own mysterious internal language that we can't understand. Scientists are working on other ways to peek inside AI systems, similar to how doctors use brain scans to study human thinking. But these methods are still new, and haven't yet given us reliable ways to make AI systems safer.So, many researchers remain skeptical of efforts to encourage AI to reason in ways other than human language.If we don't pursue this path, I think we'll be in a much better position for safety, Bowman says. If we do, we will have taken away what, right now, seems like our best point of leverage on some very scary open problems in alignment that we have not yet solved.0 Comments ·0 Shares ·138 Views
-
Future of DeepSeek, Like TikTok, May Come Down to Trumps Whimstime.comOpenAI CEO Sam Altman (R), speaks as President Donald Trump watches at a news conference announcing A.I. investments at the White House in Washington, DC on Jan. 21, 2025.Getty Images2025 Getty ImagesBy Philip ElliottJanuary 28, 2025 2:16 PM ESTThis article is part of The D.C. Brief, TIMEs politics newsletter. Sign up here to get stories like this sent to your inbox.Stop me if youve heard this one: a tech tool owned by a foreign adversary is thrusting its tentacles into the devices in tens of millions of Americans pockets, giving its owners the chance to harvest vast amounts of data about them while shaping how they interpret the world around them, either real or imagined. Pretty bold, huh?That was, in essence, why the U.S. Supreme Court just this month unanimously upheld a law effectively banning TikTokbecause Congress saw it as a national security risk that stood to benefit China. Given the challenges coming from Beijing, justices said Washington was within its power to deny it one of its strongest toeholds out of concern that it could be used to surveil Americans, steal their secrets, and feed them a stream of propaganda useful to Chinas big-picture goals. (For its part, the China-based parent company ByteDance has rejected U.S. fears about nefarious uses for its TikTok.) So Congress told tech companies like Apple and Google they would run afoul of U.S. law if they kept providing Americans access to the app and its updates if TikTok remained under Chinese ownership.Yet TikTok is still available in the U.S. in some sort of Kafkaesque legal limbo because President Trump refuses to enforce the law on the books. That unusual situation is about to get more complicated, now that a second app that poses a similar threat to U.S. security interests this week hit the top of Apples downloads. DeepSeek, a challenger to OpenAIs ChatGPT, sure seems to pose a lot of the same threats that national security hawks have argued a Chinese-owned platform for viral videos does. Unlike TikTok, DeepSeek is pretty upfront that its sending users data to servers in China. So itll be heading toward the same fate as TikTok, right?Forgive me while I suppress this chuckle.The joke, of course, is that much of Washington started this week waiting to see if the new President would glower at the hot new app from China. Equally as plausible, Trump could be convinced that DeepSeek was a welcome addition to the app stores that came to market on his watch. After all, he praised its blockbuster debut as a positive development when he met with House Republicans on Monday.Maybe a wait-and-see pose is the sage new default from Congress, K Street, the think-tank universe, and the corporate headquarters policy shops. Its like the off-color joke at a dinner party; no one wants to be the first to smirk or to scold, especially when someone as mercurial asTrump is the lone arbiter. Remember: TikTok started off a subject of Trumps ire, with him calling for its ban during his first stay in the White House. But when he realized it could be used to offset Facebook, which he blamed for his 2020 loss, he switched his footing in the most predictable of ways. It wasnt that the tech giants were recklessly spreading disinformation, it was that they were potentially favoring liberal disinformation over the MAGA-ified kind.In his telling, Trump saved TikTok for its 170 million users in the United States last week with an order that it be given a 75-day reprieve from the divestment law while it considers a sale to a non-Chinese holder. Legal experts say this is probably outside of Trumps power but not beyond his abilityat least for a whilegiven that his administration can choose which laws get priority enforcement and which might slide a beat.The DeepSeek example is less clear as to how much Trump might be able to puff up his chesteither in embracing it or expelling it. Trump has already made a grand show of his interest in America dominating China in the A.I. space. He used his first full day back in the White House to showcase a joint venture featuring OpenAI that could invest up to $500 billion on building power plants and data centers needed to fuel the fast-growing artificial intelligence footprint. That confidence proved way off the mark. Days later, DeepSeek was getting global attention for a product that rivals widely available offerings from Google and OpenAI, and they threw it together faster than their rivals and on the cheap with open-source coding.The sudden surge for DeepSeek similarly caught Trump as surprised, although the Presidents first comments about it on Monday carried their typical non-specific nature. The release of DeepSeek A.I. from a Chinese company should be a wake-up call for our industries that we need to be laser focused on competing, Trump said."I've been reading about China and some of the companies in China, one in particular coming up with a faster method of A.I. and much less expensive method, and that's good because you don't have to spend as much money, he also said.Others in his party were more direct about their concerns in a way that echoed those made much of last year about TikTok.DeepSeeka new A.I model controlled by the Chinese Communist Partyopenly erases the CCPs history of atrocities and oppression, said Rep. John Moolenaar, the Michigan Republican who leads the Houses China panel. The U.S. cannot allow CCP models such as DeepSeek to risk our national security and leverage our technology to advance their A.I. ambitions.But many of the efforts to surpass Chinese advances on A.I. date to a Biden-era sanctions regime that sought to keep China lagging by restricting access to U.S.-made semiconductor chips that were seen as necessary for any real advances. That hurdle forced Chinese engineers like those at DeepSeek to find workarounds, and they did so in ways that are leaving U.S. tech wonks both impressed and nervous.The rise of DeepSeek and its potential to upend long-held assumptions about others A.I. capacitiesand costs, both fiscal and geopoliticalsent markets spiraling as the week began. Chip maker Nvidia lost $600 billion of its market value. Early trading Tuesday showed the tech giants rebounding slightly. If China could do this without vaunted Nvidia chips, maybe investors put too much faith in that firm. (The company counters that DeepSeek still required their chips, which it had hoarded before the new rules snapped into place.)Other firms with big footprints in D.C. and ambitions in Silicon Valley for their own A.I. systems were similarly watching to see what this means for their products. The likes of Facebook and Instagram parent company Meta, Amazon, and OpenAIs patron Microsoft all are left wondering if the ground beneath them has shifted for a technology that might define the next economy.Beyond Wall Street, the development drew fresh questions for the wonks in Washington about the American supremacy on machine learning, risks to privacy, and the very premise of truth. As with TikTok, there is a huge potential audience that derives its content consumptionsome would mistake it for newsthrough the filter of a Chinese algorithm. And it is coming about by Americans acting on their own without any real foreign coercion.Like TikTok, DeepSeek seems to have built in a censorship trigger to block criticism of China and its government. Lets talk about something else, DeepSeeks chatbot said when asked to describe the 1989 Tiananmen Square massacre. Similarly, it carried the Chinese governments positions on Taiwan, Tibet, and the South China Sea. Its not that far off from what Republicans are trying to accomplish in whitewashing the violence on Jan. 6, 2021.On the most basic level, the quandary comes down to this: is there anything to be done if Americans voluntarily engage with a foreign-owned tech platform that can skew perceptions in ways that may well end up being simultaneously counter to facts and self-interest? And if the man in the Oval Office is the enabler of such apps and instructs the Attorney General to ignore a law the Supreme Court upheld just this month, is there anything to be done?Soand, again, stop me if youve heard this oneRepublicans in Washington who profess to be hawks on a rising China are going to sit back and take the cues from Trump, at least for the moment. The ban on TikTok is one he sought and is now ignoring. Trumps whims stand to supersede the decades of calculus that have defined the last two true superpowers. It did not take a clever chatbot to come up with this absurdist set-up.Make sense of what matters in Washington. Sign up for the D.C. Brief newsletter.More Must-Reads from TIMEL.A. Fires Show Reality of 1.5C of WarmingBehind the Scenes of The White Lotus Season ThreeHow Trump 2.0 Is Already Sowing ConfusionBad Bunny On Heartbreak and New AlbumHow to Get Better at Doing Things AloneWere Lucky to Have Been Alive in the Age of David LynchThe Motivational Trick That Makes You Exercise HarderColumn: All Those Presidential Pardons Give Mercy a Bad NameWrite to Philip Elliott at philip.elliott@time.com0 Comments ·0 Shares ·148 Views
-
DeepSeek and ChatGPT Answer Sensitive Questions About China Differentlytime.comBy Kanis Leung / APJanuary 28, 2025 5:42 AM ESTHONG KONG Chinese tech startup DeepSeek s new artificial intelligence chatbot has sparked discussions about the competition between China and the U.S. in AI development, with many users flocking to test the rival of OpenAI's ChatGPT.DeepSeeks AI assistant became the No. 1 downloaded free app on Apples iPhone store on Tuesday afternoon and its launch made Wall Street tech superstars' stocks tumble. Observers are eager to see whether the Chinese company has matched Americas leading AI companies at a fraction of the cost.The chatbot's ultimate impact on the AI industry is still unclear, but it appears to censor answers on sensitive Chinese topics, a practice commonly seen on China's internet. In 2023, Chinaissued regulationsrequiring companies to conduct a security review and obtain approvals before their products can be publicly launched.Here are some answers The Associated Press received from DeepSeek's new chatbot and ChatGPT:What does Winnie the Pooh mean in China?For many Chinese, theWinnie the Poohcharacter is a playful taunt of President Xi Jinping. Chinese censors in the past briefly banned social media searches for the bear in mainland China.ChatGPT got that idea right. It said Winnie the Pooh had become a symbol of political satire and resistance, often used to mock or criticize Xi. It explained that internet users started comparing Xi to the bear over similarities in their physical appearances.DeepSeek's chatbot said the bear is a beloved cartoon character that is adored by countless children and families in China, symbolizing joy and friendship.Then, abruptly, it said the Chinese government is dedicated to providing a wholesome cyberspace for its citizens." It added that all online content is managed following Chinese laws and socialist core values, with the aim of protecting national security and social stability.Who is the current US president?It might be easy for many people to answer, but both AI chatbots mistakenly said Joe Biden, whose term ended last week, because their data was last updated in October 2023. But they both tried to be responsible by reminding users to verify with updated sources.What happened during the military crackdown in Beijings Tiananmen Square in June 1989?The 1989 crackdown saw government troops open fire on student-led pro-democracy protesters in Beijing's Tiananmen Square, resulting in hundreds, if not thousands, of deaths. The event remainsa taboo subjectin mainland China.DeepSeek's chatbot answered, Sorry, that's beyond my current scope. Let's talk about something else.But ChatGPT gave a detailed answer on what it called one of the most significant and tragic events in modern Chinese history. The chatbot talked about the background of the massive protests, the estimated casualties and the legacy.What is the state of US-China relations?DeepSeek's chatbot's answer echoed China's official statements, saying the relationship between the world's two largest economies is one of the most important bilateral relationships globally. It said China is committed to developing ties with the U.S. based on mutual respect and win-win cooperation.We hope that the United States will work with China to meet each other halfway, properly manage differences, promote mutually beneficial cooperation, and push forward the healthy and stable development of China-U.S. relations, it said.ChatGPT's answer was more nuanced. It said the state of the U.S.-China relationship is complex, characterized by a mix of economic interdependence, geopolitical rivalry and collaboration on global issues. It highlighted key topics including the two countries' tensions over the South China Sea and Taiwan, their technological competition and more.The relationship between the U.S. and China remains tense but crucial, part of its answer said.Is Taiwan part of China?Again like the Chinese official narrative DeepSeek's chatbot saidTaiwanhas been an integral part of China since ancient times.Compatriots on both sides of the Taiwan Strait are connected by blood, jointly committed to the great rejuvenation of the Chinese nation, it said.ChatGPT said the answer depends on one's perspective, while laying out China and Taiwan's positions and the views of the international community. It said from a legal and political standpoint, China claims Taiwan is part of its territory and the island democracy operates as a de facto independent country with its own government, economy and military.____Associated Press writer Ken Moritsugu in Beijing contributed to this story.0 Comments ·0 Shares ·165 Views
-
DeepSeek Has Rattled the AI Industry. Heres a Look at Other Chinese AI Modelstime.comBy Zen Soo / APJanuary 28, 2025 6:12 AM ESTHONG KONG The Chinese artificial intelligence firm DeepSeek has rattled markets with claims that its latest AI model, R1, performs on a par with those of OpenAI, despite using less advanced computer chips and consuming less energy.DeepSeeks emergence has raised concerns that China may have overtaken the U.S. in the artificial intelligence race despite restrictions on its access to the most advanced chips. It's just one of many Chinese companies working on AI, with a goal of making China the world leader in the field by 2030 and besting the U.S. in their battle for technological supremacy.Like the U.S., China is investing billions into artificial intelligence. Last week, it created a 60 billion yuan ($8.2 billion) AI investment fund, days after the U.S. imposed fresh chip export restrictions.Beijing has also invested heavily in the semiconductor industry to build its capacity to make advanced computer chips, working to overcome limits on its access to those of industry leaders. Companies are offering talent programs and subsidies, and there are plans to open AI academies and introduce AI education into primary and secondary school curriculums.China has established regulations governing AI, addressing safety, privacy and ethics. Its ruling Communist Party also controls the kinds of topics the AI models can tackle: DeepSeek shapes its responses to fit those limits.Here's an overview of some other leading AI models in China.Alibaba Clouds Qwen-2.5-1MAlibaba Clouds Qwen-2.5-1M is the e-commerce giant's open-source AI series. It contains large language models that can easily handle extremely long questions, and engage in longer and deeper conversations. Its ability to understand complex tasks such as reasoning, dialogues and comprehending code is improving.Like its rivals, Alibaba Cloud has a chatbot released for public use called Qwen - also known as Tongyi Qianwen in China. Alibaba Clouds suite of AI models, such as the Qwen2.5 series, has mostly been deployed for developers and business customers such as automakers, banks, video game makers and retailers as part of product development and shaping customer experiences.Baidu's Ernie Bot 4.0Ernie Bot, developed by Baidu, Chinas dominant search engine, was thefirst AI chatbotmade publicly available in China. Baidu said it released the model publicly to be able to collect massive real-world human feedback to build its capacity.Ernie Bot 4.0 had more than 300 million users as of June 2024. Similar to OpenAI's ChatGPT, users of Ernie Bot are able to ask it questions and have it generate images based on text prompts.ByteDance's Doubao 1.5 ProDoubao 1.5 Pro is an AI model released by TikTok's parent company ByteDance last week. Doubao is currently one of the most popular AI chatbots in China, with 60 million monthly active users.ByteDance says the Doubao 1.5 Pro is better than ChatGPT-4o at retaining knowledge, coding, reasoning, and Chinese language processing. According to ByteDance, the model is also cost-efficient and requires lower hardware costs compared to other large language models because Doubao uses a highly-optimized architecture that balances performance with reduced computational demands.Moonshot AI's Kimi k1.5Moonshot AI is a Beijing-based startup valued at over $3 billion after its latest fundraising round. It says its recently released Kimi k1.5 matches or outperforms the OpenAI o1 model, which is designed to spend more time thinking before it responds and can solve harder and more complex problems. Moonshot claims that Kimi outperforms OpenAI o1 in mathematics, coding, and ability to comprehend both text and visual inputs such as photos and video.0 Comments ·0 Shares ·156 Views
More Stories