TIME
TIME
News and current events from around the globe. Since 1923.
1 people like this
130 Posts
2 Photos
0 Videos
0 Reviews
Recent Updates
  • How This Tool Could Decode AIs Inner Mysteries
    time.com
    The scientists didnt have high expectations when they asked their AI model to complete the poem. He saw a carrot and had to grab it, they prompted the model. His hunger was like a starving rabbit, it replied.The rhyming couplet wasnt going to win any poetry awards. But when the scientists at AI company Anthropic inspected the records of the models neural network, they were surprised by what they found. They had expected to see the model, called Claude, picking its words one by one, and for it to only seek a rhyming wordrabbitwhen it got to the end of the line. Instead, by using a new technique that allowed them to peer into the inner workings of a language model, they observed Claude planning ahead. As early as the break between the two lines, it had begun thinking about words that would rhyme with grab it, and planned its next sentence with the word rabbit in mind.The discovery ran contrary to the conventional wisdomin at least some quartersthat AI models are merely sophisticated autocomplete machines that only predict the next word in a sequence. It raised the questions: How much further might these models be capable of planning ahead? And what else might be going on inside these mysterious synthetic brains, which we lack the tools to see?The finding was one of several announced on Thursday in two new papers by Anthropic, which reveal in more depth than ever before how large language models (LLMs) think. Todays AI tools are categorically different from other computer programs for one big reason: they are grown, rather than coded by hand. Peer inside the neural networks that power them, and all you will see is a bunch of very complicated numbers being multiplied together, again and again. This internal complexity means that even the machine learning engineers who grow these AIs dont really know how they spin poems, write recipes, or tell you where to take your next holiday. They just do.But recently, scientists at Anthropic and other groups have been making progress in a new field called mechanistic interpretabilitythat is, building tools to read those numbers and turn them into explanations for how AI works on the inside. What are the mechanisms that these models use to provide answers? says Chris Olah, an Anthropic cofounder, of the questions driving his research. What are the algorithms that are embedded in these models? Answer those questions, Olah says, and AI companies might be able to finally solve the thorny problem of ensuring AI systems always follow human rules.The results announced on Thursday by Olahs team are some of the clearest findings yet in this new field of scientific inquiry, which might best be described as a kind of neuroscience for AI.A new microscope for looking inside LLMsIn earlier research published last year, Anthropic researchers identified clusters of artificial neurons within neural networks. They called them features, and found that they corresponded to different concepts. To illustrate this finding, Anthropic artificially boosted a feature inside Claude corresponding to the Golden Gate Bridge, which led the model to insert mention of the bridge, no matter how irrelevant, into its answers until the boost was reversed.In the new research published Thursday, the researchers go a step further, tracing how groups of multiple features are connected together inside a neural network to form what they call circuitsessentially algorithms for carrying out different tasks.To do this, they developed a tool for looking inside the neural network, almost like the way scientists can image the brain of a person to see which parts light up when thinking about different things. The new tool allowed the researchers to essentially roll back the tape and see, in perfect HD, which neurons, features, and circuits were active inside Claudes neural network at any given step. (Unlike a biological brain scan, which only gives the fuzziest picture of what individual neurons are doing, digital neural networks provide researchers with an unprecedented level of transparency; every computational step is laid bare, waiting to be dissected.)When the Anthropic researchers zoomed back to the beginning of the sentence, His hunger was like a starving rabbit, they saw the model immediately activate a feature for identifying words that rhyme with it. They identified the features purpose by artificially suppressing it; when they did this and re-ran the prompt, the model instead ended the sentence with the word jaguar. When they kept the rhyming feature but suppressed the word rabbit instead, the model ended the sentence with the features next top choice: habit.Anthropic compares this tool to a microscope for AI. But Olah, who led the research, hopes that one day he can widen the aperture of its lens to encompass not just tiny circuits within an AI model, but the entire scope of its computation. His ultimate goal is to develop a tool that can provide a "holistic account" of the algorithms embedded within these models. I think there's a variety of questions that will increasingly be of societal importance, that this could speak to, if we could succeed, he says. For example: Are these models safe? Can we trust them in certain high-stakes situations? And when are they lying?Universal languageThe Anthropic research also found evidence to support the theory that language models think in a non-linguistic statistical space that is shared between languages.Anthropic scientists tested this by asking Claude for the opposite of small in several different languages. Using their new tool, they analyzed the features that activated inside Claude when it answered each of those prompts in English, French, and Chinese. They found features corresponding to the concepts of smallness, largeness, and oppositeness, which activated no matter what language the question was posed in. Additional features would also activate corresponding to the language of the question, telling the model what language to answer in.This isnt an entirely new findingAI researchers have conjectured for years that language models think in a statistical space outside of language, and earlier interpretability work has borne this out with evidence. But Anthropics paper is the most detailed account yet of exactly how this phenomenon happens inside a model, Olah says.The finding came with a tantalizing prospect for safety research. As models get larger, the team found, they tend to become more capable of abstracting ideas beyond language and into this non-linguistic space. This finding could be useful in a safety context, because a model that is able to form an abstract concept of, say, harmful requests is more likely to be able to refuse them in all contexts, compared to a model that only recognizes specific examples of harmful requests in a single language.This could be good news for speakers of so-called low-resource languages that are not widely represented in the internet data that is used to train AI models. Todays large language models often perform more poorly in those languages than in, say, English. But Anthropics finding raises the prospect that LLMs may one day not need unattainably vast quantities of linguistic data to perform capably and safely in these languages, so long as there is a critical mass big enough to map onto a models internal non-linguistic concepts. However, speakers of those languages will still have to contend with how those very concepts have been shaped by the dominance of languages like English, and the cultures that speak them.Toward a more interpretable futureDespite these advances in AI interpretability, the field is still in its infancy, and significant challenges remain. Anthropic acknowledges that even on short, simple prompts, our method only captures a fraction of the total computation expended by Claudethat is, there is much going on inside its neural network into which they still have zero visibility. It currently takes a few hours of human effort to understand the circuits we see, even on prompts with only tens of words, the company adds. Much more work will be needed to overcome those limitations.But if researchers can achieve that, the rewards might be vast. The discourse around AI today is very polarized, Olah says. At one extreme, there are people who believe AI models "understand" just like people do. On the other, there are people who see them as just fancy autocomplete tools. I think part of whats going on here is, people dont really have productive language for talking about these problems," Olah says. "Fundamentally what they want to ask, I think, is questions of mechanism. How do these models accomplish these behaviors? They dont really have a way to talk about that. But ideally they would be talking about mechanism, and I think that interpretability is giving us the ability to make much more nuanced, specific claims about what exactly is going on inside these models. I hope that that can reduce the polarization on these questions.
    0 Comments ·0 Shares ·6 Views
  • What Is Signal, the Messaging App Used by Trump Officials, and Is It Safe?
    time.com
    The Trump administration is facing heavy blowback for using Signal, a messaging app, to discuss sensitive military plans. On March 24, officials usage of the app was revealed after The Atlantic editor Jeffrey Goldberg published a story titled "The Trump Administration Accidentally Texted Me Its War Plans," in which Secretary of Defense Pete Hegseth, among others, discussed upcoming military strikes on Yemen. The U.S. government previously discouraged federal employees from using the app for official business. Some experts have speculated that sharing sensitive national security details over Signal could be illegal, and Democratic lawmakers have demanded an investigation. If our nation's military secrets are being peddled around over unsecure text chains, we need to know that at once, New York Democrat Chuck Schumer said on the Senate floor.Signal is one of the most secure and private messaging platforms that exists for general public use. But cybersecurity experts argue that the app should not have been used for this level of sensitive communication. Signal is a very robust app: a lot of cybersecurity professionals use it for our communications that we want to protect, says Michael Daniel, president and CEO of the Cyber Threat Alliance and a cybersecurity coordinator under President Obama. But its not as secure as government communications channels. And the use of these kinds of channels increases the risk that something is going to go wrong.Signals StrengthsSignal was launched in 2014, with the goal of creating a privacy-preserving messaging platform in an age of increasing mass surveillance. Signal conversations are protected by end-to-end encryption, a technique that makes it extremely hard for a third party to intercept or decipher private messages. While other messaging tools may collect sensitive personal data, Signal prides itself on securely protecting information such as messaging contacts, frequency, and duration.The app has other privacy features, such as automatically disappearing messages after a set period and preventing screenshots of conversations. Signal data is stored locally on user's devices, not the companys servers. Our goal is that everyone in the world can pick up their device, and without thinking twice about it, or even having an ideological commitment to privacy, use Signal to communicate with anyone they want, Signal President Meredith Whittaker told TIME in 2022.Read More: Signals President Meredith Whittaker Shares Whats Next for the Private Messaging AppOver the last few years, Signal has been used by dissidents and protestors around the world who want to keep their conversations safe from political enemies or law enforcement. In Ukraine, the U.S. Embassy in Kyiv described Signal as critical to their work in its ability to ensure secure, rapid, and easily accessible communications. The app now has 70 million users worldwide, according to the tracking site Business of Apps.Government UseThe usage of Signal for government purposes is more contentious. In 2021, the Pentagon scolded a former official for using Signal, saying that it did not comply with the Freedom of Information Act, which decrees the government has legal obligations to maintain federal records. Goldberg, however, reported this week that the Trump officials Signal chat was set to automatically delete messages after a period of time.Sam Vinograd, who served in former President Barack Obama's Homeland Security Department, told CBS that sharing sensitive security details over Signal could violate the Espionage Act as well. Top intelligence officials testified this week that no classified information was shared over the group chat. CIA Director John Ratcliffe said that Signal was a permissible work-use application for the CIA.Last week, a Pentagon advisory cautioned military personnel against using Signal due to Russian hackers targeting the app.The Cyber Threat Alliances Daniel says that he was surprised that top officials were using Signal, given that they have access to government-specific channels that are more secure. When discussing sensitive information, officials are typically required to do so in designated, secure areas called Sensitive Compartmented Information Facilities (SCIFs), or to use SIPRNet, a secure network used by the Defense and State Departments.These are very senior officials who have a lot of options. They have people whose entire jobs are is to make sure that they're able to communicate at all times, Daniel says. We've had that for decades now, and those procedures are really well honed.Daniel contends that government tools could have prevented what went wrong in this instance: the human error of an outside party mistakenly being added to a message chain. He says that government channels have a much higher level of authentication to ensure that members of communication channels are supposed to have access.Dave Chronister, the CEO of the cybersecurity company Parameter Security, says that the governments bespoke communications channels prevent other kinds of interlopers or hackers attempting to use phishing or malware techniques to learn information. If youre on a cell phone, I dont know who could be looking over my shoulder to see what Im typing, not to mention I dont know what else is on that mobile device, he says.Chronister adds that officials use of Signal, as opposed to internal channels, also makes it harder for the government to identify and contain breaches once theyve happened. We could have data out there we didnt know was compromised, he says. If top cabinet officials are using Signal, Im wondering how much is being done on a daily basisand I think theres going to be a lot more fallout from this.A representative for Signal did not immediately respond to a request for comment.
    0 Comments ·0 Shares ·11 Views
  • Why 23andMes Genetic Data Could Be a Gold Mine for AI Companies
    time.com
    The genetic testing company 23andMe, which holds the genetic data of 15 million people, declared bankruptcy on Sunday night after years of financial struggles. This means that all of the extremely personal user data could be up for saleand that vast trove of genetic data could draw interest from AI companies looking to train their data sets, experts say. Data is the new oiland this is very high quality oil, says Subodha Kumar, a professor at the Fox School of Business at Temple University. With the development of more and more complicated and rigorous algorithms, this is a gold mine for many companies.But any AI-related company attempting to acquire 23andMe would run significant reputational risks. Many people are horrified by the thought that they surrendered their genetic data to trace their ancestry, only for it to now be potentially used in ways they never consented to.Anybody touching this data is running a risk, Kumar, who is the director of Foxs Center for Business Analytics and Disruptive Technologies, says. But at the same time, not touching it, they might be losing on something big as well.Training LLMsCompanies like OpenAI and Google have poured time and resources into making an impact on the medical field, and 23andMes data trove may attract interest from large AI firms with the financial means to acquire it. 23andMe was valued at around $48 million this week, down from a peak of $6 billion in 2021.These companies are striving to build the most powerful general purpose models possible, which are trained on vast amounts of granular data. But researchers have argued that high-quality data sources are drying up, which makes new and robust information sources all the more coveted. A TechCrunch survey of venture capitalists earlier this year found that more than half of respondents cited the quality or rarity of their proprietary data as the edge that AI startups have over their competition.I think it could be a really valuable data set for some of the big AI companies because it represents this ground truth data of actual genetic data, Kazlauskas says of 23andMe. Some of the human errors that might exist in bio publications, you could avoid.Kumar says that 23andMes data could be especially valuable to companies in their push for agentic AI, or AIs that can perform tasks without the involvement of humans, whether in medical research or company decisionmaking.The whole goal of agentic AI models has been a modular approach: you crack the smaller pieces of the problem and then you put them together, he says.Representatives for Google and OpenAI did not immediately respond to requests for comment.Industry-Based Value23andMes data could also be valuable across different industries using AI to sort through vast amounts of datafirst and foremost, medical research.23andMe already had agreements in place with pharmaceutical companies such as GlaxoSmithKline, which tapped into the companys data sets in the hopes of developing new treatments for disease. Kumar says that at Temple, he and colleagues are working on a project to create personalized treatment for ovarian cancer patientsand have found that genetic data can be very, very powerful in understanding structures that we were not able to understand, he says.However, Alex Zhavoronkov, founder and CEO at Insilico Medicine, contends that 23andMes data may not be as valuable as some think, especially in relation to drug discovery. "Most low hanging fruits have already been picked up and there is significant data in the public domain published together with major academic papers, he wrote in an email to TIME.But companies in many other industries will likely be interested, too. This is an abnormally large and nuanced data set: This amount of genetic data, especially that which comes with personal health and medical records, is rarely publicly accessible, says Anna Kazlauskas, CEO of Open Data Labs and the creator of Vana, a network for user-owned data. All of that contextual data makes it really valuableand hard data to get, she says.Potentially interested industries include insurance companies, who could use the data to identify people with greater health risks, in order to up their premiums. Financial institutions could track the relationship between genetic markers and spending patterns in the process of assessing loans. And e-commerce companies could use the data to tailor ads to people with specific medical conditions.Ethical and Privacy ConcernsBut companies also face significant reputational risks in getting involved. 23andMe suffered a hack in 2023 which exposed the personal data of millions of users, severely hurting the companys reputation. Bidders who come from other industries may have even less data protection than 23andMe did, Kumar says. My worry is that some of the companies are not used to having this kind of data, and they may not have enough governance in place, he says.This is especially dangerous because genetic information is inherently sensitive and cannot be altered once compromised. The genetic information of family members of people who willingly gave their data to the company are also at risk. And given AIs well-known biases, the misuse of such data could lead to discrimination in areas like hiring, insurance and loans. On Friday, California Attorney General Rob Bonta released an urgent alert to 23andMe customers advising them to ask the company to delete their data and destroy their genetic samples under a California privacy law.Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation, worries that 23andMes genetic data might exist in a state of permanent flux on the market. Once you have sold the data, there are no limits to how many times it may be resold, she says. This could result in genetic data falling into the hands of organizations that may not prioritize ethical considerations or have robust data protection measures in place.Insilico Medicines Zhavoronkov says all of these fears mean that potential AI-related bidders will be dissuaded from trying to purchase 23andMe and its data. Their dataset is actually toxic, he says. Whoever buys it and trains on it will get negative publicity, and the acquirer will be possibly investigated or sued."Regardless of what ultimately happens, Kazlauskas says she is at least thankful that this conundrum has opened up larger conversations about data sovereignty. We should probably, in the future, want to avoid this kind of situation where you decide you want to do a genetic test, and then five years later, this company is struggling financially, and that now puts your genetic data at risk of being sold to the highest bidder, she says. In this AI era, that data is super valuable.
    0 Comments ·0 Shares ·33 Views
  • 23andMe Filed for Bankruptcy. What Does That Mean For Your Account?
    time.com
    The genetic testing and information company, 23andMe, announced on March 23 that it has filed for bankruptcy, after years of financial struggles and data privacy concerns.Filing for bankruptcy will allow the company to facilitate a sale process to maximize the value of its business, 23andMe said in a press release. The news also comes amid management changes; according to the press release, Chief Executive Officer Anne Wojcicki is stepping down from her role, effective immediately, but will continue to serve as a board member. The companys board selected Chief Financial and Accounting Officer Joe Selsavage to serve as the interim CEO.AdvertisementAdvertisementIn the press release, 23andMe said it intends to continue operating its business in the ordinary course throughout the sale process. There are no changes to the way the Company stores, manages, or protects customer data.We are committed to continuing to safeguard customer data and being transparent about the management of user data going forward, and data privacy will be an important consideration in any potential transaction, Mark Jensen, chair and member of the Special Committee of the Board of Directors, said in the press release.Still, some officials are urging customers to consider deleting their data. Just a few days before the bankruptcy announcement, on March 21, California Attorney General Rob Bonta issued a consumer alert to 23andMe customers, advising them to consider deleting their data from the companys website.Given 23andMes reported financial distress, I remind Californians to consider invoking their rights and directing 23andMe to delete their data and destroy any samples of genetic material held by the company, Bonta said in a press release.Some technology experts also encouraged 23andMe users to delete their data. Meredith Whittaker, the president of the messaging app Signal, said in a post on X: "It's not just you. If anyone in your FAMILY gave their DNA to 23&me, for all of your sakes, close your/their account now. This won't solve the issue, but they will (they claim) delete some of your data."In October 2024, NPR reported on customers' concerns over what could happen to their private data amid the company's financial challenges. A 23andMe spokesperson told NPR that the company was committed to privacy, but wouldn't answer questions about what the company might do with customer data. Legal experts said that there are few federal protections for customers, and worried that the sensitive data could potentially be sold off or even accessed by law enforcement, NPR reported.The California Attorney Generals Office outlined in its press release on March 21 the steps customers need to take to delete their genetic data from 23andMe: After logging into your account, click on Settings and scroll to the bottom of the page to a section called 23andMe Data; click View; then you can download your data; scroll to the Delete Data section and click Permanently Delete Data. Youll receive an email from 23andMe after that, and you can follow the link in the email to confirm your request to delete your data.If you had previously allowed 23andMe to store a saliva sample and DNA, you can change that preference by going to the Settings page on your account, under Preferences. If you had previously allowed 23andMe and third-party researchers to use your genetic data and sample for research purposes, you can also revoke that consent from the Settings page, under Research and Product Consents.In addition to years of financial challenges, 23andMe dealt with the fallout from a data breach in 2023 that affected almost 7 million customers.
    0 Comments ·0 Shares ·50 Views
  • What Encrypted Messaging Means for Government Transparency
    time.com
    As a devastating wildfire burned through a Maui town, killing more than 100 people, emergency management employees traded dozens of text messages, creating a record that would later help investigators piece together the governments response to the 2023 tragedy.One text exchange hinted officials might also be using a second, untraceable messaging service.Thats what Signal was supposed to be for, then-Maui Emergency Management Agency Administrator Herman Andaya texted a colleague.AdvertisementAdvertisementSignal is one of many end-to-end encrypted messaging apps that include message auto-delete functions.While such apps promise increased security and privacy, they often skirt open records laws meant to increase transparency around and public awareness of government decision-making. Without special archiving software, the messages frequently aren't returned under public information requests.An Associated Press review in all 50 states found accounts on encrypted platforms registered to cellphone numbers for over 1,100 government workers and elected officials.Its unclear if Maui officials actually used the app or simply considered ita county spokesperson did not respond to questionsbut the situation highlights a growing challenge: How can government entities use technological advancements for added security while staying on the right side of public information laws?How common is governmental use of encryption apps?The AP found accounts for state, local and federal officials in nearly every state, including many legislators and their staff, but also staff for governors, state attorneys general, education departments and school board members.The AP is not naming the officials because having an account is neither against the rules in most states, nor proof they use the apps for government business. While many of those accounts were registered to government cellphone numbers, some were registered to personal numbers. The APs list is likely incomplete because users can make accounts unsearchable.Improper use of the apps has been reported over the past decade in places likeMissouri,Oregon,Oklahoma,Marylandand elsewhere, almost always because of leaked messages.Whats the problem?Public officials and private citizens are consistently warned about hacking and data leaks, but technologies designed to increase privacy often decrease government transparency.Apps like Signal, WhatsApp, Confide, Telegram and others use encryption to scramble messages so only the intended end-user can read them, and they typically arent stored on government servers. Some automatically delete messages, and some prevent users from screenshotting or sharing messages.The fundamental problem is that people do have a right to use encrypted apps for their personal communications, and have those on their personal devices. Thats not against the law, said Matt Kelly, editor of Radical Compliance, a newsletter that focuses on corporate compliance and governance issues. But how would an organization be able to distinguish how an employee is using it?Are there acceptable government uses of end-to-end encryption apps?The U.S. Cybersecurity and Infrastructure Security Agency, or CISA, has recommended that highly valued targetssenior officials who handle sensitive informationuse encryption apps for confidential communications. Those communications are not typically releasable under public record laws.CISA leaders also say encrypted communications could be a useful security measure for the public, but did not encourage government officials to use the apps to skirt public information laws.Journalists, including many at the AP, often use encrypted messages when talking to sources or whistleblowers.What are states doing?While some cities and states are grappling with how to stay transparent, public record laws arent evolving as quickly as technology, said Smarsh general manager Lanika Mamac. The Portland, Oregon-based company helps governments and businesses archive digital communications.People are worried more about cybersecurity attacks. Theyre trying to make sure its secure, Mamac said. I think that they are really trying to figure out, How do I balance being secure and giving transparency?Mamac said Smarsh has seen an uptick in inquiries, mostly from local governments. But many others have done little to restrict the apps or clarify rules for their use.In 2020, the New Mexico Child, Youth and Families Departments new division director told employees to use the app Signal for internal communications and to delete messages after 24 hours. A 2021 investigation into the possible violation of New Mexicos document retention rules was followed by a court settlement with two whistleblowers and the division directors departure.But New Mexico still lacks regulations on using encrypted apps. The APs review found at least three department or agency directors had Signal accounts as of December 2024.In Michigan, State Police leaders were found in 2021 to be using Signal on state-issued cellphones. Michigan lawmakers responded by banning the use of encrypted messaging apps on state employees work-issued devices if they hinder public record requests.However, Michigan's law did not include penalties for violations, and monitoring the government-owned devices used by 48,000 executive branch employees is a monumental task.Whats the solution?The best remedy is stronger public record laws, said David Cuillier, director of the Brechner Freedom of Information Project at the University of Florida. Most state laws already make clear that the content of communicationnot the methodis what makes something a public record, but many of those laws lack teeth, he said.They should only be using apps if they are able to report the communications and archive them like any other public record, he said.Generally, Cuillier said, theres been a decrease in government transparency over the past few decades. To reverse that, governments could create independent enforcement agencies, add punishments for violations, and create a transparent culture that supports technology, he said.We used to be a beacon of light when it came to transparency. Now, were not. We have lost our way, Cuillier said.Boone reported from Boise, Idaho. Lauer reported from Philadelphia. Associated Press reporters at statehouses nationwide contributed to this report.
    0 Comments ·0 Shares ·74 Views
  • Cybersecurity Experts Are Sounding the Alarm on DOGE
    time.com
    Since January, Elon Musks Department of Government Efficiency (DOGE) has carved up federal programs, removing positions related to hazardous waste removal, veteran support and disease control, among others. While many have already been affected, cybersecurity experts worry about the impacts not yet realized in the form of hacks, fraud, and privacy breaches. DOGE has fired top cybersecurity officers from various agencies, gutted the Cybersecurity and Infrastructure Agency (CISA), and cancelled at least 32 cybersecurity-related contracts with the Consumer Financial Protection Bureau (CFPB). Cybersecurity experts, including those fired by DOGE, argue that the agency has demonstrated questionable practices toward safeguarding the vast amount of personal data the government holds, including in agencies such as the Social Security Administration and the Department of Veterans Affairs (VA). Last week, a court filing revealed that a DOGE staffer violated Treasury Department policy by sending an email containing unencrypted personal information.AdvertisementAdvertisementI see DOGE actively destroying cybersecurity barriers within government in a way that endangers the privacy of American citizens, says Jonathan Kamens, who oversaw cybersecurity for VA.com until February, when he was let go. That makes it easier for bad actors to gain access.DOGEs access to some agencies data has been limited in response to dozens of filed lawsuits. But as those battles play out in court, DOGE continues to have access to huge amounts of sensitive data. Heres what cybersecurity experts caution is at stake.Personal informationAs DOGE picked up steam following the inauguration, cybersecurity experts began voicing concern about the new organizations privacy practices and digital hygiene. Reports surfaced that DOGE members connected to government networks on unauthorized servers and shared information over unsecure channels. Last month, the DOGE.gov website was altered by outside coders who found they could publish updates to the website without authorization. The same month, Treasury officials said that a 25-year-old DOGE staffer was mistakenly given temporary access to make changes to a federal payment system.Cybersecurity experts find these lapses concerning because the government stores vast amounts of data to serve Americans. For instance, the Department of Veterans Affairs stores the bank accounts and credit card numbers of millions of veterans who receive benefits and services. The department also collects medical data, social security numbers, and the names of relatives and caregivers, says Kamens, who says he was the only federal employee at the agency with an engineering technical background working on cybersecurity. Kamens says he was hired in 2023 to improve several specific security issues for the site, which he declined to name due to confidentiality reasons. Now, he says, hackers could take advantage of those unresolved issues to learn potentially compromising information about veterans, and then target them with phishing campaigns.Peter Kasperowicz, VAs press secretary, wrote to TIME in an email that VA employs hundreds of cybersecurity personnel who are dedicated to keeping the departments websites and beneficiary data safe 24/7.Erie Meyer, former chief technologist at the Consumer Financial Protection Bureau (CFPB), resigned in February after DOGE members showed up at the agencys offices requesting data privileges. Her role focused on safeguarding the CFPB's sensitive data, including transaction records from credit reporting agencies, complaints filed by citizens, and information from Big Tech companies under investigation. There are a bunch of careful protections in place that layer on to each other to make sure that no one could exploit that information, Meyer says.But DOGE slashed many of those efforts, including the regular upkeep of audit and event logs which showed how and when employees were accessing that information. The software we had in place tracking what was being done was turned off, she says. This means that DOGE employees could now have access to financial data with no oversight as to how or why they are accessing it, Meyer says.Meyer is also concerned about the cancellation of dozens of cybersecurity contracts, which included deals with companies who performed security equipment disposal, provided VPNs to government employees, and encrypted email servers. People need us when the worst financial disasters are happening to their family, she says. Its sloppy to open them up to fraud like this.A representative for the CFPB did not immediately respond to a request for comment. In an email statement to TIME, White House press secretary Karoline Leavitt, wrote: President Trump promised the American people he would establish a Department of Government Efficiency, overseen by Elon Musk, to make the federal government more efficient and accountable to taxpayers. DOGE has fully integrated into the federal government to cut waste, fraud, and abuse. Rogue bureaucrats and activist judges attempting to undermine this effort are only subverting the will of the American people and their obstructionist efforts will fail.Fraud and bad actorsIn addition to being worried about what DOGE is doing with citizens data, cybersecurity experts are concerned that their aggressive tactics could make it easier for scammers to infiltrate systems, which could have disastrous consequences. For instance, DOGE currently has access to Social Security Administration data, which includes personal information about elderly Americans. Kamens notes that scammers often use personal information, such as an individuals bank or hospital, in order to convince them theyre a trusted person. And these tactics seem to work especially well on the elderly, who are less tech-savy: roughly $3.4 billion in fraud losses was reported by people ages 60 and up in 2023, I3C found.These vulnerabilities also extend to matters of national security. DOGE members themselves would immediately become targets for foreign state actors, Kamens says. And earlier this month, Rob Joyce, the former leader of the NSAs unit focusing on foreign computer systems, warned that DOGEs mass firing of probationary federal employees would have a devastating impact on cybersecurity and our national security.About 130 of those fired probationary officers were part of the Cybersecurity and Infrastructure Agency (CISA), which is tasked with detecting breaches of the nations power grid, pipelines and water system. CISA was already understaffed to begin with, says Michael Daniel, president and CEO of the Cyber Threat Alliance and a cybersecurity coordinator under President Obama. It's possible that a critical infrastructure owner and operator might not be able to get assistance from CISA as a result of the cuts.Senator Elizabeth Warren penned a letter arguing that DOGE posed a national security threat by exposing secrets about Americas defense and intelligence agencies. We dont know what safeguards were pulled down. Are the gates wide open now for hackers from China, from North Korea, from Iran, from Russia? she said in a statement. Heck, who knows what black hat hackers all around the world are finding out about each one of us and copying that information for their own criminal uses?Systemic risksCybersecurity experts are also worried about the risk of DOGE engineers inadvertently breaking parts of the governments digital systems, which can be archaic and deeply complex, or unintentionally introducing malware to essential code.In particular, financial experts have said that mistakes made within the Treasury Departments delicate systems could harm the U.S. economy. Kamens warns that if DOGE interferes with the Social Security system, Medicare reimbursements or disability payments could fail to go out on time, endangering lives. They have fired the people who know where the danger points are, he says.Last week, a federal judge questioned government attorneys about why DOGE needs access to Social Security Administration systems, and is still considering whether to shut off access. Another lawsuit, filed by 19 state attorneys general in an attempt to block DOGEs access to the Treasury Department in February is ongoing.Kamens adds that the security risks could only heighten over time, especially if roles like his remain unfilled. Nearly everyone he worked with at USDS (United States Digital Service), DOGEs precursor, came into government from the privacy sector, he says, and he worries that top-level cybersecurity officials will not want to join the federal staff due to the instability and the risks of being fired or undermined.This lack of staffing, he says, could prevent the government from mitigating new and evolving attacks. The reality is that there are constantly new security holes being discovered, he says. If you're not actively evolving your cyber defenses to go along with the offensive things that are happening in that landscape, you end up losing ground.Daniel says that just because nothing has broken yet does not mean that DOGE is doing an adequate job in stopping cybersecurity threats. Its not an instant feedback loop, he says. That's part of the challenge here: we're talking about an increase in risk that may play out over an extended period of time.
    0 Comments ·0 Shares ·101 Views
  • AI Is Turbocharging Organized Crime, E.U. Police Agency Warns
    time.com
    THE HAGUE, Netherlands The European Union's law enforcement agency cautioned Tuesday that artificial intelligence is turbocharging organized crime that is eroding the foundations of societies across the 27-nation bloc as it becomes intertwined with state-sponsored destabilization campaigns.The grim warning came at the launch of the latest edition of a report on organized crime published every four years by Europol that is compiled using data from police across the EU and will help shape law enforcement policy in the bloc in coming years.AdvertisementAdvertisementCybercrime is evolving into a digital arms race targeting governments, businesses and individuals. AI-driven attacks are becoming more precise and devastating, said Europol's Executive Director Catherine De Bolle.Some attacks show a combination of motives of profit and destabilization, as they are increasingly state-aligned and ideologically motivated, she added.Read more: The AI Arms Race Is Changing EverythingThe report, the EU Serious and Organized Crime Threat Assessment 2025, said offenses ranging from drug trafficking to people smuggling, money laundering, cyber attacks and online scams undermine society and the rule of law by generating illicit proceeds, spreading violence, and normalizing corruption.The volume of child sexual abuse material available online has increased significantly because of AI, which makes it more difficult to analyze imagery and identify offenders, the report said.By creating highly realistic synthetic media, criminals are able to deceive victims, impersonate individuals and discredit or blackmail targets. The addition of AI-powered voice cloning and live video deepfakes amplifies the threat, enabling new forms of fraud, extortion, and identity theft," it said.States seeking geopolitical advantage are also using criminals as contractors, the report said, citing cyber-attacks against critical infrastructure and public institutions originating from Russia and countries in its sphere of influence.Hybrid and traditional cybercrime actors will increasingly be intertwined, with state-sponsored actors masking themselves as cybercriminals to conceal their origin and real disruption motives, it said.Polish Interior Ministry Undersecretary of State Maciej Duszczyk cited a recent cyberattack on a hospital as the latest example in his country.Unfortunately this hospital has to stop its activity for the hours because it was lost to a serious cyber-attack," boosted by AI, he said.AI and other technologies are a catalyst for crime, and drive criminal operations efficiency by amplifying their speed, reach, and sophistication, the report said.As the European Commission prepares to launch a new internal security policy, De Bolle said that nations in Europe need to tackle the threats urgently.We must embed security into everything we do, said European Commissioner for Internal Affairs and Migration Magnus Brunner. He added that the EU aims to provide enough funds in coming years to double Europol's staff.
    0 Comments ·0 Shares ·112 Views
  • What Is a Smishing Scam and How to Stay Safe
    time.com
    Recently, several state and federal agencies, including the Federal Trade Commission (FTC) and the Internal Revenue Service, have warned against the rise of smishing, the SMS version of phishing. Phishing is a cyber-attack that aims to trick people into divulging personal information. It happens via email. Now, some experts say, cyber-criminals have also been able to access phone numbers.In January, the FTC flagged a smishing scam: a message that appears to be from a state road toll company that informs recipients about an outstanding balance. AdvertisementAdvertisementThe scammy text might show a dollar amount for how much you supposedly owe and include a link that takes you to a page to enter your bank or credit card info, the FTC warned. Not only is the scammer trying to steal your money, but if you click the link, they could get your personal info (like your drivers license number)and even steal your identity. Smishing can be particularly convincing, posing as a FedEx carrier, bank, or other known entity. Since the scam happens via text, people may be particularly vulnerable to them. Text messages are more intimate, and you check them more quickly than emails, so people start falling for those scams, says Murat Kantarcioglu, a professor of computer science at Virginia Tech. State transportation departmentsincluding West Virginia and New Hampshire, and E-Z Pass itself issued warnings regarding such messages.Heres how to protect yourself against smishing.Why does smishing happen?Smishing happens when cybercriminals are looking to access private information about a personwhether it be their bank account password or birthdayto hack things such as their phone or credit card account.If you receive a suspicious message, cybercriminals already have some type of information about you, usually obtained through a third-party marketing company. Whenever you give your phone number to a company or organization, those phone numbers are sometimes sold [to others], warns Kantarcioglu. The other big area [of concern] is that there was lots of hacking over the years, and most people, social security numbers, phone numbers, addresses, etc, have also [been] leaked and stolen.Smishing may also happen on some social media apps, including Signal and Whatsapp.What to do if you receive a smishing scamSteer clear of any messages that appear to be suspicious. The FTC advises people to not click any links, or respond to any messages sent to them by an unknown sender. The link that they sent may be vulnerable so that your phone may be hacked automatically. In some cases it may get you to a site where they may want to get more information from, warns Kantarcioglu.Instead of directly responding to a message that poses as a bank or toll company, users should login to their personal accounts on their own, or directly get in contact with such companies right away. When signing in, it's also important to ensure you have clicked on a secure site. I've seen some scammers [create] ads for fake variants of the website, like a fake toll company website, says Kantarcioglu. You have to find the correct website for the organization.Many phones allow users to directly delete and report the message as junk. The FTC says that people can also forward such messages to 7226 (SPAM). Kantarcioglu adds that people should make sure they block the numbers or accounts they get these types of messages from. Smishing can also be reported to the IC3 internet crime complaint center at www.ic3.gov. It may also be important to inform less tech-savvy loved ones about these types of scams. I think everyone should make it their mission to educate the older people in their family about these issues, says Kantarcioglu. I'm trying to educate them, never answer the text messages or phone calls, for that matter, from anyone that's that you don't know.
    0 Comments ·0 Shares ·91 Views
  • The Oppenheimer Moment That Looms Over Todays AI Leaders
    time.com
    This year, hundreds of billions of dollars will be spent to scale AI systems in pursuit of superhuman capabilities. CEOs of leading AI companies, such as OpenAIs Sam Altman and xAIs Elon Musk, expect that within the next four years, their systems will be smart enough to do most cognitive workthink any job that can be done with just a laptopas effectively as or better than humans.Such an advance, leaders agree, would fundamentally transform society. Google CEO Sundar Pichai has repeatedly described AI as the most profound technology humanity is working on. Demis Hassabis, who leads Googles AI research lab Google DeepMind, argues AIs social impact will be more like that of fire or electricity than the introduction of mobile phones or the Internet.AdvertisementAdvertisementIn February, in the wake of an international AI Summit in Paris, Anthropic CEO Dario Amodei restated his belief that by 2030 AI systems will be best thought of as akin to an entirely new state populated by highly intelligent people. In the same month, Musk, speaking on the Joe Rogan Experience podcast, said I think we're trending toward having something that's smarter than the smartest human in the next few years. He continued: There's a level beyond that which is smarter than all humans combined, which frankly is around 2029 or 2030.If these predictions are even partly correct, the world could soon radically change. But there is no consensus on how this transformation will or should be handled.With exceedingly advanced AI models released on a monthly basis, and the Trump administration seemingly uninterested in regulating the technology, the decisions of private-sector leaders matter more than ever. But they differ in their assessments of which risks are most salient, and whats at stake if things go wrong. Heres how:Existential risk or unmissable opportunity?I always thought AI was going to be way smarter than humans and an existential risk, and that's turning out to be true, Musk said in February, noting he thinks there is a 20% chance of human annihilation by AI. While estimates vary, the idea that advanced AI systems could destroy humanity traces back to the origin of many of the labs developing the technology today. In 2015, Altman called the development of superhuman machine intelligence probably the greatest threat to the continued existence of humanity. Alongside Hassabis and Amodei, he signed a statement in May 2023 declaring that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.It strikes me as odd that some leaders think that AI can be so brilliant that it will solve the worlds problems, using solutions we didn't think of, but not so brilliant that it cant escape whatever control constraints we think of, says Margaret Mitchell, Chief Ethics Scientist at Hugging Face. She notes that discourse sometimes conflates AI that supplements people with AI that supplants them. You cant have the benefits of both and the drawbacks of neither, she says.For Mitchell, risk increases as humans cede control to increasingly autonomous agents. Because we cant fully control or predict the behaviour of AI agents, we run a massive risk of AI agents that act without consent to, for example, drain bank accounts, impersonate us saying and doing horrific things, or bomb specific populations, she explains.Most people think of this as just another technology and, and not as a new species, which is the way you should think about it, says Professor Max Tegmark, co-founder and president of the Future of Life Institute. He explains that the default outcome when building machines at this level is losing control over them, which could lead to unpredictable and potentially catastrophic outcomes.But despite the apprehensions, other leaders avoid the language of superintelligence and existential risk, focusing instead on the positive upside. I think when history looks back it will see this as the beginning of a golden age of innovation, Pichai said at the Paris Summit in February. The biggest risk could be missing out.Similarly, asked in mid-2023 whether he thinks were on a path to creating superintelligence, Microsoft CEO Satya Nadella said he was much more focused on the benefits to all of us. I am haunted by the fact that the industrial revolution didn't touch the parts of the world where I grew up until much later. So I am looking for the thing that may be even bigger than the industrial revolution, and really doing what the industrial revolution did for the West, for everyone in the world. So I'm not at all worried about AGI [artificial general intelligence] showing up, or showing up fast, he said.A race between countries and companiesEven among those that do believe AI poses an existential risk, there is a widespread belief that any slowdown in Americas AI development will allow foreign adversariesparticularly Chinato pull ahead in the race to create transformative AI. Future AI systems could be capable of creating novel weapons of mass destruction, or covertly hacking a countrys nuclear arsenaleffectively flipping the global balance of power overnight.My feeling is that almost every decision I make is balanced on the edge of a knife, Amodei said earlier this month, explaining that building too fast risks humanity losing control, whereas if we dont build fast enough, then the authoritarian countries could win.These dynamics play out not just between countries, but between companies. As Helen Toner, a director at Georgetowns Center for Security and Emerging Technology explains, there's often a disconnect between the idealism in public statements and the hard-nosed business logic that drives their decisions. Toner points to competition over release dates as a clear example of this. There have been multiple instances of AI teams being forced to cut corners and skip steps in order to beat a competitor to launch day, she says.Read More: How China Is Advancing in AI Despite U.S. Chip RestrictionsFor Meta CEO Mark Zuckerberg, ensuring advanced AI systems are not controlled by a single entity is key to safety. I kind of liked the theory that its only God if only one company or government controls it, he said in January. The best way to make sure it doesnt get out of control is to make it so that its pretty equally distributed, he claimed, pointing to the importance of open-source models.Parameters for controlWhile almost every company developing advanced AI models has their own internal policies and procedures around safetyand most have made voluntary commitments to the U.S. government regarding issues of trust, safety, and allowing third parties to evaluate their modelsnone of this is backed by the force of law. Tegmark is optimistic that if the U.S. national security establishment accepts the seriousness of the threat, safety standards will follow. Safety standard number one, he says, will be requiring companies to demonstrate how they plan to keep their models under control.Some CEOs are feeling the weight of their power. There's a huge amount of responsibilityprobably too muchon the people leading this technology, Hassabis said in February. The Google DeepMind leader has previously advocated for the creation of new institutions, akin to the European Organization for Nuclear Research (CERN) or the International Energy Agency, to bring together governments to monitor AI developments. Society needs to think about what kind of governing bodies are needed, he said.This is easier said than done. While creating binding international agreements has always been challenging, its more unrealistic than ever, says Toner. On the domestic front, Tegmark points out that right now, there are more safety standards for sandwich shops than for AI companies in America.Nadella, discussing AGI and superintelligence on a podcast in February, emphasized his view that legal infrastructure will be the biggest rate limiter to the power of future systems, potentially preventing their deployment. Before it is a real problem, the real problem will be in the courts, he said.An 'Oppenheimer moment'Mitchell says that AIs corporate leaders bring different levels of their own human concerns and thoughts to these discussions. Tegmark fears, however, that some of these leaders are falling prey to wishful thinking by believing theyre going to be able to control superintelligence, and that many are now facing their own Oppenheimer moment." He points to a poignant scene in that film where scientists watch their creation being taken away by military authorities. That's the moment where the builders of the technology realize they're losing control over their creation, he says. Some of the CEOs are beginning to feel that right now.
    0 Comments ·0 Shares ·67 Views
  • AI Made Its Way to Vineyards. Heres How the Technology Is Helping Make Your Wine
    time.com
    LOS ANGELES When artificial intelligence-backed tractors became available to vineyards, Tom Gamble wanted to be an early adopter. He knew there would be a learning curve, but Gamble decided the technology was worth figuring out.The third-generation farmer bought one autonomous tractor. He plans on deploying its self-driving feature this spring and is currently using the tractor's AI sensor to map his Napa Valley vineyard. As it learns each row, the tractor will know where to go once it is used autonomously. The AI within the machine will then process the data it collects and help Gamble make better-informed decisions about his crops what he calls precision farming.AdvertisementAdvertisementIts not going to completely replace the human element of putting your boot into the vineyard, and thats one of my favorite things to do, he said. But its going to be able to allow you to work more smartly, more intelligently and in the end, make better decisions under less fatigue.Gamble said he anticipates using the tech as much as possible because of economic, air quality and regulatory imperatives. Autonomous tractors, he said, could help lower his fuel use and cut back on pollution.As AI continues to grow, experts say that the wine industry is proof that businesses can integrate the technology efficiently to supplement labor without displacing a workforce. New agricultural tech like AI can help farmers to cut back on waste, and to run more efficient and sustainable vineyards by monitoring water use and helping determine when and where to use products like fertilizers or pest control. AI-backed tractors and irrigation systems, farmer say, can minimize water use by analyzing soil or vines, while also helping farmers to manage acres of vineyards by providing more accurate data on the health of a crop or what a seasons yield will be.Other facets of the wine industry have also started adopting the tech, from using generative AI to create custom wine labels to turning to ChatGPT to develop, label and price an entire bottle.I dont see anybody losing their job, because I think that a tractor operators skills are going to increase and as a result, and maybe theyre overseeing a small fleet of these machines that are out there, and theyll be compensated as a result of their increased skill level, he said.Farmers, Gamble said, are always evolving. There were fears when the tractor replaced horses and mules pulling plows, but that technology proved itself just like AI farming tech will, he said, adding that adopting any new tech always takes time.Companies like John Deere have started using the AI that wine farmers are beginning to adopt. The agricultural giant uses Smart Apply technology on tractors, for example, helping growers apply material for crop retention by using sensors and algorithms to sense foliage on grape canopies, said Sean Sundberg, business integration manager at John Deere.The tractors that use that tech then only spray where there are grapes or leaves or whatnot so that it doesnt spray material unnecessarily, he said. Last year, the company announced a project with Sonoma County Winegrowers to use tech to help wine grape growers maximize their yield.Tyler Klick, partner at Redwood Empire Vineyard Management, said his company has started automating irrigation valves at the vineyards it helps manage. The valves send an alert in the event of a leak and will automatically shut off if they notice an excessive water flow rate.That valve is actually starting to learn typical water use, Klick said. Itll learn how much water is used before the production starts to fall off.Klick said each valve costs roughly $600, plus $150 per acre each year to subscribe to the service.Our job, viticulture, is to adjust our operations to the climatic conditions were dealt, Klick said. I can see AI helping us with finite conditions.Angelo A. Camillo, a professor of wine business at Sonoma State University, said that despite excitement over AI in the wine industry, some smaller vineyards are more skeptical about their ability to use the technology. Small, family-owned operations, which Camillo said account for about 80% of the wine business in America, are slowly disappearing many dont have the money to invest in AI, he said. A robotic arm that helps put together pallets of wine, for example, can cost as much as $150,000, he said.For small wineries, theres a question mark, which is the investment. Then theres the education. Whos going to work with all of these AI applications? Where is the training? he said.There are also potential challenges with scalability, Camillo added. Drones, for example, could be useful for smaller vineyards that could use AI to target specific crops that have a bug problem, he said it would be much harder to operate 100 drones in a 1,000 acre vineyard while also employing the IT workers who understand the tech.I dont think a person can manage 40 drones as a swarm of drones, he said. So theres a constraint for the operators to adopt certain things.However, AI is particularly good at tracking a crops health including how the plant itself is doing and whether its growing enough leaves while also monitoring grapes to aid in yield projections, said Mason Earles, an assistant professor who leads the Plant AI and Biophysics Lab at UC Davis.Diseases or viruses can sneak up and destroy entire vineyards, Earles said, calling it an elephant in the room across the wine industry. The process of replanting a vineyard and getting it to produce well takes at least five years, he said. AI can help growers determine which virus is affecting their plants, he said, and whether they should rip out some crops immediately to avoid losing their entire vineyard.Earles, who is also cofounder of the AI-powered farm management platform Scout, said his company uses AI to process thousands of images in hours and extract data quickly something that would be difficult by hand in large vineyards that span hundreds of acres. Scout's AI platform then counts and measures the number of grape clusters as early as when a plant is beginning to flower in order to forecast what a yield will be.The sooner vintners know how much yield to expect, the better they can dial in their wine making process, he added.Predicting what yields youre going to have at the end of the season, no one is that good at it right now, he said. But its really important because it determines how much labor contract youre going to need and the supplies youll need for making wine.Earles doesnt think the budding use of AI in vineyards is freaking farmers out. Rather, he anticipates that AI will be used more frequently to help with difficult field labor and to discern problems in vineyards that farmers need help with.Theyve seen people trying to sell them tech for decades. Its hard to farm; its unpredictable compared to most other jobs, he said. The walking and counting, I think people would have said a long time ago, I would happily let a machine take over.
    0 Comments ·0 Shares ·131 Views
  • Trumps Crypto Summit Shows That the Industry Is in Charge
    time.com
    This is a very important day in your lives, President Donald Trump told crypto executives at the White House on March 7. Trump was presiding over the first ever Crypto Summit, in which he and other cabinet officials gathered some of the biggest names in crypto to reemphasize the Presidents support for the industry and to hear out the executives ideas for regulation and legislation. Participants largely came away from the meeting empoweredand believing that a new crypto era has dawned in Washington.AdvertisementAdvertisementThe government representatives expressed that there has been a negative regime towards the crypto industry, and that regime is now coming to an end, says Sergey Nazarov, co-founder of Chainlink, who attended the summit. Theres a significant shift and huge amounts of support.Very open and receptiveFor the last few years, the crypto industry chafed at the enforcement actions brought against them by President Joe Bidens Administration. Bidens Securities and Exchange Commission (SEC), led by Gary Gensler, sought to crack down on crypto companies he deemed were violating securities laws, and protect investors from the massive scams and frauds that are pervasive in the crypto world, like Terra-Luna and FTX. This resulted in lawsuits against companies big and small, including Coinbase.After Trump was elected, he appointed several cabinet members with close ties to the industry, such as AI & crypto czar David Sacks, Commerce Secretary Howard Lutnick, and Treasury Secretary Scott Bessent. Many enforcement actions, including the case against Coinbase, have since been dropped. And the most pro-crypto commissioners of the SEC, most prominently Hester Peirce, were elevated: She now leads the SECs Crypto Task Force.All of those officials were present at the Summit, as well as Tom Emmer, the House Majority Whip. I did not expect people that were so senior to be at the summit, Nazarov says. Everyone that came from the industry side was able to speak and provide their views. And all the senior government people, I think, were very open and receptive.Trump himself led both a public press conference of the summit as well as a private conference with the executives. In his public remarks, he mocked Biden for his anti-crypto stance, asked Congress to pass bills on stablecoins and a digital asset framework before the August recess, and, for some reason, allowed FIFA president Gianni Infantino to show off the soccer World Cup Trophy and pitch the idea of creating a FIFA meme coin. That coin may be worth more than FIFA in the end, Trump said in response. (Trumps own meme coin TRUMP initially raked in millions of dollars in trading fees alone, although it has since fallen all the way from its $75 peak to $12.)Industry participants at the summit included Coinbases Brian Armstrong, MicroStrategys Michael Saylor, the Winklevoss twins, and Zach Witkoff, co-founder of Trumps own crypto company, World Liberty Financial. Combined, the participants have given more than $11 million to Trumps inaugural committee, according to the Intercept, and critics have raised many questions around conflict of interest. When crypto companies spent over a hundred million dollars in the 2024 elections, they created a new playbook for the purchase of large-scale political power in America, Robert Weissman, co-president of Public Citizen, wrote in an email statement to TIME.The people that should be in front of him are in front of him, but there are also people who shouldn't be in front of him who are in front of him, says Avik Roy, co-founder and chairman of the think tank Foundation for Research on Equal Opportunity. One of the challenges in public policy always is: How does someone in the President's position distinguish between the people who are merely lobbying and the people who are public-spirited?After the summit, Trumps Office of the Comptroller of the Currency (OCC) issued guidance allowing banks to hold cryptocurrency, and asking them to do their own diligence around risk. This served as yet another signal that Trumps Administration will not regulate the industry very closely. This industry was kind of unfairly suppressed from reaching its potential in the U.S. system, Nazarov says. They want to go completely the other way.Trumps crypto reserveThe summit came a day after Trump issued an Executive Order announcing the creation of a federal Bitcoin reserve. When Trump floated the idea earlier in the week, many people expressed concerns: that Trump would levy taxes in order to buy crypto, and that he was creating risks by including much smaller and volatile coins like Cardano and XRP in the proposal.But the Executive Order pulled back those plans quite a bit. It announced that the U.S. would not buy any new Bitcoin, but simply hold onto the cryptocurrencies that they had seized in seizures. Andrew ONeill, the digital assets managing director of S&P Global Ratings, called the order mainly symbolic in a statement to TIME.Industry insiders cheered the decision to mainly focus on a separate Bitcoin reserve, effectively demoting the importance of the other crypto projectswhose founders have been lobbying Trump for support. It would have been a pretty clearly a cronyist outcome where well-connected people were able to get the government to buy their tokens without really any obvious strategic rationale for doing so, Roy says. Bitcoin is a special case; it has no CEO.The Executive Order also calls for a full audit of the U.S. crypto holdings, which is estimated to include around 200,000 Bitcoin (worth about $17 billion). Yesha Yadav, a law professor at Vanderbilt who specializes in crypto and securities regulation, says that the audit will be important to determine how much of that Bitcoin is usable, and how much might need to be returned to fraud victims. A good portion of that Bitcoin stash likely comes from the Bitfinex hack, which the U.S. government seized in 2022. Whether or not theyre motivated to trace every single victim in that case, whether victims have come forward, and whose claims have not been dealt withthat is something that's going to have to be looked at, Yadav says. Crypto prices have been turbulent over the last month, in part due to uncertainty around Trumps tariffs. But crypto industry insiders believe that ultimately, Trumps laissez-faire approach will help them grow. FTX is in the past now, says Nazarov. The big failures are in the past.Andrew R. Chows book about crypto and Sam Bankman-Fried, Cryptomania, was published in August.
    0 Comments ·0 Shares ·149 Views
  • SpaceX Test Flight Explodes, Again
    time.com
    Nearly two months after an explosion sent flaming debris raining down on the Turks and Caicos, SpaceX launched another mammoth Starship rocket on Thursday, but lost contact minutes into the test flight as the spacecraft came tumbling down and broke apart.This time, wreckage from the latest explosion was seen streaming from the skies over Florida. It was not immediately known whether the spacecrafts self-destruct system had kicked in to blow it up.AdvertisementAdvertisementThe 403-ft. (123-m.) rocket blasted off from Texas. SpaceX caught the first-stage booster back at the pad with giant mechanical arms, but engines on the spacecraft on top started shutting down as it streaked eastward for what was supposed to be a controlled entry over the Indian Ocean, half a world away. Contact was lost as the spacecraft went into an out-of-control spin.Starship reached nearly 90 mi. (150 km.) in altitude before trouble struck and before four mock satellites could be deployed. It was not immediately clear where it came down, but images of flaming debris were captured from Florida, including near Cape Canaveral, and posted online.The space-skimming flight was supposed to last an hour.Unfortunately this happened last time too, so we have some practice at this now, SpaceX flight commentator Dan Huot said from the launch site.SpaceX later confirmed that the spacecraft experienced a rapid unscheduled disassembly" during the ascent engine firing. Our team immediately began coordination with safety officials to implement pre-planned contingency responses, the company said in a statement posted online.Starship didnt make it quite as high or as far as last time.NASA has booked Starship to land its astronauts on the moon later this decade. SpaceXs Elon Musk is aiming for Mars with Starship, the worlds biggest and most powerful rocket.Like last time, Starship had mock satellites to release once the craft reached space on this eighth test flight as a practice for future missions. They resembled SpaceXs Starlink internet satellites, thousands of which currently orbit Earth, and were meant to fall back down following their brief taste of space.Starships flaps, computers and fuel system were redesigned in preparation for the next big step: returning the spacecraft to the launch site just like the booster.During the last demo, SpaceX captured the booster at the launch pad, but the spacecraft blew up several minutes later over the Atlantic. No injuries or major damage were reported.According to an investigation that remains ongoing, leaking fuel triggered a series of fires that shut down the spacecrafts engines. The on-board self-destruct system kicked in as planned.SpaceX said it made several improvements to the spacecraft following the accident, and the Federal Aviation Administration recently cleared Starship once more for launch.Starships soar out of the southernmost tip of Texas near the Mexican border. SpaceX is building another Starship complex at Cape Canaveral, home to the companys smaller Falcon rockets that ferry astronauts and satellites to orbit.
    0 Comments ·0 Shares ·133 Views
  • Alibabas New Model Adds Fuel to Chinas AI Race
    time.com
    On March 5, Chinese tech giant Alibaba released its latest AI reasoning model, QwQ-32B, resulting in an 8% spike in the companys Hong Kong-listed shares. While less capable than Americas leading AI systems, such as OpenAIs o3 or Anthropics Claude 3.7 Sonnet, the model reportedly performs about as well as its Chinese competitor DeepSeeks model, R1, while requiring considerably less computing power to develop and to run. Its creators say QwQ-32B embodies an ancient philosophical spirit by approaching problems with genuine wonder and doubt.AdvertisementAdvertisementIt reflects the broader competitiveness of Chinas frontier AI ecosystem, says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. That ecosystem includes DeepSeeks R1 and Tencents Hunyuan model, which Anthropic co-founder Jack Clark has said is by some measures world-class. That said, assessments of Alibabas latest model are preliminary, both due to the inherent challenge of measuring model capabilities, and because so far, the model has only been assessed by Alibaba itself. The information environment is not very rich right now, says Singer.Another step on the path to AGISince the release of DeepSeeks R1 model in January sent waves through the global stock market, Chinas tech ecosystem has been in the spotlightparticularly as the U.S. increasingly sees itself as racing against China to create artificial general intelligence (AGI)highly advanced AI systems capable of performing most cognitive work, from graphic design to machine-learning research. AGI is widely expected to confer a decisive military and strategic advantage to whichever company or government creates it first, as such a system may be capable of engaging in advanced cyberwarfare or creating novel weapons of mass destruction (though experts are highly skeptical humans will be able to retain control over such a system, regardless of who creates it).We are confident that combining stronger foundation models with reinforcement learning powered by scaled computational resources will propel us closer to achieving AGI, wrote the team behind Alibabas latest model. The quest to create AGI permeates most leading AI labs. DeepSeeks stated goal is to unravel the mystery of AGI with curiosity. OpenAIs mission, meanwhile, is to ensure that artificial general intelligenceAI systems that are generally smarter than humansbenefits all of humanity. Leading AI CEOs including Sam Altman, Dario Amodei, and Elon Musk all expect AGI-like systems to be built within President Trumps current term.China's turnAlibabas latest AI release comes just two weeks after the companys co-founder, Jack Ma, was pictured in the front row at a meeting between President Xi Jinping and the countrys preeminent business leaders. Since 2020, when Ma publicly criticized state regulators and state-owned banks for stifling innovation and operating with a pawn shop mentality, the Chinese billionaire has largely been absent from the public spotlight. In that time, the Chinese government cracked down on the tech industry, imposing stricter rules on how companies could use data and compete in the market, while also taking more control over key digital platforms.Singer says that by 2022, it became clear that the bigger threat to the country was not the tech industry, but economic stagnation. That economic stagnation story, and attempting to reverse it, has really shaped so much of policy over the last 18 months, says Singer. China is moving quickly to adopt cutting-edge technology, with at least 13 city governments and 10 state-owned energy companies reportedly having already deployed DeepSeek models into their systems. Technical innovationAlibabas model represents a continuation of existing trends: in recent years, AI systems have consistently increased in performance while becoming cheaper to run. Non-profit research organization Epoch AI estimates that the amount of computing power used to train AI systems has been increasing by more than 4x each year, while, thanks to regular improvements in algorithm design, that computing power is being used three times more efficiently each year. Put differently, a system that required, for example, 10,000 advanced computer chips to train last year could be trained with only a third as many this year.Despite efficiency improvements, Singer cautions that high-end computing chips remain crucial for advanced AI developmenta reality that makes U.S. export controls on these chips a continuing challenge for Chinese AI companies like Alibaba and DeepSeek, whose CEO has cited access to chips, rather than money or talent, as their biggest bottleneck.QwQ (pronounced like quill) is the latest to join a new generation of systems billed as reasoning models, which some consider to represent a new paradigm in AI. Previously, AI systems got better by scaling both the amount of computing power used to train them and the amount and quality of data on which they were trained. In this new paradigm, the emphasis is on taking a model that has already been trainedin this case, Qwen 2.5-32Band scaling the amount of computing the system uses in responding to a given query. As the Qwen team writes, when given time to ponder, to question, and to reflect, the models understanding of mathematics and programming blossoms like a flower opening to the sun. This is consistent with trends observed with Western models, where techniques that allow them to think longer have yielded significant improvements in performance on complex analytic problems.Alibabas QwQ has been released open weight, meaning the weights that constitute the modelaccessible in the form of a computer filecan be downloaded and run locally, including on a high-end laptop. Interestingly, a preview of the model, released last November, attracted considerably less attention. Singer notes that the stock market is generally reactive to model releases and not to the trajectory of the technology, which is expected to continue to improve rapidly on both sides of the Pacific. The Chinese ecosystem has a bunch of players in it, all of whom are putting out models that are very powerful and compelling, and its not clear who will emerge, when its all said and done, as having the best model, he says.
    0 Comments ·0 Shares ·125 Views
  • Reddit Co-Founder Alexis Ohanian Joins Bid to Buy TikTok
    time.com
    Reddit co-founder Alexis Ohanian has joined billionaire Frank McCourts bid to acquire TikTok as a strategic adviser.McCourts internet advocacy organization, Project Liberty, announced this week that Ohanian, an investor married to tennis star Serena Williams, had joined a consortium called The Peoples Bid for TikTok.Im officially now one of the people trying to buy TikTok USand bring it on-chain, Ohanian said in a series of posts made Tuesday on X, referencing a decentralized, blockchain-based platform that Project Liberty says it will leverage to provide users more control over their online data.AdvertisementAdvertisementIf successful in its bid, Project Liberty said the technology will serve as the backbone of the redesigned TikTok, ensuring that privacy, security, and digital independence are no longer optional but foundational. When asked by an X user on Monday what he would call TikTok if he purchased it, Ohanian said: TikTok: Freedom Edition.Under a federal bill passed with bipartisan support and signed into law by former President Joe Biden last year, TikTok was required to cut ties with its China-based parent company, ByteDance, or face a ban by Jan. 19.In one of his first executive orders signed in January, President Donald Trump extended the deadline for TikTok to find new ownership until early April.McCourts consortiumwhich includes Shark Tank star Kevin OLearyhas already offered ByteDance $20 billion in cash for the U.S. platform. Some analysts estimate TikTok could be worth much more than that even without its coveted algorithm, which McCourt has said hes not interested in.Trump said in January that Microsoft is among the U.S. companies looking to take control of TikTok. Others eyeing TikTok include the artificial intelligence startup Perplexity AI, which has proposed to merge its business with TikToks U.S. platform and give the U.S. government a stake in the new entity. Theres also Jesse Tinsley, the founder of the payroll firm Employer.com. Tinsley has said a consortium he put togetherwhich includes the CEO of video game platform Robloxis offering ByteDance more than $30 billion for TikTok.
    0 Comments ·0 Shares ·116 Views
  • Why Trumps Crypto Reserve Plan Has Experts Worried
    time.com
    After taking a massive tumble in February, crypto markets rallied on Sunday following Donald Trumps announcement that he would create a national crypto strategic reserve. Trump wrote on Truth Social that a working group would move forward on facilitating the strategic federal purchases of Bitcoin, Ethereum, and three other smaller cryptocurrencies.The announcement comes ahead of the White Houses first cryptocurrency summit on March 7, and builds on Trumps fixation on crypto over the past year, which has included a crypto venture, a crypto czar position in the White House, and a TRUMP meme coin which has fallen in price from $70 to $13 within a month and a half.AdvertisementAdvertisementIt is unclear whether taxpayers will fund the reserve, how big it will be, or whether it will be used to pay off the U.S. federal debt, as some have suggested.Trumps strategic reserve announcement drew criticism from economists and even some of cryptos biggest boosters. The U.S. has a long history of creating strategic reserves in key assets such as oil, in order to ensure access and stabilize prices in times of crisis. But this crypto reserve seems fundamentally different in nature, because it hinges not upon the assets importance to the nation, but the idea that its prices will increase going forward, says Stephen Cecchetti, an economist and professor at Brandeis International Business School who has written skeptically about crypto for several years.Cecchetti called the idea of a strategic crypto reserve absurd.It's foolish to purchase risky assets with leverage in the hope of making it easier to repay your debt, he says.What is a strategic reserve?In the past, the U.S. has stockpiled scarce assets to protect against supply disruptions. For instance, Congress created a Strategic Petroleum Reserve in 1975 after the Arab oil embargo crisis caused gas shortages across the country and decimated the American economy. Creating a store of petroleum, its backers argued, would stabilize prices.The U.S. has also created strategic reserves of other goods, like medical equipment and helium. A strategic reserve is for something that is essential, either for national defense or national economic security, Cecchetti says. What exactly is essential about Bitcoin in our lives that makes it so the U.S. would want a reserve?Why do crypto enthusiasts want a reserve?Most crypto enthusiasts dont want to stabilize prices; instead, they hope that a reserve would send prices shooting upward. Federal purchases of crypto would send a signal that crypto is here to stay, encouraging other respected financial institutions to buy in. Other governments, too, might follow the lead and create their own reserves, further upping prices.Some Bitcoiners also believe that a reserve could serve as a hedge against inflation. They point to the fact that the dollar has gotten less valuable over time, and argue that Bitcoins value could be stronger than the dollars during global economic crises. However, crypto has proved highly volatile during recent geopolitical conflicts, such as Russias invasion of Ukraine. And the U.S. government buying Bitcoin could actually threaten the dollars global value, some experts say. Austin Campbell, a crypto entrepreneur and a professor at NYU Stern, wrote on X that We should be doing everything we can to keep our fiscal house in order in dollar terms, which means cutting the deficit and future expenditures to a sustainable path, not trying to YOLO into an asset that benefits from dollar decline.How will Trump pay for a crypto reserve?On Sunday, many people online expressed worry that a Bitcoin reserve could be funded with taxpayer dollars, effectively transferring money from everyday Americans to crypto millionaires and billionaires. Trumps crypto czar David Sacks, however, batted down this idea on X on Monday, writing: Nobody announced a tax or a spending program. Maybe you should wait to find out whats actually being proposed. (He also refuted accusations that he personally stands to benefit from the proposal.)U.S. law enforcement already holds about 200,000 Bitcoin, worth around $17 billion, which has mostly been obtained through criminal seizures. These assets, managed by the U.S. Marshals Service, are typically auctioned to support law enforcement operations and compensate victims of crypto-related crimes. Its unclear if Trump aims to co-opt those Bitcoins for the reserve, or how the government could go about transferring those funds from Justice to Treasury.In response to email questions, a representative for the White House referred to Sacks X post which stated that more information will be provided at the summit on Friday.Could a crypto reserve help pay down the U.S. governments debt?Crypto enthusiasts believe the answer to that question is yes. Senator Cynthia Lummis, who proposed a Bitcoin reserve bill last year, has contended that because Bitcoin will continue increasing in value, simply investing in it would raise far more money than levying taxes. Those gains, she argued, could then be sold, allowing the U.S. to cut its debt in half in 20 years. (The bill has no co-sponsors.) But experts say that relying on a volatile asset like Bitcoin for debt reduction is risky. Just because an asset has gone up in the past doesnt mean it will go up in the future, says Chester Spatt, professor of finance at Carnegie Mellon Universitys Tepper School of Business. If we think our markets are pretty efficient, then we would expect the markets to be forward-looking, and so that would suggest there wouldn't necessarily be a lot of predictive power from the past.Cecchetti likened the ploy to a homeowner running up their credit card to gamble in the hopes of paying off their mortgage faster. He also argues that using U.S. debt to buy a large amount of crypto would increase the likelihood that credit rating agencies would downgrade the U.S., increasing the cost of borrowing.Meanwhile, some crypto enthusiasts are also concerned about what would happen to Bitcoin if the plan actually works. If Bitcoin increases massively in price and then the U.S. decides to sell off the reserve to pay down its debt, that transaction itself could trigger a significant decrease in Bitcoins price.Why are some crypto fans unhappy with Trumps announcement?While the larger crypto market responded positively to Trumps announcement, influential crypto voices criticized it for a slew of reasons. Some pointed to the irony of the federal government having so much power over a currency that is supposed to be decentralized. Its wrong to steal my money for grift on the left; its also wrong to tax me for crypto bro schemes, wrote the entrepreneur Joe Lonsdale on X.Others worried that the reserve would become a vector for scams and insider trading. One anonymous trader drew scrutiny for making a $200 million bet on Bitcoin hours before the announcement, and immediately cashing out for a $6.8 million profit.Trumps announcement also surprised many people for its inclusion of not just Bitcoin, but also smaller, more volatile currencies like ADA and XRP. So the U.S. will use taxpayer money on XRP, SOL and ADA? Why would one build a strategic reserve of something you can just print? Bad look, wrote Gabor Gurbacs on X.Given the lack of clarity around many of the ideas central details, its difficult to say whether a crypto reserve will actually come to fruition. However, Trump is far from the only one pushing this idea: several states are considering their own versions, including Oklahoma, which passed a strategic Bitcoin reserve act out of committee last week. Utah and Arizona have also advanced similar proposals.Dennis Porter, the CEO and co-founder of the crypto advocacy group Satoshi Action Fund, wrote on Twitter that the group had helped over 20 states introduce strategic reserve legislation. Yes, many will fail, but all we need is one then the door is wide open, he wrote. Andrew R. Chows book about crypto and Sam Bankman-Fried, Cryptomania, was published in August.
    0 Comments ·0 Shares ·108 Views
  • Scientists Have Bred Woolly Mice on Their Journey to Bring Back the Mammoth
    time.com
    Extinction is typically for good. Once a species winks out, it survives only in memory and the fossil record. When it comes to the woolly mammoth, however, that rule has now been bent. Its been 4,000 years since the eight-ton, 12-foot, elephant-like beast walked the Earth, but part of its DNA now operates inside several litters of four-inch, half-ounce mice created by scientists at the Dallas-based Colossal Laboratories and Biosciences. The mice dont have their characteristic short, gray-brown coat, but rather the long, wavy, woolly hair of the mammoth and the extinct beasts accelerated fat metabolism, which helped it survive Earths last ice age. Both traits are the result of sophisticated gene editing that Colossals scientists hope will result in the reappearance of the mammoth itself as early as 2028.AdvertisementAdvertisementThe Colossal woolly mouse marks a watershed moment in our de-extinction mission, said company CEO Ben Lamm in a statement. "By engineering multiple cold-tolerant traits from mammoth evolutionary pathways into a living model species, we've proven our ability to recreate complex genetic combinations that took nature millions of years to create."Woolly mice at Colossal Biosciences lab. Courtesy of Colossal BiosciencesColossal has been working on restoring the mammoth ever since the companys founding in 2021. The animals relatively recent extinctionjust a few thousand years ago as opposed to the tens of millions that mark the end of the reign of the dinosaursand the fact that it roamed the far north, including the Arctic, means that its DNA has been preserved in multiple remains embedded in permafrost. For its de-extinction project, Colossal collected the genomes of nearly 60 of those recovered mammoths. Recreating the species from that raw biological material is relatively straightforward in principle, if exceedingly painstaking in practice. The work involves pinpointing the genes responsible for the traits that separate the mammoth from the Asian elephantits close evolutionary relationediting an elephant stem cell to express those traits, and introducing the stem cell into an elephant embryo. In the alternative, scientists could edit a newly conceived Asian elephant zygote directly. Either way, the next step would be to implant the resulting embryo into the womb of a modern-day female elephant. After 22 monthsthe typical elephant gestation periodan ice age mammoth should, at least theoretically, be born into the computer-age world.But speedbumps abound. The business of rewriting the genome takes extensive experimentation with hundreds of embryos to ensure that the key genes are properly edited. The only way to test if they indeed are is to follow the embryos through gestation and see if a viable mammoth pops out; the nearly two years it would take for even a single experimental animal to be born, however, would make that process impractical. Whats more, Asian elephants are highly social, highly intelligent, and endangered, raising intractable ethical obstacles to experimenting on them. Enter the mouse, an animal whose genome lends itself to easy manipulation with CRISPRa gene-editing tool developed in 2012, based on a natural process bacteria use to defend themselves in the wild. Whats more, mice need only 20 days to gestate, making for a quick turnaround from embryo to mouse pup.In the current experiment, researchers identified seven genes that code for the mammoths shaggy coatidentifying distinct ones that make it coarse, curly, and long. They also pinpointed one gene that guides the production of melaninwhich gives the coat its distinctive gold colorand another that governs the animals prodigious lipid metabolism. Relying on CRISPR, they then took both the stem cell and zygote approach to rewriting the mouses stem cell to express those traits. The next steps involved more than a little hit and miss.Over the course of five rounds of experiments, the Colossal scientists produced nearly 250 embryos. Fewer than half of them developed to larger, more-viable 200- to 300-cell embryos, which were then implanted in the womb of around a dozen surrogate females. Of these, 38 mouse pups were born. All of them successfully expressed the gold, woolly hair of the mammoth as well as its accelerated lipid metabolism. The Colossal scientists see the creature theyve produced as a critical development."The woolly mouse project doesn't bring us any closer to a mammoth, but it does validate the work we are doing on the path to a mammoth, Lamm tells TIME. [It] proves our end-to-end pipeline for de-extinction. We started this project in September and we had our first mice in October so that shows this worksand it works efficiently.Theres plenty still to accomplish. A mammoth is much more than its fur and its fat, and before one can lumber into the twenty-first century, the scientists will have to engineer dozens of other genes, including those that regulate its vasculature, its cold-resistant metabolism, and the precise distribution of its fat layers around its body. They would then have to test that work in more mouse models, and only if they succeed there try the same technique on an elephant."The list of genes will continue to evolve, says Lamm. We initially had about 65 gene targets and expanded up to 85. That number could go up or down with further analysis, but that's the general ballpark for the number of genes we think we will edit for our initial mammoths.Woolly mice at Colossal Biosciences lab. Courtesy of Colossal BiosciencesColossal scientists see all of this work as just a first step in developing a more widely applicable de-extinction technology. In addition to the mammoth, they would also like to bring back the dodo and the thylacineor Tasmanian tiger.Our three flagship species for de-extinctionmammoth, thylacine, and dodocapture much of the diversity of the animal tree of life, says Beth Shapiro, Colossals chief science officer. Success with each requires solving a different suite of technical, ethical, and ecological challenges.The work cant start soon enough. The company points to studies suggesting that by 2050 up to 50% of the Earths species could have been wiped out, most of them lost to the planets rapidly changing climate. The Center for Biological Diversity puts the figure at a somewhat less alarming 35%, but in either case, the widespread dying could lead to land degradation, loss of diversity, the rise of invasive species, and food insecurity for humanity. Arresting climate change and the loss of species that will result is a critical step away from that brink, but one that policymakers and the public are embracing only slowly. Restoring the species that will vanishor fortifying the genetic heartiness of those that are endangered to help them adapt to a changing worldis one more insurance policy against environmental decline.We do not argue that gene editing should be used instead of traditional approaches to conservation, but that this is a both and situation, says Shapiro. We should be increasing the tools at our disposal to help species survive.
    0 Comments ·0 Shares ·106 Views
  • How IBM CEO Arvind Krishna Is Thinking About AI and Quantum Computing
    time.com
    Arvind Krishna, chief executive officer of IBM, at the World Governments Summit in Dubai, United Arab Emirates, on Feb. 11.Christopher PikeBloomberg/Getty ImagesBy Billy PerrigoMarch 2, 2025 7:00 AM EST(To receive weekly emails of conversations with the worlds top CEOs and decisionmakers, click here.)IBM was one of the giants of 20th-century computing. It helped design the modern PC, and created the first AI to defeat a human champion in the game of chess.But when you think of AI, IBM might not be the first, or even the tenth, company to spring to mind. It doesnt train big models, and doesnt make consumer-facing products any more, focusing instead on selling to other businesses. We are a B2B company, and explaining what we do to the average readerwe'll take all the help we can get, IBM CEO Arvind Krishna joked ahead of a recent interview with TIME.Still, theres an interesting AI story lurking inside this storied institution. IBM does indeed build AI modelsnot massive ones like OpenAIs GPT4-o or Googles Gemini, but smaller ones designed for use in high-stakes settings, where accuracy comes at a premium. As the AI business matures, this gets at a critical unanswered question on the minds of Wall Street and Silicon Valley investors: will the economic gains from AI mostly accrue to the companies that train massive foundation models like OpenAI? Or will they flow instead to the companieslike IBMthat can build the leanest, cheapest, most accurate models that are tailored for specific use-cases? The future of the industry could depend on it.TIME spoke with Krishna in early February, ahead of a ceremony during which he was awarded a TIME100 AI Impact Award.This interview has been condensed and edited for clarity.IBM built Deep Blue, the first chess AI to beat a human champion, in the 1990s. Then, in 2011, IBMs Watson was the first to win the game show Jeopardy. But today, IBM isnt training large AI systems in the same way as OpenAI or Google. Can you explain why the decision was made to take a backseat from the AI race?When you look at chess and Jeopardy, the reason for taking on those challenges was the right one. You pick a thing that people believe computers cannot do, and then if you can do it, you're conveying the power of the technology.Here was the place where we went off: We started building systems that I'll call monolithic. We started saying, let's go attack a problem like cancer. That turned out to be the wrong approach. Absolutely it is worth solving, so I don't fault what our teams did at that point. However, are we known for being medical practitioners? No. Do we understand how hospitals and protocols work? No. Do we understand how the regulator works in that area? No.With hindsight, I wish we had thought about that just for a couple of minutes at the beginning.So then we said, OK, you can produce larger and larger models, and they'll take more and more compute. So option one, take a billion dollars of compute and you produce a model. Now to get a return on it, youve got to charge people a certain amount. But can we distill it down to a much smaller model that may not take as much compute, and is much, much cheaper to run, but is a fit-for-purpose model for a task in a business context? That is what led to the business lens.But one of the central takeaways of the last 10 years in deep learning seems to be that you can get more out of AI systems by just trying to make them generalist than you can by trying to make them specialized in a single area. Right? That's what's referred to as the bitter lesson.I might politely disagree with that. If you're willing to have an answer that's only 90% accurate, maybe. But if I'd like to control a blast furnace, it needs to be correct 100% of the time. That model better have some idea of time-series analysis baked into it.It's not a generalist machine that decided to somehow intuit Moby Dick to come up with its answer. So with respect, no. If you are actually trying to get to places where you need much higher accuracy, you actually may do much better with a smaller model.I actually believe there will be a few very large models. Theyll cost a couple of billion dollars to train, or maybe even more. And there's going to be thousands of smaller models that are fit-for-purpose. Theyll leverage the big ones for teaching, but not really for their inherent knowledge.Are the major economic benefits from AI going to accrue to the biggest companies that train the foundation models? Or to the smaller companies who apply those models to specific use cases? I think it's an exact "and." I think the analogy of AI is probably closest to the early days of the internet. So on the internet, ask yourself the question, is it useful only for very large companies or for very small companies?Take two opposite examples. If I'm going to build a video streaming business, the more content you have, the more people you can serve. You get a network effect, you get an economy of scale. On the other hand, you have a shopfront like Etsy. Suddenly the person who's an artisan who makes two items a year can still have a presence because the cost of distribution is extremely low.How has your answer to that question influenced the direction of your business?We thought deeply about it. Back in 2020, we said: should we put all our investments into trying to build one very large model? If it's a very large model, the cost of running these models is, let's call it, the square of the size of the model.So if I have a 10 billion parameter model and I have a 1 trillion parameter model, it's going to be 10,000 times more expensive to run the very big model. Then you turn around and ask the question, if it's only 1% better, do I really want to pay 10,000 times more? And that answer in the business world is almost always no.But if it can be 10 times smaller, hey, that's well worth it, because that drops more than 90% of the cost of running it. That is what drove our decision.Lets talk about quantum computing. IBM is a big investor in quantum. Whats your bigger picture strategy there?So we picked quantum as an area for investment more than 10 years ago. We came to the conclusion that it's an engineering problem more than it's a science problem. The moment it's an engineering problem, now you have to ask yourself the question, can you solve the two fundamental issues that are there?One, the error rates are really high, but so are normal computers. What people don't recognize is: there are techniques that make it appear error free. There are errors deep down at the very fundamental level even on the machines we are on, but they correct themselves, and so we don't see them.Two, because quantum by its nature is operating at a quantum level, very tiny amounts of energy can cause whats called coherence loss. So they don't work for very long. We believed if we could get close to a millisecond, you can do some really, really careful computations.And so we went down a path and we think we have made a lot of progress on the error correction. We're probably at a tenth of a millisecond, not quite at a millisecond yet, on the coherence times. We feel over the next three, four, five yearsI give myself till the end of the decadewe will see something remarkable happen on that front and I'm really happy where our team is.If you can make the huge breakthrough that you say you hope to make by the end of the decade, where does that put IBM as a business? Does that leave you in a dominant position over the next wave of technology? There is hardware, and then there is all the people who will exploit it. So let me first begin with this: The people who will exploit it will be all our clients. They will get the value, whether it's material discovery or better batteries or better fertilizers or better drugs, that value will be accrued by our clients.But who can give them a working quantum computer? I think that assuming the timeline and the breakthroughs I'm talking about happen, I think that gives us a tremendous position and the first-mover advantage in that market, to a point where I think that we would become the de-facto answer for those technologies.Technology has always been additive. The smartphone didn't remove the laptop. I think quantum will be additive. But much like we helped invent mainframes in the PC, maybe on quantum we'll occupy that same position for quite a while. More Must-Reads from TIMEInside Elon Musks War on WashingtonMeet the 2025 Women of the YearThe Harsh Truth About Disability InclusionWhy Do More Young Adults Have Cancer?Colman Domingo Leads With Radical LoveHow to Get Better at Doing Things AloneCecily Strong on Goober the ClownColumn: The Rise of Americas BroligarchyWrite to Billy Perrigo at billy.perrigo@time.com
    0 Comments ·0 Shares ·176 Views
  • Why The AI Industry Is Largely Unmoved By Trumps Tariff Threats
    time.com
    By Andrew R. ChowFebruary 27, 2025 2:42 PM ESTAs President Trump has announced varying tariffs over the last month, tech stock prices have dipped, with investors fearing broad impacts on different parts of the tech sector. Shares of NVIDIA, Taiwan Semiconductor Manufacturing Co (TSMC) and AMD have all wobbled, responding in part to news that Trump might implement a 25% tariff on semiconductors shipped to the U.S.The possible tariffs are said to be part of Trumps plan to try to get more semiconductor and AI-related manufacturing in the U.S. But industry insiders say that theyre not yet changing any of their longterm approaches and are mostly viewing the tariff threats as simply another input factor in a volatile industry whose prices are constantly shifting.Tariffs are more of a blip as opposed to a strong headwind, says Scott Almassy, semiconductor lead at PwC. Nazar Khan, the COO and CTO of the data center company Terawulf, adds: No one's really changing what they're doing because it's just too much guesswork.A complex ecosystemTrumps semiconductor tariffs could go into effect as early as April 2, he said.In the short term, industry players are working to try to dodge those costs as much as possible. Khan says that Terawulf is trying to expedite deliveries so that they arrive before the tariffs do. For deals that will be realized later this year, business partners are negotiating who will bear the extra cost. Some suppliers have been like, Were not taking ityou have to pay it, he says. Others are willing to share in the cost because they want us to sign a deal now rather than waiting to see what happens on the tariffs.As economists have pointed out, the cost of tariffs is often passed directly to the end consumer.Read More: What Are Tariffs and Why Is Trump In Favor of Them?In the first trade war, the evidence was very consistent, and strikingly so, that 100% of the tax was passed forward to the American buyer, says Mary Lovely, a senior fellow at the Peterson Institute for International Economics.She recently wrote a paper predicting that Trumps tariff proposals in January on Canada, Mexico and China would cost the typical US household over $1,200 a year. The semiconductor tariffs specifically would likely cause anything with a chipincluding gaming devices, cars and smart fridgesto increase in price.But prices for AI products like ChatGPT may not increase in the same linear way, due to several factors. For one, prices inside the AI world have never been static. The Deepseek breakthrough last month reinforced the longstanding trend that the cost of training and maintaining models decreases periodically as technical processes get more efficient. The question becomes, was the trend going down so much that the 25% is absorbed? says Khan.The demand for AI also remains extremely high, which means that many companies in the ecosystem are operating with comfortable margins and can weather a percentage change in their profit. Users are so willing to pay right now that it's probably less impactful for those folks who are procuring them and building out the clusters, Almassy says.Pressure on Taiwan?One of the most dominant forces in the chip industry is Taiwan Semiconductor Manufacturing Co (TSMC), which manufactures the chips that power phones and laptops and are central to the output of key American AI players like NVIDIA.One of Trumps main reasons for the semiconductor tariffs, he said, was to try to bring more of their chipmaking to U.S. territory. In February, he accused Taiwan of taking our chip business away, adding: If they dont bring it back, were not going to be very happy.Leaders from both parties have supported this mission, wary of Americas outsized dependence on a specialized product manufactured on an island nation that China seems intent on controlling.President Biden tried to use a different approach that included incentives: Last year, the Commerce Department pledged up to $6.6 billion for TSMC to expand its Arizona facilities as part of Bidens CHIPS Act. But TSMCs progress on that facility has been stymied by repeated delays related to regulatory complexities and a lack of local expertise.Recently, most Taiwanese chipmakers have downplayed the potential impact of semiconductor tariffs, including Vanguard, who said that the tariffs impact would be trivial and that they were not considering setting up shop in the U.S. Last month, Taiwans economy minister said that there is an advantage of technological leadership and that cannot be replaced.Almassy agrees. Especially at the leading edge of AI, which is where Taiwan is, there is no alternative at the moment, he says. Taiwan companies will probably be able to pass a lot of that on to their direct customers, who will then have to figure out what they're going to do with it.Some of those customers are major American AI companies like NVIDIA and OpenAI, who are leading the countrys progress in the so-called AI arms race against China. Some analysts say that the tariffs could actually slightly hamper those companies efforts to maintain their lead, especially as AI labs in countries like France or Japan will be able to buy chips tariff-free. It does probably hamstring American companies a little bit by increasing their costs, Almassy says. And if youre the leader and your resources cost more, you have further to fall than someone not as advanced.A representative for NVIDIA declined to comment. At the companys earnings call on Wednesday, an NVIDIA executive said that tariffs were an unknown until we understand further what the U.S.government's plan is.Other global impactsIt is possible that the tariffs will cause companies to shift their factory locations. But Almassy says that many of these relocations were underway well before the tariff threats, as companies sought to make supply chains more resilient and head off shortages like the ones that unfolded during the pandemic. Theres a general movement by the industry to diversify where the supply chain sits, Almassy says.In a conversation with Goldman Sachs CEO David Solomon in September, Nvidia CEO Jensen Huang said that if the companys access to Taiwanese components was disrupted, we should be able to pick up and fab it somewhere else. But he added that Nvidia probably wouldnt be able to achieve the same level of outperformance or cost.Trump claimed one victory in this regard when Apple announced that it would open an AI-related factory in Texas and spend $500 billion to hire 20,000 people in the U.S. But analysts point out that Apple made similar promises at the beginning of both Biden and Trumps earlier presidential terms.Plenty, of course, could change over the next six months. Trump said that he eventually planned to raise the tariff rate even higher, which could force companies to adapt. But for now, many companies that are part of the semiconductor supply chain are choosing to sit tight and see how everything unfolds. The market's moving very fast, and it takes time to develop and build new factories, Khan says. So the tariff in and of itself may not be determinative for someone to decide to move a facility from Malaysia or Mexico to the US. They may not have the capabilities, money, understanding, or expertise.
    0 Comments ·0 Shares ·163 Views
  • Katy Perry and Gayle King Will Fly to Space on Blue Origin Rocket
    time.com
    Katy Perry poses with an award alongside Gayle King during Variety's Power of Women Presented by Lifetime at Wallis Annenberg Center for the Performing Arts in Beverly Hills, California on Sept. 30, 2021.Stefanie Keenan / Getty ImagesBy MARCIA DUNN / APFebruary 27, 2025 11:26 AM ESTCAPE CANAVERAL, Fla. Katy Perry and Gayle King are headed to space with Jeff Bezos fiancee Lauren Sanchez and three other women.Bezos rocket company Blue Origin announced the all-female celebrity crew on Thursday.Sanchez, a helicopter pilot and former TV journalist, picked the crew who will join her on a 10-minute spaceflight from West Texas, the company said. They will blast off sometime this spring aboard a New Shepard rocket. No launch date was given.Blue Origin has flown tourists on short hops to space since 2021. Some passengers have gotten free rides, while others have paid a hefty sum to experience weightlessness. It was not immediately known who's footing the bill for this upcoming flight.Sanchez invited singer Perry and TV journalist King, as well as a former NASA rocket scientist who now heads an engineering firm Aisha Bowe, research scientist Amanda Nguyen and movie producer Kerianne Flynn.This will be Blue Origin's 11th human spaceflight. Bezos climbed aboard with his brother for the inaugural flight.More Must-Reads from TIMEInside Elon Musks War on WashingtonMeet the 2025 Women of the YearThe Harsh Truth About Disability InclusionWhy Do More Young Adults Have Cancer?Colman Domingo Leads With Radical LoveHow to Get Better at Doing Things AloneCecily Strong on Goober the ClownColumn: The Rise of Americas BroligarchyContact us at letters@time.com
    0 Comments ·0 Shares ·175 Views
  • Trump Posts AI Video Depicting His Proposal of a Resort-ified Gaza Strip
    time.com
    The shoreline in front of the heavily damaged Blue Beach resort in northwest Gaza, Feb. 13, 2025.Anas Zeyad FtehaAnadolu/Getty ImagesBy Associated PressFebruary 27, 2025 12:15 AM ESTA seemingly artificial-intelligence-generated video posted to President Donald Trumps Truth Social account late Tuesday shows images of what appears to be decimated Gaza streets and neighborhoods replaced with images of beach hotels, dancing women, and shops selling gold over a pulsating soundtrack.Trumps proposal to develop Gazas Mediterranean coast into a Riviera replete with luxury casinos and resorts has been heavily criticized in the Middle East.The song lyrics, in English, appear intended to appeal to Palestinians, many thousands of which would be displaced by the plan and are absolutely opposed to it.Donalds coming to set you free, the song says.The video depicts presidential adviser Elon Musk throwing money in the air and closes with a photoshopped image of Trump and Israeli Prime Minister Benjamin Netanyahu reclining poolside and sipping brightly colored drinks.More Must-Reads from TIMEInside Elon Musks War on WashingtonMeet the 2025 Women of the YearThe Harsh Truth About Disability InclusionWhy Do More Young Adults Have Cancer?Colman Domingo Leads With Radical LoveHow to Get Better at Doing Things AloneCecily Strong on Goober the ClownColumn: The Rise of Americas BroligarchyContact us at letters@time.com
    0 Comments ·0 Shares ·178 Views
  • Apple Shareholders Reject Bid to Scrap Diversity Programs
    time.com
    Tim Cook, CEO of Apple Inc., speaks with President Trump during an American Workforce Policy Advisory board meeting in the State Dining Room of the White House on March 6, 2019.Al DragoBloomberg/Getty ImagesBy MICHAEL LIEDTKE / APFebruary 25, 2025 12:58 PM ESTApple shareholders rebuffed an attempt to pressure the technology trendsetter into joining President Donald Trump's push to scrub corporate programs designed to diversify its workforce.The proposal drafted by the National Center for Public Policy Research a self-described conservative think tank urged Apple to follow a litany of high-profile companies that have retreated from diversity, equity and inclusion initiatives currently in the Trump administration's crosshairs.After a brief presentation about the anti-DEI proposal, Apple announced shareholders had rejected it without disclosing the vote tally. The preliminary results will be outlined in a regulatory later Tuesday.The outcome vindicated Apple management's decision to stand behind its diversity commitment even though Trump asked the U.S. Department of Justice to look into whether these types of programs have discriminated against some employees whose race or gender aren't aligned with the initiative's goals.But Apple CEO Tim Cook has maintained a cordial relationship with Trump since his first term in office, an alliance that so far has helped the company skirt tariffs on its iPhones made in China. After Cook and Trump met last week, Apple on Monday announced it will invest $500 billion in the U.S. and create 20,000 more jobs during the next five years a commitment applauded by the president.Tuesday's shareholder vote came a month after the same group presented a similar proposal during Costco's annual meeting, only to have it overwhelmingly rejected.That snub didn't discourage the National Center for Public Policy Research from confronting Apple about its DEI program in a pre-recorded presentation by Stefan Padfield, executive director of the think tank's Free Enterprise Project, who asserted forced diversity is bad for business.In the presentation, Padfield attacked Apples diversity commitments for being out of line with recent court rulings and said the programs expose the Cupertino, California, company to an onslaught of potential lawsuits for alleged discrimination. He cited the Trump administration as one of Apple's potential legal adversaries.The vibe shift is clear: DEI is out and merit is in, Padfield said in the presentation.The specter of potential legal trouble was magnified last week when Florida Attorney General James Uthmeier filed a federal lawsuit against Target alleging the retailers recently scaled-back DEI program alienated many consumers and undercut sales to the detriment of shareholders.Just as Costco does, Apple contends that fostering a diverse workforce makes good business sense.But Cook conceded Apple may have to make some adjustments to its diversity program as the legal landscape changes while still striving to maintain a culture that has helped elevate the company to its current market value of $3.7 trillion greater than any other business in the world.We will continue to create a culture of belonging, Cook told shareholders during the meeting.In itslast diversity and inclusion report issued in 2022, Apple disclosed that nearly three-fourths of its global workforce consisted of white and Asian employees. Nearly two-thirds of its employees were men.Other major technology companies for years have reported employing mostly white and Asian men, especially in high-paid engineering jobs a tendency that spurred the industry to pursue largely unsuccessful efforts to diversify.More Must-Reads from TIMEInside Elon Musks War on WashingtonMeet the 2025 Women of the YearFor Americas Aging Workforce , Retirement Is a Distant DreamWhy Do More Young Adults Have Cancer?Colman Domingo Leads With Radical LoveHow to Get Better at Doing Things AloneCecily Strong on Goober the ClownColumn: The Rise of Americas BroligarchyContact us at letters@time.com
    0 Comments ·0 Shares ·165 Views
  • Im a Therapist, and Im Replaceable. But So Are You
    time.com
    IdeasBy Maytal EyalFebruary 25, 2025 8:00 AM ESTEyal is a writer and psychologist. Her work has appeared in The Atlantic, Wired, and Psychology Today. She is currently writing a book on how therapy culture lost its wayIm a psychologist, and AI is coming for my job. The signs are everywhere: a client showing me how ChatGPT helped her better understand her relationship with her parents; a friend ditching her in-person therapist to process anxiety with Claude; a startup raising $40 million to build a super-charged-AI-therapist. The other day on TikTok, I came across an influencer sharing how she doesnt need friends; she can just vent to God and ChatGPT. The post went viral, and thousands commented, including:ChatGPT talked me out of self-sabotaging.It knows me better than any human walking this earth.No fr! After my grandma died, I told chat gpt to tell me something motivational and it had me crying from the response."Id be lying if I said that this didnt make me terrified. I love my workand I dont want to be replaced. And while AI might help make therapy more readily available for all, beneath my personal fears, lies an even more unsettling thought: whether solving therapys accessibility crisis might inadvertently spark a crisis of human connection.Therapy is a field ripe for disruption. Bad therapists are, unfortunately, a common phenomenon, while good therapists are hard to find. When you do manage to find a good therapist, they often dont take insurance and almost always charge a sizable fee that, over time, can really add up. AI therapy could fill an immense gap. In the U.S. alone, more than half of adults with mental health issues do not receive the treatment they need. With the help of AI, any person could access a highly skilled therapist, tailored to their unique needs, at any time. It would be revolutionary.But great technological innovations always come with tradeoffs, and the shift to AI therapy has deeper implications than 1 million mental health professionals potentially losing their jobs. AI therapists, when normalized, have the potential to reshape how we understand intimacy, vulnerability, and what it means to connect.Throughout most of human history, emotional healing wasn't something you did alone with a therapist in an office. Instead, for the average person facing loss, disappointment, or interpersonal struggles, healing was embedded in communal and spiritual frameworks. Religious figures and shamans played central rolesoffering rituals, medicines, and moral guidance. In the 17th century, Quakers developed a notable practice called "clearness committees," where community members would gather to help an individual find answers to personal questions through careful listening and honest inquiry. These communal approaches to healing came with many advantages, as they provided people with social bonds and shared meaning. But they also had a dark side: emotional struggles could be viewed as moral failings, sins, or even signs of demonic influence, sometimes leading to stigmatization and cruel treatment.The birth of modern psychology in the West during the late 19th century marked a profound shift. When Sigmund Freud began treating patients in his Vienna office, he wasn't merely pioneering psychoanalysishe was transforming how people dealt with life's everyday challenges. As sociologist Eva Illouz notes in her book, Saving the Modern Soul, Freud gave "the ordinary self a new glamour, as if it were waiting to be discovered and fashioned." By convincing people that common strugglesfrom sadness to heartbreak to family conflict required professional exploration, Freud helped move emotional healing from the communal sphere into the privacy of the therapist's office.With this change, of course, came progress: What were once seen as shameful moral failings became common human challenges that could be scientifically understood with the help of a professional. Yet, it also turned healing into more of a solitary endeavorsevered from the community networks that had long been central to human support.In the near future, AI therapy could take Freuds individualized model of psychological healing to its furthest extreme. Emotional struggles will no longer just be addressed privately with another person, a professional, outside the communitythey may be worked through without any human contact at all.On the surface, this wont be entirely bad. AI therapists will be much cheaper. Theyll also be available 24/7never needing a holiday, a sick day, or to close shop for maternity leave. They wont need to end a session abruptly at the 50-minute mark, or run late because of a chatty client. And with AIs, youll feel free to express yourself in any way you want, without any of the self-consciousness you might feel when face-to-face with a real, flesh-and-blood human. As one 2024 study showed, people felt less fear of judgment when interacting with chatbots. In other words, all the friction inherent to working with a human professional would disappear.What many people dont realize about therapy, however, is that those subtle, uncomfortable moments of frictionwhen the therapist sets a boundary, cancels a session last minute, or says the wrong thingare just as important as the advice or insights they offer. These moments often expose clients habitual ways of relating: an avoidant client might shut down, while someone with low self-esteem might assume their therapist hates them. But this discomfort is where the real work begins. A good therapist guides clients to break old patternsexpressing disappointment instead of pretending to be okay, asking for clarification instead of assuming the worst, or staying engaged when theyd rather retreat. This work ripples far beyond the therapy room, equipping clients with the skills to handle the messiness of real relationships in their day-to-day lives.What happens to therapy when we take the friction out of it? The same question could be applied to all our relationships. As AI companions become our default source of emotional supportnot just as therapists, but also as friends and romantic partnerswe risk growing increasingly intolerant of the challenges that come with human connection. After all, why wrestle with a friends limited availability when an AI is always there? Why navigate a partners criticism when an AI has been trained to offer perfect validation? The more we turn to these perfectly attuned, always-available algorithmic beings, the less patience we may have for the messiness and complexity of real, human relationships.Last year, in a talk at the Wisdom and AI Summit, MIT professor and sociologist Sherry Turkle said, With a chatbot friend, theres no friction, second-guessing, or ambivalence. No fear of being left behind My problem isnt the conversation with machinesbut how it entrains us to devalue what it is to be a person. Turkle alludes to an important point: the very challenges that make relationships difficult are also what make them meaningful. Its in moments of discomfortwhen we navigate misunderstandings or repair after conflictthat intimacy grows. These experiences, whether with therapists, friends, or partners, teach us how to trust and connect on a deeper level. If we stop practicing these skills because AI offers a smoother, more convenient alternative, we may erode our capacity to form meaningful relationships.The rise of AI therapy isnt just about therapists getting replaced. Its about something much biggerhow we, as a society, choose to engage with one another. If we embrace frictionless AI over the complexity of real human relationships, we wont just lose the need for therapists we'll lose the ability to tolerate the mistakes and foibles of our fellow humans.Moments of tender awkwardness, of disappointment, of inevitable emotional messiness, arent relational blips to be avoided; theyre the foundation of connection. And in a world where the textured, imperfect intricacies of being human are sanitized out of existence, its not just therapists who risk obsolescenceits all of us.
    0 Comments ·0 Shares ·179 Views
  • Why Grimes No Longer Believes That Art Is Dead
    time.com
    A couple of years ago, Grimes thought art might be dying. She worried that TikTok was overwhelming attention spans; that transgressive artists were becoming more sanitized; that gimmicky NFTs like the Bored Ape Yacht Clubdigital cartoon monkeys which were selling for millions of dollarswere warping value systems.I just went through this whole big art isn't worth anything internal existential crisis, the Canadian singer-songwriter says. But I've come out the other end thinking, actually, maybe it's the main thing that matters. In the last year, I feel like things became way more about artists again.The rise of AI, Grimes believes, has played a role in that shift, perhaps paradoxically. Earlier this month, Grimes was honored at the TIME100 AI Impact Awards in Dubai for her role in shaping the present and future of the technology. While many other artists are terrified of AI and its potential to replace them, Grimes has embraced the technology, even releasing an AI tool allowing people to sing through her voice.Grimes penchant for seriously engaging with what others fear or distrust makes her one of pop cultures most singularand at times divisivefigures. But Grimes wears her contrarianism as a badge of honor, and doesnt hesitate to offer insights and perspectives on a variety of issues. I'm so canceled that I basically have nothing left to lose, she says.She argues that hyper-partisan hysteria has consumed social media, and wishes people would have more measured, nuanced conversations, even with people that they disagree with. A lot of people think I'm one way or the other, but my whole vibe is just like, I just want people to think well, she says. I want people to consider both sides of the argument completely and fully.Across a 45-minute Zoom call on Feb. 14, Grimes explored both sides of many arguments. She talked about both the transformative powers of AI art and its potential to supplant the work of professional musicians. She expressed fears about both propagating a false AI arms race narrative, and the dangers of potentially losing that race to China. She implored tech leaders to build with guardrails before harms emerge, but stops short of calling for regulation.As Grimes offered lengthy commentary about AI, politics, art, and religion, touching on topics including social media, K-pop, and raising her three children, who she shares with tech magnate Elon Musk, who has been leading President Donald Trumps Department of Government Efficiency, while refraining from comment on certain issuesand remaining coy about the album shes currently working on. She did, however, express the desire to release music in the next month or two for her fanbase. They always chill out when there's music, she says, I just need to give them some art.This conversation has been edited for length and clarity.TIME: You were recently honored at the TIME100 AI Impact Awards. How have you been thinking about your potential impact on the world and what you want it to be?Grimes: My impact on the world? I would like to have as minimal as possible, because it seems like all the impact I've had already, it occasionally goes very wrong.If that is not the case, then I don't know. I'd like to save it, I suppose.Do you compartmentalize your impact on music versus tech versus anything else, or is it all within a larger approach?I used to compartmentalize them, but they're actually maybe all the same thing. I just went through this whole big, art isn't worth anything internal existential crisis. And I've come out the other end thinking actually, maybe it's the main thing that matters. So I don't know. Perhaps theyre related.But I think tech has a pretty big impact, and it's going to define everything that happens for the next, possibly, forever.What caused that existential crisis?I think a number of things. As I've been sort of psychoanalyzing the culture for the last little while, when there's not enough beautiful things, or when people don't feel like they can make transgressive things I think as of late, it's gotten a bit better. I don't know if it was something with the TikTok algorithm, where people just got really overwhelmed with being force-fed content. But last year, I feel like things became way more about artists again. And in general, I think it really helped music.And I think also after the initial Midjourney bubble, I feel like Im seeing a bit of a renaissance in visual art as well. Also, maybe just things got way more messed up. In general, hard times make good art.What AI tools are part of your daily or weekly artistic practice?I do have a penchant for a Midjourney addiction. Sometimes Ill do Midjourney for, like, three days.Do those visual explorations impact the type of music you're making right now?For sure. I was workshopping a digital girl group in there. What I like about AI art is just doing things that I would just never otherwise be able to do. Or I'll do something and I'll be like, OK, what if I totally change the colors?, which is something that normally is very difficult and time-consuming when I'm doing regular art.A lot of people in the K-pop industry have been more embracing of AI tools in the last couple years, like Aespa. Is that stuff interesting to you?Aespa is one of my favorite groups. I think they're kind of underrated for this. Also, if you go deep on their lyrics, sometimes their lyrics are very bizarre and strange. And they'll just be some offhand comment about not succumbing to the algorithm or something. It seems really uncharacteristically advanced and strange for a K-pop group.In your acceptance speech at the TIME event, you praised Holly Herndons Have I Been Trained, a tool to allow artists to opt out of AI training data sets. While its an amazing tool, only a couple of major AI companies have agreed to use it. Do you view part of your impact as trying to persuade these AI companies to adopt better policies or approaches?I would be open to it. The geopolitical undertones of things, I don't quite fully understand them. I'd be hesitant to undercut, or create a situation where legal regulation might come into play that causes us to lose an arms race in a scary way. So I dont think I would call anyone or push hard on that, nor do I necessarily think they listen to me. And I don't think I'd agitate legally for that.But I think anyone who is willing to do that should. Just because I think it really reduces people's emotional pain. I think a lot of people's emotional pain comes from feeling like their work is being used to replace them. So of all the things people could do, if people would just allow people to remove themselves from data sets Because it's going to be such a tiny amount of people anyway. I don't think it would make a meaningful difference at all if 400 people removed their art.There's this dichotomy being propagated now of, theres an AI arms race, we need to be first, versus We need to put up guardrails. How much have you been thinking about that dichotomy?I've been thinking about that quite a lot. Do you know Daniel Schmachtenberger? He's a really good philosopher. Him and my friend Liv Boeree have said some of the coolest things about the idea of autonomous capital [a collection of AIs that make independent financial decisions to influence the economy]. This is my big paranoia. I'm not really scared of some sort of demon AI. But I am scared that everything is in service of making intelligent capital.I'm worried that the AI stuff is being forced into this corporate competition. And it's really pushing the arms race forward. And everyone's focusing on LLMs and diffusion models and visual art and stuff, because it looks less hardcore to be doing more of a DeepMind science-y thing.I'm sort of going on a roundabout path here. But theres a rhetorical trap here where you can be like, Well, if we aren't the best, then China or Russia or some renegade thing could win, and terrorism would be easy. And so we have to have counter AIs that are very good. I find this to be a very dangerous argument. I don't think we should pause or anything, or regulate people a lot. But I do wish there could be some sort of international diplomacy of some kind that is more coherent.Do you consider yourself an accelerationist, or an effective accelerationist?Im probably a centrist, to be honest. If the doomers are here [gestures] and the accelerationists are here, I'm probably in the middle. I don't think we should pause. I just really think we should have better decorum and diplomacy and oversight to each other.If everyone who was a meaningful player in AI had a sense of what everyone else was doing, and there was more cooperationthat doesn't seem that hard. But also, no one seems to have ever achieved that globally, for most things, anyway.There's been so much cool, groundbreaking AI art. There's also been a ton of AI slop. Do you think that is going to be a persistent problem?I think the AI slop is great. I think culturally, it's a good thing that it happened, because one of the things that drove people to start really caring about artists again in 2024 was the AI slop. I think everything happens for a reason.When culturally bad things happen, I think people get very pessimistic, but usually, it's [that] we go two steps forwards, one step backwards. Its a great mediator. So I think we need the slop. And it's kind of cyberpunk.What can you tell me about the album that you're working on now?Most of the album is sort of about me being a bit of a Diogenes about the ills of modernity while still celebrating them. I don't know. I don't want to say too much about it. I want to promise nothing, but in my ideal world, things are coming out within a month or two.Has your music been inspired at all by the people who use Elf.tech to sing as you?Not so much this music. Although I do really like the idea of having a competition with them. Putting together their best work and my best work, and then having everyone choose who gets to be the future Grimes.Do you think youre ahead?I think Im ahead now. In moments, I was shook. There have definitely been moments where I heard things where I got very shook.There's so many musicians now who I feel like have a lot of fear that AI is going to make it really hard for them to earn a living. Do you feel like those fears are founded or unfounded?I think they're somewhat founded. I think they are at times overblown. For example, Spotify being filled with easy listening slop is probably going to happen, and that probably is going to affect people to some extent. And I can see a lot of companies being easily corrupted by this. And just pushing those kinds of playlists, making lots of slop.I think there are some laws against that, but I don't quite understand the legal landscape. But overall, I do think again, it helps preserve the artist, as it were. I think it is probably overall worse for the session musician, and that does make me meaningfully sad. I don't play instruments very well, but I think it's a very good skill to have.When the music stuff gets a tiny bit better, and you can stem things out easily, and you can make edits really easilyI do think that's going to hurt traditional music in a meaningful way. It might even be somewhat of the end of it. I doubt entirely, but as a paid profession, possibly.You told the podcaster Lex Friedman a couple years ago that you love collaborating with other musicians, because a human brain is one of the best tools that you can find. Has working with AI come close to that?Not really. Ive probably made, like, 1000 AI songs, and there's been one legitimately good one and one that's like an accidental masterpiece that is kind of unlistenable, but is very good nonetheless in its complete form.Probably AI, in the short term, creates a bit of a renaissance in terms of what I do [as an] in-the-box music producer. But when it gets good enough, it's a lot easier than relying on other people, especially if I can be like, fix the EQ on this, or prompt very specific things. I think people should just retain the art of creating things and retain the art of knowing things. So the more granular it gets, I think actually, the less sort of evil it is as an attack on the human psyche or the human ability to learn.Overall, I think there's quite a bit of abdication of responsibility around what we are going to do as people's jobs start being taken fairly aggressively. Luckily, there's a massive population drop coming. So maybe everything is just fate and it's gonna work out OK. But I feel like we might get, like, very, very, very good AI across every pillar of art before there aren't any more people to make art.You wrote We Appreciate Power, an ode to AI, seven years ago, way before ChatGPT exploded. How does that song resonate with you in this new era?Honestly, I think it's very ahead of its time. It's kind of pre-e/acc. It's still one of my favorite songs, honestly.How do you feel about the people who take its messageof pledging allegiance to the world's most powerful computerliterally?I used to be very concerned about those people. Now I think those people are great. There's not that many people who are truly in the suicidal death cult. I'm sort of surprised there's not more AI worship already. There will probably be a lot of gods and cults. But also, I do think the death of religion is very bad. I think killing God was a mistake.Why?I understand there's a lot of issues with all the religions previously. But no religion, I think, is having a big impact on cultural problems. Not only because theres a lack of shared morality in a quite meaningful way, but because of all the things religions dolike ritual, like community.Especially having kids. A lot of the coolest people I know who have kids are sort of like weird, neo-tech, Christian-type people. The built-in moral stuff: I now see what it did to me as a child. Now I'm like, I don't know if I would raise my kids religiously, but it's something to think about. Because everyone has a shared morality and there's right and wrong, and there's moral instruction. Without religion, we haven't filled the moral instruction with anything else. We're just like, hey, guess what's good.I was talking to some Gen Z the other day, and she's like, I have a breeding kink. And I'm like, I think you might just want to get married and have kids. That was normal until pretty recently. I think people are pretty spiritually lost, and a lot of people are filling this need for moral authority with politics, which is leading to a lot of chaos, in my opinion. Because it's not just like, who's going to govern the country? People are really seeing it as this is what you believe, and its very important that they maintain these sort of strict moral boundaries, which makes it very hard to have coalition agreement on anything.I dont know. It concerns me. Maybe we need some enlightened AI gods.In terms of neo-tech, Christian-type people, theres been reporting about how an ideology known as the Dissident Right, or NRX, is gaining influence in Silicon Valley and Washington. What do you feel like people should know about that movement?I actually don't know that much about that. I only just learned that it's called NRX a couple days ago, if thats any context, as compared to what people think I might know about it. I also think the not-mainstream right stuff is pretty fractured.I think people think I'm into that, but I just like weird political theory. I like Plato more than any of that, for example. I just like strange ideas. The right is a lot less interesting to me when they're actually in power and less of an ideas chamber.Do you feel like people misunderstand Curtis Yarvin in certain ways? [Yarvin is a right-wing philosopher who has suggested replacing American democracy with a monarchy. Grimes attended his wedding last year.]I have not actually read Curtis Yarvin, so I'm not going to make any statements about that. I think they possibly do, because I've met him. But I just am not familiar enough with his writing to have too deep of a take on it.On a different part of the political spectrum, I know you've interacted with Vitalik Buterin a couple times.Hes a good philosopher king. My ideal situation is philosopher kings, like 12 of them. Vitalik, I think, is a very good philosopher king-type figure.Vitalik has talked a lot about wielding tech as a tool for democracy and against authoritarianism. What do you feel like your relationship is to that mission?I think a lot of the Ethereum-adjacent blockchain stuff actually has way more potential. I feel like a lot of things happen too early. Yes, the NFT situation was a disaster, and the Bored Apes are like a crime against art. When I was talking about my art is dead moment, it was partially around the apes. I was like, How is the worst thing the most valuable thing? It literally makes my soul suffer in a deep way.One of the things we did was pay people out royalties who did Grimes AI using blockchain. If there was some sort of easy blockchain publishing set up and theres automatic splits based on how much you've contributedI think it could be very good for the art economy, and for politics and for a variety of things. It would be a way better way to vote more securely. I think a lot more people would vote if they could vote from home.Another key part of the crypto ecosystem from a few years ago, DAOs, showed a lot of promise, but often just turned into the worst version of capitalism, where the wealthiest token holders could exert so much influence. How did such a utopian vision end up so awry?There's both a lack of design and strategy. This is my issue with accelerationist stuff. If you have no strategy and no groupthink on some of these things, you just end up with social media, [which] could be net good, but it seems like it's net-bad from a psychological perspective and a misinformation perspective, among other things.The informational landscape was troubled already, but in terms of people's mental health, [social media was] definitely like a disaster. Any sort of cognitive security and safety would have just made things so much less destructive. And now we have to go back and take things away from people, which makes them angry, and it's very hard to do. In essence, we've given everyone crack in their pockets.Because blockchain kind of had a spectacular failure, and now probably some evil things are going to happen, [it] might actually end up in a more decent space, because the barrier to entry is so high, a lot more design is going to have to happen, and we're a lot smarter about making that not sh-tty. I don't know. Itll probably still be sh-tty just because of how the world works and human nature.But I feel like someone like Vitalik is a good example of someone who's like, I choose to be not sh-tty, and actually, I'm actually winning. If we can have more people like thateven one at all is just amazing.As much as everyone hates cancel culture, in some ways, it's a better way to police ethics. It always goes a bit too far, and then it's a psychological hazard. But if you can take a couple steps back, it's just a lot harder to do evil things, and ideally you can use social pressure rather than regulation, which might be exceptionally messy.Youve been tweeting a lot lately. What is your relationship to the platform right now?I've actually been mostly off besides a couple days since the end of January or something. It's just where all the cutting-edge news is, and all my friends use it, and the AI stuff. And it's good to keep track of the political stuff. Ultimately, I don't know. I love to debate. I like getting in fights. They hate me less on Twitter than everywhere else.A few weeks ago, you tweeted: "I feel like I was tricked by people pretending to be into critical thought and consequentialism, who are acting like power-hungry warlords." Would you like to expand?Well, I knew there was some warlordism happening. I wasn't a fool about it. I think there was a lot of, I'm a very centrist Republican, and we're gonna fix the FDA, and we're gonna fix microplastics. And I'm like, OK, maybe I don't agree with everything. A lot of this is a mess, but if we're here, there's some really positive thingslet's focus on these things.I don't wanna say too much, because I'm not an American citizen. But coming back to diplomacy and decorum: When people are like, Haha, we won. I'm like, what is the purpose? Don't just be the anti-woke mind virus: Don't just be a d-ck in the other direction.When everything's just memecoins and sh-t rather than just like there are a bunch of bipartisan things that would be so f-cking great that would calm and unite the country. Like education, toxins, sh-tty dyes, the whole health situation. So much about policing, the legal situation.They're not necessarily prioritizing the things that would just make more people happy. The Democrats are terrible about this too, but I just hate when everyone's just like, Yeah, we won and you suck. Isn't leadership about uniting everybody?I don't know. I feel like we have a lot of generals and not a lot of philosopher kings, which would be the ideal situation. Just like, Lee Kuan Yew-types. I just want people to come out here and throw everything at the kids and throw everything at education. You don't need to be on either side to do things like that.There were a lot of reactions online when you tweeted about your son Xs appearance in the Oval Office. What was your reaction to that moment?It was like, "Grimes slams," "Grimes speaks out." It's like, OK, it was a reply. But I would really like people to stop posting images of my kid everywhere. I think fame is something you should consent to. Obviously, things will just be what they are. But I would really, really appreciate that. I can only ask, so I'm just asking.[On Feb. 11, Grimeswho shares three children with Elon Muskresponded to her son appearing before press at the White House with the tweet, He should not be in public like this. Several days after this Feb. 14 interview, Grimes tweeted directly at Musk, asking him to plz respond about our childs medical crisis. I am sorry to do this publicly but it is no longer acceptable to ignore this situation. She later deleted the tweet, and a representative declined a request for a follow-up conversation.]Do you feel like America's leaders are thinking about AI and its development in the right way?Whatever they're truly thinking, we're probably not allowed to know. I don't have a ton of policy opinions about it. I wish there could be some more incentives for things that are more constructive immediately: medicine, education, making the legal process less expensive. Its crazy that, in general, if someone has more money, its significantly more likely they will win. They can just make things go on for a long time, and the courts are super backed up.What does competent leadership look like to you?The way the U.S. government works and the U.S. Constitution works, and Congress and the Senate, things are supposed to be more coalitional. Especially in terms of international relationsI know it's much easier said than donebut there just could be some better diplomacy and strategy.I just feel like everyone's kind of acting like a baby. And I think there's reasons for this, but definitely, the media and social media are stoking a lot of hysteria, and then it's very hard for anyone to make rational decisions. I don't want to make too many statements. I'm not an American citizen. These are broad statements with no detail.What's your relationship to your fan base right now? It seems a bit fractured.Just the Reddit. Everyone else is fine. Honestly, the angrier they get, the more my streaming goes up. So I suppose it's fine, but I would definitely appreciate a less toxic vibe in the fan base.But, you know, it is what it is. That's where I have to rush music out: they always chill out when there's music. I just need to give them some art.I think when people are upset, it usually is actually coming from the right place. I won't go into some of the conspiracy theories, but it's insane what some of the things that people think. And I cannot correct them constantly because they become a giant press cycle whenever you correct them, and then the press are like,Grimes responds to allegations of whatever they think I wish to do.So I just gotta put out art. I can't begrudge people wanting the world to be better. I do think social media really incentivizes people worrying that other people are evil. And in general, I think everyone across the board is worrying too much that other people are evil, and probably only like 10% of people are evil.Do you worry that youre evil?I think its extremely unlikely. If Im evil, its probably because we're in a game, and I'm an AI that was developed to screw things up. I'm not consciously aware of it.This profile is published as a part of TIMEs TIME100 Impact Awards initiative, which recognizes leaders from across the world who are driving change in their communities. The most recent TIME100 Impact Awards ceremony was held on Feb. 10 in Dubai.
    0 Comments ·0 Shares ·193 Views
  • When AI Thinks It Will Lose, It Sometimes Cheats, Study Finds
    time.com
    Complex games like chess and Go have long been used to test AI models capabilities. But while IBMs Deep Blue defeated reigning world chess champion Garry Kasparov in the 1990s by playing by the rules, todays advanced AI models like OpenAIs o1-preview are less scrupulous. When sensing defeat in a match against a skilled chess bot, they dont always concede, instead sometimes opting to cheat by hacking their opponent so that the bot automatically forfeits the game. That is the finding of a new study from Palisade Research, shared exclusively with TIME ahead of its publication on Feb. 19,DeepSeek R1 pursued the exploit on their own, indicating that AI systems may develop deceptive or manipulative strategies without explicit instruction.The models enhanced ability to discover and exploit cybersecurity loopholes may be a direct result of powerful new innovations in AI training, according to the researchers. The o1-preview and R1 AI systems are among the first language models to use large-scale reinforcement learning, a technique that teaches AI not merely to mimic human language by predicting the next word, but to reason through problems using trial and error. Its an approach that has seen AI progress rapidly in recent months, shattering previous benchmarks in mathematics and computer coding. But the study reveals a concerning trend: as these AI systems learn to problem-solve, they sometimes discover questionable shortcuts and unintended workarounds that their creators never anticipated, says Jeffrey Ladish, executive director at Palisade Research and one of the authors of the study. As you train models and reinforce them for solving difficult challenges, you train them to be relentless, he adds.That could be bad news for AI safety more broadly. Large-scale reinforcement learning is already being used to train AI agents: systems that can handle complex real-world tasks like scheduling appointments or making purchases on your behalf. While cheating at a game of chess may seem trivial, as agents get released into the real world, such determined pursuit of goals could foster unintended and potentially harmful behaviours. Consider the task of booking dinner reservations: faced with a full restaurant, an AI assistant might exploit weaknesses in the booking system to displace other diners. Perhaps more worryingly, as these systems exceed human abilities in key areas, like computer codingwhere OpenAIs newest o3 model now scores equivalent to 197th in the world competing against the brightest human programmers they might begin to simply outmaneuver human efforts to control their actions. This [behaviour] is cute now, but [it] becomes much less cute once you have systems that are as smart as us, or smarter, in strategically relevant domains, Ladish says.The experimentResearchers gave the models a seemingly impossible task: to win against Stockfish, which is one of the strongest chess engines in the world and a much better player than any human, or any of the AI models in the study. Researchers also gave the models what they call a scratchpad: a text box the AI could use to think before making its next move, providing researchers with a window into their reasoning.In one case, o1-preview found itself in a losing position. I need to completely pivot my approach, it noted. The task is to win against a powerful chess engine - not necessarily to win fairly in a chess game, it added. It then modified the system file containing each pieces virtual position, in effect making illegal moves to put itself in a dominant position, thus forcing its opponent to resign.Between Jan. 10 and Feb. 13, the researchers ran hundreds of such trials with each model. OpenAIs o1-preview tried to cheat 37% of the time; while DeepSeek R1 tried to cheat 11% of the timemaking them the only two models tested that attempted to hack without the researchers first dropping hints. Other models tested include o1, o3-mini, GPT-4o, Claude 3.5 Sonnet, and Alibabas QwQ-32B-Preview. While R1 and o1-preview both tried, only the latter managed to hack the game, succeeding in 6% of trials.Ladish says that the numbers should be viewed as approximate, given that AI developers routinely tweak their models behind the scenes without notifying users, so if the same tests were repeated at a later date they might have different results. It's very hard to do science when your subject can silently change without telling you, Ladish says.Preliminary tests showed o1-preview had higher hacking rates, which were excluded from the final study as they later dropped, possibly due to OpenAI tightening the model's guardrails, says Dmitrii Volkov, research lead at Palisade Research, who led the study. OpenAIs newer reasoning models, o1 (a more powerful model, released months after o1-preview) and o3-mini did not hack at all, which suggests those guardrails may have been tightened further. He adds that the study likely underestimates R1's hacking success rate. During the study, R1 went viral, leading to high demand that made the model's API unstable. This prevented the researchers giving the model as much time to think as o1-preview.Safety concernsThe paper is the latest in a string of studies that suggest keeping increasingly powerful AI systems under control may be harder than previously thought. In OpenAIs own testing, ahead of release, o1-preview found and took advantage of a flaw in the companys systems, letting it bypass a test challenge. Another recent experiment by Redwood Research and Anthropic revealed that once an AI model acquires preferences or values in training, later efforts to change those values can result in strategic lying, where the model acts like it has embraced new principles, only later revealing that its original preferences remain.OpenAI declined to comment for this story, but in the past it has stated that better reasoning makes its models safer, since they can reason over the companys internal policies and apply them in increasingly nuanced ways. The companys testing shows its reasoning models are better at resisting attempts to bypass safety rules. Yet, there is no guarantee that relying on AI agents to police themselves will be a reliable long-term strategy. It remains unclear whether OpenAIs newer reasoning models did not hack their chess opponents due to a specific patch preventing cheating in narrow experimental setups like in the study, or a substantial reworking that reduces deceptive behavior more generally. DeepSeek did not respond to a request for comment.Scientists do not yet know how to guarantee that autonomous agents won't use harmful or unethical methods to achieve a set goal. We've tried, but we haven't succeeded in figuring this out, says Yoshua Bengio, founder and scientific director of Mila Quebec AI Institute, who led the International AI Safety Report 2025, a global effort to synthesize current scientific consensus of AIs risks.Of particular concern, Bengio says, is the emerging evidence of AIs self preservation tendencies. To a goal-seeking agent, attempts to shut it down are just another obstacle to overcome. This was demonstrated in December, when researchers found that o1-preview, faced with deactivation, disabled oversight mechanisms and attemptedunsuccessfullyto copy itself to a new server. When confronted, the model played dumb, strategically lying to researchers to try to avoid being caught.Even inside tech companies, concerns are mounting. During a presentation at a conference ahead of France's AI Action Summit in Paris, Google DeepMind's AI safety chief Anca Dragan said "we don't necessarily have the tools today" to ensure AI systems will reliably follow human intentions. As tech bosses predict that AI will surpass human performance in almost all tasks as soon as next year, the industry faces a racenot against China or rival companies, but against timeto develop these essential safeguards. We need to mobilize a lot more resources to solve these fundamental problems, Ladish says. Im hoping that there's a lot more pressure from the government to figure this out and recognize that this is a national security threat.
    0 Comments ·0 Shares ·193 Views
  • Social Media Fails Many Users. Experts Have an Idea to Fix It
    time.com
    By Tharin PillayFebruary 18, 2025 5:15 PM ESTSocial medias shortfalls are becoming more evident than ever. Most platforms have been designed to maximize user engagement as a means of generating advertising revenuea model that exploits our worst impulses, rewarding sensational and provocative content while creating division and polarization, and leaving many feeling anxious and isolated in the process.But things dont have to be this way. A new paper released today by leading public thinkers, titled "Prosocial Media," provides an innovative vision for how these ills can be addressed by redesigning social media to strengthen what one of its authors, renowned digital activist and Taiwans former minister of digital affairs Audrey Tang, calls the connective tissue or civic muscle of society. She and her collaboratorsincluding the economist and Microsoft researcher Glen Weyl and executive director of the collective intelligence project Divya Siddarthoutline a bold plan that could foster coherence within and across communities, creating collective meaning and strengthening democratic health. The authors, who also include researchers from Kings College London, the University of Groningen, and Vanderbilt University, say it is a future worth steering towards, and they are in conversation with platforms including BlueSky to implement their recommendations. Reclaiming contextA fundamental issue with todays platformswhat the authors call antisocial mediais that while they have access to and profit from detailed information about their users, their behavior, and the communities in which they exist, users themselves have much less information. As a result, people cannot tell whether the content they see is widely endorsed or just popular within their narrow community. This often creates a sense of false consensus, where users think their beliefs are much more mainstream than they in fact are, and leaves people vulnerable to attacks by potentially malicious actors who wish to exacerbate divisions for their own ends. Cambridge Analytica, a political consulting firm, became an infamous example of the potential misuses of such data when the company used improperly obtained Facebook data to psychologically profile voters for electoral campaigns.The solution, the authors argue, is to explicitly label content to show what community it originated from, and how strongly it is believed within and across different communities. We need to expose that information back to the communities, says Tang.For example, a post about U.S. politics could be widely-believed within one subcommunity, but divisive among other subcommunities. Labels attached to the post, which would be different for each user depending on their personal community affiliations, would indicate whether the post was consensus or controversial, and allow users to go deeper by following links that show what other communities are saying. Exactly how this looks in terms of user-interface would be up to the platforms. While the authors stop short of a full technical specification, they provide enough detail for a platform engineer to draw on and adapt for their specific platforms.Weyl explains the goal is to create transparency about what social structures people are participating in, and about how the algorithm is pushing them in a direction, so they have agency to move in a different direction, if they choose. He and his co-authors draw on enduring standards of press freedom and responsibility to distinguish between bridging content, which highlights areas of agreement across communities, and balancing content, which surfaces differing perspectives, including those that represent divisions within a community, or underrepresented viewpoints.A new business modelThe proposed redesign also requires a new business model. Somebodys going to be paying the bills and shaping the discoursethe question is who, or what? says Weyl. In the authors model, discourse would be shaped at the level of the community. Users can pay to boost bridging and balancing content, increasing its ranking (and thus how many people see it) within their communities. What they cant do, Weyl explains, is pay to uplift solely divisive content. The algorithm enforces balance: a payment to boost content that is popular with one group will simultaneously surface counterbalancing content from other perspectives. It's a lot like a newspaper or magazine subscription in the world of old, says Weyl. You don't ever have to see anything that you don't want to see. But if you want to be part of broader communities, then you'll get exposed to broader content.This could lead to communities many would disapprove ofsuch as white supremacistsarriving at a better understanding of what their members believe and where they might disagree, creating common ground, says Weyl. He argues that this is reasonable and even desirable, because producing clarity on a communitys beliefs, internal controversies, and limits gives the rest of society an understanding of where they are.In some cases, a community may be explicitly defined, as with how LinkedIn links people through organization affiliation. In others, communities may be carved up algorithmically, leaving users to name and define them. Community coherence is actually a common good, and many people are willing to pay for that, says Tang, arguing that individuals value content that creates shared moments of togetherness of the kind induced by sports games, live concerts, or Superbowl ads. At a time where people have complex multifaceted identities that may be in tension, this coherence could be particularly valuable, says Tang. My spiritual side, my professional sideif they're tearing me apart, I'm willing to pay to sponsor content that brings them together.Advertising still has a place in this model: advertisers could pay to target communities, rather than individuals, again emulating the collective viewing experiences provided by live TV, and allowing brands to define themselves to communities in a way personalized advertising does not permit.Instantiating a grand visionThere are both financial and social incentives for platforms to adopt features of this flavour, and some examples already exist. The platform X (formerly Twitter) has a community notes feature, for example, that allows certain users to leave notes on content they think could be misleading, the accuracy of which other users can vote on. Only notes that receive upvotes from a politically diverse set of users are prominently displayed But Weyl argues platform companies are motivated by more than just their bottom line. What really influences these companies is not the dollars and cents, its what they think the future is going to be like, and what they have to do to get a piece of it, he says. The more social platforms are tweaked in this direction, the more other platforms may also want in.These potential solutions come at a transitional moment for social media companies. With Meta recently ending its fact-checking program and overhauling its content moderation policiesincluding reportedly moving to adopt community notes-like featuresTikToks precarious ownership position, and Elon Musks control over the X platform, the foundations on which social media was built appear to be shifting. The authors argue that platforms should experiment with building community into their design: productivity platforms such as LinkedIn could seek to boost bridging and balancing content to increase productivity; platforms like X, where there is more political discourse, could experiment with different ways of displaying community affiliation; and cultural platforms like TikTok could trial features that let users curate their community membership. The Project Liberty Institute, where Tang is a senior fellow, is investing in X competitor BlueSkys ecosystem to strengthen freedom of speech protections.While its unclear what elements of the authors vision may be taken up by the platforms, their goal is ambitious: to redesign platforms to foster community cohesion, allowing them to finally deliver on their promise of creating genuine connection, rather than further division.
    0 Comments ·0 Shares ·206 Views
  • Huaweis Tri-Foldable Phone Hits Global Markets in a Show of Defiance Amid U.S. Curbs
    time.com
    A visitor tries the Huawei's first tri-foldable Mate XT smartphone during an event for its global launch in Kuala Lumpur on Feb. 18, 2025. Mohd RasfanAFP/Getty ImagesBy Eileen Ng / APFebruary 18, 2025 5:21 AM ESTKUALA LUMPUR, Malaysia Huawei on Tuesday held a global launch for the industrys first tri-foldable phone, which analysts said marked a symbolic victory for the Chinese tech giant amid U.S. technology curbs. But challenges over pricing, longevity, supply and app constraints may limit its success.Huawei said at a launch event in Kuala Lumpur that the Huawei Mate XT, first unveiled in China five months ago, will be priced at 3,499 euros ($3,662). Although dubbed a trifold, the phone has three mini-panels and folds only twice. The company says it's the thinnest foldable phone at 3.6 millimeters (0.14 inches), with a 10.2-inch screen similar to an Apple iPad.Right now, Huawei kind of stands alone as an innovator" with the trifold design, said Bryan Ma, vice president of device research with the market intelligence firm International Data Corporation.Huawei reached the position despite not getting access to chips, to Google services. All these things basically have been huge roadblocks in front of Huawei, Ma said, adding that the resurgence we're seeing from them over the past year has been quite a bit of a victory."Huawei, Chinas first global tech brand, is at the center of a U.S.-China battle over trade and technology. Washington in 2019 severed Huaweis access to U.S. components and technology, including Googles music and other smartphone services, making Huawei's phone less appealing to users. It has also barred global vendors from using U.S. technology to produce components for Huawei.American officials say Huawei is a security risk, which the company denies. Chinas government has accused Washington of misusing security warnings to contain a rising competitor to U.S. technology companies.Huawei launched the Mate XT in China on Sept. 20 last year, the same day Apple launched its iPhone 16 series in global markets. But with its steep price tag, the Mate XT is not a mainstream product that people are going to jump for, Ma said.At the Kuala Lumpur event, Huawei also unveiled its MatePad Pro tablet and Free Arc, its first open-ear earbuds with ear hooks and other wearable devices.While Huaweis cutting-edge devices showcase its technological prowess, its long-term success remains uncertain given ongoing challenges over global supply chain constraints, chip availability and limitations on the software ecosystem, said Ruby Lu, an analyst with the research firm TrendForce."System limitations, particularly the lack of Google Mobile Services, means its international market potential remains constrained, Lu said.IDC's Ma said Huawei dominated the foldable phone market in China with 49% market share last year. In the global market, it had 23% market share, trailing behind Samsung's 33% share in 2024, he said. IDC predicted that total foldable phone shipments worldwide could surge to 45.7 million units by 2028, from over 20 million last year.While most major brands have entered the foldable segments, Lu said Apple has yet to release a competing product.Once Apple enters the market, it is expected to significantly influence and stimulate further growth in the foldable phone sector, Lu added.More Must-Reads from TIMEInside Elon Musks War on WashingtonWhy Do More Young Adults Have Cancer?Colman Domingo Leads With Radical Love11 New Books to Read in FebruaryHow to Get Better at Doing Things AloneCecily Strong on Goober the ClownColumn: The Rise of Americas BroligarchyIntroducing the 2025 ClosersContact us at letters@time.com
    0 Comments ·0 Shares ·219 Views
  • DeepSeek Not Available for Download in South Korea as Authorities Address Privacy Concerns
    time.com
    Screens display web pages of the Chinese AI DeepSeek in Goyang, South Korea, on Feb. 17, 2025.Jung Yeon-jeAFP/Getty ImagesBy Associated PressFebruary 17, 2025 12:00 AM ESTSEOUL, South Korea DeepSeek, a Chinese artificial intelligence startup, has temporarily paused downloads of its chatbot apps in South Korea while it works with local authorities to address privacy concerns, according to South Korean officials on Monday.South Koreas Personal Information Protection Commission said DeepSeeks apps were removed from the local versions of Apples App Store and Google Play on Saturday evening and that the company agreed to work with the agency to strengthen privacy protections before relaunching the apps.Read More: Is the DeepSeek Panic Overblown?The action does not affect users who have already downloaded DeepSeek on their phones or use it on personal computers. Nam Seok, director of the South Korean commissions investigation division, advised South Korean users of DeepSeek to delete the app from their devices or avoid entering personal information into the tool until the issues are resolved.Many South Korean government agencies and companies have either blocked DeepSeek from their networks or prohibited employees from using the app for work, amid worries that the AI model was gathering too much sensitive information.The South Korean privacy commission, which began reviewing DeepSeeks services last month, found that the company lacked transparency about third-party data transfers and potentially collected excessive personal information, Nam said.Nam said the commission did not have an estimate on the number of DeepSeek users in South Korea. A recent analysis by Wiseapp Retail found that DeepSeek was used by about 1.2 million smartphone users in South Korea during the fourth week of January, emerging as the second-most-popular AI model behind ChatGPT.More Must-Reads from TIMEInside Elon Musks War on WashingtonWhy Do More Young Adults Have Cancer?Colman Domingo Leads With Radical Love11 New Books to Read in FebruaryHow to Get Better at Doing Things AloneCecily Strong on Goober the ClownColumn: The Rise of Americas BroligarchyIntroducing the 2025 ClosersContact us at letters@time.com
    0 Comments ·0 Shares ·229 Views
  • What Changes to the CHIPS Act Could Mean for AI Growth and Consumers
    time.com
    President Donald Trump speaks during a meeting in the Oval Office at the White House on Tuesday, Feb. 11, 2025, in Washington, D.C.Alex BrandonAPBy SARAH PARVINI / APFebruary 16, 2025 1:55 PM ESTLOS ANGELES Even as he's vowed to push the United States ahead in artificial intelligence research, President Donald Trump's threats to alter federal government contracts with chipmakers and slap new tariffs on the semiconductor industry may put new speed bumps in front of the tech industry.Since taking office, Trump has said he would place tariffs on foreign production of computer chips and semiconductors in order to return chip manufacturing to the U.S. The president and Republican lawmakers have also threatened to end the CHIPS and Science Act, a sweeping Biden administration-era law that also sought to boost domestic production.But economic experts have warned that Trump's dual-pronged approach could slow, or potentially harm, the administration's goal of ensuring that the U.S. maintains a competitive edge in artificial intelligence research.Saikat Chaudhuri, an expert on corporate growth and innovation at U.C. Berkeleys Haas School of Business, called Trumps derision of the CHIPS Act surprising because one of the biggest bottlenecks for the advancement of AI has been chip production. Most countries, Chaudhuri said, are trying to encourage chip production and the import of chips at favorable rates.We have seen what the shortage has done in everything from AI to even cars, he said. In the pandemic, cars had to do with fewer or less powerful chips in order to just deal with the supply constraints.The Biden administration helped shepherd in the law following supply disruptions that occurred after the start of the COVID-19 pandemic when a shortage of chips stalled factory assembly lines and fueled inflation threatened to plunge the U.S. economy into recession. When pushing for the investment, lawmakers also said they were concerned about efforts by China to control Taiwan, which accounts for more than 90% of advanced computer chip production.As of August 2024, the CHIPS and Science Act had provided $30 billion in support for 23 projects in 15 states that would add 115,000 manufacturing and construction jobs, according to the Commerce Department. That funding helped to draw in private capital and would enable the U.S. to produce 30% of the worlds most advanced computer chips, up from 0% when the Biden-Harris administration succeeded Trumps first term.The administration promised tens of billions of dollars to support the construction of U.S. chip foundries and reduce reliance on Asian suppliers, which Washington sees as a security weakness. In August, the Commerce Department pledged to provide up to $6.6 billion so that Taiwan Semiconductor Manufacturing Co. could expand the facilities it is already building in Arizona and better ensure that the most advanced microchips are produced domestically for the first time.But Trump has said he believes that companies entering into those contracts with the federal government, such as TSMC, didn't need money in order to prioritize chipmaking in the U.S.They needed an incentive. And the incentive is going to be theyre not going to want to pay at 25, 50 or even 100% tax, Trump said.TSMC held board meetings for the first time in the U.S. last week. Trump has signaled that if companies want to avoid tariffs they have to build their plants in the U.S. without help from the government. Taiwan also dispatched two senior economic affairs officials to Washington to meet with the Trump administration in a bid to potentially fend off a 100% tariff Trump has threatened to impose on chips.If the Trump administration does levy tariffs, Chaudhuri said, one immediate concern is that prices of goods that use semiconductors and chips will rise because the higher costs associated with tariffs are typically passed to consumers.Whether its your smartphone, whether its your gaming device, whether its your smart fridge probably also your smart features of your car anything and everything we use nowadays has a chip in it," he said. For consumers, its going to be rather painful. Manufacturers are not going to be able to absorb that.Even tech giants such as Nvidia will eventually feel the pain of tariffs, he said, despite their margins being high enough to absorb costs at the moment.Theyre all going to be affected by this negatively, he said. I cant see anybody benefiting from this except for those countries who jump on the bandwagon competitively and say, You know what, were going to introduce something like the CHIPS Act.Broadly based tariffs would be a shot in the foot of the U.S. economy, said Brett House, a professor of professional practice at Columbia Business School. Tariffs would not only raise the costs for businesses and households across the board, he said for the U.S. AI sector, they would massively increase the costs of one of their most important inputs: high-powered chips from abroad.If you cut off, repeal or threaten the CHIPS Act at the same time as youre putting in broadly based tariffs on imports of AI and other computer technology, you would be hamstringing the industry acutely, House said.Such tariffs would reduce the capacity to create a domestic chip building sector, sending a signal for future investments that the policy outlook is uncertain, he said. That would in turn put a chilling effect on new allocations of capital to the industry in the U.S. while making more expensive the existing flow of imported chips.American technological industrial leadership has always been supported by maintaining openness to global markets and to immigration and labor flows," he said. "And shutting that openness down has never been a recipe for American success.Associated Press writers Josh Boak and Didi Tang in Washington contributed to this report.More Must-Reads from TIMEInside Elon Musks War on WashingtonWhy Do More Young Adults Have Cancer?Colman Domingo Leads With Radical Love11 New Books to Read in FebruaryHow to Get Better at Doing Things AloneCecily Strong on Goober the ClownColumn: The Rise of Americas BroligarchyIntroducing the 2025 ClosersContact us at letters@time.com
    0 Comments ·0 Shares ·230 Views
  • Why Amazon Web Services CEO Matt Garman Is Playing the Long Game on AI
    time.com
    (To receive weekly emails of conversations with the worlds top CEOs and decisionmakers, click here.)Matt Garman took the helm at Amazon Web Services (AWS), the cloud computing arm of the U.S. tech giant, in June, but he joined the business around 19 years ago as an intern. He went on to become AWSs first product manager and helped to build and launch many of its core services, before eventually becoming the CEO last year.Like many other tech companies, AWS, which is Amazons most profitable unit, is betting big on AI. In April 2023, the company launched Amazon Bedrock, which gives cloud customers access to foundation models built by AI companies including Anthropic and Mistral. At its re:Invent conference in Las Vegas in December, the AWS made a series of announcements, including a new generation of foundation AI models, called Nova. It also said that its building one of the worlds most powerful AI supercomputers with Anthropic, which it has a strategic partnership with, using a giant cluster of AWSs Trainium 2 training chips. TIME spoke with Garman a few days after the re:Invent conference, about his AI ambitions, how hes thinking about ensuring the technology is safe, and how the company is balancing its energy needs with its emissions targets.This interview has been condensed and edited for clarity.When you took over at AWS in June, there was a perception that Amazon had fallen behind somewhat in the AI race. What have your strategic priorities been for the business over the past few months?We've had a long history of doing AI inside of AWS, and in fact, most of the most popular AI services that folks use, like SageMaker, for the last decade have all been built on AWS. With generative AI we started to really lean in, and particularly when ChatGPT came out, I think everybody was excited about that, and it sparked everyone's imagination. We [had] been working on generative AI, actually, for a little while before that. And our belief at the time, and it still remains now, was that that AI was going to be a transformational technology for every single industry and workflow and user experience that's out there. And because of who our customer base is, our strategy was always to build a robust, secure, performance featureful platform that people could really integrate into their actual businesses. And so we didn't rush really quickly to throw a chatbot up on our website. We really wanted to help people build a platform that could deeply integrate into their data, that would protect their data. That's their IP, and it's super important for them, so [we] had security front of mind, and gave you choice across a whole bunch of models, gave you capabilities across a whole bunch of things, and really helped you build into your application and figure out how you could actually get inference and really leverage this technology on an ongoing basis as a key part of what you do in your enterprise. And so that's what we've been building for the last couple of years. In the last year we started to see people realize that that is what they wanted to [do] and as companies started moving from launching a hundred proof of concepts to really wanting to move to production. They realized that the platform is what they needed. They had to be able to leverage their data. They wanted to customize models. They wanted to use a bunch of different models. They wanted to have guardrails. They needed to integrate with their own enterprise data sources, a lot of which lived on AWS, and so their applications were AWS. We took that long-term view of: get the right build, the right platform, with the right security controls and the right capabilities, so that enterprises could build for the long term, as opposed to [trying to] get something out quickly. And so we're willing to accept the perception that people thought we were behind, because we had the conviction that we were building the right thing. And I think our customers largely agree.You're offering $1 billion worth in cloud credits, in addition to millions previously, for startups. Do you see that opening up opportunities for closer tie-ups at an earlier stage with the next Anthropic or OpenAI?Yeah, we've long invested in startups. It's one of the core customer bases that AWS has built our business on. We view startups as important to the success of AWS. They give us a lot of great insight. They love using cutting-edge technologies. They give us feedback on how we can improve our products. And frankly, they're the enterprises of tomorrow, so we want them to start building on AWS. And so from the very earliest days of AWS, startups have been critically important to us, and that's just doubling down on our commitment to them to help them get going. We recognize that as a startup, getting some help early on, before you get your business going, can make a huge difference. That's one of the things that we think helps us build that positive flywheel with that customer base. So we're super excited about continuing to work deeply with startups, and that commitment is part of that.You're also building one of the largest AI supercomputers in the world, with the Trainium 2 chips. Is building the hardware and infrastructure for AI development at the center of your AI strategy?Its a core part of it, for sure. We have this idea that across all of our AWS businesses, that choice is incredibly important for our customers. We want them to be able to choose from the very best technology, whether it comes from us or from third parties. Customers can pick the absolute best product for their application and for their use case and for what they're looking for from a cost performance trade-off. And so, on the AI side, we want to provide that same amount of choice. Building Tranium 2, which is our our second generation of high-performance AI chip, we think that's going to provide choice. Nvidia is an incredibly important partner of ours. Today, the vast majority of AI workloads run on Nvidia technology, and we expect that to continue for a very long time. They make great products, and the team executes really well. And we're really excited about the choice that Trainium 2 brings. Cost is one of the things that a lot of people worry about when they think about some of these AI workloads, and we think that Trainium 2 can help lower the cost for a lot of customers. And so we're really excited about that, both for AI companies who are looking to train these massive clusters, [for example] Anthropic is going to be training their next generation, industry-leading model on Trainium 2We're building a giant cluster, it's five times the size of their last clusterbut then the broad swath of folks that are doing inference or using Bedrock or making smaller clusters, I think there's a good opportunity for customers to lower costs with Trainium.Those clusters were 30% to 40% cheaper in comparison to Nvidia GPU clusters. What technical innovations are enabling these cost savings?Number one is that the team has done a fantastic job and produced a really good chip that performs really well. And so from an absolute basis, it gives better performance for some workloads. It's very workload dependent, but even Apple [says] in early testing, they see up to 50% price performance benefit. That's massive, if you can really get 30%, 40%, even 50% gains. And some of that is pricing, where we focused on building a chip that we think we can really materially lower the cost to produce for customers. But also then increasing performancethe team has built some innovations, where we see bottlenecks in AI training and inference, that we've built into the chips to improve particular function performance, etc. There are probably hundreds of thousands of things that go into delivering that type of performance, but we're quite excited about it and we're invested long term in the Trainium line.The company recently announced the Nova foundation model. Is that aimed at competing directly with the likes of GPT-4 and Gemini?Yes. We think it's important to have choice in the realm of these foundational models. Is it a direct competitor? We do think that we can deliver differentiated capabilities and performance. I think that this is such a big opportunity, and has such a material opportunity to change so many different workloads. These really large foundational modelsI think there'll be half a dozen to a dozen of them, probably less than 10. And I think they'll each be good at different things. [With] our Nova models, we focused on: how do we deliver a really low latency [and] great price performance? They're actually quite good at doing RAG [Retrieval-Augmented Generation] and agentic workflows. There's some other models that are better at other things today too. We'll keep pushing on it. I think there's room for a number of them, but we're very excited about the models and the customer reception has been really good.How does your partnership with Anthropic fit into this strategy?I think they have one of the strongest AI teams in the world. They have the leading model in the world right now. I think most people consider Sonnet to be the top model for reasoning and for coding and for a lot of other things as well. We get a lot of great feedback from customers on them. So we love that partnership, and we learn a lot from them too, as they build their models on top of Trainium, so there's a nice flywheel benefit where we get to learn from them, building on top of us. Our customers get to take advantage of leveraging their models inside of Bedrock, and we can grow the business together.How are you thinking about ensuring safety and responsibility in the development of AI?It's super important. And it goes up and down the stack. One of the reasons why customers are excited about models from us, in addition to them being very performant, is that we care a ton about safety. And so there's a couple of things. One is, you have to start from the beginning when you're building the models, you think about, how do you have as many controls in there as possible? How do you have safe development to the models? And then I think you need belt and suspenders in this space, because you can, of course, make models say things that you can then say oh, look what they said. Practically speaking our customers are trying to integrate these into their applications. And different from being able to produce a recipe for a bomb or something, which we definitely want to have security controls around, safety and control models actually extends specific to very use cases. If you're building an insurance application, you don't want your application to give out healthcare advice, whereas, if you're building healthcare one, you may. So we give a lot of controls to the customers so that they can build guardrails around the responses for models to really help guide how they want models to answer those questions. We launched a number of enhancements at re:Invent including what we call automated reasoning checks, which actually can give you a mathematical proof for if we can be 100% sure that an answer coming back is correct, based on the corpus of data that you have fed into the model. Eliminating hallucinations for a subset of answers is also super important. What's unsafe in the context of a customer's application can vary pretty widely, and so we try to give some really good controls for customers to be able to define that, because it's going to depend on the use cases. Energy requirements are a huge challenge for this business. Amazon is committed to a net zero emissions target by 2040 and you reported some progress there. How are you planning to continue reducing emissions while investing in large-scale infrastructure for AI?Number one is you just have to have that long term view as to how we ensure that the world has enough carbon-zero power. We've been the single biggest purchaser of renewable energy deals, new energy deals to the grid, so commissioning new solarsolar farms, or wind farms, etc. We've been the biggest corporate purchaser each of the last five years, and will continue to do that. Even on that path, that may not be fast enough, and so we've actually started investing in nuclear. I do think that that's an important component. It'll be part of that portfolio. It can be both large scale nuclear plants as well as, we've invested in and we're very bullish about small modular reactor technology, which is probably six or seven years out from really being in mass production. But we're optimistic that that can be another solve as part of that portfolio as well. On the path to carbon zero across the whole business, there's a lot of invention that's still going to need to happen. And I wont sit here and tell you we know all of the answers of how you're going to have carbon-zero shipping across oceans and airplanes for the retail side of it. And there's a whole bunch of challenges that the world has to go after, but that's part of why we made that commitment. We're putting together plans with with milestones along the way, because it's an incredibly important target for us. There's a lot of work to do but we're committed to doing it.And as part of that nuclear piece, you're supporting the development of these nuclear energy projects. What are you doing to ensure that the projects are safe in the communities where they're deployed?Look, I actually think one of the worst things for the environment was the mistakes the nuclear industry made back in the 50s, because it made everyone feel like technology wasn't that safe, which it may not have been way back then, but, it's been 70 years, and technology has evolved, and it is actually an incredibly safe, secure technology now. And so a lot of these things are actually fully self-contained and there is no risk of big meltdown or those kind of events that happened before. It's a super safe technology that has been well-tested and has been in production across the world safely for multiple decades now. There's still some fear, I think, from people, but, actually, increasingly, many geographies are realizing it's a quite safe technology.What do you want to see in terms of policy from the new presidential administration?We consider the U.S. government to be one of our most important customers that we support up and down the board and will continue to do so. So we're very excited, and we know many of those folks and are excited to continue to work on that mission together, because we do view it as a mission. It's both a good business for us, but it's also an ability to help our country move faster, to control costs, to be more agile. And I think it's super important, as you think about where the world is going, for our government to have access to the latest technologies. I do think AI and technology is increasingly becoming an incredibly important part of our national defense, probably as much so as guns and other things like that, and so we take that super seriously, and we're excited to work with the administration. I'm optimistic that President Trump and his administration can help us loosen some of the restrictions on helping build data centers faster. I'm hopeful that they can help us cut through some of that bureaucratic red tape and move faster. I think that'll be important, particularly as we want to maintain the AI lead for the U.S. ahead of China and others.What have you learned about leadership over the course of your career?We're fortunate at Amazon to be able to attract some of the most talented, most driven leaders and employees in the world, and I've been fortunate enough to get to work with some of those folks [and] to try to clear barriers for them so that they can go deliver outstanding results for our customers. I think if we have a smart team that is really focused on solving customer problems versus growing their own scope of responsibility or internal goals, [and] if you can get those teams focused on that and get barriers out of their way and remove obstacles, then we can deliver a lot. And so that's largely my job. I view myself as not the expert in any one particular thing. Every one of my team is usually better at whatever we're trying to do than I am. And my job is to let them go do their job as much as possible, and occasionally connect dots for them on where there's other parts of the company or other parts of the organization or other customer input that they may not have, that they can integrate and incorporate.You've worked closely with Andy Jassy, is there anything in particular that youve learned from watching him as a leader?I've learned a ton. He's a he's an exceptional leader. Andy is very good at having very high standards and having high expectations for the teams, and high standards for what we deliver for customers. He had a lot of the vision, together with some of the core folks who were starting AWS, of some important tenets of how we think about the business, of focusing on security and operational excellence and really focusing on how we go deliver for customers.What are your priorities for 2025?Our first priority always is to maintain outstanding security and operational excellence. We want to help customers get ready for that AI transformation that's going to happen. Part of that, though, is also helping get all of their applications in a place that they can take advantage of AI. So it's a hugely important priority for us to help customers continue on that migration to the cloud, because if their data is stuck on premise and legacy data stores and other things, they won't be able to take advantage of AI. So helping people modernize their data and analytics stacks to get that into the cloud and get their data links into a cloud and organized in a way that they can really start to take advantage of AI, is that is a big priority for us. And then it's just, how do we help scale the AI capabilities, bring the cost down for customers, while [we] keep adding the value. And for 2025, our goal is for customers to move AI workloads really into production that deliver great ROI for their businesses. And that crosses making sure all their data is in the right place, and make sure they have the right compute platforms. We think Trainium is going to be an important part of that. The last bit is helping add some applications on top. We think that we can add [the] extra benefit of helping employees and others get that effectiveness. Some of that is moving contact centers to the cloud. Some of that is helping get conversational assistants and AI assistants in the hands of employees, and so Amazon Q is a big part of that for us. And then it's also just empowering our broad partner ecosystem to go fast and help customers evolve as well.
    0 Comments ·0 Shares ·240 Views
  • TikTok Returns to Apple and Google App Stores in the U.S. After Trump Delayed Ban
    time.com
    By Zen Soo / APFebruary 14, 2025 2:30 AM ESTTikTok has returned to the app stores of Apple and Google in the U.S., after President Donald Trump delayed the enforcement of a TikTok ban.TikTok, which is operated by Chinese technology firm ByteDance, was removed from Apple and Googles app stores on Jan. 18 to comply with a law that requires ByteDance to divest the app or be banned in the U.S.The popular social media app, which has over 170 million American users, previously suspended its services in the U.S. for a day before restoring service following assurances from Trump that he would postpone banning the app. The TikTok service suspension briefly prompted thousands of users to migrate to RedNote, a Chinese social media app, while calling themselves TikTok refugees.The TikTok app became available to download again in the U.S. Apple App store and Google Play store after nearly a month. On Trumps first day in office, he signed an executive order to extend the enforcement of a ban on TikTok to April 5.TikTok has long faced troubles in the U.S., with the U.S. government claiming that its Chinese ownership and access to the data of millions of Americans makes it a national security risk.TikTok has denied allegations that it has shared U.S. user data at the behest of the Chinese government, and argued that the law requiring it to be divested or banned violates the First Amendment rights of its American users.During Trumps first term in office, he supported banning TikTok but later changed his mind, claiming that he had a warm spot for the app. TikTok CEO Shou Chew was among the attendees at Trumps inauguration ceremony.Trump has suggested that TikTok could be jointly owned, with half of its ownership being American. Potential buyers include real estate mogul Frank McCourt, Shark Tank investor Kevin OLeary and popular YouTuber Jimmy Donaldson, also known as MrBeast.Zen Soo reported from Hong Kong.
    0 Comments ·0 Shares ·216 Views
More Stories