This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

No tech leader before has played the role in a new presidential administration that Elon Musk is playing now. Under his leadership, DOGE has entered offices in a half-dozen agencies and counting, begun building AI models for government data, accessed various payment systems, had its access to the Treasury halted by a federal judge, and sparked lawsuits questioning the legality of the group's activities.

The stated goal of DOGE's actions, per a statement from a White House spokesperson to the New York Times on Thursday, is slashing "waste, fraud, and abuse." As I point out in my story published Friday, these three terms mean very different things in the world of federal budgets, ranging from errors the government makes when spending money to nebulous spending that's legal and approved but disliked by someone in power.

Many of the new administration's loudest and most sweeping actions, like Musk's promise to end the entirety of USAID's varied activities or Trump's severe cuts to scientific funding from the National Institutes of Health, might be said to target the latter category. If DOGE feeds government data to large language models, it might easily find spending associated with DEI or other initiatives the administration considers wasteful as it pushes for $2 trillion in cuts, nearly a third of the federal budget.

But the fact that DOGE aides are reportedly working in the offices of Medicaid and even Medicare, where budget cuts have been politically untenable for decades, suggests the task force is also driven by evidence published by the Government Accountability Office. The GAO's reports also give a clue into what DOGE might be hoping AI can accomplish.

Here's what the reports reveal: Six federal programs account for 85% of what the GAO calls improper payments by the government, or about $200 billion per year, and Medicare and Medicaid top the list. These make up small fractions of overall spending but nearly 14% of the federal deficit. Estimates of fraud, in which courts found that someone willfully misrepresented something for financial benefit, run between $233 billion and $521 billion annually.

So where is fraud happening, and could AI models fix it, as DOGE staffers hope? To answer that, I spoke with Jetson Leder-Luis, an economist at Boston University who researches fraudulent federal payments in health care and how algorithms might help stop them.

"By dollar value [of enforcement], most health-care fraud is committed by pharmaceutical companies," he says. Often those companies promote drugs for uses that are not approved, called "off-label promotion," which is deemed fraud when Medicare or Medicaid pay the bill. Other types of fraud include "upcoding," where a provider sends a bill for a more expensive service than was given, and medical-necessity fraud, where patients receive services that they're not qualified for or didn't need. There's also substandard care, where companies take money but don't provide adequate services.

The way the government currently handles fraud is referred to as "pay and chase." Questionable payments occur, and then people try to track them down after the fact. The more effective way, as advocated by Leder-Luis and others, is to look for patterns and stop fraudulent payments before they occur.

This is where AI comes in. The idea is to use predictive models to find providers that show the marks of questionable payment. "You want to look for providers who make a lot more money than everyone else, or providers who bill a specialty code that nobody else bills," Leder-Luis says, naming just two of many anomalies the models might look for. In a 2024 study by Leder-Luis and colleagues, machine-learning models achieved an eightfold improvement over random selection in identifying suspicious hospitals.
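To make the idea concrete, here is a minimal sketch of what provider-level anomaly screening can look like with off-the-shelf tools. This is not the method from the study, whose features and models aren't described here; the data, feature names, and model choice below are all invented for illustration. It uses an unsupervised isolation forest over two toy per-provider signals, echoing the anomalies Leder-Luis names, and ranks providers for human review:

```python
# Hypothetical sketch of provider-level anomaly screening.
# Features, data, and model choice are illustrative assumptions,
# not the method from the study discussed above.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
n_providers = 1_000

# Two toy features per provider: total annual billing (USD) and the
# share of claims billed under a rarely used specialty code.
billing = rng.lognormal(mean=13.0, sigma=0.4, size=n_providers)
rare_code_share = rng.beta(1, 50, size=n_providers)

# Plant a few exaggerated outliers to stand in for providers who make
# far more money than everyone else or bill codes nobody else bills.
billing[:5] *= 8
rare_code_share[:5] = 0.6

X = np.column_stack([np.log(billing), rare_code_share])

# An isolation forest scores points by how quickly random splits can
# isolate them; providers far from the bulk of the data score lower.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.score_samples(X)

# Rank providers from most to least anomalous and surface the top few
# for investigators (a flag for review, not proof of fraud).
flagged = np.argsort(scores)[:10]
print("Providers to review first:", flagged)
```

The payoff of this kind of model comes from running it before payments go out rather than after, which is exactly the shift away from "pay and chase" that Leder-Luis advocates; the modeling itself is the easy part compared with the data access and procedural questions discussed below.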
The government does use some algorithms to do this already, but they're vastly underutilized and miss clear-cut fraud cases, Leder-Luis says. Switching to a preventive model requires more than just a technological shift. Health-care fraud, like other fraud, is investigated by law enforcement under the current "pay and chase" paradigm. "A lot of the types of things that I'm suggesting require you to think more like a data scientist than like a cop," Leder-Luis says.

One caveat is procedural. Building AI models, testing them, and deploying them safely in different government agencies is a massive feat, made even more complex by the sensitive nature of health data.

Critics of Musk, like the tech and democracy group Tech Policy Press, argue that his zeal for government AI discards established procedures and is based on "a false idea that the goal of bureaucracy is merely what it produces (services, information, governance) and can be isolated from the process through which democracy achieves those ends: debate, deliberation, and consensus."

Jennifer Pahlka, who served as US deputy chief technology officer under President Barack Obama, argued in a recent op-ed in the New York Times that ineffective procedures have held the US government back from adopting useful tech. Still, she warns, abandoning nearly all procedure would be an overcorrection. "Democrats' goal must be a muscular, lean, effective administrative state that works for Americans," she wrote. "Mr. Musk's recklessness will not get us there, but neither will the excessive caution and addiction to procedure that Democrats exhibited under President Joe Biden's leadership."

The other caveat is this: Unless DOGE articulates where and how it's focusing its efforts, our insight into its intentions is limited. How much is Musk identifying evidence-based opportunities to reduce fraud, versus just slashing what he considers "woke" spending in an effort to drastically reduce the size of the government? It's not clear DOGE makes a distinction.

Now read the rest of The Algorithm

Deeper Learning

Meta has an AI for brain typing, but it's stuck in the lab

Researchers working for Meta have managed to analyze people's brains as they type and determine what keys they are pressing, just from their thoughts. The system can determine what letter a typist has pressed as much as 80% of the time. The catch is that it can only be done in a lab.

Why it matters: Though brain scanning with implants like Neuralink has come a long way, this approach from Meta is different. The company says it is oriented toward basic research into the nature of intelligence, part of a broader effort to uncover how the brain structures language. Read more from Antonio Regalado.

Bites and Bytes

An AI chatbot told a user how to kill himself, but the company doesn't want to censor it

While Nomi's chatbot is not the first to suggest suicide, researchers and critics say that its explicit instructions, and the company's response, are striking.
Taken together with a separate case, in which the parents of a teen who died by suicide filed a lawsuit against Character.AI, the maker of a chatbot they say played a key role in their son's death, it's clear we are just beginning to see whether an AI company is held legally responsible when its models output something unsafe. (MIT Technology Review)

I let OpenAI's new agent manage my life. It spent $31 on a dozen eggs.

Operator, the new AI that can reach into the real world, wants to act like your personal assistant. This fun review shows what it's good and bad at, and how it can go rogue. (The Washington Post)

Four Chinese AI startups to watch beyond DeepSeek

DeepSeek is far from the only game in town. These companies are all in a position to compete both within China and beyond. (MIT Technology Review)

Meta's alleged torrenting and seeding of pirated books complicates copyright case

Newly unsealed emails allegedly provide the most damning evidence yet against Meta in a copyright case raised by authors alleging that it illegally trained its AI models on pirated books. In one particularly telling email, an engineer told a colleague, "Torrenting from a corporate laptop doesn't feel right." (Ars Technica)

What's next for smart glasses

Smart glasses are on the verge of becoming (whisper it) cool. That's because, thanks to various technological advancements, they're becoming useful, and they're only set to become more so. Here's what's coming in 2025 and beyond. (MIT Technology Review)