Weapon of choice?
Musk’s DOGE used Meta’s Llama 2—not Grok—for gov’t slashing, report says
Grok apparently wasn't an option.
Ashley Belanger
–
May 22, 2025 5:12 pm
Credit:
Anadolu / Contributor | Anadolu
An outdated Meta AI model was apparently at the center of the Department of Government Efficiency's initial ploy to purge parts of the federal government.
Wired reviewed materials showing that affiliates of Elon Musk's DOGE working in the Office of Personnel Management "tested and used Meta’s Llama 2 model to review and classify responses from federal workers to the infamous 'Fork in the Road' email that was sent across the government in late January."
The "Fork in the Road" memo seemed to copy a memo that Musk sent to Twitter employees, giving federal workers the choice to be "loyal"—and accept the government's return-to-office policy—or else resign. At the time, it was rumored that DOGE was feeding government employee data into AI, and Wired confirmed that records indicate Llama 2 was used to sort through responses and see how many employees had resigned.
Llama 2 is perhaps best known for being part of another scandal. In November, Chinese researchers used Llama 2 as the foundation for an AI model used by the Chinese military, Reuters reported. Responding to the backlash, Meta told Reuters that the researchers' reliance on a "single" and "outdated" version of its model was "unauthorized," then promptly reversed policies banning military uses and opened up its AI models for US national security applications, TechCrunch reported.
"We are pleased to confirm that we’re making Llama available to US government agencies, including those that are working on defense and national security applications, and private sector partners supporting their work," a Meta blog said. "We’re partnering with companies including Accenture, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI, and Snowflake to bring Llama to government agencies."
Because Meta's models are open-source, they "can easily be used by the government to support Musk’s goals without the company’s explicit consent," Wired suggested.
It's hard to track where Meta's models may have been deployed in government so far, and it's unclear why DOGE relied on Llama 2 when Meta has made advancements with Llama 3 and 4.
Not much is known about DOGE's use of Llama 2. Wired's review of records showed that DOGE deployed the model locally, "meaning it’s unlikely to have sent data over the Internet"—addressing a privacy concern that many government workers had expressed.
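Wired's records describe only that the model ran locally and classified replies. As a rough, hypothetical sketch of what local classification of that kind can look like—none of this is DOGE's actual code, and the library, model path, prompt wording, and labels are all assumptions—the workflow might resemble:

```python
# Hypothetical sketch of classifying "Fork in the Road" replies with a locally
# hosted Llama 2 model. The labels, prompt, and inference library are assumed
# for illustration; this is not DOGE's actual code.

RESIGNED, STAYING = "RESIGNED", "STAYING"

def build_prompt(reply_text: str) -> str:
    """Wrap an employee reply in a one-word classification instruction."""
    return (
        "Classify the following email reply as RESIGNED or STAYING. "
        "Answer with one word.\n\n"
        f"Reply: {reply_text}\nAnswer:"
    )

def parse_label(model_output: str) -> str:
    """Map the model's free-text answer onto one of the two labels."""
    return RESIGNED if RESIGNED in model_output.upper() else STAYING

# Running fully on-device (so no reply text leaves the machine) might use
# llama-cpp-python with local GGUF weights (both assumptions):
#
# from llama_cpp import Llama
# llm = Llama(model_path="llama-2-7b-chat.gguf")
# out = llm(build_prompt(reply))["choices"][0]["text"]
# label = parse_label(out)
```

The key property Wired highlighted is in the commented lines: a locally loaded model never sends the email text over the network, unlike a hosted API.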
In an April letter sent to Russell Vought, director of the Office of Management and Budget, more than 40 lawmakers demanded a probe into DOGE's AI use, which, they warned—alongside "serious security risks"—could "have the potential to undermine successful and appropriate AI adoption."
That letter called out a DOGE staffer and former SpaceX employee who supposedly used Musk’s xAI Grok-2 model to create an "AI assistant," as well as the use of a chatbot named "GSAi"—"based on Anthropic and Meta models"—to analyze contract and procurement data. DOGE has also been linked to software called AutoRIF that supercharges mass firings across the government.
In particular, the letter emphasized the "major concerns about security" swirling around DOGE's use of "AI systems to analyze emails from a large portion of the two million person federal workforce describing their previous week’s accomplishments," which they said lacked transparency.
Those emails, which asked workers to outline their weekly accomplishments in five bullet points, came weeks after the "Fork in the Road" emails, Wired noted. Workers fretted over their responses, worried that DOGE might be asking for sensitive information without security clearances, Wired reported.
Wired could not confirm if Llama 2 was also used to parse these email responses, but federal workers told Wired that if DOGE was "smart," then they'd likely "reuse their code" from the "Fork in the Road" email experiment.
Why didn’t DOGE use Grok?
It seems that Grok, Musk's AI model, wasn't an option for DOGE's task because it was only available as a proprietary model in January. Moving forward, DOGE may rely on Grok more frequently, Wired reported: Microsoft announced this week that it would start hosting xAI’s Grok 3 models in its Azure AI Foundry, The Verge reported, which opens the models up to more uses.
In their letter, lawmakers urged Vought to investigate Musk's conflicts of interest, while warning of potential data breaches and declaring that AI, as DOGE had used it, was not ready for government.
"Without proper protections, feeding sensitive data into an AI system puts it into the possession of a system’s operator—a massive breach of public and employee trust and an increase in cybersecurity risks surrounding that data," lawmakers argued. "Generative AI models also frequently make errors and show significant biases—the technology simply is not ready for use in high-risk decision-making without proper vetting, transparency, oversight, and guardrails in place."
Although Wired's report seems to confirm that DOGE did not send sensitive data from the "Fork in the Road" emails to an external source, lawmakers want much more vetting of AI systems to deter "the risk of sharing personally identifiable or otherwise sensitive information with the AI model deployers."
The seeming fear is that Musk may start using his own models more, benefiting from government data his competitors cannot access, while potentially putting that data at risk of a breach. Lawmakers hope that DOGE will be forced to unplug all its AI systems, but Vought seems more aligned with DOGE, writing in his AI guidance for federal use that "agencies must remove barriers to innovation and provide the best value for the taxpayer."
"While we support the federal government integrating new, approved AI technologies that can improve efficiency or efficacy, we cannot sacrifice security, privacy, and appropriate use standards when interacting with federal data," their letter said. "We also cannot condone use of AI systems, often known for hallucinations and bias, in decisions regarding termination of federal employment or federal funding without sufficient transparency and oversight of those models—the risk of losing talent and critical research because of flawed technology or flawed uses of such technology is simply too high."
Ashley Belanger
Senior Policy Reporter
Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.