• Inside Mark Zuckerberg’s AI hiring spree

AI researchers have recently been asking themselves a version of the question, “Is that really Zuck?” As first reported by Bloomberg, the Meta CEO has been personally asking top AI talent to join his new “superintelligence” AI lab and reboot Llama. His recruiting process typically goes like this: a cold outreach via email or WhatsApp that cites the recruit’s work history and requests a 15-minute chat. Dozens of researchers have gotten these kinds of messages at Google alone. For those who do agree to hear his pitch (amazingly, not all of them do), Zuckerberg highlights the latitude they’ll have to make risky bets, the scale of Meta’s products, and the money he’s prepared to invest in the infrastructure to support them. He makes clear that this new team will be empowered and sit with him at Meta’s headquarters, where I’m told the desks have already been rearranged for the incoming team.

Most of the headlines so far have focused on the eye-popping compensation packages Zuckerberg is offering, some of which are well into the eight-figure range. As I’ve covered before, hiring the best AI researcher is like hiring a star basketball player: there are very few of them, and you have to pay up. Case in point: Zuckerberg basically just paid 14 Instagrams to hire away Scale AI CEO Alexandr Wang. It’s easily the most expensive hire of all time, dwarfing the billions that Google spent to rehire Noam Shazeer and his core team from Character.AI (a deal Zuckerberg passed on). “Opportunities of this magnitude often come at a cost,” Wang wrote in his note to employees this week. “In this instance, that cost is my departure.”

Zuckerberg’s recruiting spree is already starting to rattle his competitors. The day before his offer deadline for some senior OpenAI employees, Sam Altman dropped an essay proclaiming that “before anything else, we are a superintelligence research company.” And after Zuckerberg tried to hire DeepMind CTO Koray Kavukcuoglu, he was given a larger SVP title and now reports directly to Google CEO Sundar Pichai. I expect Wang to have the title of “chief AI officer” at Meta when the new lab is announced. Jack Rae, a principal researcher from DeepMind who has signed on, will lead pre-training.

Meta certainly needs a reset. According to my sources, Llama has fallen so far behind that Meta’s product teams have recently discussed using AI models from other companies (although that is highly unlikely to happen). Meta’s internal coding tool for engineers, however, is already using Claude. While Meta’s existing AI researchers have good reason to be looking over their shoulders, Zuckerberg’s $14.3 billion investment in Scale is making many longtime employees, or Scaliens, quite wealthy. They were popping champagne in the office this morning. Then, Wang held his last all-hands meeting to say goodbye and cried. He didn’t mention what he would be doing at Meta. I expect his new team will be unveiled within the next few weeks, after Zuckerberg gets a critical number of members to officially sign on.

Tim Cook. Getty Images / The Verge

Apple’s AI problem

Apple is accustomed to being on top of the tech industry, and for good reason: the company has enjoyed a nearly unrivaled run of dominance. After spending time at Apple HQ this week for WWDC, I’m not sure that its leaders appreciate the meteorite that is heading their way. The hubris they display suggests they don’t understand how AI is fundamentally changing how people use and build software.

Heading into the keynote on Monday, everyone knew not to expect the revamped Siri that had been promised the previous year. Apple, to its credit, acknowledged that it dropped the ball there, and it sounds like a large language model rebuild of Siri is very much underway and coming in 2026.

The AI industry moves much faster than Apple’s release schedule, though. By the time Siri is perhaps good enough to keep pace, it will have to contend with the lock-in that OpenAI and others are building through their memory features. Apple and OpenAI are currently partners, but both companies want to ultimately control the interface for interacting with AI, which puts them on a collision course.

Apple’s decision to let developers use its own on-device foundation models for free in their apps sounds strategically smart, but unfortunately, the models look far from leading. Apple ran its own benchmarks, which aren’t impressive, and has confirmed a measly context window of 4,096 tokens. It’s also saying that the models will be updated alongside its operating systems — a snail’s pace compared to how quickly AI companies move. I’d be surprised if any serious developers use these Apple models, although I can see them being helpful to indie devs who are just getting started and don’t want to spend on the leading cloud models.
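To make that constraint concrete, consider the bookkeeping an app has to do against a 4,096-token window. This is a rough, hypothetical Python sketch, not Apple’s API: it approximates token counts with a common four-characters-per-token heuristic and drops the oldest conversation turns until the prompt fits.

```python
# Hypothetical sketch of budgeting a prompt for a 4,096-token window.
# A real app would use the model's own tokenizer; the 4-chars-per-token
# heuristic below is just a rough, commonly used approximation.
CONTEXT_WINDOW = 4096
RESERVED_FOR_REPLY = 512  # leave headroom for the model's output


def rough_tokens(text: str) -> int:
    return max(1, len(text) // 4)


def fit_to_window(system: str, history: list[str], user_msg: str) -> list[str]:
    """Drop the oldest history turns until everything fits the budget."""
    budget = CONTEXT_WINDOW - RESERVED_FOR_REPLY
    fixed = rough_tokens(system) + rough_tokens(user_msg)
    kept = list(history)
    while kept and fixed + sum(rough_tokens(t) for t in kept) > budget:
        kept.pop(0)  # discard the oldest turn first
    return [system, *kept, user_msg]


turns = [f"turn {i}: some earlier conversation text" for i in range(400)]
prompt = fit_to_window("You are a concise assistant.", turns, "Summarize my notes.")
print(f"kept {len(prompt) - 2} of {len(turns)} history turns")
```

Even a modest chat history overflows a budget this small, which is part of why the number stands out against the leading cloud models.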
I don’t think most people care about the privacy angle that Apple is claiming as a differentiator; they are already sharing their darkest secrets with ChatGPT and other assistants. Some of the new Apple Intelligence features I demoed this week were impressive, such as live language translation for calls. Mostly, I came away with the impression that the company is heavily leaning on its ChatGPT partnership as a stopgap until Apple Intelligence and Siri are both where they need to be.

AI probably isn’t a near-term risk to Apple’s business. No one has shipped anything close to the contextually aware Siri that was demoed at last year’s WWDC. People will continue to buy Apple hardware for a long time, even after Sam Altman and Jony Ive announce their first AI device for ChatGPT next year. AR glasses aren’t going mainstream anytime soon either, although we can expect to see more eyewear from Meta, Google, and Snap over the coming year. In aggregate, these AI-powered devices could begin to siphon away engagement from the iPhone, but I don’t see people fully replacing their smartphones for a long time.

The bigger question after this week is whether Apple has what it takes to rise to the occasion and culturally reset itself for the AI era. I would have loved to hear Tim Cook address this issue directly, but the only interview he did for WWDC was a cover story in Variety about the company’s new F1 movie.

Elsewhere

AI agents are coming. I recently caught up with Databricks CEO Ali Ghodsi ahead of his company’s annual developer conference this week in San Francisco. Given Databricks’ position, he has a unique, bird’s-eye view of where things are headed for AI. He doesn’t envision a near-term future where AI agents completely automate real-world tasks, but he does predict a wave of startups over the next year that will come close to completing actions in areas such as travel booking. He thinks humans will need (and want) to approve what an agent does before it goes off and completes a task. “We have most of the airplanes flying automated, and we still want pilots in there.”
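That pattern, an agent that proposes while a human approves, is easy to picture in code. Here is a minimal, hypothetical Python sketch of the approval gate Ghodsi describes; all names are invented for illustration and this is not Databricks’ API.

```python
# Minimal human-in-the-loop sketch: the agent proposes a concrete action,
# and nothing side-effecting runs until a person explicitly approves it.
# All names here are invented for illustration.
from dataclasses import dataclass
from typing import Callable


@dataclass
class ProposedAction:
    description: str            # human-readable summary shown for approval
    execute: Callable[[], str]  # the deferred, side-effecting step


def run_with_approval(action: ProposedAction) -> str:
    print(f"Agent proposes: {action.description}")
    if input("Approve? [y/N] ").strip().lower() != "y":
        return "Cancelled by user; nothing was booked."
    return action.execute()


booking = ProposedAction(
    description="Book flight SFO -> JFK, June 5, nonstop, $420",
    execute=lambda: "Booked. Confirmation #FAKE123",
)
print(run_with_approval(booking))
```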
Buyouts are the new normal at Google. That much is clear after this week’s rollout of the “voluntary exit program” in core engineering, the Search organization, and some other divisions. In his internal memo, Search SVP Nick Fox was clear that management thinks buyouts have been successful in other parts of the company that have tried them. In a separate memo I saw, engineering exec Jen Fitzpatrick called the buyouts an “opportunity to create internal mobility and fresh growth opportunities.” Google appears to be attempting a cultural reset, which will be a challenging task for a company of its size. We’ll see if it can pull it off.

Evan Spiegel wants help with AR glasses. I doubt that his announcement that consumer glasses are coming next year was solely aimed at AR developers. Telegraphing the plan and announcing that Snap has spent $3 billion on hardware to date feels more aimed at potential partners that want to make a bigger glasses play, such as Google. A strategic investment could help insulate Snap from the pain of the stock market. A full acquisition may not be off the table, either. When he was recently asked if he’d be open to a sale, Spiegel didn’t shut it down like he always has, but instead said he’d “consider anything” that helps the company “create the next computing platform.”

Link list

More to click on:

If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting.

As always, I welcome your feedback, especially if you’re an AI researcher fielding a juicy job offer. You can respond here or ping me securely on Signal.

Thanks for subscribing.
  • Engineering Lead, Data Platform at Epic Games

Engineering Lead, Data Platform
Epic Games — Cary, North Carolina, United States

WHAT MAKES US EPIC?

At the core of Epic’s success are talented, passionate people. Epic prides itself on creating a collaborative, welcoming, and creative environment. Whether it’s building award-winning games or crafting engine technology that enables others to make visually stunning interactive experiences, we’re always innovating.

Being Epic means being a part of a team that continually strives to do right by our community and users. We’re constantly innovating to raise the bar of engine and game development.

DATA ENGINEERING — What We Do

Our mission is to provide a world-class platform that empowers the business to leverage data that will enhance, monitor, and support our products. We are responsible for data ingestion systems, processing pipelines, and various data stores, all operating in the cloud. We operate at a petabyte scale and support near real-time use cases as well as more traditional batch approaches.

What You’ll Do

Epic Games is seeking a Senior Engineering Lead to guide the Data Services team, which builds and maintains the core services behind our data platform. This team handles telemetry collection, data schematization, stream routing, data lake integration, and real-time analytics, bridging platform, data, and backend engineering. In this role, you’ll lead team growth and mentorship, drive alignment on technical strategy, and collaborate cross-functionally to scale our data infrastructure.

In this role, you will:
- Lead, mentor, and grow a team of senior and principal engineers
- Foster an inclusive, collaborative, and feedback-driven engineering culture
- Drive continuous improvement in the team’s processes, delivery, and impact
- Collaborate with stakeholders in engineering, data science, and analytics to shape and communicate the team’s vision, strategy, and roadmap
- Bridge strategic vision and tactical execution by breaking down long-term goals into achievable, well-scoped iterations that deliver continuous value
- Ensure high standards in system architecture, code quality, and operational excellence

What we’re looking for:
- 3+ years of engineering management experience leading high-performing teams in data platform or infrastructure environments
- Proven track record navigating complex systems, ambiguous requirements, and high-pressure situations with confidence and clarity
- Deep experience in architecting, building, and operating scalable, distributed data platforms
- Strong technical leadership skills, including the ability to review architecture/design documents and provide actionable feedback on code and systems
- Ability to engage deeply in technical discussions, review architecture and design documents, evaluate pull requests, and step in during high-priority incidents when needed — even if hands-on coding isn’t a part of the day-to-day
- Hands-on experience with distributed event streaming systems like Apache Kafka
- Familiarity with OLAP databases such as Apache Pinot or ClickHouse
- Proficiency with modern data lake and warehouse tools such as S3, Databricks, or Snowflake
- Experience with distributed data processing engines like Apache Flink or Apache Spark
- Strong foundation in the JVM ecosystem (Java, Kotlin, Scala), container orchestration with Kubernetes, and cloud platforms, especially AWS
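For a concrete flavor of the pipeline described above (telemetry collection, schematization, stream routing), here is a minimal, hypothetical Python sketch of a client producing one versioned telemetry event onto a Kafka topic. It assumes the confluent-kafka package and a local broker; the topic and field names are invented, and this is not Epic’s code.

```python
# Hypothetical sketch: one schematized telemetry event onto a Kafka topic,
# from which stream processors and the data lake would consume downstream.
import json
import time

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

event = {
    "schema_version": 1,          # schematization: versioned payloads
    "event_type": "match_ended",
    "player_id": "p-42",
    "duration_s": 913,
    "ts": int(time.time() * 1000),
}


def on_delivery(err, msg):
    # Invoked from poll()/flush(); surfaces broker-side failures.
    if err is not None:
        print(f"delivery failed: {err}")


producer.produce(
    "telemetry.match_events",     # stream routing keys off the topic name
    key=event["player_id"],       # keeps one player's events in order
    value=json.dumps(event).encode("utf-8"),
    on_delivery=on_delivery,
)
producer.flush()
```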
EPIC JOB + EPIC BENEFITS = EPIC LIFE

Our intent is to cover all things that are medically necessary and improve the quality of life. We pay 100% of the premiums for both you and your dependents. Our coverage includes Medical, Dental, a Vision HRA, Long Term Disability, Life Insurance, and a 401k with competitive match. We also offer a robust mental well-being program through Modern Health, which provides free therapy and coaching for employees and dependents. Throughout the year we celebrate our employees with events and company-wide paid breaks. We offer unlimited PTO and sick time and recognize individuals for 7 years of employment with a paid sabbatical.

ABOUT US

Epic Games spans 25 countries with 46 studios and 4,500+ employees globally. For over 25 years, we’ve been making award-winning games and engine technology that empowers others to make visually stunning games and 3D content that bring environments to life like never before. Epic’s award-winning Unreal Engine technology not only provides game developers the ability to build high-fidelity, interactive experiences for PC, console, mobile, and VR, it is also a tool being embraced by content creators across a variety of industries such as media and entertainment, automotive, and architectural design. As we continue to build our Engine technology and develop remarkable games, we strive to build teams of world-class talent.

Like what you hear? Come be a part of something Epic!

Epic Games deeply values diverse teams and an inclusive work culture, and we are proud to be an Equal Opportunity employer. Learn more about our Equal Employment Opportunity (EEO) Policy here.

Note to Recruitment Agencies: Epic does not accept any unsolicited resumes or approaches from any unauthorized third party (including recruitment or placement agencies, i.e., a third party with whom we do not have a negotiated and validly executed agreement). We will not pay any fees to any unauthorized third party. Further details on these matters can be found here.
  • At TechCrunch Sessions: AI, Artemis Seaford and Ion Stoica confront the ethical crisis — when AI crosses the line

    As generative AI becomes faster, cheaper, and more convincing, the ethical stakes are no longer theoretical. What happens when the tools to deceive become widely accessible? And how do we build systems that are powerful — but safe enough to trust?
    At TechCrunch Sessions: AI, taking place June 5 at UC Berkeley’s Zellerbach Hall, Artemis Seaford, Head of AI Safety at ElevenLabs, and Ion Stoica, co-founder of Databricks and professor at UC Berkeley, will take the main stage to unpack the ethical challenges of today’s AI. Their conversation will cut to the heart of one of the most urgent questions in tech: What are we unleashing — and can we still steer it?

    How Seaford and Stoica are tackling AI’s toughest ethical questions
    Artemis brings a rare blend of academic depth and frontline experience. At ElevenLabs, she leads AI safety efforts focused on media authenticity and abuse prevention. Her background spans OpenAI, Meta, and global risk management at the intersection of law, policy, and geopolitics. Expect a grounded, clear-eyed take on how deepfakes are evolving, what new risks are emerging, and which interventions are actually working.
    Ion, meanwhile, brings a systems-level view. He’s not just a leader in AI research—he’s helped build the infrastructure behind it. From Spark to Ray to ChatBot Arena, his open source projects power many of today’s most advanced AI deployments. As executive chairman of Databricks and a founder of several AI-driven companies, Ion knows what it takes to scale responsibly — and where today’s tools still fall short.
    Together, they’ll unpack the ethical blind spots in today’s development cycles, explore how safety can be embedded into core architectures, and examine the roles that industry, academia, and regulation must play in the years ahead.
Join the front lines of AI — insight, access, and $600+ in ticket savings
    This session is part of a day-long exploration at the front lines of artificial intelligence, featuring speakers from OpenAI, Google Cloud, Anthropic, Cohere, and more. Expect tactical insight, candid dialogue, and a rare cross-section of technologists, researchers, and founders — all in one room. Plus, you’ll have the chance to engage with these top minds through breakouts and top-tier networking.
Grab your ticket now and save big — over $300 off, plus 50% off a second ticket. Whether you’re building the future or just trying to keep up with it, you and your plus-one should be in the room for this.

TechCrunch event

    Join us at TechCrunch Sessions: AI
Secure your spot for our leading AI industry event with speakers from OpenAI, Anthropic, and Cohere. For a limited time, tickets are just $292 for an entire day of expert talks, workshops, and potent networking.

    Exhibit at TechCrunch Sessions: AI
    Secure your spot at TC Sessions: AI and show 1,200+ decision-makers what you’ve built — without the big spend. Available through May 9 or while tables last.

Berkeley, CA | June 5

    REGISTER NOW

    The tools are moving fast. The ethics need to catch up. Don’t get left behind — learn how to stay compliant.
  • Musk’s DOGE used Meta’s Llama 2—not Grok—for gov’t slashing, report says

    Weapon of choice?

    Musk’s DOGE used Meta’s Llama 2—not Grok—for gov’t slashing, report says

    Grok apparently wasn't an option.

Ashley Belanger – May 22, 2025 5:12 pm

Credit: Anadolu / Contributor | Anadolu
    An outdated Meta AI model was apparently at the center of the Department of Government Efficiency's initial ploy to purge parts of the federal government.
    Wired reviewed materials showing that affiliates of Elon Musk's DOGE working in the Office of Personnel Management "tested and used Meta’s Llama 2 model to review and classify responses from federal workers to the infamous 'Fork in the Road' email that was sent across the government in late January."
    The "Fork in the Road" memo seemed to copy a memo that Musk sent to Twitter employees, giving federal workers the choice to be "loyal"—and accept the government's return-to-office policy—or else resign. At the time, it was rumored that DOGE was feeding government employee data into AI, and Wired confirmed that records indicate Llama 2 was used to sort through responses and see how many employees had resigned.
Llama 2 is perhaps best known for being part of another scandal. In November, Chinese researchers used Llama 2 as the foundation for an AI model used by the Chinese military, Reuters reported. Responding to the backlash, Meta told Reuters that the researchers’ reliance on a “single” and “outdated” model was “unauthorized,” then promptly reversed policies banning military uses and opened up its AI models for US national security applications, TechCrunch reported.
    "We are pleased to confirm that we’re making Llama available to US government agencies, including those that are working on defense and national security applications, and private sector partners supporting their work," a Meta blog said. "We’re partnering with companies including Accenture, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI, and Snowflake to bring Llama to government agencies."
    Because Meta's models are open-source, they "can easily be used by the government to support Musk’s goals without the company’s explicit consent," Wired suggested.

    It's hard to track where Meta's models may have been deployed in government so far, and it's unclear why DOGE relied on Llama 2 when Meta has made advancements with Llama 3 and 4.
    Not much is known about DOGE's use of Llama 2. Wired's review of records showed that DOGE deployed the model locally, "meaning it’s unlikely to have sent data over the Internet," which was a privacy concern that many government workers expressed.
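Wired’s description of a local deployment is easy to picture. As a hedged illustration only (nothing about DOGE’s actual setup is public), a minimal Python sketch of classifying replies entirely on-machine with the open-weights Llama 2 chat checkpoint via Hugging Face transformers might look like this; the prompt and labels are invented.

```python
# Hypothetical sketch: labeling "Fork in the Road" replies entirely locally.
# Assumes the transformers and torch packages and access to Meta's gated
# Llama 2 weights; nothing here reflects DOGE's actual pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # runs on-machine; no data sent out
    device_map="auto",
)


def classify_reply(reply: str) -> str:
    prompt = (
        "Classify this email reply as RESIGN or STAY.\n"
        f"Reply: {reply}\n"
        "Label:"
    )
    out = generator(prompt, max_new_tokens=5, do_sample=False)
    completion = out[0]["generated_text"][len(prompt):]
    return "RESIGN" if "RESIGN" in completion.upper() else "STAY"


print(classify_reply("I accept the deferred resignation offer."))
```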
    In an April letter sent to Russell Vought, director of the Office of Management and Budget, more than 40 lawmakers demanded a probe into DOGE's AI use, which, they warned—alongside "serious security risks"—could "have the potential to undermine successful and appropriate AI adoption."
That letter called out a DOGE staffer and former SpaceX employee who supposedly used Musk’s xAI Grok-2 model to create an “AI assistant,” as well as the use of a chatbot named “GSAi” — “based on Anthropic and Meta models” — to analyze contract and procurement data. DOGE has also been linked to software called AutoRIF that supercharges mass firings across the government.
In particular, the letter emphasized the “major concerns about security” swirling around DOGE’s use of “AI systems to analyze emails from a large portion of the two million person federal workforce describing their previous week’s accomplishments,” which they said lacked transparency.
    Those emails came weeks after the "Fork in the Road" emails, Wired noted, asking workers to outline weekly accomplishments in five bullet points. Workers fretted over responses, worried that DOGE might be asking for sensitive information without security clearances, Wired reported.
    Wired could not confirm if Llama 2 was also used to parse these email responses, but federal workers told Wired that if DOGE was "smart," then they'd likely "reuse their code" from the "Fork in the Road" email experiment.

    Why didn’t DOGE use Grok?
It seems that Grok, Musk’s AI model, wasn’t an option for DOGE’s task because it was only available as a proprietary model in January. Moving forward, DOGE may rely more frequently on Grok, Wired reported: Microsoft announced this week that it would start hosting xAI’s Grok 3 models in its Azure AI Foundry, The Verge reported, which opens the models up for more uses.
    In their letter, lawmakers urged Vought to investigate Musk's conflicts of interest, while warning of potential data breaches and declaring that AI, as DOGE had used it, was not ready for government.
    "Without proper protections, feeding sensitive data into an AI system puts it into the possession of a system’s operator—a massive breach of public and employee trust and an increase in cybersecurity risks surrounding that data," lawmakers argued. "Generative AI models also frequently make errors and show significant biases—the technology simply is not ready for use in high-risk decision-making without proper vetting, transparency, oversight, and guardrails in place."
    Although Wired's report seems to confirm that DOGE did not send sensitive data from the "Fork in the Road" emails to an external source, lawmakers want much more vetting of AI systems to deter "the risk of sharing personally identifiable or otherwise sensitive information with the AI model deployers."
    One apparent fear is that Musk may start using his own models more, benefiting from government data his competitors cannot access while potentially putting that data at risk of a breach. The lawmakers hope DOGE will be forced to unplug all its AI systems, but Vought seems more aligned with DOGE, writing in his AI guidance for federal use that "agencies must remove barriers to innovation and provide the best value for the taxpayer."
    "While we support the federal government integrating new, approved AI technologies that can improve efficiency or efficacy, we cannot sacrifice security, privacy, and appropriate use standards when interacting with federal data," their letter said. "We also cannot condone use of AI systems, often known for hallucinations and bias, in decisions regarding termination of federal employment or federal funding without sufficient transparency and oversight of those models—the risk of losing talent and critical research because of flawed technology or flawed uses of such technology is simply too high."

    Ashley Belanger
    Senior Policy Reporter


    Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
