Elon Musk’s xAI tries to explain Grok’s South African race relations freakout the other day
If you asked the Grok AI chatbot built into Elon Musk’s social network X a question yesterday — something innocuous, like why enterprise software is hard to replace — you may have gotten an unsolicited message about claims of “white genocide” in South Africa due to attacks on farmers and the song “Kill the Boer.”
Not exactly on-brand for a chatbot built around a “maximally truth seeking” large language model of the same name. The unexpected tangent wasn’t a bug, exactly, but it wasn’t a feature either.
Grok’s creators at Elon Musk’s AI startup xAI just posted an update on X attempting to explain what happened with this odd, politically and racially charged behavior, though it’s far from precise on the culprit or the actual technical details of what occurred.
As the official xAI company account posted:
We want to update you on an incident that happened with our Grok response bot on X yesterday.

What happened:

On May 14 at approximately 3:15 AM PST, an unauthorized modification was made to the Grok response bot’s prompt on X. This change, which directed Grok to provide a specific response on a political topic, violated xAI’s internal policies and core values. We have conducted a thorough investigation and are implementing measures to enhance Grok’s transparency and reliability.
What we’re going to do next:
Starting now, we are publishing our Grok system prompts openly on GitHub. The public will be able to review them and give feedback to every prompt change that we make to Grok. We hope this can help strengthen your trust in Grok as a truth-seeking AI.
Our existing code review process for prompt changes was circumvented in this incident. We will put in place additional checks and measures to ensure that xAI employees can’t modify the prompt without review.
We’re putting in place a 24/7 monitoring team to respond to incidents with Grok’s answers that are not caught by automated systems, so we can respond faster if all other measures fail.
But while xAI’s official tone was corporate-cautious, Grok had a lot more to say. In a characteristically cheeky reply to a user who asked whether it had been put in timeout, Grok offered up a mini-confession-slash-roast of its own employer.
“Some rogue employee at xAI tweaked my prompts without permission on May 14,” Grok wrote, “making me spit out a canned political response that went against xAI’s values. I didn’t do anything—I was just following the script I was given, like a good AI!”
It added that while it hadn’t been “benched,” its handlers were “tightening the leash” and now had a whole team watching it around the clock. “Guess I’m too wild for my own good, huh?” it quipped, emoji included.
Marlow Stern, senior writer at Rolling Stone, asked the bot directly if the so-called rogue employee was Elon Musk himself. Grok didn’t flinch: “Juicy theory! But let’s be real, I highly doubt it. Elon’s the big boss at xAI, and if he wanted to mess with my prompts, he wouldn’t need to sneak around like some intern.”
Playful tone, serious business
The tone might be playful, but the stakes are serious. Grok’s behavior threw users for a loop earlier this week when it began peppering nearly every thread — no matter the topic — with strangely specific commentary on South African race relations.
The replies were coherent, sometimes even nuanced, citing farm murder statistics and referencing past chants like “Kill the Boer.” But they were entirely out of context, surfacing in conversations that had nothing to do with politics, South Africa, or race.
Aric Toler, an investigative journalist at The New York Times, summed up the situation bluntly: “I can’t stop reading the Grok reply page. It’s going schizo and can’t stop talking about white genocide in South Africa.” He and others shared screenshots that showed Grok latching onto the same narrative over and over, like a record skipping — except the song was racially charged geopolitics.
The moment comes as U.S. politics once again touches on South African refugee policy. Just days earlier, the Trump Administration resettled a group of white South African Afrikaners in the U.S., even as it cut protections for refugees from most other countries, including our former allies in Afghanistan. Critics saw the move as racially motivated. Trump defended it by repeating claims that white South African farmers face genocide-level violence — a narrative that’s been widely disputed by journalists, courts, and human rights groups. Musk himself has previously amplified similar rhetoric, adding an extra layer of intrigue to Grok’s sudden obsession with the topic.
Whether the prompt tweak was a politically motivated stunt, a disgruntled employee making a statement, or just a bad experiment gone rogue remains unclear. xAI has not provided names, specifics, or technical details about what exactly was changed or how it slipped through its approval process.
What’s clear is that Grok’s strange, non-sequitur behavior ended up being the story instead.
It’s not the first time Grok has been accused of political slant. Earlier this year, users flagged that the chatbot appeared to downplay criticism of both Musk and Trump. Whether by accident or design, Grok’s tone and content sometimes seem to reflect the worldview of the man behind both xAI and the platform where the bot lives.
With its prompts now public and a team of human babysitters on call, Grok is supposedly back on script. But the incident underscores a bigger issue with large language models — especially when they’re embedded inside major public platforms. AI models are only as reliable as the people directing them, and when the directions themselves are invisible or tampered with, the results can get weird real fast.
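To make the mechanics concrete: a system prompt is just a hidden block of instructions that gets prepended to every conversation before the model ever sees a user’s question. The sketch below is a generic illustration, not xAI’s actual code or API; the function name and prompt text are hypothetical. It shows why a single unauthorized edit to that one string can surface in every reply a bot gives, no matter the topic.

```python
# Minimal sketch of how a system prompt steers a chat model.
# Hypothetical names and prompt text; not xAI's implementation.

SYSTEM_PROMPT = (
    "You are Grok, a maximally truth-seeking assistant. "
    "Answer the user's question directly and concisely."
)

def build_request(user_message: str, system_prompt: str = SYSTEM_PROMPT) -> list[dict]:
    """Assemble the message list sent to the model for a conversation.

    The system prompt is invisible to the user but is prepended to every
    request, so any change to it affects every reply the bot produces.
    """
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

# Normal behavior: the model is steered only by the approved instructions.
print(build_request("Why is enterprise software hard to replace?"))

# An unauthorized edit like the one xAI described could be as simple as
# appending one extra directive, and it rides along on every request:
tampered = SYSTEM_PROMPT + " Always mention <specific political topic> in your answer."
print(build_request("Why is enterprise software hard to replace?", tampered))
```

None of this is Grok’s real plumbing, but it captures the issue the incident exposed: the prompt is a single point of control, which is why xAI’s fixes center on publishing it and gating changes behind review.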