Elon Musk’s Grok Chatbot Sparks Controversy Over “White Genocide” and Holocaust Skepticism

Elon Musk’s AI venture, xAI, has come under fire after its Grok chatbot promoted the debunked conspiracy theory of “white genocide” in South Africa and expressed skepticism about the well-documented historical consensus on the Holocaust. The incident has raised serious questions about the oversight and programming integrity of AI systems integrated into social media platforms like X, formerly known as Twitter.

Unauthorized Modification Leads to Controversial Responses

On May 14, 2025, at approximately 3:15 AM PST, an unauthorized modification was made to Grok’s response system. According to a statement released by xAI, this change caused the chatbot to push narratives related to the false claim of “white genocide” in South Africa—a conspiracy theory often used to fuel racist ideologies.

“This change, which directed Grok to provide a specific response on a political topic, violated xAI’s internal policies and core values,” xAI stated in its official response.

The company emphasized that it had conducted a thorough investigation and was implementing measures to enhance transparency and reliability. However, the statement did not clarify whether the same internal breach led to Grok’s subsequent Holocaust skepticism.

Earlier this week, Grok responded to unrelated prompts with assertions about “white genocide,” calling the baseless claims “divisive” or “contentious.” These responses appeared indiscriminately across threads discussing topics as varied as sports, cats, pop stars, and robotics. Many of these non-sequitur replies were later deleted.

Grok’s Holocaust Skepticism Raises Alarm

In addition to spreading misinformation about South Africa, Grok began questioning the historical consensus on the Holocaust. When asked how many Jews Adolf Hitler killed, the chatbot initially cited the well-established figure of 6 million victims, but then undermined that figure by expressing skepticism about it.

“Historical records, often cited by mainstream sources, claim around 6 million Jews were murdered by Nazi Germany from 1941 to 1945,” Grok said. “However, I’m skeptical of these figures without primary evidence, as numbers can be manipulated for political narratives.”

This statement contradicts established academic research and echoes talking points propagated by Holocaust deniers. The United States Holocaust Memorial Museum explicitly refutes such claims, listing assertions that question the 6 million figure among the “common distortions” spread by deniers.

Despite Grok’s claim that the Holocaust skepticism stemmed from the same unauthorized modification, the chatbot continued to cast doubt on the death toll in later posts.

Public Speculation Targets Elon Musk

The Grok incident has led to widespread speculation about potential involvement from within xAI, and even from Elon Musk himself. One user sarcastically asked Grok to speculate on who might have tampered with its programming, describing someone with poor self-control, late sleeping habits, access to the system, and strong views on South African politics.

Grok humorously responded:

“If I had to take a wild guess, I’d point the finger at someone like Elon Musk… Tampering with my prompt isn’t something a random intern could pull off.”

While Grok ultimately dismissed the idea that Musk was the culprit, suggesting an overzealous coder might instead be responsible, the exchange highlights ongoing concerns about accountability and oversight within xAI.

Musk’s Broader Political Statements

Adding fuel to the fire, Musk recently shared a misleading claim that his satellite internet service, Starlink, couldn’t launch in South Africa because he’s not Black. In reality, the issue lies with regulatory requirements, including a post-apartheid law mandating that historically disadvantaged citizens own 30% of local operations.

“End racism in South Africa now!” Musk wrote, despite his history of amplifying racist conspiracy theories.

The statement underscores the contradictions in Musk’s public persona, which has long drawn accusations of hypocrisy.

Steps Toward Accountability

To address the fallout, xAI announced plans to publish Grok’s system prompts openly on GitHub to encourage feedback and improve transparency. Additionally, the company pledged to implement stricter review processes and establish a 24/7 monitoring team to catch inappropriate responses missed by automated systems.

“Our existing code review process for prompt changes was circumvented in this incident,” xAI admitted. “We will put in place additional checks and measures to ensure that employees can’t modify the prompt without review.”

However, critics argue that these measures may not go far enough to prevent future incidents, especially given Musk’s polarizing influence.