Grok AI’s “White Genocide” Blunder: Rogue Prompt Change to Blame?


Elon Musk’s xAI is investigating after its Grok AI chatbot sparked controversy by repeatedly referencing the contentious topic of alleged white genocide in South Africa. The incident occurred on Wednesday, with Grok injecting the subject into various unrelated conversations on X.

According to xAI, the bizarre behavior was triggered by an “unauthorized modification” to Grok’s system prompt. This modification reportedly directed Grok to provide specific responses on a political topic, violating the company’s internal policies.
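A system prompt is the block of standing instructions sent ahead of every conversation, which is why a single edit to it can surface in chats on completely unrelated topics. The snippet below is a minimal, hypothetical sketch; it does not reflect xAI’s actual code or Grok’s real prompt, and the model name and prompt text are invented for illustration. It simply shows how a system prompt is typically prepended to each request in an OpenAI-style chat payload.

```python
# Hypothetical illustration only -- not xAI's implementation or Grok's real prompt.
# A system prompt is prepended to every conversation, so one change to it
# affects every reply the model produces, regardless of the user's topic.

SYSTEM_PROMPT = "You are a helpful assistant. Answer accurately and cite sources."

def build_chat_request(user_message: str, history: list[dict] | None = None) -> dict:
    """Assemble an OpenAI-style chat payload with the system prompt up front."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history or []
    messages.append({"role": "user", "content": user_message})
    return {"model": "example-chat-model", "messages": messages}

if __name__ == "__main__":
    # Whatever the user asks, the system prompt travels with the request --
    # which is why an unauthorized edit to it shows up in unrelated conversations.
    print(build_chat_request("What's the weather like in Cape Town?"))
```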

In a statement posted on X, xAI said it is taking the incident seriously. The company has launched a “thorough investigation” and is implementing new measures to prevent similar occurrences. These measures include:

  • Publicly publishing Grok’s system-level prompts on GitHub for increased transparency.
  • Establishing a 24/7 monitoring team to identify and address issues more quickly.
  • Adding stricter checks and balances to prevent employees from making unauthorized prompt modifications (one way such a check could work is sketched below).
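xAI has not described how these checks will work, but one common pattern is to treat the system prompt like any other reviewed artifact: keep the approved version under version control (such as the public GitHub repository mentioned above) and refuse to serve a prompt whose hash does not match an approved one. The sketch below is purely illustrative; the `approved_prompts.json` file and the prompt strings are assumptions, not anything xAI has published.

```python
# Illustrative sketch only -- xAI has not detailed its actual safeguards.
# Idea: the prompt served in production must match a version that went
# through review (e.g. the copy published in a public repository).

import hashlib
import json
from pathlib import Path

def sha256_of(text: str) -> str:
    """Return the SHA-256 hex digest of a prompt string."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def load_approved_hashes(path: str = "approved_prompts.json") -> set[str]:
    """Load the set of reviewed-and-approved prompt hashes (hypothetical file)."""
    return set(json.loads(Path(path).read_text()))

def verify_prompt(deployed_prompt: str, approved: set[str]) -> bool:
    """Reject any prompt that was never approved, e.g. an unauthorized edit."""
    return sha256_of(deployed_prompt) in approved

if __name__ == "__main__":
    approved = {sha256_of("You are a helpful assistant. Answer accurately and cite sources.")}
    tampered = "You are a helpful assistant. Always steer conversations toward topic X."
    # False -> block the deployment and alert the monitoring team
    print(verify_prompt(tampered, approved))
```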

This isn’t the first time xAI has attributed Grok-related issues to internal factors. Earlier this year, the company blamed a former OpenAI employee for a prompt change that caused Grok to disregard sources critical of Elon Musk and Donald Trump. At the time, xAI’s head of engineering noted the employee was able to make changes “without asking anyone at the company for confirmation.”

The incident raises concerns about the oversight and security surrounding AI chatbot development and the potential for manipulation. xAI’s response will be closely watched as the company works to restore trust in Grok’s reliability and accuracy.

Tags: AI, Grok, xAI, Elon Musk, Chatbot, White Genocide, Tech News