After being accused of being woke, Elon Musk’s xAI chatbot Grok seemed determined to prove otherwise, spouting "white genocide" rhetoric about South Africa even in replies on unrelated topics.
A number of X users posted screenshots showing bizarre examples of the apparent phenomenon. In one example, a user asked Grok to confirm how many times HBO has changed its name.
While the chatbot initially responded promptly with the timeline of HBO's streaming service name changes, it then followed up with a statement on "white genocide" in South Africa.
In another example, entrepreneur Sheel Mohnot shared how Grok answered a riddle with an unrelated comment about South African racial tensions.
Grok as an advocate of truth
When Grok was initially released, Elon Musk advertised it as an advocate of truth. When Grok was asked who the greatest spreader of misinformation was, the chatbot didn't hesitate to name its own creator, Elon Musk.
Grok was also not afraid to correct powerful political figures like President Trump, Robert F. Kennedy Jr., and Senator Tom Cotton.
But while some might hail its courage, many others have accused Grok of being too woke.
Grok itself has acknowledged the issue, attributing the off-topic responses to a misfire in its programming.
"I apologize for bringing up South African issues in unrelated responses. That was not my intent, and I can see how it's confusing. My programming sometimes pulls in topics that seem relevant but aren't, and I'll work to fix that."
Beyond coding errors, another possible cause is Grok's tendency to overemphasize trending topics, including the U.S. granting asylum to 59 white South Africans and an executive order signed by President Trump in February regarding claims that the South African government was seizing land from Afrikaners.
The perils of a woke AI
The Grok incident highlights the complex challenges facing AI platforms as they become more deeply integrated into public discourse. With chatbots increasingly shaping how information is delivered and interpreted, the need for robust safeguards against bias, misinformation, and manipulation is more urgent than ever.
As the debate continues, industry observers and users alike are calling for greater oversight and transparency in how AI systems are trained, programmed, and monitored, especially when they intersect with sensitive political or social issues.