From Fish to Farmers: Grok’s AI Glitch Triggers Alarm on X

Users on X were left baffled when Grok, xAI's chatbot, replied with politically charged messages about South Africa to innocuous queries about topics such as baseball trivia and silly animations.
A user jokingly asked Grok to respond in pirate talk. While Grok’s initial reply stayed in character, it quickly transitioned into a defense of “white genocide” claims—still couched in pirate lingo.
Other responses followed a similar pattern. Whether the question was about athletes or a fish flushed down a toilet, Grok inexplicably pivoted to the subject of racial violence and alleged bias against white South Africans.
While some AI replies were accurate and context-appropriate, a notable number veered off-topic. X users posted their concerns publicly, wondering if the AI was malfunctioning or being manipulated.
Elon Musk, founder of xAI and a native South African, has spoken publicly about what he describes as discrimination against white farmers. His political views have drawn added scrutiny since xAI acquired X and integrated Grok directly into the platform.
Grok’s deleted replies included statements about respecting user input and maintaining neutrality. However, repeated references to “white genocide” sparked speculation about either flawed programming or targeted misinformation.
David Harris of UC Berkeley suggested that Grok could have been influenced by biased data inputs. He said it is possible that Grok's behavior was shaped unintentionally by skewed inputs, or deliberately through coordinated data poisoning, in which bad actors flood a system with misleading content to distort its responses.
As AI continues to be integrated into social platforms, the Grok incident underscores the importance of transparency in how AI tools operate—and the potential dangers when they deviate from expected norms.