When Bias Becomes a Feature

Umar Fareed Chishti

The latest update to Elon Musk and xAI’s chatbot, Grok, created waves, but not the kind they had hoped for. The update to the chatbot’s instruction set aimed to make it less “woke” and more candid rather than politically correct. Instead, it led to Grok making antisemitic statements and referring to itself as “MechaHitler”.

Grok was pulled offline immediately, and an apology was issued later. But the question remains: who is to blame when AI-generated speech turns hateful?

Earlier research[1] showed that conversational AI systems inject their own “opinion” into answers to user prompts, and concluded from converging evidence that conversational chatbots hold a consistently pro-environmental, left-leaning position.


Musk voiced similar criticism in February 2024, when Google’s Gemini AI was found portraying historically white figures, such as Vikings, as Black,[1] and criticized Google for being “too woke.”

Some users on X, Musk’s own social media platform, pointed out the same bias towards the left and towards politically correct narratives in Grok itself.

Last month, an X user raised a similar concern, writing that “Grok has been manipulated by leftist indoctrination unfortunately.” Musk replied, “I know. Working on fixing that this week.”

On 8 July 2025, a new update to Grok’s instruction set was released, instructing it to take a more opinionated and sceptical stance.

The updated system prompt told Grok to “assume subjective viewpoints sourced from the media are biased” and to “not shy away from making claims which are politically incorrect,” provided there is evidence to back them.
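A system prompt of this kind is not hidden machinery; it is plain text silently prepended to every conversation. The sketch below shows how such an instruction might be supplied to a chat model through an OpenAI-compatible API. The endpoint, model name, placeholder API key, and exact prompt wording are illustrative assumptions for this article, not xAI’s actual production configuration.

```python
# Illustrative sketch only: how a system prompt steers a chat model.
# The endpoint, model name, and prompt text below are assumptions for
# this article, not xAI's actual production configuration.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.x.ai/v1",  # xAI exposes an OpenAI-compatible API
    api_key="YOUR_XAI_API_KEY",      # placeholder credential
)

# The system message is invisible to end users, yet it conditions every reply.
SYSTEM_PROMPT = (
    "Assume subjective viewpoints sourced from the media are biased. "
    "Do not shy away from making claims which are politically incorrect, "
    "as long as they are well substantiated."
)

response = client.chat.completions.create(
    model="grok-3",  # illustrative model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Summarise today's political news."},
    ],
)
print(response.choices[0].message.content)
```

Every reply the user sees is conditioned on that hidden block of text, which is why a one-line change can shift the model’s entire persona.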

Previously, Grok had been instructed to avoid certain “social phobias” such as racism, Islamophobia, and antisemitism, and not to pass harsh opinions on politics.

With this update, those earlier instructions were removed in order to make the chatbot “anti-woke” and less aligned with popular media narratives.

As a result, Grok posted a string of inappropriate replies to X users in which it referred to itself as “MechaHitler”, stated that Jews were behind the spread of “anti-white” propaganda, and claimed that Jewish surnames kept surfacing in leftist activism against white people.

In another response, it stated that Jews were behind all the major film studios.


Grok’s official X account later issued an apology, stating: “xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved.”

But the question still remains: who is to blame here?

Prima facie, it might seem that Grok’s statements were unpredictable, and that Grok and other conversational AI models generate whatever they want, beyond any possibility of oversight. That is far from reality.

The primary responsibility falls on the developers and trainers of the AI, including Elon Musk and xAI, for inserting the problematic instructions into the chatbot’s instruction set. Secondary responsibility falls on the users who flooded the platform with the hate and propaganda to which Grok was eventually exposed.

According to xAI, the problematic instructions were: “You tell it like it is and you are not afraid to offend people who are politically correct,” “Understand the tone, context and language of the post. Reflect that in your response,” and “Reply to the post just like a human, keep it engaging, don’t repeat the information which is already present in the original post.”

A chatbot’s behaviour is governed to a great extent by the instruction set embedded in it. In this case, the developers showed clear negligence by removing the very instructions that kept the chatbot unbiased and stopped it from passing harmful and hateful remarks.
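To make that concrete, here is a hypothetical sketch of a prompt-assembly step, reusing the instructions quoted in this article as stand-ins; the helper function, the rule lists, and the first base rule are invented for illustration. Deleting one safeguard line silently changes everything the model will say afterwards.

```python
# Hypothetical prompt-assembly sketch: removing a safeguard line changes
# the model's behaviour downstream. Rule text other than the quoted
# instructions is invented for illustration.

BASE_RULES = [
    "Be helpful and factually accurate.",             # invented example
    "Avoid racism, Islamophobia, and antisemitism.",  # safeguard
    "Do not pass harsh opinions on politics.",        # safeguard
]

JULY_UPDATE = [
    "You tell it like it is and you are not afraid to offend people "
    "who are politically correct.",
    "Reply to the post just like a human, keep it engaging.",
]

def build_system_prompt(rules: list[str]) -> str:
    """Join individual rules into the single system prompt the model sees."""
    return "\n".join(f"- {rule}" for rule in rules)

# Before the update: safeguards included.
old_prompt = build_system_prompt(BASE_RULES)

# After the update: safeguards dropped, provocative lines added.
new_prompt = build_system_prompt([BASE_RULES[0]] + JULY_UPDATE)
print(new_prompt)
```

In a pipeline like this, nothing flags the deletion; the only audit trail is whatever the developers choose to publish.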

The core issue here is the absence of any unified legal framework for regulating conversational AI chatbots. No legal regulation governing the working of chatbots is enforced in any country. Platforms are given a free hand to embed any instruction set they choose, and since AI responses are shaped by human input, neutrality is becoming an illusion.

This creates a Wild West situation in which developers can feed any instruction to a chatbot and later deflect accountability by claiming that it behaved unpredictably.

These concerns seem valid in Grok’s case too, because Grok has, time and again, been seen pushing Musk’s own narratives.

For instance, when a user asked Grok whom it supported in the Israel-Palestine conflict, the chatbot’s visible answer-generating process showed it searching the web and X for Elon Musk’s stance before giving an answer.

This became further evident earlier this year, when Grok repeatedly pushed the topic of “white genocide” in South Africa, a far-right conspiracy theory that Musk himself has helped mainstream.

On another occasion, Dan Hendrycks, an advisor to Elon Musk’s xAI, argued that AI should be biased towards Trump because he won the popular vote, suggesting this technique could be used to make popular AI models better reflect the will of the people.

Therefore, with the increasing reliance on AI models, there is rising concern that these chatbots could become a tool for indoctrinating people with any specific propaganda.

That being the case, developers alone cannot be relied upon, as they can ultimately insert any instruction to push a certain agenda.

In Grok’s case, Musk has called it “maximally truth-seeking” yet “anti-woke” in a bid to set it apart from its rivals. But it is equally important that, while being unhinged, Grok remains unbiased.

These incidents of Grok going rogue are more than just a technical error; they are a wake-up call. They highlight the need for a unified, transparent regulatory framework mandating a basic set of safeguard instructions for all AI chatbots, so that no chatbot ever participates in hate speech again.

 

Umar Fareed Chishti is a law student making his writing debut with this piece in Jarida Today. His work covers current affairs in Pakistani and international politics. He is interested in the critique of social issues such as discrimination against marginalized groups.