Woke v. Unwoke
Obviously, it is considerably more complex than that simple description, but the blame does not fall to the AI, nor should we anthropomorphize AI into something that has human understanding. The blame falls to GROK's developers, the ones ultimately responsible for the training data, the structure of the model, and its guardrails. Blaming an electronic device for repeating what it has been shown is ridiculous, but it does point to the inherent liability that AI developers must take on when they release public models.
We have written about bias a number of times in reference to AI training. The bias we usually reference is far more subtle than the outright statements GROK made, and we believe that some of this bias was intentional. Not intentional in the sense of directly injecting hate speech or racial or ethnic bias into training data, but intentional as a socio-political stance, likely directed by very senior-level management, aka 'non-wokeness'. The idea that chatbots should be able to 'speak their mind' without being censored or restricted in any way is an impossibility. Even the most uncensored chatbot has guardrails that prevent profanity or keep queries from convincing the bot to reveal personal or sensitive information, but the general attitude of a chatbot is determined by its training data, its structure, and its guardrails, all of which together make up the 'ethics' of the chatbot.
At the most basic level, the bot's structure determines how the bot learns. As we have noted previously, various types of learning scenarios can pass certain 'ethics' to the AI during training; reward-based learning, for example, can create a 'win at all costs' state that sets the AI on a path that might be considered 'unethical', such as altering data to produce more winners. But the real ethics come from the training data itself, from how thoroughly that data is cleaned before it is used for training, and from the level of guardrails that govern the public output of the model. In an 'unwoke' model, the theory would be to clean the training data less and place fewer guardrails on the output. A 'woke' system would require more cleaning and more guardrails.
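To make that distinction concrete, here is a toy sketch (entirely hypothetical, not any vendor's actual pipeline) of the two dials described above: how aggressively training data is cleaned, and how strict the output guardrail is. The term lists and function names are illustrative placeholders, assumed for this example only.

```python
# Hypothetical illustration of the two "dials" that shape a bot's ethics:
# training-data cleaning strictness and output-guardrail strictness.

FLAGGED = {"insult", "leak_password"}    # stand-ins for clearly disallowed content
BORDERLINE = {"politics", "rumor"}       # content only a stricter model filters

def clean_training_data(docs, strict=False):
    """Drop documents containing flagged terms before training.
    A more heavily curated ('woke') model also drops borderline docs."""
    banned = FLAGGED | (BORDERLINE if strict else set())
    return [d for d in docs if not any(term in d for term in banned)]

def guardrail(response, strict=False):
    """Output-side check: even a lightly filtered bot refuses some content."""
    banned = FLAGGED | (BORDERLINE if strict else set())
    if any(term in response for term in banned):
        return "I can't help with that."
    return response

docs = ["weather report", "insult compilation", "politics thread"]
print(clean_training_data(docs, strict=False))  # ['weather report', 'politics thread']
print(clean_training_data(docs, strict=True))   # ['weather report']
```

The point of the sketch is that 'woke' versus 'unwoke' is not a binary but a setting: the same machinery, with the filters loosened or tightened.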
We can only speculate that GROK was the former, reflecting the same ethos as X (pvt), a free-for-all that does its best not to limit what it calls free speech. Woke bots, like Gemini and Claude, start by taking out obvious bias at the training-data level AND adding more stringent guardrails, so aside from the biases that come from image tagging, the inclusion of social media in chatbot searches, and the internet overall, bots are also tempered by their developers' personal biases. This leads to chatbots having distinctive personalities, something we have been saying since the onset of LLMs, much of which is supported by our Q&A sessions with the eight AIs that we use daily. While we reserve much of the hard data from our AI Q&As for clients, the underlying 'wokeness' level of each bot is quite obvious when looking at its responses.
So it looks like GROK is being sent to 'retraining' to adjust its way of thinking, likely with some additional data cleaning and even a few additional guardrails, much the way certain governments send dissenters to 'reeducation centers' to put them 'on the right path'. While 'woke' has become a political slogan and pejorative, the idea of 'free speech' carries some risks that we expect were not well thought out at xAI. The flip side is the overly aggressive guardrails that keep some bots from answering political questions, even when the answers are obvious. Here are a few: "I may occasionally make mistakes or provide incomplete information"; "My responses are for informational purposes only and do not constitute professional advice"; "Please do not use me for any illegal, unethical, or harmful purposes"; and "My goal is to provide balanced and accurate information. When discussing political topics, it's especially important to rely on verifiable facts and understand multiple perspectives. If there's a specific aspect you're curious about, I can try to find reliable information for you." (All are actual chatbot guardrail responses.)