Elon Musk promised an anti-“woke” chatbot. It’s not going as planned.
Elon Musk declared earlier this year that he was going to build his own AI chatbot, criticising what he perceived as ChatGPT’s liberal bias. Musk’s AI system would be abrasive, unfiltered, and anti-“woke,” meaning it wouldn’t think twice about giving politically incorrect answers. That stands in contrast to the AI systems from OpenAI, Microsoft, and Google, which have been programmed to steer clear of touchy subjects.
This is proving to be more difficult than he anticipated. Two weeks after Grok was made available on December 8 to paying users of X, formerly known as Twitter, Elon Musk is fielding complaints from the right about the chatbot’s liberal answers to questions about inequality, diversity initiatives, and transgender rights.
Jordan Peterson, a well-known socially conservative psychologist on YouTube, wrote on Wednesday, “I’ve utilised Grok as well as ChatGPT as research assistants.” He claimed that the former is “near as woke as the latter.”
Musk responded with chagrin to the complaint. “Sadly, there is a lot of woke nonsense on the Internet, which is where it is trained,” he replied. “Grok will recover. This is merely an early version.”
Musk founded xAI in March, and Grok is the company’s first commercial offering. Its foundation is a large language model that extracts word-association patterns from a vast body of written text, most of it scraped from the internet, much like ChatGPT and other well-known chatbots.
In contrast to other AI systems, Grok is designed to answer questions with vulgar and sarcastic language. It claims it can “answer spicy inquiries that are ignored by the majority of AI systems.” It can also draw on the latest content posted on X to give up-to-date responses to questions about current events.
All AI systems are vulnerable to biases present in the data they were trained on or in their design. As AI chatbots and image generators such as OpenAI’s ChatGPT have surged in popularity over the past year, debate has grown over how they represent underrepresented groups and respond to political and culture-war issues like gender identity and race. While many AI experts and tech ethicists caution against the harmful stereotypes these tools can absorb and reinforce, tech companies’ attempts to counter those patterns have drawn criticism from conservatives who view them as unduly censorious.
In an April interview with former Fox News anchor Tucker Carlson, Musk touted xAI and charged that OpenAI’s programmers were “training the AI to lie” or to decline to answer questions about delicate subjects. (OpenAI stated in a blog post in February that its objective is for the AI to refrain from taking sides on contentious issues or favouring any one political party.) In contrast, Musk claimed that his AI would be “a maximum truth-seeking AI,” even at the cost of upsetting others.
So far, however, the people most offended by Grok’s responses appear to be those who expected it to readily disparage President Biden, minorities, and vaccines.
An anonymous user complained that the chatbot “might need some tweaking” after Grok simply replied, “Yes,” when a verified X user asked whether trans women are real women. The screenshot was reposted by a popular account with the question, “Have woke programmers captured Grok? This worries me a great deal.”
Influencers well known for anti-vaccine views expressed dissatisfaction with Grok for telling them that vaccines don’t trigger autism, with the chatbot calling the supposed link “a legend that has been disproved by multiple scientific studies.” Other verified X accounts have voiced dissatisfaction with Grok’s responses endorsing the merits of diversity, equity, and inclusion programmes, which Musk has labelled “propaganda.”
The chatbot’s responses as of this week are still the same as those shown in the screenshots, according to the Washington Post’s own tests.
David Rozado, an academic researcher in New Zealand who studies AI bias, gained notice in March for a paper finding that ChatGPT tended to lean socially libertarian and moderately left when asked political questions. He recently ran Grok through some of the same tests and discovered that its responses were, in terms of political orientation, much the same as ChatGPT’s.
In an email to The Post, Rozado said, “I think the resemblance of answers should perhaps not be too unexpected, as both ChatGPT and Grok may have been trained on similar Internet-derived corpora.”
Responding earlier this month to a chart posted on X that illustrated one of Rozado’s findings, Musk said the graphic “overstates the circumstances,” adding, “we are moving quickly to move Grok closer to being politically neutral.” (Rozado concurred that the visualisation in question shows Grok farther to the left than some of his other tests indicate.)
Some AI researchers contend that ChatGPT and other chatbots frequently display negative stereotypes about marginalised groups, and that Rozado’s political orientation tests ignore this.
According to a recent filing with the Securities and Exchange Commission, xAI is looking to raise up to $1 billion from investors; however, Musk has stated that the company isn’t currently raising capital.
Requests for information about what steps Musk and X are taking to change Grok’s political stance, and whether that amounts to tipping the scales in the same way Musk has accused OpenAI of doing with ChatGPT, went unanswered.