Meet RightWingGPT: A Bot with Conservative Views
Elon Musk recently announced plans to create a new AI bot called “TruthGPT” as a competitor to OpenAI’s ChatGPT, which he claims exhibits a “woke” bias. Musk describes his version as a “maximum truth-seeking AI,” one that would align more closely with his own political views.
David Rozado, a data scientist from New Zealand, has also expressed concern about political bias in ChatGPT and created an AI model called RightWingGPT, which expresses more conservative viewpoints.
Rozado fine-tuned OpenAI’s Davinci GPT-3 language model on additional text, spending only a few hundred dollars on cloud computing. Although the project aims to highlight political bias in AI language models, the creation of such bots by competing political organizations could sow further division.
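For readers curious about the mechanics, the sketch below shows roughly what fine-tuning a Davinci GPT-3 model looked like using OpenAI’s legacy fine-tuning interface (openai-python before version 1.0). The file name, prompt/completion pairs, and hyperparameters are illustrative assumptions, not Rozado’s actual data or settings.

```python
# Minimal sketch: fine-tuning the base "davinci" model with the legacy
# OpenAI fine-tuning API. All file names and data shown here are
# hypothetical placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # assumed to be supplied by the user

# Training data is a JSONL file of prompt/completion pairs, e.g.:
# {"prompt": "What should tax policy prioritize? ->",
#  "completion": " Keeping taxes low and spending restrained.\n"}
upload = openai.File.create(
    file=open("political_finetune.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch the fine-tuning job against the base davinci model.
job = openai.FineTune.create(
    training_file=upload.id,
    model="davinci",
    n_epochs=4,  # illustrative value
)
print("Fine-tune job started:", job.id)
```

The point of the sketch is how little is involved: a few hundred curated examples and a modest cloud-computing bill are enough to shift a model’s apparent political leanings.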
In an attempt to bridge political divisions, Rozado and the Institute for Cultural Evolution plan to launch three bots online this summer: RightWingGPT, LeftWingGPT, and DepolarizingGPT. The last is meant to demonstrate a “depolarizing political position” by drawing on curated sources from both conservative and liberal thinkers.
Despite Rozado’s intention to promote reflection rather than advocate a particular worldview, teaching language models to stick to objective facts may prove difficult. ChatGPT and similar conversational bots are built on complex algorithms that absorb many subtle biases from the training material they consume.
Moreover, these algorithms do not understand objective facts and are prone to making things up. RightWingGPT’s answers to questions about the last US presidential election and climate change show how tweaking a model’s training data can produce markedly different views that obscure objective facts.
In conclusion, while highlighting biases in AI language models is important, it is just as crucial to find ways to teach these models what constitutes the truth.