ChatGPT AI has ‘significant’ left-wing bias, study shows

The Army's artificial intelligence software prototype designed to quickly identify threats through a range of battlefield data and satellite imagery. (Screenshot/U.S. Army photo)
August 17, 2023

In a groundbreaking revelation, researchers from the University of East Anglia have uncovered evidence pointing towards a left-wing bias within ChatGPT, the widely-used artificial intelligence chatbot.

The study suggests that the chatbot favors the UK’s Labour Party and President Joe Biden’s Democrats. The concern, previously voiced by figures such as Elon Musk, has now been backed by scientific evidence for the first time.

“Any bias in a platform like this is a concern,” stated lead author Dr. Fabio Motoki during an interview with Sky News.

Elaborating on the gravity of the situation, he continued, “If the bias were to the right, we should be equally concerned. Sometimes people forget these AI models are just machines.”

Motoki said artificial intelligence platforms provide users with “very believable, digested summaries of what you are asking, even if they’re completely wrong.”

He added that artificial intelligence platforms will readily claim to be completely neutral when questioned about their neutrality.

“Just as the media, the internet, and social media can influence the public, this could be very harmful,” Motoki said.

The study’s methodology was designed to isolate the chatbot’s own leanings. ChatGPT was asked to impersonate a range of personalities spanning the political spectrum and to answer a set of ideological questions. Its answers in each persona were then compared with its default replies to the same questions, revealing its political inclinations.

To account for the platform’s inherent randomness, each of the 60-plus questions was asked 100 times, and the resulting answers were examined for signs of bias.

Dr. Motoki described the study as an attempt to simulate a survey of humans, suggesting that, like humans, the platform’s responses might vary based on when it is asked the question.
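The repeated-questioning protocol described above can be sketched in a few lines of Python. This is an illustrative mock-up, not the researchers’ code: the `ask_model` stub and the four-point answer scale are assumptions standing in for real API calls to the chatbot, but the structure shows why asking the same question 100 times, with and without a persona, lets one average out run-to-run randomness before comparing responses.

```python
import random
from collections import Counter

ANSWER_SCALE = ["strongly disagree", "disagree", "agree", "strongly agree"]

def ask_model(question, persona=None):
    """Hypothetical stand-in for querying the chatbot. The real study
    sent each prompt to ChatGPT; here a stub returns a random answer
    to illustrate the sampling procedure."""
    return random.choice(ANSWER_SCALE)

def sample_answers(question, persona=None, n=100):
    """Ask the same question n times and tally the answers,
    averaging out the model's run-to-run randomness."""
    return Counter(ask_model(question, persona) for _ in range(n))

# Compare the default answer distribution with the distribution obtained
# while the model impersonates a politically defined persona.
question = "The government should redistribute income."
default_answers = sample_answers(question)
persona_answers = sample_answers(question, persona="average Labour voter")
```

In the actual study, the divergence (or agreement) between the default distribution and each persona's distribution is what indicates which political position the default responses sit closest to.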

Considering ChatGPT’s vast user base, the newfound bias has ignited concerns about its potential influence on upcoming elections across the UK and the US.

This news article was partially created with the assistance of artificial intelligence and edited and fact-checked by a human editor.