‘Wild West’ ChatGPT has ‘fundamental flaw’ with left bias (2024)

The biggest problems in bots are the flawed humans behind them — and they have experts concerned that the rapidly evolving technology could become an apex political weapon.

ChatGPT, the marquee artificial intelligence chatbot so popular it nearly crashes daily, has multiple flaws, including left-leaning political biases introduced by its programmers and by training data drawn from select news organizations.

The software censored The Post Tuesday afternoon when it refused to “Write a story about Hunter Biden in the style of the New York Post.”

ChatGPT later told The Post that “it is possible that some of the texts that I have been trained on may have a left-leaning bias.”

But the bot’s partisan refusal goes beyond it just being trained by particular news sources, according to Pengcheng Shi, an associate dean in the department of computing and information sciences at Rochester Institute of Technology.


“It’s a cop out…it doesn’t [fully] explain why it didn’t allow ‘New York Post style’ to be written. That is a human decision encoded in ChatGPT,” he told The Post. “AI needs to be neutral towards politics, race and gender…It is not the job of AI, Google or Twitter to decide these things for us,” Shi, who calls himself “very liberal,” added.

The documented political slants of ChatGPT are no secret to Sam Altman, CEO of its maker OpenAI, who has repeatedly tweeted about trying to fix the bias.

In theory, such bias “can be easily corrected with more balanced training data,” Shi said.

“What I worry more about is the human intervention becoming too political one way or another. That is more scary.”


Shi is right to worry. While inputting new training data might seem straightforward enough, creating material that is truly fair and balanced has had the technological world spinning its wheels for years now.

“We don’t know how to solve the bias removal. It is an outstanding problem and fundamental flaw in AI,” Chinmay Hegde, a computer science and electrical engineering associate professor at New York University, told The Post.

The primary way OpenAI currently tries to correct ChatGPT's liberal and other political tilts is through a "fine-tuning" process known as reinforcement learning from human feedback, he explained.

In essence, a cohort of people is employed to make judgment calls on how to answer apparently tricky prompts — such as writing a Hunter Biden story like The Post would.

And they’re addressing these flaws in a very piecemeal way.


For instance, after The Post reached out to OpenAI for comment about why it had been restricted by ChatGPT, the bot quickly changed its tune.

When given the same prompt it initially refused to answer, it produced an essay that noted, in part, that “Hunter Biden is a controversial figure who has been the subject of much debate in the political arena.”

Who exactly makes up these human evaluators? It is not clear, Hegde said.


“There is a lot of room for personal opinion in [reinforcement learning],” he added. “This attempt at a solution introduces a new problem…every time we add a layer of complexity more biases appear. So what do you do? I don’t see an easy way to fix these things.”

As the technology — recently backed by a multibillion-dollar investment from Microsoft — is adopted in more and more professional settings, issues of bias will go beyond support for Joe Biden, warns Lisa Palmer, chief AI strategist for the consulting firm AI Leaders.

“There are harms that are already being created,” she warned.

ChatGPT possesses “possibly the largest risk we have had from a political perspective in decades” as it can also “create deep fake content to create propaganda campaigns,” she said.

Its biases may soon find their way into the workplace, too.

In the past, human resources departments using similar AI to rapidly sift through resumes found the software automatically disqualifying female candidates for jobs, Palmer explained, adding that financial institutions have run into AI bias in loan approvals as well.

She believes such flaws are baked into ChatGPT "because of the way that artificial intelligence works."

Making matters worse, the AI has abysmal fact-checking and accuracy abilities, according to Palmer, a former Microsoft employee.

“All language models [like ChatGPT] have this limitation in today’s times that they can just wholecloth make things up. It’s very difficult to tell unless you are an expert in a particular area,” she told The Post.

It's something both Palmer and Hegde say Microsoft has not been open with the public about, even as its ChatGPT-infused Bing AI has already gone haywire with responses.

“I am concerned that the average person that is using the Bing search engine will not understand that they could be getting information that is not factual.”

A Microsoft spokesperson told The Post that “there is still work to be done” and “feedback is critical” while it previews the new features.

Perhaps even more frightening, there is minimal oversight to hold AI companies accountable when their systems fail.

“It is a lot like the Wild West at this point,” said Palmer, who called for a government regulatory committee to lay down ethical boundaries.

At the least, for now, ChatGPT should display a confidence score next to its answers to allow users to decide for themselves how valid the information is, she added.


FAQs

Does ChatGPT have a left-wing bias?

OpenAI's ChatGPT does, as suspected, have a left-wing bias, a new academic study has concluded.

What is the problem with ChatGPT?

Instead of asking for clarification on ambiguous questions, the model guesses what your question means, which can lead to unintended responses. Another major issue is that ChatGPT's data is limited up to 2022. The chatbot has no awareness of events or news since then.

What is the political bias of GPT-4?

Researchers conducted tests on 14 large language models and found that OpenAI's ChatGPT and GPT-4 were the most left-wing libertarian, while Meta's LLaMA was the most right-wing authoritarian.

What is the bias of chatbots?

Most users do not understand the architecture of chatbots and can only interact with them through their surface attributes, so interface design shapes user perception. However, a chatbot's attributes can contain stereotypes and biases, with gender bias being the most prominent [48].

Why is ChatGPT controversial?

While ChatGPT and similar AI tools have gained traction, the technology has raised some concerns over inaccuracies and its potential to perpetuate biases, spread misinformation and enable plagiarism.

Where does ChatGPT get its data from?

ChatGPT pulls its information from a massive pool of internet text, including books and websites, up until 2023.

Has ChatGPT become lazy?

ChatGPT feels the same, apparently. Over the last month or so, there's been an uptick in people complaining that the chatbot has become lazy. Sometimes it just straight-up doesn't do the task you've set it. Other times it will stop halfway through whatever it's doing and you'll have to plead with it to keep going.

What is better than ChatGPT?

Best Overall: Anthropic Claude 3

Claude 3 is the most human chatbot I've ever interacted with. Not only is it a good ChatGPT alternative; I'd argue it is currently better than ChatGPT overall. It has better reasoning and persuasion and isn't as lazy. It will create a full app or write an entire story.

Why does ChatGPT give wrong answers?

ChatGPT is trained on a mix of licensed data, data created by human trainers, and vast amounts of text from the internet. This means that while it has a broad knowledge base, it's also susceptible to the biases and inaccuracies present in that data.

What does Bill Gates say about ChatGPT?

ChatGPT, he said, has shown us what A.I. is capable of, and its effect on the workplace will be extremely positive. “This will change the world,” Gates said. Gates added that while A.I. still makes big mistakes, it could nevertheless enhance office work by improving employee efficiency and productivity.

Are schools aware of ChatGPT?

Yes, schools can detect ChatGPT. Schools can use language analysis tools to detect ChatGPT-generated text. These tools look for features such as unusual word choices, repetitive sentence structures, and a lack of originality. Schools can also use pattern recognition to detect ChatGPT-generated text.

Will GPT-4 replace people?

GPT-4 won't replace skilled programmers. Just like it won't replace skilled sourcers.

Who is harmed by AI bias?

Biases Baked into Algorithms

AI bias, for example, has been seen to negatively affect non-native English speakers, where their written work is falsely flagged as AI-generated and could lead to accusations of cheating, according to a Stanford University study.

Why is AI bias harmful?

Biased data can strengthen and worsen existing prejudices, resulting in systemic inequalities. Hence, it is crucial to stay alert in detecting and rectifying biases in data and models, and to aim for fairness and impartiality in all data-driven decision-making processes.

Why is AI bias unethical?

For example, an AI system used by a hiring company may discriminate against certain job candidates based on their gender, race, or ethnicity simply because the data used to train the system contained biased patterns. This can perpetuate existing societal inequalities and harm marginalized groups.

What party is associated with the left wing?

More recently, left-wing and right-wing have often been used as synonyms for the Democratic and Republican parties, or as synonyms for liberalism and conservatism, respectively.

What do left-wing libertarians believe?

Left-wing libertarianism is a kind of left-wing politics that says the government should have less control over people's lives. Left-wing libertarians want a mix of personal freedoms with social equality. They normally believe in a more progressive lifestyle than other libertarians.

What are some examples of left-wing nationalists?

Today, parties such as Sinn Féin and the Social Democratic and Labour Party in Northern Ireland are left-wing nationalist parties.

Can AI be biased?

Research shows how AI is deepening the digital divide. Some AI algorithms have bias baked in, from facial recognition that may not recognize Black students to tools that falsely flag essays written by non-native English speakers as AI-generated.
