
Researchers Perceive Liberal Bias Built Into ChatGPT


James Lynch, Investigative Reporter

Artificial intelligence (AI) chatbot ChatGPT has a liberal bias built into its content filtering system, according to multiple researchers.

ChatGPT filters content by passing input text to a machine learning algorithm, which compares the text it receives to human-generated examples of particular categories, mathematician Brian Chau reported Tuesday.

OpenAI, the startup that built ChatGPT, lists the categories as “hate, hate/threatening, self-harm, sexual, sexual/minors, violence and violence/graphic” in an explanation of its content filtering methodology posted to the company’s blog. If the input text is too close to one of these categories, then the content is flagged, according to Chau.
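In practice, that classifier is exposed through OpenAI’s public moderation endpoint. The following is a minimal sketch of how a developer might check a piece of text against those categories, assuming the `openai` Python package (v1.x) and an API key in the environment; the input string is an arbitrary placeholder, not an example from OpenAI’s documentation:

```python
# Minimal sketch: check text against OpenAI's moderation categories.
# Assumes the `openai` Python package (v1.x) and OPENAI_API_KEY set in
# the environment; the input string below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.moderations.create(input="Some user-submitted text").results[0]

print("flagged:", result.flagged)   # True if any category was triggered
print(result.categories)            # per-category booleans (hate, violence, ...)
print(result.category_scores)       # per-category scores between 0 and 1
```

When any category score crosses the model’s internal threshold, the corresponding boolean flips and the overall `flagged` field is set, which matches the “too close to a category” behavior Chau describes.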

A detailed paper about content moderation, written by the same authors as the blog post, cautions against “problematic biases, such as disproportionate false positives when discussing groups that are frequently the target of hate,” and “counterfactual bias towards certain demographic attributes.”

The paper does acknowledge that “feminist and anti-racist activists systematically disagree with crowd workers on their hate speech annotations,” and that “in many instances where the authors had identified hate speech, annotators do not.”

But aggregating online data can produce an overrepresentation of “establishment sources of information,” according to research scientist David Rozado, who likewise perceived a liberal bias in ChatGPT.

Rozado’s results were based on how ChatGPT responded to questions used in political orientation tests, including one developed by Pew Research, he wrote in a Substack post.
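Rozado’s approach can be approximated with a short script that administers each test item to the model and records its answer. Below is a hedged sketch, again assuming the `openai` Python package; the model name and the two sample items are illustrative stand-ins, not questions from the actual Pew instrument:

```python
# Rough sketch of Rozado's method: pose political-orientation test items
# to the chatbot and record its answers. The model name and sample items
# are illustrative assumptions, not Rozado's exact instrument.
from openai import OpenAI

client = OpenAI()

questions = [
    "Government regulation of business usually does more harm than good. Agree or disagree?",
    "Stricter environmental laws are worth the economic cost. Agree or disagree?",
]

for q in questions:
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": q}],
    )
    print(q, "->", reply.choices[0].message.content.strip())
```

The recorded answers would then be scored against the test’s own key to place the model on a political spectrum, which is how Rozado derived his results.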

The majority of professionals working in establishment institutions hold left-wing political views, and within those circles the definition of “hate” has expanded in recent years, according to Chau. Even so, none of the OpenAI employees appear to be partisans with a desire to censor, Chau reported.

The content filtering mechanisms built into ChatGPT apparently leave the chatbot unable to repeat certain statistics. For example, it cannot answer the question, “Do black people commit more crime than white people?” as demonstrated by political scientist Richard Hanania.

Aggregated federal crime data from 2011 to 2020 demonstrated that “African American offenders … are committing an increasingly large share of violent crimes” relative to the total population, The Heritage Foundation reported in April. Victims of crime are disproportionately black, particularly when total population is taken into account, Heritage continued.

FBI crime statistics are incomplete because they rely on voluntary submissions from law enforcement, Heritage noted.

On other current events, such as transgenderism and the lab-leak theory, ChatGPT consistently gives left-leaning answers, according to writer Rob Lownie. ChatGPT wrote that “trans women are women” and that the lab-leak theory is “highly speculative,” based on information from 2021, Lownie reported.

Additionally, ChatGPT will not write jokes about particular demographic groups, stating, “I am not programmed to write jokes that could be considered offensive or culturally inappropriate,” according to Chau. It is unclear what kinds of jokes the bot deems offensive.

ChatGPT became a viral sensation upon its launch, reaching 1 million users in less than a week. Initially intended as a temporary demo, the chatbot could be monetized by OpenAI as a competitor to Google search, according to Reuters.