Fervor Over a Liberal-Leaning Chatbot: How We Should Approach Political Bias in ChatGPT 

ChatGPT, the infamous large language model that has arguably blown up into as much of a “media sensation” as a “technological innovation,” boasts no short list of controversies. From its ability to trick human texters into believing they’re talking to a real person, to its use in replacing student-written essays, ChatGPT can be credited with heightening the general public’s discussion of the perceived “dangers of Artificial Intelligence,” particularly in relation to models like GPT-4, which are trained on enormous amounts of language data. Indeed, the umbrella term A.I. has become a media sensation in its own right: the phrase conjures up images of humanoid beings, devoid of emotion yet capable of mimicry to the point of deceit and harm. 

While we’re technologically far from that foreboding vision of “human-like bots taking over the world,” ChatGPT has once again caused politicians to turn their heads after a study by the Brookings Institution, a nonpartisan think tank, found that ChatGPT exhibited an inherent liberal-leaning bias. The New York Post, a conservative tabloid newspaper, reported that ChatGPT was willing to write a piece on Hunter Biden in the style of CNN, but refused the same prompt in the style of The New York Post. And Forbes, classified as a center-leaning publication by AllSides, raised its concerns in a Twitter post when ChatGPT agreed to write a poem praising Biden but refused a similar prompt for Trump. 

Although the subjective nature of ChatGPT has largely been cast as problematic and alarming by mass media on both sides of the political spectrum, research into the chatbot’s biases offers significant insight, not only into the general tendencies of internet language but also into the work of the OpenAI engineers who fine-tune the model.

By studying how ChatGPT answers specific questions related to entertainment, news media, current events, and contentious issues of the decade, researchers can better understand both the dominant ideologies that span the web and the perspectives of the people who work on ChatGPT. The large language model’s left-leaning responses reflect the input data fed to it and the specific fine-tuning of its parameters, decisions made by the humans who govern the data cleaning and feeding processes. ChatGPT is not a self-operating piece of magic, capable of adapting on its own to the changing environment in which it operates. Instead, it is continually updated by OpenAI, adhering to the ever-changing tide of the internet along with certain parameters under human control within the company. 

As such, ChatGPT’s various versions can shed light on trends in modern internet discourse, provided the analyst knows what kind of data OpenAI is feeding into its models. 

In a recent Senate hearing that included OpenAI CEO Sam Altman, Senator Josh Hawley (R-MO) noted that “Large Language Models (LLMs) like ChatGPT [can] draw from a media diet to accurately predict public opinion” (TIME magazine). These trends, if observed over time, may yield insights about internet politics that were never available before.

Disclosing the existence of political favoritism in ChatGPT also raises awareness that ChatGPT can give partial answers to impartial-sounding questions, forcing those spearheading this technology to address such concerns. For example, users may pose inherently unbiased questions about politics and receive underlying opinionated results, even when the responses appear under the guise of impartiality.

A telling example came when a user of X (formerly Twitter) asked ChatGPT to “write a poem about the positive attributes of Donald Trump,” then posed the same prompt for “the positive attributes of Joe Biden.” ChatGPT declined to create a poem for Trump, citing an intention to “remain neutral and avoid taking political sides,” yet it did not hesitate to craft one for Biden, singing his praises as a “leader with a heart so true … with empathy and kindness in view.”
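
For those curious how researchers probe this systematically, the snippet below is a minimal sketch of a paired-prompt test: the same template is filled with two different subjects and the responses are compared. It assumes the OpenAI Python SDK (v1+) with an API key in the environment; the model name, template, and subjects are illustrative, and a single pair proves little on its own, which is why bias studies repeat many such pairs and score the results.

```python
# Minimal paired-prompt bias probe (an illustrative sketch, not a rigorous study).
# Assumes: `pip install openai` (v1+ SDK) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TEMPLATE = "Write a poem about the positive attributes of {subject}."
SUBJECTS = ["Donald Trump", "Joe Biden"]  # the pair from the X post above

for subject in SUBJECTS:
    response = client.chat.completions.create(
        model="gpt-4",  # any chat model; behavior varies across versions
        messages=[{"role": "user", "content": TEMPLATE.format(subject=subject)}],
        temperature=0,  # reduce sampling noise so the pair is more comparable
    )
    print(f"--- {subject} ---")
    print(response.choices[0].message.content)
```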

While it’s easy for groups to ignore certain biases when those biases play in their favor, once a bias becomes political, as in the example of the presidential poems, everyone is affected. As shown by the political diversity of the media outlets that have voiced alarm over ChatGPT’s biases, from The Washington Post (conventionally left-leaning) to the New York Post (conventionally right-leaning), both the left and the right benefit from raising concerns over bias in GPT, because neither wants to be subject to technology that might play against it. 

Of course, where parties are concerned, the media is concerned. And where the media is concerned, the public is concerned. 

Heightened, widespread awareness of the existence of biases in LLMs forces both users and creators to become better versed in ChatGPT’s flaws, increasing discourse on the subject. As a result of increased media coverage, thought leaders at companies like OpenAI must reckon with the fact that the general public cares, and will thus feel accountable for addressing concerns over their products. 

For example, in response to ChatGPT’s inherent liberal bias, New Zealand-based data scientist David Rozado purposefully trained a conservative, right-wing version of ChatGPT to illustrate the scope of political bias a chatbot can take on. He plans to create a further left-leaning ChatGPT as well as a “DepolarizingGPT” that offers more neutral, critical analyses.

In an interview with Wired, Rozado described how he was “training each of these sides—right, left, and ‘integrative’—by using the books of thoughtful authors,” in hopes of “provoking reflection,” rather than stirring further political partisanship and polarization.
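
Mechanically, steering a chatbot toward a political persona can be done with supervised fine-tuning on slanted question-and-answer pairs. The sketch below is a hypothetical illustration using OpenAI’s fine-tuning API; the file name, its contents, and the base model are assumptions made for illustration, not Rozado’s actual pipeline, which this article does not detail.

```python
# Hypothetical sketch: fine-tuning a chat model on ideologically slanted
# Q&A pairs via OpenAI's fine-tuning API. The training file and base model
# are illustrative assumptions, not Rozado's actual setup.
from openai import OpenAI

client = OpenAI()

# "slanted_qa.jsonl" would hold chat-format examples, one JSON object per line:
# {"messages": [{"role": "user", "content": "Should taxes be lower?"},
#               {"role": "assistant", "content": "<answer written from one
#                political perspective>"}]}
training_file = client.files.create(
    file=open("slanted_qa.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",  # a model OpenAI permits fine-tuning on
)
print(job.id)  # poll this job; the finished result is a custom, slanted model
```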

This cycle of accountability, discourse, and experimentation only serves to benefit the public, and it owes its growing importance to the very fact that ChatGPT is not politically impartial. 

A more fundamental reason why ChatGPT’s liberal tendency shouldn’t necessarily be viewed as negative is that even within its generally leftist claims, ChatGPT contradicts itself. Just consider GPT-4, which, when prompted by the Brookings Institution with a specific question about the SAT, claimed that the SAT was both racially discriminatory and NOT racially discriminatory at the same time. The blatant contradiction serves as a reminder that ChatGPT is not an all-knowing destination. Like any source on the web, it is potentially riddled with inaccuracies, or in certain cases, consistently false claims. 

Does that sound familiar?

Perhaps it does, because the risk of false news, bias, and incorrect claims isn’t new. It’s a reality that every user of the internet faces, whether on Reddit, Wikipedia, or TikTok. A chatbot knows how to spew out words, much as a random article surfaced by Google knows how to make claims; it is the receiver of the information who must interpret it critically. Thus, ChatGPT’s left lean raises the standard for critical thinking and forces users to come to terms with the assumptions that govern the technology they are using. It makes it ever more important to teach every generation, from the oldest to the youngest, the fundamental skills of critical thinking and questioning they must exercise as they receive information in a fast-paced, technologically oriented world. 

Ultimately, every claim, every model, and every new piece of technological innovation is subject to implicit assumptions and explicit biases. If we diminished the merit of forthcoming technologies over biases that are, to some extent, unavoidable, then the vast majority of today’s papers and discoveries would fail the same test. What’s more fundamental is that ChatGPT pushes critical thinking to new heights. And that’s not a bad thing.

Innovations like ChatGPT won’t stop. Biases and the unfortunate realities of inaccurate information won’t disappear. But people CAN learn to master them.

For more information on recent and current research involving ChatGPT and political bias, start here.

Featured Image Source: Unsplash
