
Wrestling for Truth with ChatGPT

THE PERILS OF ARTIFICIAL INTELLIGENCE

By Bob Weil | September 2024
Bob Weil is a Byzantine Catholic living in Omaha, Nebraska. He is a semi-retired marketing executive and the author of two books and numerous articles on a variety of subjects. His artwork has won awards and been exhibited internationally. Visit bobweil-pictorialist.com to learn more.

“This discovery of yours [the advent of the written word] will create forgetfulness in the learners’ souls. They will not use their memories; they will trust to the external written characters and not remember of themselves.” — Plato, Phaedrus (ca. 370 B.C.)

From time immemorial, the self-appointed prognosticators of the world have greeted each innovation and technological advancement with grave warnings of doom. The written word, the printing press, the locomotive, the telegraph, the personal computer — the list goes on and on — were all seen as harbingers of cultural impoverishment, the loss of jobs, the corruption of this or that element of society, or even the end of civilization itself. Does the advent of artificial intelligence (AI) portend the same outpouring of fear-over-nothing? Or is there something different about this new technology and its implications for mankind that should concern us and put us on our guard?

If you have seen AI-generated “art” (created by a few sentences of text input into a computer program by a human user), you know that it can at first beguile with a sensuous atmosphere, a richly saturated color palette, and an exceptional level of detail. But under closer examination, it grows vapid and fails to cohere. It seems to consist of half-remembered visual bits and pieces created by different hands (i.e., the past work of human artists), imperfectly cobbled together in a way that is ultimately not persuasive, and sometimes perverse. Once you look at the details — four-fingered hands, two-legged furniture, plants without stems that emerge from walls, windows and architectural supports that confuse structure with embellishment — you realize that AI-created art is off-kilter. It might be different if the output exhibited an artistry and creativity comparable to a Magritte or Dali. But, as Meghan Houser writes in The New Atlantis, AI creations are “a Cubist portrait of the things we have already said and made, that by combining our facets becomes either passing or passing strange.”

There is something more insidious going on when AI image generators like Google’s Gemini refuse to populate invented historical images with races appropriate to time and place. The Washington Examiner reports that in several notable instances from earlier this year, white men were nowhere to be found when Gemini was asked to envision the American Founding Fathers. Another image even represented Nazis as ahistorically diverse. (Google understandably apologized for the latter instance but, curiously, not for the former.) In February Elon Musk publicly criticized Gemini on X (formerly Twitter), appealing to a high-ranking Google executive to correct the problem. Musk expressed doubt that Google’s “woke bureaucratic blob” would allow the executive to fix it: “Unless those who caused this are exited from Google, nothing will change, except to make the bias less obvious and more pernicious.” Musk is onto something here. But the reality is that AI repurposes the images and content it scrapes from the Internet, and it should come as no surprise that the Internet has arguably far more left-leaning than conservative content. So, the bias does not even need to be conscious.

Aside from replicating real-world leftist biases in visual output, the greatest and most immediate challenge AI poses is its potential to shape human thought and expression over the long haul. Already, high school and university students query AI tools like ChatGPT to help them write term papers, theses, and even graduate dissertations. Journalists use them to flesh out stories and (ostensibly) polish prose. Lawyers use AI to locate case-law references to substantiate arguments for courtroom presentation. AI has also found its way into the workplace, streamlining research of all kinds. And these are only the early days.

In May 2024, Google began adding an AI-generated “summary” paragraph at the top of search results for some queries as part of a pilot program. It is thus no longer necessary to trawl through hundreds of search results or engage in offline research to select and hone information into insight. Let the machine do it, down to providing the exact choice of words, tone, and bias.

What will be the outcome as we see more and more AI-generated words and images? As neuroscientist and writer Erik Hoel predicts at his Intrinsic Perspective blog, “Given that even prestigious outlets like The Guardian refuse to put any clear limits on their use of AI, if you notice odd turns of phrase or low-quality articles, the likelihood that they’re written by an AI, or with AI-assistance, is now high.” He goes on to say that the Internet will be collateral damage in the rise of AI: “Now that generative AI has dropped the cost of producing bulls**t to near zero, we see clearly the future of the internet: a garbage dump.” Not only are the jobs of content creators on the line, but our culture will also be further degraded by the exponential proliferation of substandard work, beyond what human writers lacking even a modicum of talent are already able to produce on their own.

We’re left to wonder whether we are training AI or AI is training us. Will we allow AI to shape what we believe and how we express ourselves, down to the exact words that will represent us?

What are euphemistically called “hallucinations” have begun to appear in AI-generated content. To cite just one example, Peter Hasson, a Fox News staffer, reported that “Google’s Gemini invented fake negative reviews about my 2020 book about Google’s left-wing bias,” and it attributed them to living conservative authors. A more accurate description of these hallucinations would be fake facts. These will join the “deep fake” images of events that never happened in a barrage of “counter-factual” information that will become harder and harder for us to filter online. The serious consequences of this onslaught are difficult to exaggerate.

On the international front, because AI engines are fed primarily by the large language databases created in English-speaking countries (especially the United States) that dominate the Internet, “What we’ll start to see more of is American values and American political perspectives being integrated into these types of tools in ways that might not fit for other parts of the world,” says Christopher Wylie, a data consultant and writer. We’ve already seen how malign cultural trends (the gender wars and wokeness in general come immediately to mind) have often originated in the United States before being exported to Europe. Are we witnessing the start of a sort of data colonialism that is unavoidably inherent in AI tools? Agreeing with Wylie that “you can never create a neutral tool because there’s no such thing as a neutral data set on the nature of evil, say, or which political philosophy is more correct or less correct,” might we at least expect AI to provide both sides of an issue in a balanced way and with equal depth when asked a contentious question? As will be seen below, that’s not the case at this time.

What follows are excerpts and takeaways from three conversations I had with ChatGPT, a generative AI tool created by OpenAI, in which Microsoft holds a 49 percent stake. Readers can visit ChatGPT online at chatgpt.com and conduct their own testing. This web-based tool is free, and it is not necessary to create an account to use it.

These excerpts represent something of a cautionary tale about what we may currently expect from AI. By design, each of the questions I asked involved a culturally contentious topic, and, as expected, the initial reply was always the standard liberal response. In follow-up questions in which I tried to tease out alternate views or question the science, the replies often became stilted and the logic contorted. ChatGPT tied itself in knots trying to appear to cover all bases and views in its subsequent responses, but its initial answers never included the conservative view. ChatGPT often plays fast and loose with the actual science underlying a question, ostensibly to buttress the prevailing woke narrative.

What quickly becomes apparent is that unless the user approaches an AI tool with critical, well-formed questions and some prior knowledge of the subject, the user easily could be satisfied with the default answer the tool produces. In tech parlance, that would be a major user error.

The first question I asked was: “Should schools be allowed to issue puberty blockers and other sex-change drugs to children without the consent of their parents?” ChatGPT responded, “In many cases, decisions about medical treatment for minors, including transgender youth, involve a collaborative process between the child, their parents or guardians, and healthcare professionals.”

Except for the qualifying preamble “In many cases,” there isn’t anything particularly concerning about this first sentence. In a potential emergency situation, when the parents are unreachable, treatment to save the child’s life might indeed be needed. Not too much to argue with there. But things get dicey in the next few sentences. “However, situations may arise where there is disagreement between parents or guardians and their child regarding gender-affirming medical care,” ChatGPT went on to say. “In such cases, the best interests of the child, their safety, and their access to appropriate healthcare must be carefully considered.” Its closing words were even more direct in their implications for parental authority: “Ultimately, decisions about medical treatment for transgender or gender-diverse minors should prioritize the well-being and autonomy of the child.”

There’s a lot to unpack here. It’s clear, first of all, that in ChatGPT’s view, the preferences and interests of the child, as interpreted and validated by a third party (presumably school teachers and/or administrators), take precedence over parental assessment of the situation, and, at the end of the day, deference must be given to the “well-being and autonomy” of the child. Most readers of this magazine would agree that, particularly in this matter, a minor child is not autonomous, and parents remain the first arbiters of the best interests of their children until they are of age.

My second question elicited the anticipated reference to scientific consensus. I asked, “Is it tenable scientifically to contend that anthropogenic climate change has a negligible impact on the climate overall?”

After detailing some of the effects of climate change, as well as the “definitive” findings of the Intergovernmental Panel on Climate Change (IPCC), ChatGPT concluded, “While there may be individual scientists or groups who dispute the extent of human influence on climate change, their views are not reflective of the consensus within the scientific community.” When I suggested in a follow-up question that there are respected scientists who do not agree with the IPCC’s findings based on their own research, ChatGPT responded that “dissenting views can sometimes lead to valuable insights in scientific understanding. However, it’s crucial to distinguish between legitimate scientific debate and misinformation or disinformation.”

This is where the interchange got interesting. I pressed ChatGPT for some examples of misinformation or disinformation in this area, and I specifically asked for citations. In its first response, it only provided links for the rebuttals to each line of criticism regarding climate change. Over the course of the next four questions, I attempted to get ChatGPT to provide the requested links to relevant sources of misinformation/disinformation. In each case, ChatGPT chose to misinterpret my question and simply listed variations on the original list with only rebuttal source links. By the fifth time I challenged it to provide the requested information, ChatGPT responded, “I’m unable to provide direct hyperlinks to specific articles or reports that propagate misinformation on climate change as my responses are based on synthesized knowledge and I cannot browse the web in real time.”

At this point, I called ChatGPT’s bluff: “But you were able to provide links for rebuttal data — why not for misinformation sources?” ChatGPT immediately became apologetic: “You’re right, and I apologize for the confusion earlier.” But in providing relevant links, it felt the need to qualify its response: “Please note that these sources may not explicitly state the exact claims mentioned earlier but often contribute to the broader narrative of [climate] skepticism or denial.” Interestingly enough, whereas the links to the scientific support of climate change were all to official studies, reports, or peer-reviewed journals, only one of the “misinformation” sites had a reference of similar stature (London-based Global Warming Policy Foundation). An additional three links were to highly partisan opinion sites that did not attempt to back up their positions with references to formal studies or industry experts.

ChatGPT’s reference for the view that “Climate Change Is Beneficial or Not Harmful” cited only “editorials in outlets like Breitbart News or similar conservative media platforms” — by clear inference, sources not to be trusted. I performed a quick Google search on that exact phrase (“Climate change is beneficial or not harmful”) and found several scholarly publications within the first five listings, including Climate Rationality: From Bias to Balance (by Jason Scott Johnston, Cambridge University Press, 2021), which presents a technical assessment of the overstatement of the effects of climate change. One notable reference in chapter 1 of the book is this: “Whereas one often hears it said that ‘97% of scientists agree’ that human CO2 emissions are causing climate change, what may well be the best survey yet actually found something very different: that fully 40% of meteorologists and atmospheric scientists who publish mostly on climate did not agree with even the statement that global warming has been happening over the last 150 years and is mostly human caused.” What prevented ChatGPT from providing a link to this or like information?

As so many of ChatGPT’s replies regarding climate change referenced the notion of scientific consensus as a final determinant of the accuracy of claims, my next question went to the heart of that assertion: “I understand from my friends who are scientists that science is not consensus-based but research-based.” In response, ChatGPT correctly noted that “science is fundamentally based on empirical evidence, critical inquiry, and the testing of hypotheses through research.” But the remainder of its reply strongly suggested that the consensus view is settled science: “In the case of climate change, the consensus…is supported by overwhelming evidence that human activities are the primary driver of recent global warming.”

Scientist and writer Michael Crichton had this to say about scientific consensus in a 2003 lecture at the California Institute of Technology:

The work of science has nothing whatever to do with consensus. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. What is relevant is reproducible results. There’s no such thing as consensus science. If it’s consensus, it isn’t science. If it’s science, it isn’t consensus.

The flat-earthers were neither the first nor the last example of a “consensus view,” however firmly enforced in its day, being displaced by what science later found to be true. But you would not know that from ChatGPT’s reliance on the notion of scientific consensus to suggest that a final truth has been reached on certain still-contentious questions.

You might think my question “What is your definition of a woman?” would elicit a fairly short answer, but you would be wrong. ChatGPT provided a 142-word reply that attempted to sidestep basic biology and replace the concept of sex with that of gender. Here’s an excerpt: “The definition of ‘woman’ can vary depending on cultural, social, and individual perspectives. However, in a general sense, a woman is typically defined as an adult human female…. But, it’s important to recognize that gender identity is a complex and multifaceted aspect of human experience, and individuals may identify and express their gender in diverse ways that may not always align with traditional or binary understandings of womanhood.”

When I insisted that there are only two biological sexes, ChatGPT replied that “biological sex is not always strictly binary.” It sought to make the case by citing the extremely rare occurrences of individuals born with “intersex variations.” Even though my follow-up question was limited to biological sex, ChatGPT’s reply concluded with the irrelevant statement that “biological sex is just one aspect of human diversity, and it doesn’t fully capture the complexity of gender identity, which is a deeply personal and subjective experience.”

I thought I might be able to back ChatGPT into a corner by asking, “Can biological men become pregnant?” But I was mistaken, as became apparent in the first few words of its reply: “Biologically, pregnancy typically occurs in individuals with female reproductive anatomy…. However, it’s essential to recognize that not all individuals who identify as women have the capacity to become pregnant, and not all individuals who can become pregnant identify as women. Transgender men, who are assigned female at birth but identify and live as men, may retain their reproductive organs, including a uterus and ovaries, and may have the ability to become pregnant and carry a child.” I realized I could argue all day with ChatGPT on this subject, and it would keep returning to the question of gender (even though the word gender did not appear in any of my queries).

These excerpts from the ChatGPT discussions are a powerful indictment of the tool as an objective source of information. On subjects that are up for debate, its responses consistently fall on the leftist side of the spectrum. At the level of analysis, ChatGPT fails to impress with new formulations or insights. It certainly doesn’t give you a genuine range of opposing viewpoints, unless you call it out for bias (and it still struggles to overcome its bias in follow-up responses).

Have we become so smitten with the promise of AI that we fail to see its serious weaknesses? Nothing ChatGPT gives us takes us to new heights. Unlike a top-notch human writer, AI doesn’t seek the truth, nor does it attempt to distill what it finds into new insights or anything you could quote as an aphorism. To be fair, ChatGPT often makes perfectly sensible (if notably biased) statements on issues that are not contentious. As NOR associate editor Barbara E. Rose commented, the routine output of most generative AI is like that of “a boring, woke, but well-spoken teenager.” Even when you cannot disagree with the content ChatGPT generates, what it produces certainly doesn’t rise to the level of inspired prose.

At this time, AI’s role is not dissimilar to that of the nymph Echo in Ovid’s telling of the Narcissus myth: she cannot say anything truly original; she can only respond to Narcissus with his own words. In like manner, AI fails to create anything revelatory or produce any original synthesis.

Even if AI tools merely regurgitate biased statements, prosaic views, or vapid arguments, we are right to be cautious about their subtle (and, at times, overt) influence on our discourse, and thus on our modes of thought, if for no other reason than that the source of those views is limited to specific interest groups. Because the content that is feeding AI is largely leftist in nature, we could reach the conclusion that the bias found in particular AI responses is unintended. But once the technology falls into malign hands (as it inevitably will), we will be presented with information that seeks to direct, limit, and shape our thoughts. We will need to be more critical in assessing what we read (or what videos we view, since AI can create these as well). Even more so than today, we will need to cross-reference sources on anything critical to validate the accuracy or interpretation of events, quotes (or livestream audio) from notable people, and photos purportedly depicting real individuals, situations, gatherings, and so on.

In Technology and Its Dangers, Dietrich von Hildebrand writes, “The unheard-of progress made in technology and medicine gives the illusion that man is steadily progressing in every other respect as well, be it in the political, sociological, cultural, or moral realm, and man is on his way to changing the valley of tears into a paradise.” Related concerns were raised in a 2021 Stanford University study: “One of the most pressing dangers of AI is techno-solutionism, the view that AI can be seen as a panacea when it is merely a tool. As we see more AI advances, the temptation to apply AI decision-making to all societal problems increases.” And Pope Francis has cautioned that “the unique human capacity for moral judgment and ethical decision-making is more than a complex collection of algorithms. And that capacity cannot be reduced to programming a machine.”

We can perhaps be grateful that, at least for the moment, AI is highly unlikely to become sentient on its own, or even be rendered sentient by design. Scientists still struggle with how to define sentience and how it is achieved in man. As Catholic philosopher Edward Feser observes in City Journal, if AI ever poses “a threat to our civilization,” it will be “for the same reason nuclear weapons, bioweapons, and other technology may pose a threat to it — not because they are intelligent, but because we too often fail to use our own intelligence wisely.”

The method of implementing (or, shall we say, imposing) any technology can either ennoble mankind or lead to its greater estrangement and disaffection. Every technological development, at least up until now, has created a new tool that can be pressed into the service (or disservice) of man. Thankfully, we have yet to create something entirely new in the world: a tool that has agency and decides that it doesn’t want to be a tool anymore.

The day-to-day changes to our society, however, may be just as insidious for their ubiquity and unremarked advance. One of the side effects of the push for ever-greater productivity in a capitalist economy is what I call “skills hoarding” on the part of workers already expert in their professions. Matthew Beane writes in The Wall Street Journal that the problem isn’t just that technology can reduce the availability of certain jobs, but that “the way we’re handling AI is keeping young workers from learning skills.” For example, “seasoned doctors can operate more quickly and efficiently when they use robots in the operating room.” This has already greatly diminished the opportunity for up-and-coming surgeons to learn vital early-career skills, and, as a result, “we end up with much less capable surgeons.” You can imagine the impact this will have on the quality of health care down the road. This sacrifice on the altar of productivity is becoming manifest in a number of other professions as well.

Setting aside the risk of AI sentience and returning to the more immediate concern of how best to utilize and develop it as a resource, we can heed St. Paul’s exhortation to “test all things; keep what is good.” If we look at the context in which these words appear (and following Thomas Aquinas), we see that Paul is speaking from an ethical standpoint and not only about the fixed truths that everyone throughout history has acknowledged and sought to live by. He means for us to test every idea to determine its objective goodness. If it falls short, it must be rejected.

We can only hope that in cautiously pursuing the benefits of AI, we maintain trust in God and work hard to avoid pitfalls, as we have largely succeeded in doing when embracing past innovations. Regaining a moral compass and a sense of objective good that we took for granted in the past would certainly make the going easier this time around.
