
Human Hallucination vs. AI Hallucination

At first glance, the subject of human hallucinations versus AI hallucinations may appear straightforward, requiring little commentary. But that couldn’t be further from the truth. Hallucinations (from Latin: (h)al(l)ucinatio = delirium or (h)al(l)ucinari = to rave, to dream) are sensory perceptions that appear without any external stimulus and do not involve distortions of existing stimuli. It turns out that chatbots such as ChatGPT can also experience hallucinations. Sometimes they can even be induced deliberately by humans, as with deepfakes, where such techniques are used to spread disinformation.

However, this is a case of intentional action. So, how should we compare hallucinations? Are they always negative, regardless of how their creators use them, as with deepfakes? Ultimately, these two types of hallucinations, human and AI, should be placed side by side and carefully compared. Perhaps this comparison will yield insights into whether hallucinations can be useful or should be regarded as harmful byproducts, especially those produced by AI. Can they be counted among the risks AI brings?

Due to the growing popularity of GPT-based tools, organizations such as the Panoptykon Foundation, which addresses issues of the surveillance society, have highlighted the various side effects associated with these tools: legal accountability, privacy risks, the facilitation of disinformation, and the development of business models that carry negative social consequences. If disinformation proves to be the decisive issue among these, what impact will it have on education?

Stack Overflow banned ChatGPT-generated responses due to their lack of reliability. Situations in which ChatGPT and other large language models produce incorrect or fictional content are called “hallucinations,” a term popularized by Google researchers in 2018. But how does this compare to human experience? Where is the boundary between creativity and pure fiction?

Generative artificial intelligence (Gen AI) has transformed how we create and interact with the digital world. From generating realistic images and videos to creating novel text formats, generative AI models have opened a world of possibilities. Yet, despite their potential, generative AI models are not without flaws, and one of the most concerning issues is the phenomenon of AI hallucination.

In the field of artificial intelligence, the concept of “generative AI hallucinations” has proven captivating, blurring the boundaries between reality and fiction. Delving into this phenomenon reveals a world where AI systems can conjure information beyond their training data, leading to both fascinating possibilities and potential pitfalls.

What Are AI Hallucinations?

AI hallucinations arise from the very nature of generative models, designed to create new content based on patterns and relationships learned from training data. However, these models can sometimes extrapolate beyond their training data, producing new information that may not be entirely accurate or grounded in reality.

Generative AI hallucinations refer to cases where AI models generate outputs that are not based on their training data or factual information. These hallucinations can appear in various forms, such as fabricated text, images, or even audio and video content. Essentially, the AI creates information that does not exist within its knowledge base, resulting in outputs that may seem plausible but are ultimately fictitious.

How Do AI Hallucinations Occur?

AI hallucinations typically result from inherent limitations and errors in training data, as well as the design of AI models. Generative AI systems, such as large language models (LLMs) and image generators, are trained on massive datasets containing a mixture of accurate and inaccurate information. During learning, these systems pick up patterns and correlations but do not understand the underlying truth. As a result, they may produce outputs reflecting inaccuracies and errors present in their training data.

Furthermore, AI models can also hallucinate when they are “pushed” beyond their knowledge limits. For example, when a model is asked to generate information on a topic with which it has limited familiarity, it may fabricate plausible-sounding content to fill gaps in its knowledge.
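To make this mechanism concrete, here is a minimal Python sketch of how a generative model picks its next word: it samples from likelihoods learned from text, with no step that checks truth. The token scores below are invented purely for illustration and do not come from any real model.

```python
import math
import random

# Invented next-token scores a model might assign after the prompt
# "The capital of Australia is" -- illustrative values only,
# not taken from any real model.
token_scores = {
    "Canberra": 2.0,   # the correct answer
    "Sydney": 1.8,     # frequent co-occurrence in training text
    "Melbourne": 1.2,
    "Auckland": 0.4,   # fluent-sounding but wrong country
}

def sample_next_token(scores, temperature=1.0):
    """Sample one token from a softmax over the scores.

    The model only knows relative likelihoods learned from data;
    nothing here checks which option is actually true.
    """
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights, k=1)[0]

random.seed(7)
for t in (0.2, 1.0, 2.0):
    picks = [sample_next_token(token_scores, t) for _ in range(1000)]
    wrong = sum(p != "Canberra" for p in picks) / len(picks)
    print(f"temperature={t}: wrong-answer rate ~ {wrong:.0%}")
```

At low temperature the most likely token almost always wins; at higher temperature the plausible-but-wrong alternatives are sampled more often, which is one reason confidently stated errors can appear.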

Why Do AI Hallucinations Happen?

The generation of inaccurate or misleading outputs by AI can occur for various reasons. Several factors contribute to AI hallucinations:

Insufficient Training Data: If an AI model is not trained on enough data, it may lack the information necessary to generate accurate outputs.

Faulty Assumptions: AI models are trained on patterns within data, and if these patterns are flawed, the model may adopt incorrect assumptions about the world.

Data Errors and Biases: If the data used to train an AI model contains errors or biases, the model may reflect them in its outputs.

Model Complexity: Complex models with many parameters can overfit training data, capturing noise and false correlations that lead to hallucinations.

Prompt Engineering: How input data is organized and presented to AI can impact the likelihood of hallucination. Ambiguous or leading prompts can cause AI to generate incorrect information (a brief example follows this list).

Generalization Challenges: AI models may struggle to generalize training data to real-world scenarios, particularly when encountering new or unexpected inputs.
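As a practical illustration of the prompt-engineering point above, the sketch below contrasts a leading question with a constrained one. It assumes the OpenAI Python client is installed and an API key is configured; the model name and the invented question are assumptions for illustration, not a prescription.

```python
# A minimal sketch of prompt-side mitigation, assuming the OpenAI Python
# client is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

# A leading question: no such prize exists, so any concrete answer is invented.
question = "Which prize did Nicolaus Copernicus win in 1530?"

# Ambiguous, leading prompt: the model is nudged toward fabricating an answer.
leading = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; adjust as needed
    messages=[{"role": "user", "content": question}],
)

# Constrained prompt: an explicit way out, plus low temperature to
# reduce speculative sampling.
constrained = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=0,
    messages=[
        {"role": "system",
         "content": "Answer only from well-established facts. "
                    "If you are not certain, reply exactly: I don't know."},
        {"role": "user", "content": question},
    ],
)

print("Leading prompt:    ", leading.choices[0].message.content)
print("Constrained prompt:", constrained.choices[0].message.content)
```

The design point is simply that giving the model permission to decline, and reducing sampling randomness, tends to lower the rate of confident fabrication; it does not eliminate it.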

If a human creates total fiction, such as an imaginary world, it may be intriguing and suitable for a book. But if ChatGPT generates equally compelling, fictional information, should we consider it a hallucination or a form of creativity that AI clumsily attempts to produce? To clarify, let’s consider an example.

Nicolaus Copernicus is best known as an astronomer—the creator of the heliocentric model of the Solar System and likely the first heliocentrist in Europe since ancient Greece. Author of De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres), which detailed his vision of the Universe, Copernicus’s work—unlike earlier concepts from Aristarchus of Samos—sparked one of the most significant scientific revolutions since antiquity, known as the Copernican Revolution. For this reason, heliocentrism is often called Copernicanism, while the cosmological-philosophical rejection of all geocentrism and anthropocentrism is known as the Copernican Principle.

An AI-generated illustration depicting Copernicus serves as an example of how AI might support us with various ideas, even those bordering on hallucination. Unfortunately, the risks associated with AI are significant. AI must lead to certain shifts in our education and thinking: how should we handle responses that are unexpected or unsatisfactory? Will such responses be dismissed as mere hallucinations?

Human hallucination vs. AI hallucination – an illustration of Nicolaus Copernicus created by AI. See also how AI support can occur and what AI can offer us (CreatedbyAI).

For some living in Copernicus’s time, he might have seemed to be hallucinating. What is now universally accepted could, in Copernicus’s time, have appeared as a type of hallucination. Does this imply that hallucinations influence our creative thinking? Possibly. The content on our website, www.theoryofeverything.info, represents another form of hallucination.

It turns out that certain “hallucinations” may have led to significant scientific discoveries. Might this also be the case for AI hallucinations? Will these hallucinations boost creativity, or will they be viewed as disinformation? No one knows where AI development will take us or how it will impact our actions. Not even the creators of these tools can answer this. Why might hallucinations hold creative potential? Could generating absurd content relative to our reality somehow spark creativity?

The illustration above, generated by AI, is meant to represent Copernicus according to a prompt:

“Create an illustration highlighting the importance of Copernicus’s discovery for humanity.”

Did ChatGPT fulfill this prompt, or did it generate a hallucination? Admittedly, assessing the illustration generated by ChatGPT is challenging. It seems entirely subjective. Similarly, perceptions of hallucination vary: some believe ChatGPT hallucinates, while for others, it inspires creativity. Still others view everything ChatGPT generates as worthless, which makes the question of hallucination moot for them.

Writing a scientific article with ChatGPT’s assistance on “concepts and assumptions for designing a time machine” seems absurd—a pure hallucination. But what if, in the near future, such an article has rational foundations? Might people in the future regard today’s publication on “concepts and assumptions for designing a time machine” as a breakthrough? Today it is a hallucination, but tomorrow—a pioneering idea, much like Copernicus revived heliocentrism from Aristarchus of Samos in ancient Greece.

You might think this is an argument against AI assistance. Nothing could be further from the truth. I am fascinated by this technology and strive to understand what is truly happening. Considering something a hallucination is a user’s right. However, pairing an unfortunate prompt with ChatGPT’s equally unfortunate response (hallucination) may support another user’s creativity.

It’s similar to a writer receiving four different endings for their story from ChatGPT, allowing them to view their work from a fresh, creative perspective. Should there be a collection of “unusable” information? Perhaps someone could find inspiration in “ChatGPT hallucinations.” What holds no value for some may serve as a source of inspiration for others.

To be continued…

Marek Ożarowski, Tom Wawer: Team Theoryofeverything, November 1, 2024.

