
Here's a made-up response by ChatGPT that's wrong and doesn't make sense.

[Image: LLM hallucination example]

If you've encountered a similar response from ChatGPT that didn't quite add up, you've likely come face-to-face with the AI hallucination problem. It's a strange yet significant challenge where AI systems, specifically large language models (LLMs) like ChatGPT, produce misleading or entirely fictional responses.

The AI hallucination phenomenon reflects the complexity of how these models process language and is one of the top LLM challenges that researchers worldwide are trying to address. While it may not matter much in casual day-to-day chats with an LLM, it can cause severe problems when it occurs in important cases. Hallucinations are especially crucial to avoid in industries like healthcare, law, and finance, and anywhere else where information accuracy is paramount.

In this blog post, we're diving deep into the causes and consequences of generative AI hallucinations. We'll explore what triggers these missteps and their impact on real-world applications. More importantly, we'll discuss the latest strategies and innovations that aim to enhance AI accuracy and reliability.

What is AI hallucination?

AI hallucination is a phenomenon where a language model generates factually incorrect or misleading content. It occurs due to limitations in the training data or the model's inability to distinguish between reliable and unreliable sources. Such hallucinations often manifest as confidently presented but incorrect facts, nonsensical responses, or fictional scenarios. The issue underscores the importance of evaluating AI-generated content to improve the reliability of model responses.

How AI hallucinations happen

To understand why AI hallucinations occur, we first need to understand how language models generally work. These models perceive words not with human-like understanding but as sequences of letters. Similarly, sentences are seen as sequences of words. The model's "knowledge" comes from its training data – vast collections of text it has been fed.

Now, the model uses statistical patterns from this data to predict what word or phrase might logically come next in a sentence. It's a bit like a highly advanced pattern recognition system. However, since it's not truly “understanding” the content but rather relying on statistical likelihoods, it can sometimes make mistakes.

These mistakes, or "hallucinations," occur when the model confidently generates information that is incorrect or nonsensical. This often happens because it is drawing on patterns from the training data that don't always align with real-world accuracy or logic. In trying to construct a plausible response based on its training, the model may produce an answer that sounds reasonable but is actually disconnected from factual or logical consistency.
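To make this concrete, here is a toy, purely illustrative sketch of statistical next-word prediction: a tiny bigram model built from a made-up corpus. The corpus and sampling are assumptions chosen for illustration, not how production LLMs work, but they show how a model can fluently continue a sentence with no notion of whether the continuation is true.

```python
# A toy bigram "language model": it predicts the next word purely from
# co-occurrence statistics in a tiny, made-up corpus (illustrative only).
import random
from collections import Counter, defaultdict

corpus = (
    "the telescope took images of the planet . "
    "the telescope took images of the moon . "
    "the probe landed on the moon ."
).split()

# Count which word follows which.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = followers[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a continuation: it looks fluent, but nothing grounds it in facts.
sentence = ["the", "telescope"]
for _ in range(6):
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))
# e.g. "the telescope took images of the moon ." (plausible-sounding,
# whether or not it is actually true)
```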

[Image: how AI hallucinations happen]

In our example, the model's response doesn't even make sense.

Why AI hallucinates

LLM hallucinations occur for multiple reasons.

[Image: why AI hallucinates]

Training data issues

Training data is often at the heart of why gen AI hallucinations occur. Let's look at three critical aspects of training data that can lead to these issues:

  1. Insufficient training data: AI models trained on too little data lack a comprehensive grasp of language nuances and contexts. Insufficient data can have several causes. The data may be extremely private and off-limits for training, such as patients' records in healthcare or bank clients' data guarded by strict protocols. In such cases, it's crucial to entrust your data to reputable companies like SuperAnnotate, known for their commitment to data security and governance.
[Image: SuperAnnotate privacy and security measures]

Or maybe the relevant data is simply unavailable and collecting it is labor-intensive. Either way, data is invaluable to model quality, and insufficient data will result in overly simple or irrelevant responses to questions.

  2. Low-quality training data: The quality of data matters immensely. If the training material has flaws, the model will learn the errors, biases, and irrelevant information it's fed. This can lead the AI to generate responses that are factually incorrect, biased, or not aligned with the intended query.
  3. Outdated training data: If a model's training data isn't kept current, its outputs become stale. In rapidly changing fields like technology or current affairs, the model may miss recent developments, rely on outdated references, or lack new terminology and concepts, producing responses that seem out of touch or irrelevant.

Prompting mistakes

If you ask someone a confusing or contradictory question, the response will reflect the question's quality. The prompt is your communication with the language model, and crafting it poorly invites logical mistakes from the model. Prompt-related hallucinations arise for three major reasons (a brief example follows the list):

  • Confusing prompts: If the input to a language model is vague or ambiguous, it's challenging for the AI to understand the exact intent or context of the query. It's like asking a question without providing enough context for a meaningful answer. As a result, the model may "guess" the intent, leading to responses that don't fit the user's needs.
  • Inconsistent or contradictory prompts: When the prompt is inconsistent or contains conflicting information, it puts the model in a difficult position. It tries to reconcile these inconsistencies, often leading to illogical outputs. It's crucial to provide clear and coherent instructions to guide the model towards the desired type of response.
  • Adversarial attacks: These are deliberately crafted inputs designed to confuse or trick the model into generating incorrect, inappropriate, or nonsensical responses. Such attacks exploit the model's vulnerabilities or its reliance on statistical patterns.
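As a rough illustration of the first two failure modes, compare a vague, self-contradictory prompt with a clearer rewrite. The prompts below are hypothetical and purely for illustration:

```python
# Hypothetical prompts illustrating confusing vs. clear instructions.

# Vague and contradictory: no audience, no length, conflicting demands.
bad_prompt = (
    "Write something short about the telescope, but cover everything "
    "in detail, and keep it simple but technical."
)

# Clear and consistent: one audience, one scope, one format,
# plus explicit permission to admit uncertainty instead of guessing.
good_prompt = (
    "In three bullet points, explain to a high-school student what the "
    "James Webb Space Telescope observes and why infrared imaging matters. "
    "If you are unsure about a fact, say so instead of guessing."
)
```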

Model errors

AI hallucinations, especially in models like GPT-3, occur naturally as these systems optimize what they learn from existing data. To counteract this, incorporating human feedback is crucial. A trade-off between novelty and usefulness further complicates matters: focusing too much on novelty might lead to unique but incorrect outputs, whereas prioritizing usefulness could result in predictable, dull answers.

Technical issues in a model's language processing can also induce hallucinations. These include incorrect data associations or flawed generation strategies that aim for diverse responses. Additionally, extensive pre-training can make a model overly reliant on its stored knowledge, increasing the risk of error as it generates responses from a blend of learned and newly created content. As dependency on AI communication tools grows, recognizing and addressing these models' limitations is vital to prevent undue trust in their capabilities.

Types and real-world examples of AI hallucinations

Hallucinations in generative AI can be categorized in different ways. Below are three distinct types of chatbot hallucinations that not only confuse people but also pose serious problems.

Factual errors

A common type of LLM hallucination is simply generating wrong content. Again, the model may sound confident and produce a plausible-sounding answer while telling the most nonsensical lies.

An example is Google Bard's hallucination about the James Webb Space Telescope. According to Bard, Webb took the first images of an exoplanet. NASA, however, notes that the first exoplanet images (2004) came long before Webb's launch (2021).

Fabricated information

This is also very dangerous. In 2023, lawyers representing a client suing an airline submitted a legal brief drafted by ChatGPT to a Manhattan federal judge. The chatbot had fabricated information, included fake quotes, and cited non-existent court cases. A New York federal judge sanctioned the lawyers who submitted the brief. This is vivid proof that ChatGPT hallucinations are real and must be carefully avoided.

Harmful information

Language models like ChatGPT can also easily ruin people's reputations. A famous case happened in April 2023, when ChatGPT hallucinated that a university professor had made sexually suggestive comments and attempted to touch a student while on a class trip to Alaska, citing a March 2018 article in The Washington Post as the source of the information. The problem is that no such article existed, there's never been a class trip to Alaska, and the professor has never been accused of harassing a student.

In the same month, ChatGPT hallucinated that Brian Hood, Mayor of Hepburn Shire Council, was imprisoned for bribery while working for a subsidiary of Australia's national bank. In fact, Mr Hood was a whistleblower and was never charged with a crime.

It's not always about an individual's reputation. Air Canada faced a significant customer service issue when its AI chatbot gave a customer incorrect information, leading him to purchase a full-price ticket under the impression that he could apply for a refund later. The incident cost both the customer and the airline time, money, and trust.

How to prevent AI hallucinations

Prompt engineering and model refinement are two major ways to prevent AI hallucinations. Let's explore both.

Prompt engineering

Recently, a guide of 26 prompting principles was published to help people communicate efficiently with large language models.

[Image: LLM prompting principles]

It turns out that prompt engineering is a great technique for reducing hallucinations. The most notable prompting tips for preventing hallucinations are:

  • Use clear prompts

Be as clear as you can be. Moreover, restrict the model so that it has fewer possible outcomes to generate.

  • Provide relevant information

Techniques like in-context learning and few-shot learning will significantly help you get your desired output from the model. Just show it an example of what you expect it to return.

  • Give a role to the model

Assigning a role such as a teacher, friend, or expert helps the model tailor its responses to the chosen persona, producing more contextually appropriate and engaging interactions. The sketch below combines role assignment with a small few-shot example.
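As a rough sketch of these tips in practice, the snippet below pairs a role-setting system message with a short few-shot example using the OpenAI Python client. The model name, example content, and instructions are illustrative assumptions rather than a prescribed recipe:

```python
# A minimal sketch: role assignment + a few-shot example to steer the model.
# Model name and example content are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

messages = [
    # Role: constrain the persona and tell the model how to handle uncertainty.
    {"role": "system",
     "content": "You are a careful medical librarian. If you are not sure "
                "of a fact, say 'I don't know' instead of guessing."},
    # Few-shot example: demonstrate the expected format and tone.
    {"role": "user", "content": "What does 'hypertension' mean?"},
    {"role": "assistant",
     "content": "Hypertension is the medical term for high blood pressure."},
    # The actual question.
    {"role": "user", "content": "What does 'bradycardia' mean?"},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model name
    messages=messages,
    temperature=0.2,       # low temperature favors conservative answers
)
print(response.choices[0].message.content)
```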

Model refinement

  1. Use diverse and relevant data

When the AI model is trained on varied topics and sources, it builds broader knowledge and constructs more accurate and nuanced responses. Hallucination becomes much less likely if the model is fluent in the specific domain. LLM fine-tuning on the domain area is one of the most robust techniques for dealing with hallucinations, and SuperAnnotate's fine-tuning tool helps enterprises craft a smart and fluent model (a rough sketch of domain fine-tuning follows below).

[Image: LLM fine-tuning]
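For illustration, here is a minimal sketch of domain fine-tuning with the Hugging Face Transformers Trainer API. The base model, data file, and hyperparameters are placeholder assumptions and do not represent SuperAnnotate's tooling:

```python
# A minimal domain fine-tuning sketch (illustrative placeholders throughout).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical domain corpus: one training example per line of text.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-llm",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    # Causal LM objective: predict the next token (mlm=False).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```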
  2. Experiment with temperature

LLM temperature is a parameter that affects the predictability of the model's outputs. Experimenting with temperature means adjusting the level of randomness in the model's responses. A lower temperature produces more conservative, expected outcomes, while a higher temperature encourages creativity and variability, and hallucinations sometimes come with creativity. By tuning this parameter, developers can balance novel content against accuracy, as sketched below; building a model that is both truthful and creative requires careful tuning.

[Image: how temperature affects model responses]
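As a quick, hypothetical illustration of this trade-off, the same question can be asked at two temperature settings (placeholder model name, same OpenAI client as above):

```python
# Compare a conservative and a creative temperature for the same prompt.
from openai import OpenAI

client = OpenAI()
prompt = "Name the telescope that took the first image of an exoplanet."

for temperature in (0.0, 1.2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    # Low temperature -> more deterministic, conservative answers;
    # high temperature -> more varied answers, with a higher risk of drift.
    print(f"temperature={temperature}: {response.choices[0].message.content}")
```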

Wrapping up

The AI hallucination problem has been with us since the beginning of the large language model era. Detecting hallucinations is a complex task that sometimes requires field experts to fact-check the generated content. Complicated as it is, there are still ways to minimize the risk, such as smart prompting, using relevant data, and experimenting with the model itself. As we move forward, the collaborative effort between developers, researchers, and field experts will remain crucial in advancing these solutions, ensuring that AI continues to serve as a valuable asset in the vast landscape of digital innovation.
