In May 2023, a New York lawyer submitted a legal brief to a federal court that cited six cases as precedent. None of the cases existed; ChatGPT had made them all up. The judge was not amused. This is AI hallucination, and it's one of the most important things every AI user, including children, needs to understand.

What is AI hallucination?

AI hallucination is when an AI model generates false information and presents it as fact — confidently, fluently, and convincingly. It's not lying (AI has no intentions). It's a fundamental quirk of how large language models work.

Language AI doesn't look up facts. It predicts the next word, over and over, generating text that statistically fits the context. Most of the time this works well — the statistically likely response to "what is the capital of France?" is "Paris," and that happens to be correct. But for niche facts, recent events, specific statistics, or obscure names, the "statistically likely" response might sound plausible but be completely wrong.
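If you're curious what "statistically likely" means in practice, here's a toy sketch in Python. It isn't a real model; the prompts, words, and probabilities are all invented for illustration (there is no Footville election). But it shows the core move: the model picks a next word in proportion to how likely it looks, whether or not the result is true.

    import random

    # Toy "model": invented probabilities for the next word after a prompt.
    # A real model scores tens of thousands of tokens using learned weights.
    next_word_probs = {
        "The capital of France is": {"Paris": 0.97, "Lyon": 0.02, "Nice": 0.01},
        # For an obscure prompt, several answers look almost equally plausible.
        # This is where hallucination comes from: something gets picked anyway.
        "The 1923 Footville mayoral election was won by": {
            "John Smith": 0.21,
            "Mary Jones": 0.20,
            "Robert Brown": 0.19,
            "a write-in candidate": 0.40,
        },
    }

    def generate(prompt: str) -> str:
        """Sample the next word in proportion to its probability."""
        probs = next_word_probs[prompt]
        words, weights = zip(*probs.items())
        return random.choices(words, weights=weights)[0]

    print(generate("The capital of France is"))   # almost always "Paris"
    print(generate("The 1923 Footville mayoral election was won by"))  # varies run to run

Notice that the toy model never checks whether Footville exists; it just picks a plausible-sounding winner. Real models fail the same way, only far more fluently.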

Why it sounds so confident

This is what makes hallucination dangerous: AI rarely hedges when it's wrong. A human expert who isn't sure will often say "I think it's..." or "you might want to check that." AI generates text in the same confident tone whether it's absolutely correct or completely wrong. By default, there's no uncertainty signal in the output.

What AI gets wrong most often

  • Specific statistics and numbers — "X% of people do Y" type claims are often fabricated
  • Citations and sources — AI regularly invents book titles, paper authors, and journal names that don't exist
  • Recent events — AI has a knowledge cutoff date and doesn't know about anything that happened after that date
  • Niche or obscure facts — the less common something is in the training data, the more likely AI is to get it wrong or mix it up with something similar
  • Maths involving multiple steps — AI can reason through simple maths but makes errors in complex calculations
  • Very specific local information — shop hours, phone numbers, local events, specific addresses

The "trust but verify" rule

This is Lesson 5.1's core principle in AI Adventures. For anything important:

  1. Get the answer from AI — it's still useful as a starting point
  2. Note the specific claim — the statistic, name, date, or fact you want to use
  3. Find a second source — a textbook, Wikipedia, an official website, a trusted news source
  4. Only use the fact if it checks out

For school work, homework, or any work that will be assessed: always verify AI-provided facts. For casual conversation or brainstorming, the stakes are lower — but it's still worth building the verification habit.
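If you like to see ideas as code, here's the same checklist as a tiny Python tracker. The claims and sources below are invented for illustration; the point is simply that a claim stays marked "do not use" until a second, independent source has been recorded for it.

    # "Trust but verify" as data: each AI-provided claim starts unverified.
    claims = [
        {"claim": "Honey never spoils", "second_source": None},
        {"claim": "Goldfish have a 3-second memory", "second_source": None},
    ]

    def verify(claim: dict, source: str) -> None:
        """Record the independent source that confirmed the claim."""
        claim["second_source"] = source

    # Step 3: we found the honey claim confirmed elsewhere (invented source).
    verify(claims[0], "museum article on ancient honey")

    # Step 4: only claims with a second source are safe to use.
    for c in claims:
        if c["second_source"]:
            print(f"USE: {c['claim']} (via {c['second_source']})")
        else:
            print(f"DO NOT USE YET: {c['claim']}")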

The "Fact or Fake?" activity

Ask AI to give you 10 facts about a topic. Then research each one. How many are correct? How many are subtly wrong? How many are entirely made up? This exercise — done in Lesson 5.1 of AI Adventures — is one of the most eye-opening activities for kids who previously trusted AI completely. After seeing AI confidently state a completely false fact, they become instinctively more sceptical — which is exactly the right response.

How to ask AI about its own uncertainty

You can directly prompt AI to flag its uncertainty: "Answer the following question and tell me how confident you are, and which specific claims I should verify independently: [question]." Most modern AI models will flag their uncertainty more explicitly when asked this way.
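If you talk to AI through code rather than a chat window, the same trick works in a script. Here's a minimal sketch using the OpenAI Python SDK; the model name and the question are placeholders, and any chat-capable model and provider would work the same way.

    from openai import OpenAI

    client = OpenAI()  # reads your OPENAI_API_KEY from the environment

    question = "What year was the first email sent?"  # placeholder question

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you have access to
        messages=[{
            "role": "user",
            "content": (
                "Answer the following question and tell me how confident you are, "
                "and which specific claims I should verify independently: " + question
            ),
        }],
    )

    print(response.choices[0].message.content)

Bear in mind that the confidence the model reports is itself generated text, so treat it as a useful hint, not a guarantee; the verification steps above still apply.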

🚀 AI Adventures with Parikshet

Free hands-on AI activity pack — no credit card, instant download

Get the Free Pack →