Learn Without Walls
Module 2 of 8

How AI Learned to Talk

Training data, knowledge cutoffs, and why Claude can be confidently wrong.

~20 minutes
📌 Before You Start

What you need:

Access to Claude (a free claude.ai account works) and about 20 minutes.

What you’ll do:

Discover what Claude knows — and what it doesn’t. You’ll test Claude on something you actually understand, so you can spot the gaps.

💡 The Big Idea

Claude learned from text. Enormous amounts of it.

Books. Websites. Articles. Conversations. Code. Wikipedia. Academic papers. All of it. This is called training data.

Claude didn’t experience the world. It read about it.

Here’s the catch: training stopped at a point in time.

Claude doesn’t browse the internet. It doesn’t learn from your conversations. It learned what it learned — and that’s it. Anything that happened after its training cutoff is simply unknown to it.

And then there’s hallucination.

Claude can be confidently wrong. This happens because Claude is pattern-completing, not fact-checking. If a pattern leads somewhere incorrect, Claude follows the pattern anyway — and may sound totally certain while doing it.

Why does this matter? Knowing the limits makes you a smarter user. You’ll know when to trust Claude and when to verify.

👀 See It

Here’s what happens when you ask Claude about well-established history vs. something recent or niche:

Prompt sent to Claude
In one paragraph, what caused World War I?
Claude’s response (confident, accurate)
World War I was triggered by the assassination of Archduke Franz Ferdinand of Austria-Hungary in Sarajevo in June 1914, but the underlying causes ran much deeper: a web of entangled alliances, rising nationalism across Europe, imperial competition between major powers, and a massive military buildup that had been brewing for decades. When Austria-Hungary issued an ultimatum to Serbia, the alliance system pulled one nation after another into a conflict that quickly escalated beyond anyone’s control.

Claude handles well-documented history with confidence — it’s a pattern that appeared thousands of times in training data.

Now notice what happens with something more niche:

Prompt sent to Claude
Tell me the exact box office opening weekend numbers for the third-highest-grossing film released last month.
Claude’s response (appropriately hedged)
I don’t have access to real-time data, and my training has a knowledge cutoff, so I can’t give you accurate figures for recent film releases. For current box office numbers, I’d recommend checking Box Office Mojo or The Numbers directly.
Key observation: Claude hedges when it genuinely doesn’t know. But it doesn’t always hedge. Sometimes it fills in a confident-sounding wrong answer. That’s the danger to watch for.
🖐️ Try It

Test Claude on something you actually know. That way you can judge its accuracy.

  1. Think of a topic you know really well — a hobby, a subject you studied, a skill you have, a fandom, anything.
  2. Copy this into Claude and fill in your topic:
     Copy this into Claude 👇
     Tell me 3 facts about [a topic you know really well]. Be specific.
  3. Read the answers carefully. Are they accurate? Partially right? Missing important nuance? Write down anything that seems off.
  4. Now question Claude directly:
     Copy this into Claude 👇
     How confident are you in those 3 facts? Could any of them be wrong or outdated?
  5. Notice how Claude responds to being questioned. Does it stand behind the facts? Does it hedge? Does it revise anything?
  6. In your head (or on paper), note one thing Claude got right and one thing you’d want to double-check before trusting.
🛑 You can stop here if you need a break. Come back to the Brain Break and Wrap Up anytime.
🧠 Brain Break

Your eyes have been working. Give them a rest.

Look away from the screen. Focus on something far away. Hold it for 20 seconds. Blink slowly.

This is called the 20-20-20 rule: every 20 minutes, look at something 20 feet away for 20 seconds. Real thing. Works.

✅ Wrap Up

The ONE thing to remember from this module:

Claude learned from text, not experience. It can be wrong — and sound totally confident. Always verify important facts.

The word for this is hallucination. It’s not lying — it’s pattern-completing into the wrong answer. Knowing this protects you.

← Module 1: AI Is Not Magic 📋 All Modules Module 3: The Secret to Good Answers →