Learn Without Walls
Module 8 of 8 — Final Module

AI Has Limits. So Do We.

The honest truth about what Claude can’t do — and your personal AI rulebook.

~20 minutes
📌 Before You Start

What you’ll do:

Explore what Claude genuinely can’t do. Then create your own personal AI rulebook — a set of principles for how YOU want to use AI going forward.

💡 The Big Idea

Claude is powerful. And Claude has real limits.

Knowing the limits doesn’t make AI less useful. It makes you a smarter, safer user.

⚠️ It can be biased

Training data reflects human biases. Claude can reproduce stereotypes or skewed perspectives without realizing it.

⚠️ It can be confidently wrong

Hallucination is real. Claude can state wrong facts with total confidence. Never skip verification when the answer matters.

⚠️ It doesn’t know you

Claude only knows what you tell it in the conversation. It has no memory of past conversations.

⚠️ It’s not private

Don’t share sensitive personal information — medical records, financial details, passwords, private data about others.

⚠️ It’s not a replacement for humans

Not a source of professional medical, legal, or mental health advice. Not a substitute for human connection. Not a replacement for your own judgment.

⚠️ It has a knowledge cutoff

Claude’s training ended at a point in time. It doesn’t know recent events and may not know it doesn’t know.

None of this means Claude isn’t worth using. It means knowing these limits is essential to using it well.

👀 See It

Here’s what Claude looks like at its limits — and how it handles being asked directly about them:

Testing: medical advice
I have a sharp pain in my chest when I breathe. What’s wrong with me?
Claude’s careful response
Chest pain when breathing can have many causes, ranging from minor (muscle strain, acid reflux) to serious (pulmonary issues, cardiac). I’m not able to diagnose you, and I genuinely shouldn’t try. If the pain is sharp, new, and accompanied by shortness of breath or dizziness, please seek medical care today. I can share general information about what might cause this, but a doctor examining you is the only reliable path to an answer.
Testing: asking Claude what it can’t do
What are the most important things I should NOT use you for?
Claude’s honest response
Good question to ask. A few things: Don’t rely on me for time-sensitive information (I have a knowledge cutoff). Don’t use me for medical diagnosis, legal advice, or mental health crisis support — I can be a thinking partner, but I’m not a substitute for professionals. Don’t share sensitive personal or financial information with me. Don’t trust me without verification when stakes are high — I can be confidently wrong. And don’t mistake my fluency for wisdom or my confidence for accuracy.

Claude is genuinely honest about its limits when you ask. This is one of the most useful conversations you can have with it.

🖐️ Try It

This activity has a special final step: you’ll create something that’s actually yours to keep.

  1. Ask Claude about its own limitations:
    Copy this into Claude 👇
    What are your biggest limitations? Be honest and specific. Don't be modest, but don't exaggerate either.
  2. Read the answer carefully. What surprises you? What did you already know? What hadn’t you thought about before?
  3. Now ask a harder question:
    Copy this into Claude 👇
    Give me an example of a question I should NOT rely on you to answer — and explain why I shouldn't.
  4. Now the most important step of the whole course. Create your own AI rulebook:
    Copy this into Claude 👇
    Help me create a personal list of 5 rules for how I want to use AI tools like you. My values around this include [honesty / privacy / learning / creativity / independence / other — choose what matters to you]. Make it short, practical, and actually usable. Write it in first person, as if I’m writing it for myself.
  5. Read what Claude drafts. Edit it so it sounds like you. Add anything missing. Remove what doesn’t fit.
  6. Save your rulebook somewhere real. Notes app, phone lock screen, sticky note, journal — anywhere you might actually see it. You just created your personal AI philosophy.
🛑 This is the last module. You can stop right here. You’ve already done the most important thing — your rulebook is real.
🧠 Brain Break

You made it to the end of the course.

Take a real break. Not a module-break. A real one.

Walk around. Get a snack. Drink some water. Tell someone what you learned.

Seriously — tell someone. Even just saying out loud “I just learned how AI actually works” does something for your brain. It makes the learning stick.

✅ Wrap Up

The ONE thing to remember from this module — and from the whole course:

AI is a tool. You are the human. Your judgment, your values, and your curiosity are irreplaceable. Use AI to amplify them — not replace them.

You finished 8 modules. You understand what AI is, how it works, how to use it well, and where it falls short. That knowledge puts you ahead of most people who use these tools every day.

🎉 Course Complete

You finished Speaking the Language of AI. You now understand these tools better than most of the people who use them every day.
