Session 7 — Data Ethics | Grade 5 Data Science
Session 7 of 8

Data Ethics

Who owns your data? Who decides how it's used?
And what happens when data causes harm?

"Just because data can be collected doesn't mean it should be."

Data Science for Young Minds · Grade 5 · Ages 10–11

Hook — Think About This

A Fitness App Is Tracking You Right Now

What It Collects
  • Every step you take
  • Your heart rate all day
  • How long you sleep
  • Your location at all times
  • Your weight, if you enter it

What It Shares

The app is FREE because it sells "anonymized" health data to:

  • Insurance companies
  • Drug companies
  • Advertisers
  • Employers (sometimes)
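
Why is "anonymized" in quotation marks? A minimal Python sketch, for teachers who want to show it (every name, zip code, and number below is invented): records with no names attached can sometimes be matched back to one specific person.

    # "Anonymized" health records have no names...
    health_records = [
        {"zip": "12345", "birthday": "03-14", "avg_steps": 2100},
        {"zip": "12345", "birthday": "07-02", "avg_steps": 9800},
    ]

    # ...but matched against public info (a team roster, a social
    # media profile), a record can point to exactly one person.
    public_info = [
        {"name": "Sam", "zip": "12345", "birthday": "03-14"},
    ]

    for record in health_records:
        for person in public_info:
            if (record["zip"], record["birthday"]) == (person["zip"], person["birthday"]):
                print(person["name"], "averages", record["avg_steps"], "steps a day")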

Quick poll: Thumbs up if you'd still use the app. Thumbs down if you wouldn't.
There's no wrong answer — but why did you choose what you chose?

Definition

What Is Data Ethics?

Data ethics is the study of what is right and wrong when it comes to collecting, storing, sharing, and using data — especially data about people.

Data ethics asks questions like:

  • Who has the right to collect this data?
  • Did people actually agree to share it?
  • Could this data be used to hurt someone?
  • Is the data system treating everyone fairly?
  • Who is responsible if something goes wrong?

Why it matters to YOU:

  • You generate data every day
  • Apps, schools, and companies collect it
  • That data can affect your opportunities
  • You'll make decisions about data your whole life
  • Future laws will be shaped by your generation

Framework

5 Principles of Data Ethics

1. Privacy
People have the right to control who can see their personal information and how it is used. Some data should never be shared without permission.
2. Informed Consent
People should agree to share their data only after they fully understand what is being collected, how it will be used, and what the risks are.
3. Transparency
Organizations that collect data should be open and honest about what they collect, why, and what they do with it. No hidden tracking.
4. Fairness (Equity)
Data systems should treat all people fairly. They should not advantage some groups or harm others — especially already-disadvantaged groups.
5. Accountability
Those who collect and use data must answer for any harm caused. "The algorithm did it" is not an excuse.

Principle 1

Privacy — Who Should Know What About You?

Imagine you wrote in a diary every day. Would you want your school to read it? Your parents? The government? A company trying to sell you things?

Privacy means you get to decide — not others.

Usually OK to share

  • Your name on a class list
  • Your grade level
  • Favorite subject (if you want)

Depends on context

  • Your home address
  • Your daily schedule
  • Who your friends are

Strongly protected

  • Health and medical data
  • Religion and beliefs
  • Financial information

Key idea: Privacy isn't about having "something to hide." It's about having the power to decide what you share, with whom, and why.

Principle 2

Informed Consent — Did You Actually Agree?

Every time you click "I Agree" on an app, you may be consenting to share a lot of data. But is that real consent?

Problems with current consent:

  • Terms of Service can run tens of thousands of words, longer than Shakespeare's Hamlet
  • Written in dense legal language
  • Often an all-or-nothing choice: agree or don't use the app
  • Kids are rarely asked at all

What real consent requires:

  • Plain language explanation
  • Person must be old enough to understand
  • Must be a genuine choice (not "agree or we won't help you")
  • Can be taken back (opt-out)

Discussion: If you didn't read the TOS, did you really consent? Is clicking "agree" enough?

Case Study 1

Health App Data and Insurance

The Situation

A free fitness app tracks your steps, sleep, and heart rate. It sells your health data to insurance companies. People who exercise more pay lower insurance premiums. People who don't exercise pay more.

Possible benefits:

  • Rewards people who stay healthy
  • Encourages exercise
  • Funds health research
  • Keeps insurance cheaper for some

Possible harms:

  • People with disabilities can't exercise the same way
  • Unsafe neighborhoods limit outdoor activity
  • Long work hours leave no time
  • Punishes circumstance, not choices

The real question: Is it ethical to use data to make decisions that affect people's lives when the data doesn't capture the full picture of their situation?
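
For a class that codes in Python, here is a sketch of what such a pricing rule might look like (the function, thresholds, and prices are invented, not taken from any real insurer). The formula can see step counts, but it can never see why someone's steps are low.

    def monthly_premium(avg_daily_steps: int) -> float:
        # Hypothetical pricing rule that sees ONLY step data.
        base = 100.0
        if avg_daily_steps >= 10_000:
            return base * 0.8   # discount for "active" users
        if avg_daily_steps >= 5_000:
            return base
        return base * 1.3       # surcharge for "inactive" users

    # The data can't tell these two people apart: someone who chooses
    # not to exercise, and someone who uses a wheelchair, works night
    # shifts, or lives where walking outside isn't safe.
    print(monthly_premium(3_000))  # 130.0 either way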

Case Study 2

Predictive Policing Algorithm

The Situation

A city uses an AI algorithm to predict which neighborhoods are most likely to have crime. Police are sent there more often. The algorithm was trained on historical arrest data.

The bias cycle:

  1. Past biased policing produces more arrests in certain areas
  2. The algorithm, trained on that arrest data, flags those areas
  3. More police are sent there, so more arrests happen there
  4. The new arrests make the algorithm look "confirmed", and the loop returns to step 2

Key insight: If the data used to train an algorithm comes from a biased system, the algorithm will learn and repeat that bias — even if no one programs it to be biased.
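
To watch the cycle run, here is a tiny Python simulation (the neighborhoods, rates, and patrol counts are made up for illustration). Both areas have the same true crime rate, yet the one that starts with more patrols keeps getting flagged.

    import random

    true_rate = {"A": 0.10, "B": 0.10}  # identical true crime rates
    patrols = {"A": 20, "B": 10}        # but A was historically over-patrolled

    random.seed(0)  # make the demo repeatable
    for year in range(1, 6):
        # Arrests scale with how many patrols are sent, not with
        # any real difference between the neighborhoods.
        arrests = {
            hood: sum(random.random() < true_rate[hood] for _ in range(n * 10))
            for hood, n in patrols.items()
        }
        # The "algorithm": add patrols wherever past arrests were higher.
        flagged = max(arrests, key=arrests.get)
        patrols[flagged] += 5
        print(f"Year {year}: arrests={arrests}, extra patrols sent to {flagged}")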

Case Study 3

AI Surveillance Cameras in Schools

The Situation

A school district installs AI cameras that track every student's movement and flag "unusual behavior" to administrators in real time. Parents were notified by newsletter but not asked to vote.

Arguments for it:

  • Safety is a real and serious concern
  • Could detect threats earlier
  • Parents may feel reassured

Arguments against it:

  • Constant watching creates anxiety
  • Students may not speak or act freely
  • AI may flag certain groups unfairly
  • Students never consented
  • Who decides what "unusual" means?

Power question: Should students have a say in surveillance of their own school? Is a newsletter enough for consent?
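
On that last question: in real systems, "unusual" is ultimately a choice someone typed into code. A sketch (the rule, threshold, and scenario below are entirely invented):

    HALLWAY_LIMIT_SECONDS = 90  # who chose 90? why not 300?

    def flag_student(seconds_in_hallway: int) -> bool:
        # One developer's judgment call now labels real students.
        return seconds_in_hallway > HALLWAY_LIMIT_SECONDS

    print(flag_student(120))  # True: flagged for chatting with a friend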

Deeper Concept

Algorithmic Bias — When Data Systems Are Unfair

Algorithmic bias happens when a computer system produces results that unfairly advantage some groups over others — even when no one intended it.

How it happens:

  • Training data reflects past discrimination
  • Designers don't test for all groups
  • Certain groups are underrepresented in data
  • Metrics chosen favor some groups over others

Real examples:

  • Facial recognition works worse on darker skin tones
  • Hiring algorithms ranked male resumes higher
  • Medical algorithms undertreated Black patients
  • Loan algorithms charged higher rates to some zip codes

"Garbage in, garbage out" — biased data creates biased algorithms, even if the math is perfect.

Principles 3 & 5

Transparency and Accountability

Transparency

Organizations should be honest about:

  • What data they collect
  • Why they collect it
  • Who sees it
  • How long they keep it
  • How to delete your data

Accountability

When data causes harm, someone must answer for it.

  • "The algorithm decided" is not enough
  • Humans design and choose algorithms
  • Companies can be held legally responsible
  • Affected people deserve explanation and remedy

Discussion: If a self-driving car hits someone because of a coding decision made years ago, who is responsible — the programmer, the company, the car owner?

Empowerment

Your Data Rights

In many places, people have legal rights over their data. Knowing these rights is the first step to protecting yourself.

Right to Access

You can ask companies what data they have about you. Many are legally required to tell you.

Right to Correction

If data about you is wrong, you have the right to get it corrected — especially if it's affecting decisions about you.

Right to Deletion

In some regions, you can ask companies to delete your data entirely. This is called the "right to be forgotten."

Note: These rights are not equal everywhere. In the US, kids under 13 have COPPA protections. The EU has stronger laws (GDPR). Some places have almost no protections. Law is still catching up to technology.

Critical Thinking

Legal ≠ Ethical

Something can be legal (allowed by law) but still unethical (morally wrong). And sometimes things that feel right are technically illegal.

Legal but possibly unethical:

  • Selling user data to advertisers (with buried consent)
  • Using facial recognition in stores without telling shoppers
  • Tracking employees' every movement during work hours

Why laws lag behind:

  • Technology changes faster than laws
  • Lawmakers don't always understand the technology
  • Companies lobby to keep regulations light
  • Harms aren't always obvious until later

Data scientists and tech workers make ethical choices every day — even when they aren't required to by law.

🧠 Brain Break — Would You Rather?

Stand = Option A  |  Sit = Option B  |  No wrong answers!

Q1: A) A free app that collects your data  |  B) A $5/month app that collects nothing

Q2: A) AI that reduces crime by 20% but is sometimes wrong  |  B) Human police only, no AI predictions

Q3: A) School reads all student emails "just in case"  |  B) School reads none even for safety concerns

Q4: A) Doctors access your health data without asking in emergencies  |  B) Always ask first, even in emergencies

Session 7 Wrap-Up

What You Now Know About Data Ethics

  • Privacy: the right to control who sees your personal data
  • Informed Consent: real agreement after truly understanding what you're agreeing to
  • Transparency: being honest about what data is collected and why
  • Algorithmic Bias: when data systems produce unfair results for some groups
  • Accountability: responsibility for harm caused by data systems
  • Legal ≠ Ethical: allowed by law doesn't automatically mean morally right

Before you leave: Write one sentence — your personal data ethics pledge.
"As a data citizen, I will always ask..."

Next session: You become the data scientist — Session 8: Capstone Research Project