| Time (min) | Block | What You Do | Student Activity |
|---|---|---|---|
| 0–5 | Hook | Show the "fitness tracker data sold to insurance" headline. Ask: should companies be able to do this? | Think-pair-share, quick poll (thumbs up/down) |
| 5–18 | Direct Instruction | Teach 5 data ethics principles: privacy, consent, transparency, fairness, accountability | Fill in vocab definitions on worksheet |
| 18–40 | Case Studies | Walk through 3 nuanced cases with no clear right answer. Facilitate structured debate using sentence frames | Small groups, worksheet case analysis |
| 40–48 | Debrief | Collect class positions on each case; surface disagreements; emphasize that reasonable people disagree | Class vote + share reasoning |
| 48–55 | Brain Break | "Would You Rather?" data ethics edition: fast-paced, no wrong answers | Stand = Option A, Sit = Option B |
| 55–60 | Wrap-Up | Assign take-home; preview Session 8 (their own research project) | Write one data ethics pledge sentence |
**Case 1: The Fitness App That Sells Your Data**
The situation: A popular fitness app tracks your steps, sleep, and heart rate. The app is free because the company sells your anonymized health data to insurance companies. The insurance companies use this data to set premiums: people who exercise more pay less.
The tension: Benefits include cheaper insurance for healthy people and better health research. Harms include people who can't exercise (because of disabilities, long work hours, or unsafe neighborhoods) paying more, not because they made a "bad choice" but because of factors outside their control. (A numeric sketch of this pricing mechanism follows this case.)
Key questions for class: Did users truly consent if it's buried in the Terms of Service? Is this fair to people who can't exercise? Who benefits and who is harmed?
Instructor note: Push back on "just read the TOS"; studies have found that many apps' terms of service run longer than a novel. Explore what "real consent" requires.
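Optional demo for instructors comfortable with code: a minimal Python sketch of the kind of pricing mechanism this case describes. Everything here is hypothetical: the premium formula, the numbers, and the 10,000-step target are invented for illustration. The point it makes concrete is that the formula only sees the step count, never the reason behind it.

```python
# Hypothetical sketch of activity-based premium pricing (all numbers invented).
# The formula never asks WHY a step count is low, so a disability, a long
# shift, or an unsafe neighborhood all read the same as a "bad" score.

BASE_PREMIUM = 200.0  # dollars/month, hypothetical

def monthly_premium(avg_daily_steps: int) -> float:
    """Discount scales with steps; no context for why steps are low."""
    target = 10_000                                      # invented target
    activity_score = min(avg_daily_steps / target, 1.0)  # 0.0 to 1.0
    discount = 0.30 * activity_score                     # up to 30% off
    return round(BASE_PREMIUM * (1 - discount), 2)

# Two hypothetical users, identical health otherwise:
print(monthly_premium(12_000))  # office worker with a gym habit -> 140.0
print(monthly_premium(2_500))   # wheelchair user or night-shift worker -> 185.0
```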
**Case 2: Predictive Policing**
The situation: A city uses an algorithm to predict which neighborhoods are likely to have crime. Police are sent to those neighborhoods more often. The algorithm was trained on historical arrest data.
The tension: The city says it's just using data to be efficient. Critics say that if past policing was biased (more arrests in certain neighborhoods regardless of actual crime), the algorithm learns and reinforces that bias: more police presence → more arrests → the algorithm "confirms" its prediction → the cycle continues. (A toy simulation of this loop follows this case.)
Key questions for class: If the data comes from a biased system, can the algorithm be unbiased? Is efficiency a good reason to use potentially biased predictions? Who decides what counts as "crime"?
Instructor note: This is a real technology in use today. Students may feel strongly. Validate all perspectives while helping them see the structural bias issue.
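Optional demo: a toy feedback-loop simulation, with all numbers invented, that you can run or walk through on a slide. Two neighborhoods have identical true crime; only the starting arrest data is biased. What students should see in the output is that the biased split never corrects itself.

```python
# Toy simulation (all numbers invented) of the feedback loop described above.
# Neighborhoods A and B have the SAME true amount of crime, but the
# historical data is biased: A was policed more, so A has more arrests.

true_crime = {"A": 100, "B": 100}   # identical underlying crime
arrests = {"A": 80, "B": 40}        # biased starting data

for year in range(1, 6):
    total = sum(arrests.values())
    # The "algorithm": predict risk from past arrests, send patrols accordingly.
    patrol_share = {n: arrests[n] / total for n in arrests}
    # Arrests scale with patrol presence: you find crime where you look for it.
    arrests = {n: true_crime[n] * patrol_share[n] for n in arrests}
    print(f"year {year}: A gets {patrol_share['A']:.0%} of patrols, "
          f"arrest data says A/B = {arrests['A']:.0f}/{arrests['B']:.0f}")

# Output: A keeps ~67% of patrols every year. The data "confirms" the
# prediction even though true crime in A and B is identical.
```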
**Case 3: AI Cameras at School**
The situation: A school district installs AI cameras that track every student's movement and flag "unusual behavior" to school administrators in real time. The district says it's for safety after a school shooting threat. Parents were told in a newsletter but not asked to vote.
The tension: Safety is a genuine concern. But constant surveillance may create anxiety, distrust, and a "chilling effect" on students' behavior: they may not express themselves freely knowing they're watched. The AI may also flag students from certain groups more often. There was no vote: was that consent? (A minimal sketch of how an "unusual behavior" flag can work follows this case.)
Key questions for class: Does safety justify surveillance? Who decides what behavior is "unusual"? Is a newsletter enough for consent? Should students have a say?
Instructor note: Students will relate to this personally. Encourage them to think about power: who has it, and who doesn't.
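Optional demo: a minimal sketch of how an "unusual behavior" flag might work under the hood, using a simple statistical outlier score on hypothetical data. The discussion hook is the `THRESHOLD` cutoff: a person picked that number, which is exactly the "who decides?" question.

```python
# Minimal sketch (hypothetical data) of an "unusual behavior" flag:
# a statistical anomaly score plus a threshold that a PERSON chose.
# The system doesn't know what "unusual" means; someone decided a number.

from statistics import mean, stdev

# Hypothetical: minutes each student spent in hallways this week.
hallway_minutes = [12, 15, 11, 14, 13, 16, 12, 45]  # last student is an outlier

mu, sigma = mean(hallway_minutes), stdev(hallway_minutes)
THRESHOLD = 2.0  # z-score cutoff: a policy choice, not a scientific fact

for student_id, minutes in enumerate(hallway_minutes):
    z = (minutes - mu) / sigma
    if abs(z) > THRESHOLD:
        print(f"student {student_id} flagged: z = {z:.1f}")

# A student with a hall pass for physical therapy or a job in the front
# office gets flagged like anyone else: the score has no idea WHY the
# behavior differs from the average.
```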
Let students sit with discomfort: not every question has a clear answer. That's the point.
Instructions: Read each scenario. Stand = Option A, Sit = Option B. No wrong answers: quick vote, then one student explains their choice.