Lesson Report:
**Title: Applying OECD AI Principles to Repair the Allegheny Child Welfare Algorithm**
In this session, the class moved from diagnosing abstract problems in AI and democracy to practicing how to *repair* a real-world AI system using policy frameworks. Students revisited the Allegheny child welfare risk scoring algorithm and systematically applied the OECD AI principles to identify violations, translate those principles into concrete rules, and begin designing a three‑step plan to make the system more democratic, fair, and accountable. The lesson explicitly linked prior work on EU recommendations for AI with the OECD’s values-based principles, deepening students’ ability to operationalize high-level norms.

---
## Attendance
– Number of students explicitly mentioned as absent: **0**
(Some students had temporary connection issues or did not respond in discussion/polls, but no one was formally marked “absent” in the transcript.)

---
## Topics Covered (Chronological, with Activities)
### 1. Opening and Framing: Two “Evils” for Democracy
**Activity: Live Zoom Poll – “Which is a greater threat to democracy?”**
– Question posed:
– Which of the following is a greater threat to democracy?
1. A **biased but transparent** algorithm
2. An **accurate but black box** algorithm
– Instructor sets up a Zoom poll (with some self‑deprecating comments about “boomer moments” with the tech).
– Students vote; results are essentially **50/50**, even after multiple reminders and countdowns.
– Instructor’s takeaway:
– The class is sharply divided on which is “worse.”
– This mirrors the broader controversy: both outcomes are deeply problematic for democratic values.
– One of the “big takeaways” so far in the course is that it is **hard to rank** these harms.
**Conceptual framing:**
– Today’s goal: move **beyond merely identifying** these problems to:
– Investigate how emerging **AI policy frameworks** (specifically OECD principles) propose to address them.
– Explore both the **promises and limitations** of these frameworks in practice.
– Connection to previous class:
– Last class focused on the **EU recommendations** on AI.
– Today: **OECD AI principles**, especially their **values-based principles**.
– Later (future class): OECD **recommendations for policymakers** (second column) will be addressed.

---
### 2. Introduction to OECD AI Principles (Values-Based Column)
**Activity: Individual skim of OECD principles**
– Instructor shares a **link** (and notes it will also be posted on eCourse/LMS later).
– On screen: an OECD page with **two columns**:
– Left: **Values-based principles** (focus for today)
– Right: **Recommendations for policymakers** (for a later class)
– Students asked to:
1. Choose **one** principle category that interests them from the five values-based OECD principles.
2. Spend ~30–40 seconds scanning that principle.
3. Spend ~2 minutes forming **initial thoughts**:
– How does this principle attempt to address the kinds of AI-and-democracy problems discussed in the course so far?
**OECD Values-based Principles referenced** (implicitly or explicitly):
– Inclusive growth, sustainable development and well-being
– Human-centred values and fairness / human rights and democratic values (fairness, privacy)
– Transparency and explainability
– Robustness, security and safety
– Accountability
**Instructor’s aim at this stage:**
– Familiarize students with the *form* and language of OECD principles.
– Highlight that these look structurally similar to the **EU principles** discussed last week (broad, high-level, somewhat abstract).

---
### 3. Revisiting the Allegheny Algorithm: Purpose and Failure
**Activity: Whole-class recap (student-led)**
The class returns to a familiar case: the **Allegheny Family Screening Tool (AFST)**, a child welfare risk scoring algorithm.
**3.1. What is the Allegheny Algorithm supposed to do?**
Student explanation (Ermahan):
– The system is a **child welfare risk assessment algorithm** used in Allegheny County.
– It predicts the **likelihood that a child is in an abusive or neglectful situation**, or at risk of such conditions.
– It assigns a **risk score** to families/children, which then:
– Guides **social workers** in deciding whether to investigate, intervene, or leave families alone.
– Intended purpose:
– **Noble intention**: enable earlier, more accurate identification of abuse/neglect to protect children.
Instructor emphasizes:
– AI is used as a **decision-support tool** for extremely sensitive social services decisions.
– The algorithm’s numeric output has **real consequences** for families (e.g., intervention, investigations, removal).
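If a concrete illustration would help in a future session, the decision-support role can be sketched in a few lines of Python. The 1–20 scale matches public reporting on the AFST, but the cutoffs and recommended actions below are invented for illustration only:

```python
# Hypothetical sketch of a risk score gating a screening decision.
# The 1-20 scale follows public descriptions of the AFST; the cutoffs
# and action labels are invented, not the county's actual policy.

def screening_recommendation(risk_score: int) -> str:
    """Map a 1-20 risk score to a suggested action for the call screener."""
    if risk_score >= 17:
        return "mandatory screen-in: open an investigation"
    if risk_score >= 10:
        return "flag for supervisor review"
    return "screener discretion: may screen out"

print(screening_recommendation(18))  # -> mandatory screen-in: open an investigation
```

Even this toy version shows why the numeric output carries real consequences: a one-point change in the score can move a family across an intervention threshold.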
**3.2. What went wrong with the Allegheny Algorithm?**
Student explanation (Elijah):
– The algorithm displayed **systematic bias**:
– It disproportionately categorized certain groups of children, especially those from **low‑income families**, as “high risk.”
– Meanwhile, children from **wealthier families** were often not flagged even when they experienced actual abuse or neglect.
– It tended to:
– Over-scrutinize families with **unclean or “unkempt” homes** and other poverty-related markers.
– Under-scrutinize families that **fit the profile** associated with higher socioeconomic status.
– Net effect: **misclassification**:
– Many children who needed intervention were not flagged.
– Many families who were **not neglectful/abusive** were nonetheless treated as high risk.
Instructor synthesis:
– The system did **not** fulfill its purpose of accurately identifying abuse/neglect.
– Its most disturbing aspect: **who** was labeled as dangerous or neglectful:
– **Poor families** were systematically over-flagged.
– **Wealthier families** were comparatively protected from scrutiny.
– So the system was both:
– **Inaccurate**, and
– **Deeply inequitable** in its errors.
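The point that the system was “deeply inequitable in its errors” lends itself to a short worked example: the disparity becomes visible by comparing false-positive and false-negative rates across income groups. The records below are fabricated solely to demonstrate the computation:

```python
# Illustrative fairness check: compare error rates across income groups.
# The records are fabricated purely to demonstrate the computation.

records = [
    # (income_group, flagged_high_risk, actual_maltreatment)
    ("low",  True,  False), ("low",  True,  True),  ("low",  True,  False),
    ("low",  False, False), ("high", False, True),  ("high", False, False),
    ("high", True,  True),  ("high", False, True),
]

def error_rates(group: str) -> tuple[float, float]:
    rows = [r for r in records if r[0] == group]
    negatives = [r for r in rows if not r[2]]   # no actual maltreatment
    positives = [r for r in rows if r[2]]       # actual maltreatment
    fpr = sum(r[1] for r in negatives) / len(negatives)      # wrongly flagged
    fnr = sum(not r[1] for r in positives) / len(positives)  # wrongly cleared
    return fpr, fnr

for group in ("low", "high"):
    fpr, fnr = error_rates(group)
    print(f"{group:>4}-income: false-positive rate {fpr:.2f}, false-negative rate {fnr:.2f}")
```

On this toy data, the pattern students described appears directly: low-income families absorb the false positives (unwarranted scrutiny) while wealthier families absorb the false negatives (missed abuse).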
**Transition:**
– Today’s question is no longer “What went wrong?” (already covered in prior weeks) but:
– **“How might we fix it using the OECD framework?”**
– Specifically: How would we **redesign** or **govern** the Allegheny algorithm so it aligns better with OECD principles?

---
### 4. First Breakout: Diagnosing OECD Principle Violations
**Activity: 5-minute small-group analysis (6 breakout rooms)**
Task for each group:
1. Read through the OECD **values-based principles**.
2. Identify **which principles** the Allegheny algorithm **violates**.
3. Be ready to share **at least one violated principle** and why.
**Reported group findings (whole-class debrief):**
Students identified multiple violations; core themes:
#### 4.1. Human-centred values and fairness / Human rights and democratic values
– Observations (multiple students including Ermahan, Amina):
– The algorithm **discriminated against poor families**, effectively encoding socioeconomic bias.
– It treats families from different backgrounds **unequally**, undermining fairness and human dignity.
– It does not act in a **“human-centred”** way because:
– It fails to consider the **full context** of families’ lives.
– It equates indicators of poverty with likelihood of neglect/abuse.
– Thus, it erodes:
– **Fairness**
– **Non-discrimination**
– **Respect for human rights** (e.g., the right to family integrity, presumption of good faith)
#### 4.2. Robustness, security, and safety
– Observations (Silik and others):
– The principle states AI should be **robust, secure, safe**, and when feasible should **bolster information integrity**.
– The Allegheny algorithm:
– Produced **false or misleading outputs**.
– Systematically misidentified high-risk cases, failing its **core safety function**.
– Because decisions based on its outputs affect real children, these inaccuracies are especially severe:
– Risk of **unwarranted interventions** in low-income families.
– Risk of **failing to protect** children in wealthier families.
– Therefore, it violates robustness and safety requirements.
#### 4.3. Transparency and explainability
– Observations (Gavin, Banu/Samira’s group, others):
– The algorithm functioned effectively as a **black box**:
– People subject to its decisions (families, social workers) had no clear explanation of **how** the risk score was generated.
– The principle of transparency and explainability requires:
– The reasoning behind AI decisions to be **understandable and inspectable**, especially when decisions are high-stakes.
– Without explanation:
– Individuals cannot ask: **“Why did you classify me as high risk?”**
– There is no meaningful way to **challenge or appeal** the algorithm’s assessment.
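A brief sketch could show students what “inspectable reasoning” might look like in practice. It assumes a simple linear scoring model; the feature names and weights are invented, not the AFST’s:

```python
# Hypothetical explanation generator for a linear risk model.
# Feature names and weights are invented for illustration only.

WEIGHTS = {"prior_referrals": 2.0, "public_benefits_use": 1.5, "parent_age_factor": 0.8}

def explain_score(features: dict[str, float], top_n: int = 2) -> list[str]:
    """List the features contributing most to the score, giving a family a
    concrete answer to 'why was I classified as high risk?'."""
    contributions = {name: WEIGHTS.get(name, 0.0) * value
                     for name, value in features.items()}
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return [f"{name} contributed {points:+.1f} points" for name, points in ranked[:top_n]]

print(explain_score({"prior_referrals": 3, "public_benefits_use": 1, "parent_age_factor": 0}))
# -> ['prior_referrals contributed +6.0 points', 'public_benefits_use contributed +1.5 points']
```

An explanation at this level of detail is also what makes appeal possible: a family can dispute a specific input rather than an opaque number.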
#### 4.4. Privacy
– Observations (Gavin and others):
– The system drew heavily on **administrative data from public services** (e.g., use of public benefits, publicly funded mental health services).
– Families who used **public services** had their data repurposed, often without meaningful awareness or consent, to:
– Infer their risk of abusing/neglecting their children.
– People who could afford **private services** were spared this level of data surveillance.
– This generates:
– **Privacy violations**, especially for vulnerable populations.
– A **privacy inequity**: poorer families have less data protection.
#### 4.5. Accountability
– Observations (Bekaim and others):
– OECD principle: actors across the AI lifecycle must be **accountable**; there must be **traceability** and **clear responsibility**.
– In Allegheny:
– The model outputs a risk score that heavily influences decisions, but:
– It is not clear who is ultimately **responsible** when the score is wrong.
– There’s no robust, built-in **appeals mechanism** or process to correct errors.
– There is limited provision for systematic **risk management** or **error review** after decisions.
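The traceability gap can likewise be made concrete with a sketch of the minimal audit-trail record an appeals process would need. All field names and the contact address are hypothetical:

```python
# Hypothetical audit-trail entry: every score-influenced decision is logged with a
# named accountable reviewer and an appeals contact. All fields are invented.

import json
from datetime import datetime, timezone

def log_decision(case_id: str, risk_score: int, decision: str, reviewer: str) -> str:
    """Build an auditable record tying a score to the action taken and to the
    human responsible for it; returns the JSON line that would be stored."""
    return json.dumps({
        "case_id": case_id,
        "risk_score": risk_score,
        "decision": decision,
        "accountable_reviewer": reviewer,            # a named person, not "the model"
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "appeal_contact": "appeals@county.example",  # placeholder appeals channel
    })

print(log_decision("case-042", 18, "screen-in", "supervisor_jdoe"))
```

The key design point is the `accountable_reviewer` field: a named human, not “the model,” answers for each decision.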
**Instructor synthesis of this phase:**
– The Allegheny algorithm violates **multiple** OECD principles simultaneously:
– Fairness/human-centredness
– Robustness and safety
– Transparency and explainability
– Privacy
– Accountability
– This multi-principle failure makes it a powerful case study for how abstract frameworks might guide **reform**.

---
### 5. Second Breakout: Operationalizing Individual OECD Principles
**Activity: 10-minute breakout – turning principles into rules**
Task for each group (each assigned one principle):
1. **Translate** their assigned OECD principle into **one clear, actionable sentence** that describes what a *good* AI system should do.
2. **Formulate one question** to ask the creators of the Allegheny algorithm related to that principle (e.g., “Why did you use this data?” “How do you evaluate fairness?”).
3. Prepare a **defense** of the principle against criticisms such as “too vague,” “not actionable,” or “bad for business.”
In the whole-class debrief, only the **one-sentence rules** were shared:
#### 5.1. Principle: Inclusive growth / fairness / human-centred (Group 1: Banu, Barfia, Ermahan, Samira)
– Actionable rule (Barfia paraphrasing group consensus):
– A good AI system should operate in a way that is **fair, transparent, and accountable**, **respecting human rights**, clearly explaining its decisions, and **avoiding harmful outcomes**.
– Instructor note:
– This statement effectively bundles several OECD values (fairness, transparency, accountability, harm-avoidance) into a **single operational norm**.
#### 5.2. Principle: Human rights, democratic values, fairness, privacy (Group 2: Aya, Freshta, Gavin, Siddiq)
– Actionable rule (Siddiq):
– AI systems should treat people **fairly and equally**, regardless of **socioeconomic status, race, age**, or similar attributes.
– Emphasis:
– The algorithm must not reproduce or amplify existing **social inequalities**.
– Aligns closely with concerns about Allegheny’s differential treatment of **poor vs. wealthier families**.
#### 5.3. Principle: Transparency and explainability (reported from two groups, some overlap)
– Actionable rule (Lillian’s group):
– “AI models should give an explanation of their reasoning and why they came up with this decision.”
– Another group’s elaboration (Banu/Samira):
– People must be able to **question, investigate, and correct** AI behavior.
– Key pedagogical point:
– Students clearly recognize that **explainable decisions** are essential for democratic accountability and procedural justice.
#### 5.4. Principle: Robustness, security and safety (earlier analysis, not fully reported in this debrief but addressed previously)
– The rule distilled previously by students:
– AI systems should produce **reliable, credible information** and have mechanisms to detect and address **errors or biased proxy variables**.
– Ties back to:
– The need to scrutinize training data and proxy variables (e.g., poverty as a proxy for neglect).
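The “biased proxy variable” idea is easy to demonstrate: a feature that correlates strongly with a sensitive attribute can smuggle that attribute into the score even when the attribute itself is excluded. The toy screen below uses invented data (and `statistics.correlation`, which requires Python 3.10+):

```python
# Toy proxy-variable screen: flag features that correlate strongly with a
# sensitive attribute (here, low income). All data is fabricated.

from statistics import correlation  # Python 3.10+

low_income      = [1, 1, 1, 1, 0, 0, 0, 0]   # sensitive attribute (1 = low income)
benefits_use    = [1, 1, 1, 0, 0, 0, 0, 0]   # candidate feature
prior_referrals = [2, 0, 1, 3, 1, 0, 2, 1]   # another candidate feature

for name, feature in [("public_benefits_use", benefits_use),
                      ("prior_referrals", prior_referrals)]:
    r = correlation(low_income, feature)
    verdict = "POSSIBLE PROXY - review before use" if abs(r) > 0.5 else "ok"
    print(f"{name}: r = {r:+.2f} -> {verdict}")
```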
#### 5.5. Principle: Accountability (Group 5: Aigerim, Bekaim, Wuti)
– Actionable rule (Aigerim):
– Every stage of an AI system’s development and use should have **clearly designated actors** who are responsible for its **decisions, impacts, and oversight**, with full **traceability**.
– Significance:
– This would make it clear whom affected individuals can **contact to challenge decisions**, and who must respond to harms.

---
### 6. Final Breakout: Designing a Three-Step Plan to “Fix” the Allegheny Algorithm
**Activity: Mixed-principle breakout groups (10 minutes)**
New groups were formed such that each group had (as much as possible) at least one “representative” of each principle. Task:
1. **Determine the core problems** with the Allegheny Algorithm:
– Bias and discrimination (especially against poor families).
– Use of **proxy variables** strongly correlated with poverty.
– Lack of **transparency**: families/social workers can’t see how scores are produced.
– Weak **privacy** protections for people using public services.
– Lack of **accountability/appeals** for incorrect scores.
– Insufficient **robustness/safety** (lots of misclassifications).
2. **Transform the principle-based rules into concrete fixes**:
– How can each principle’s rule suggest changes in data collection, model design, oversight, and user interaction?
– Example directions:
– Fairness: adjust training data, remove or de-weight proxies for poverty, perform fairness audits (a code sketch of this direction appears at the end of this section).
– Transparency: provide explanations for scores, publish documentation of model features.
– Privacy: limit data sources, implement strict data governance and informed consent.
– Accountability: define responsible roles, create accessible complaint and appeals processes.
– Robustness: regular performance monitoring and retraining, stress-testing against systemic bias.
3. **Draft a three-step plan** to transform the algorithm from “extremely flawed” to more functional and aligned with OECD principles:
– Instructor suggestions for structuring the three steps:
1. Identify and articulate the **core problems**.
2. Map each **principle-based rule** to a concrete design or governance change.
3. Envision and describe what a **functional version** of the Allegheny system would look like (in terms of fairness, transparency, privacy, robustness, and accountability).
The transcript ends while groups are still in breakout discussion; a full-class share-out of the three-step plans does not appear in the text provided.
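If a worked example would help anchor step 2 during next class’s share-out, the fairness direction noted above can be shown as a one-function fix: drop known poverty proxies before training. The feature names are invented, and a real proxy list would come from a domain and fairness review rather than a hard-coded set:

```python
# Sketch of one concrete fix: remove poverty-proxy features before training.
# Feature names are invented; a real proxy list would come from a fairness audit.

POVERTY_PROXIES = {"public_benefits_use", "public_mental_health_services",
                   "housing_condition"}

def apply_fairness_fix(feature_row: dict) -> dict:
    """Strip poverty proxies so socioeconomic status cannot drive the score."""
    return {k: v for k, v in feature_row.items() if k not in POVERTY_PROXIES}

raw = {"prior_referrals": 2, "public_benefits_use": 1, "housing_condition": 0}
print(apply_fairness_fix(raw))  # -> {'prior_referrals': 2}
```

Worth noting for the debrief: dropping proxies alone does not guarantee fairness, since the remaining features can jointly reconstruct them; that is why the plan also needs ongoing error-rate audits under the robustness direction.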

---
## Actionable Items
Organized by urgency for you as instructor:
### High Priority (Before Next Class)
– **Post materials to LMS (eCourse)**
– Ensure the **OECD AI principles link** used in class is posted (if not already).
– Optionally, add a short note reminding students which **column** (values-based principles) they worked with today.
– **Plan follow-up discussion on three-step plans**
– Since the transcript ends mid-activity, begin next class by:
– Having 2–3 groups **present their three-step reform plans** for the Allegheny algorithm.
– Prompting the class to critique how well the plans implement **each OECD principle**.
### Medium Priority (Next 1–2 Weeks)
– **Transition to OECD “Recommendations for policymakers”**
– Design a session where students:
– Move from the values-based principles to the **second column** of OECD guidelines.
– Consider how **policy-level recommendations** would affect procurement, oversight, and adoption of systems like Allegheny.
– **Clarify principle distinctions and overlaps**
– Some groups blended multiple principles (e.g., fairness + transparency + accountability in one sentence).
– Consider a short activity where students **map specific system features** to specific principles to reinforce the distinctions.
### Low Priority / Ongoing
– **Consider an applied assignment**
– Have students choose another real-world AI system (e.g., credit scoring, facial recognition, hiring algorithms) and:
– Identify which OECD principles it violates.
– Produce a **three-step OECD-aligned repair plan** similar to today’s exercise.
– **Track common conceptual difficulties**
– Note recurring areas of struggle:
– The tension between **accuracy and transparency** (from the initial poll).
– Understanding **how to operationalize** high-level principles into design decisions.
– Use these observations to shape review or exam questions later in the course.
## Homework Instructions
NO HOMEWORK
The transcript describes only in-class activities: the poll, a brief individual scan of the OECD principles, and multiple timed breakout-room discussions (e.g., “I’m going to put you into groups of three for five minutes” and “We will come back in ten”). The only out-of-class reference is the instructor’s promise to “have this [the AI principles link] on eCourse as well later tonight”; no instructions were given to complete work outside of class or before the next session.