Lesson Report:
Title
Bridging Bias to Accountability: Synthesizing Algorithmic Harms and Democratic Principles
This session closed the “diagnosing problems” phase of the course and prepared students to pivot toward solutions. Students revisited facial recognition bias through Joy Buolamwini’s case, then used course readings (Eubanks and Beckman) to analytically “bridge” code/data, lived experience, culture, and industry. The class launched a capstone synthesis diagram centered on algorithmic oppression, to be continued next session.

Attendance
– Absent (explicitly mentioned): 0
– Noted participants/late arrivals: Ermihan, Becayim, Amin, Elijah (late), Imaad (joined during class), Banu (question), Aya (clarification)

Topics Covered
1) Opening and assignment logistics: Policy memo + Shark Tank pitch
– Reminder: Policy memo due Saturday; informal 2-minute “Shark Tank”-style pitch in class next week.
– Pitch format: Identify an urgent problem at the AI–democracy nexus and propose a policy to address it; peers “invest” via votes.
– Order/scheduling: Random assignment of speakers (1.5–2 class sessions likely). Instructor to email details and order tonight.
– Submission channel: Email accepted/preferred for the memo (student asked; instructor confirmed).
– Length: 4 pages, excluding references (a student asked about word count; the instructor confirmed page length per the syllabus).
– Q&A availability: Instructor offered to answer assignment questions at end of class.

2) Share-outs from prior diagram activity (Joy Buolamwini/facial recognition bias)
Goal of activity: Map four interacting layers behind algorithmic bias—code/data, lived experience, culture, and industry—and explain how societal structures and technical choices co-produce errors.
– Group presentation 1 (Ermihan):
– Culture–industry link: Default assumptions about the “normal” user shape development priorities, leaving minorities underrepresented.
– Data–industry–user chain: Skewed datasets lead to downstream harms for end-users, especially those outside the majority phenotype.
– Lived experience: Students struggled to fully articulate this link but flagged its importance for surfacing harm.
– Group presentation 2 (Becayim):
– Data imbalance: Overrepresentation of white faces in training sets; underrepresentation of other racial groups.
– Producer/owner bias: Designer and institutional preferences can embed discrimination beyond mere data limitations.
– Monocultural prioritization: Dominant cultures often center their own norms, amplifying marginalization.
– Instructor elaborations:
– Data sourcing: High-volume web scraping and automation favor already-prevalent images online (white/European faces), resulting in biased training distributions and limited human quality control.
– Lived experience burden: People of color disproportionately identify, report, and must contest harms.
– Normalization: When failure modes are normalized, incentives to remediate decrease.

3) Course framing shift: From problems to solutions
– This lesson and the policy memos mark the close of the problem-diagnosis phase.
– Next phase: Evaluate major proposed solutions; today’s synthesis work prepares for that pivot.

4) Breakout Activity 1: Bridging course concepts with readings (10 minutes; 5 rooms)
Purpose: Use specific readings to connect layers from prior diagrams.
– Question 1 (Eubanks): Use the Allegheny Family Screening Tool (AFST) case to bridge code/data and lived experience (e.g., how model design + inputs translate into families’ on-the-ground experience).
– Question 2 (Beckman): Use the principle of publicity (transparency, accessibility, contestability) to bridge culture and industry (e.g., how cultural norms/values of accountability should constrain industrial design and deployment).
Debrief highlights:
– Student analysis (Amin) on AFST:
– Technical framing: AFST ingests multi-agency data (social services, schools, health) and uses methods such as logistic regression and random forests to produce a risk score via weighted features (a minimal illustrative sketch of such a weighted score follows this debrief).
– Bias via inputs: Overreliance on data drawn disproportionately from low-income families skews predictions, risking unfair scrutiny and interventions (e.g., child removals).
– Transparency limits: Specific variables, weights, and training data are opaque; privacy is cited to justify non-disclosure. Student proposed synthetic/representative data for explainability.
– Instructor response:
– Black-box by design: Public systems often shield inputs/parameters; outputs become a number without legible reasoning trails.
– Even with auditing: Greater technical transparency may still leave unresolved questions about democratic legitimacy of automated adjudication.
– LLM limits: Current model architectures cannot introspectively articulate the actual causal path of their decisions; post-hoc “explanations” are approximations.
– Student synthesis (Elijah) on publicity/culture-industry:
– Efficiency ideology: Cultural valorization of efficiency and profit drives rapid automation; “efficiency” masks inequities and externalizes harms to low-income/minority communities.
– State–industry alignment: Government and tech often roll out systems quickly; transparency/accountability lag behind deployment.
– Systemic effect: Misrepresentation in data leads to biased profiles and false judgments; benefits accrue to the powerful while harms cluster among the already marginalized.
– Instructor amplification:
– Cultural economy: Profit-maximization and “move fast” norms prioritize speed over safety checks.
– Amplification loop: Systems built within biased cultures replicate and heighten the same biases when scaled.
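
To make the “risk score via weighted features” framing concrete, here is a minimal logistic-regression-style sketch in Python. The feature names, weights, and 1–20 scaling are invented for illustration; the actual AFST variables and weights are not public, as the transparency-limits discussion above notes.

```python
import math

# Hypothetical features for one family, drawn from multi-agency records.
# (Invented names and values; the real AFST inputs are not disclosed.)
features = {
    "prior_referrals": 3,          # count of past hotline referrals
    "public_benefits_records": 5,  # benefits interactions on file
    "school_absences": 12,         # absences logged this year
}

# Hypothetical learned weights and intercept from a logistic regression.
weights = {
    "prior_referrals": 0.40,
    "public_benefits_records": 0.15,
    "school_absences": 0.05,
}
intercept = -3.0

# Weighted sum of inputs, squashed into a probability by the logistic function.
z = intercept + sum(weights[name] * value for name, value in features.items())
probability = 1 / (1 + math.exp(-z))

# Many deployed tools rescale the probability into a banded score (here 1-20).
risk_score = round(probability * 19) + 1
print(f"probability={probability:.2f}, risk score={risk_score}/20")
```

Even this toy version makes the bias pathway visible: every input is a record of contact with public agencies, so families who rely on public services accumulate “risk” simply by being legible to the system, which is the mechanism both Eubanks and the student analysis point to.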

5) Capstone Activity Launch: Algorithmic oppression synthesis diagram (started; to be continued)
Objective: Build a master diagram with “algorithmic oppression” at the center; show how each author helps explain that outcome.
– Central concept: Algorithmic oppression—when governance/adjudication by algorithms entrenches or amplifies social hierarchies through opacity, error, bias, or undemocratic design.
– Required authors (periphery nodes):
– Zuboff (surveillance capitalism: data extraction logics, behavioral surplus, economic incentives)
– Eubanks (Automating Inequality/AFST: welfare tech, biased inputs, lived consequences)
– Papacharissi (networked/affective publics: how platforms shape participation, visibility, and agenda-setting)
– DiResta (mis/disinformation networks: manipulation, amplification dynamics, platform responsibility)
– Beckman (principle of publicity: transparency, accessibility, contestability as democratic criteria)
– Task (for groups): Draft how each author’s core claims illuminate mechanisms feeding into algorithmic oppression. Next class: connect authors to each other and assemble a class-wide master diagram.
– Logistics: 10 minutes to begin drafting; instructor recorded group compositions to preserve continuity for next session.

Actionable Items
Urgent (before Saturday)
– Email tonight: policy memo details (format, due date/time), 2-minute pitch structure, randomized speaking order, and whether pitches span 1.5 or 2 class sessions.
– Confirm submission protocol: reiterate that email submission is preferred; provide subject line/file-naming conventions and citation/formatting requirements (4 pages excluding references).
– Invite last-minute questions; share rubric/checklist if available.

Next class preparation
– Recreate the same breakout groups; share rosters so students join the correct rooms.
– Ask groups to bring their initial notes for the algorithmic oppression diagram; plan to complete and then synthesize into a class master diagram.
– Provide a simple diagram template (center outcome + five author nodes + labeled links) to standardize outputs.
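
One possible shape for that template (an assumed format, not one specified in class) is a simple node-and-link listing that each group copies and fills in, as in the Python sketch below; the link labels are placeholders drawn from the author descriptions above.

```python
# Hypothetical template for the synthesis diagram: one central outcome,
# five author nodes, and labeled links for groups to rewrite in their own words.
diagram_template = {
    "center": "algorithmic oppression",
    "links": [  # (author node, placeholder label for the mechanism)
        ("Zuboff", "data-extraction incentives reward large-scale, opaque profiling"),
        ("Eubanks", "biased public-sector inputs translate into lived harms"),
        ("Papacharissi", "platform affordances shape participation and visibility"),
        ("DiResta", "amplification dynamics spread manipulation at scale"),
        ("Beckman", "weak publicity (transparency, contestability) removes checks"),
    ],
}

# Render as plain text so it can be pasted into a shared document or slide.
print(f"CENTER: {diagram_template['center']}")
for author, label in diagram_template["links"]:
    print(f"  {author} --[{label}]--> {diagram_template['center']}")
```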

Follow-ups/clarifications
– Distribute a one-page recap of key ideas: Zuboff, Eubanks, Papacharissi, DiResta, Beckman (key terms, mechanisms, exemplary quotes).
– Clarify whether groups should explicitly tie Eubanks/Beckman bridges back to the Joy Buolamwini case in their diagrams.
– Decide whether to allow additional brief share-outs from the original Joy Buolamwini diagrams for any groups that did not present.
– Optional enrichment: recommend the documentary “Coded Bias” for students who have not seen it, as it was referenced in discussion.

Homework Instructions:
ASSIGNMENT #1: Policy Memo on an AI–Democracy Problem and Policy Solution

You will finalize and submit your policy memo that diagnoses an urgent problem at the AI–democracy nexus and recommends a concrete policy response. This memo caps the “diagnosing problems” phase of the course and sets you up to pitch your solution next week.

Instructions:
1) Choose and clearly state the problem.
– Focus on an urgent AI–democracy issue (e.g., facial recognition bias highlighted by Joy Buolamwini; opaque public-sector risk scoring as in Eubanks’s Allegheny case; surveillance capitalism from Zuboff; mis/disinformation dynamics from DiResta; publicness/contestability per Beckman).
– In one or two sentences, define the problem and identify who is affected.

2) Explain the stakes with evidence.
– Briefly summarize why this matters for democratic institutions, rights, and lived experience (as we discussed with algorithmic oppression).
– Use concrete examples or data where possible; you may draw on course readings.

3) Propose your policy solution.
– Describe specifically what your policy does and who will implement it (agency, regulator, platform, legislature, etc.).
– Explain how it reduces or prevents the harms you identified, especially bias and opacity.

4) Show democratic design and accountability.
– Address how your proposal meets the principle of publicity (transparency, accessibility, contestability of reasons and decisions).
– Note any auditing, oversight, appeals, or explainability features.

5) Anticipate impacts and mitigate risks.
– Identify likely objections, trade-offs, or unintended consequences (e.g., how data sources or model design could reintroduce bias as in Eubanks).
– Offer concrete mitigations and equity safeguards.

6) Close with a clear recommendation.
– State your “ask” concisely (what should be adopted, by whom, by when).

7) Length and references.
– Length: 4 pages (excluding references). No specific word count; focus on clarity and substance.
– Include a references section; it does not count toward the 4 pages.

8) Submission and deadline.
– Submit your memo by email to the professor as an attachment.
– Due by Saturday.

9) Final check.
– Ensure your memo aligns with the solution you will pitch in class.
– Proofread for clarity and brevity.

ASSIGNMENT #2: 2-Minute “Shark Tank” Pitch of Your Policy Memo

You will deliver an informal, 2-minute Shark Tank–style pitch of your memo’s solution. Your goal is to persuade classmates (the “investors”) to back your policy to address an urgent AI–democracy problem we’ve been studying.

Instructions:
1) Watch for the schedule email.
– The speaking order will be assigned randomly and will run across two class sessions next week. Check the email the professor is sending tonight for your slot and any additional details.

2) Prepare a tight 2-minute oral pitch (casual, concise).
– No slides are required; keep it simple and persuasive.
– Time yourself; plan for exactly 2 minutes.

3) Use a crisp structure.
– Hook (5–10 seconds): Name the problem in one compelling sentence.
– Harm (20–25 seconds): Who is affected and how? Connect to lived experience and bias (e.g., Buolamwini’s facial recognition case, Eubanks’s Allegheny example).
– Solution (30–40 seconds): State your policy clearly—what it is, who implements it, and how it works.
– Why it’s the right fix (30–40 seconds): Give 2–3 reasons grounded in course ideas (e.g., improves transparency/contestability per Beckman; addresses surveillance-capitalism incentives per Zuboff; reduces mis/disinformation dynamics per DiResta).
– Objection + mitigation (15–20 seconds): Name one likely pushback and how you handle it.
– Call to invest (5–10 seconds): Close with a clear, confident ask.

4) Tie back to course themes.
– Use the language we’ve been using (algorithmic oppression; principle of publicity; data and code vs. lived experience; culture–industry links).
– Show that your solution is both effective and democratically legitimate.

5) Rehearse for clarity and timing.
– Practice out loud; refine until you consistently finish within 2 minutes.

6) Be presentation-ready on your assigned day.
– The order is random. Arrive prepared and ready to go when called.

7) Align with your memo.
– Your pitch should match the policy you submitted and highlight its strongest elements.

8) Engage respectfully as an investor.
– After pitches, classmates will vote on whether to “invest.” Listen actively and be prepared to answer a brief question if time allows.
