Lesson Report:
Title
Safeguarding Democracy in the Age of AI: Policy Pitches, Q&A, and Next Steps
This Week 12 session focused on continuing and nearly completing student policy-memo presentations on AI’s risks to democracy and governance. Students proposed concrete, time-bound regulatory and educational interventions targeting deepfakes, AI transparency, algorithmic auditing, and data protection across multiple jurisdictions. The class also reviewed near-term course logistics: reflection journal timing, policy memo grading, final project scope, Thursday’s reading/discussion plan, and an upcoming Thanksgiving no-class date.
Attendance
– Number of students mentioned absent: 0
– Presenters and participants heard: Safi, Niloufar, Elijah, Lillian, Banu/Beliu, Ermahan, Aya, Sadiq, Aigerim, Izirek, Amina, Bekayim, plus several unnamed classmates who posed questions
Topics Covered (chronological)
1) Opening and Objectives
– Week 12 continuation; goal to finish remaining policy memo presentations.
– Timekeeping: 2 minutes per presentation plus brief Q&A; later switched to lightning round (presentations only, no Q&A) to complete the queue.
– Emphasis for presenters: prioritize (a) problem context, (b) specific policy solution(s), (c) rationale.
2) Administrative Q&A and Course Logistics
– Reflection journals:
– Next reflection journal is due not this coming Sunday but the following Sunday.
– Instructor will post the prompt on eCourse later tonight.
– Total remaining journals this term: at least two; possibly a third (TBD) to avoid overburdening students.
– Policy memo grading timeline:
– Instructor grading intro to political science midterms first; policy memo grades to be returned next week (not this weekend).
– Final project reminder:
– “Speculative narrative” due at the end of finals week (around Dec 20).
– Task: forecast a plausible scenario ~20 years ahead applying an AI-and-democracy course concept; more guidance later (details also in syllabus).
– Health note: Instructor recovering from a sinus illness (voice strained).
– Screen-share permissions: adjusted to allow all participants to share.
– Thanksgiving/No class:
– No class on Thursday, Nov 27 (Thanksgiving); to be posted and reminded next week.
– Thursday reading and plan:
– Read “12 EU Ethics Guidelines” (short, ~3–4 pages).
– On Thursday: finish remaining 5 presentations (strict 2 minutes, no Q&A) then discuss and critique the EU guidelines; note the overlap with student proposals.
– Reading was also emailed to at least one student (Elijah).
3) Presentation Round 1 (with Q&A)
A) Estonia: Digital Democracy Protection Framework (DDPF) — Presenter: Niloufar
– Problem context:
– Estonia’s highly digitized democracy (incl. iVoting) faces AI-generated disinformation threats.
– Background: 2007 Russian disinformation and cyberattacks; additional COVID-era cyber incidents; media literacy gaps despite ongoing programs.
– Policy components:
1) AI audits in political campaigns: Regular, professional audits of AI use in political messaging and elections.
2) Mandatory disclosure: Clear labeling of AI-generated political content; penalties for noncompliance.
3) Digital literacy and critical thinking: Expand and deepen curriculum and public education to help citizens differentiate AI vs. human sources and evaluate credibility.
4) Data protection and anti-microtargeting: Restrict unauthorized use of personal data in electoral strategies; enforce consequences against political actors misusing data.
– Intended outcomes:
– Move beyond platform-only moderation toward democratic, sustainable solutions; bolster transparency, civic resilience, institutional trust, and voter autonomy/digital safety in the AI era.
– Q&A highlights:
– AI as political actor (Albania example): The presenter favors an oversight/audit body that monitors political actors’ compliance.
– Why empower citizens (vs. government-only)? Rationale: Shared responsibility—government regulation plus citizen capacity and resilience in case of gaps or failures.
B) France: Digital Electoral Transparency Authority (DETA) — Presenter: Lillian
– Problem context:
– Deepfakes and synthetic media spread rapidly across TikTok/Instagram/WhatsApp; a single fabricated video/quote can sway public opinion.
– Policy design (independent authority):
1) Certification/labeling: All online political communication (images/videos/messages) must indicate if AI-created/modified.
2) Monitoring: Develop AI tools (with partners like INRIA) to detect deepfakes, manipulative campaigns, bot/fake accounts.
3) Prevention/public awareness: Weekly public reports on manipulation trends and concrete cases; civic education role.
– Discussion prompts and Q&A:
– Public “database/board” of violators: Focus on labeling transparency rather than banning accounts; handle accidental violations proportionately.
– Jurisdictional transfer: Proposed within EU/French data protection framework; noted differences if adapted to US context.
C) EU-level: AI Transparency Unit for Elections — Presenter: Banu (also referenced as “Beliu”)
– Problem context:
– EU parliamentary elections already saw AI-manipulated videos, voices, images; misinformation erodes trust.
– Policy components:
1) Campaign disclosures: Mandatory public notice when campaigns use AI-generated/altered content.
2) Platform auto-labeling: Social media to identify and label likely AI-generated political content.
3) Public reporting: Periodic reports on AI’s role in campaigns and detected misleading content.
– Positioning:
– Not censorship; aims to preserve fairness and informed choice via transparency and education.
– Q&A:
– Trust effects: Emphasized public education and clear communication.
– AI advancement: Focus is on transparency and citizen literacy, not restricting innovation; policies likely updated as AI evolves.
D) Japan: Mandatory AI/Data-Use Warnings and Human-Readable Consent — Presenter: Ermahan
– Problem context:
– Platforms (e.g., YouTube auto-translation feature) may use creators’ content for AI training/processing without explicit, visible consent.
– Risk of normalization across platforms; erosion of personal data control.
– Policy proposal:
– Oversight by Japan’s Personal Information Protection Commission.
– Require conspicuous, standalone, human-readable warnings/consent pages at registration and before activation of AI features; ensure discoverable opt-outs/toggles per feature.
– Q&A highlights:
– AI’s influence in Japanese politics currently modest (speculative reasons: older population, conservative political communication); broader privacy concerns noted.
– Implementation detail: Terms-of-service should include a separate, clearly readable AI/data-use section; per-feature warnings when features roll out.
4) Shift to Lightning Round (presentations only; no Q&A)
E) Kyrgyzstan: Deepfake/Election Integrity Framework — Presenter: Aya
– Problem context:
– Local example: Photos of peaceful Women’s Day march were altered to mislead and inflame public sentiment; elders especially vulnerable to synthetic content.
– Policy components:
1) Legal definitions and penalties: Update digital code to define deepfakes/AI manipulation and set consequences.
2) Disclosure labels: Mandatory AI-generated labels on political ads/content.
3) Authenticity registry: Central Election Commission maintains a public database of official videos/speeches/photos for verification and fact-checking.
4) Rapid response unit: Small team within the Ministry of Digital Development to triage, debunk, and coordinate takedowns of egregious deepfake campaigns.
F) Public Hospitals: AI Use Transparency and Registry — Presenter: Sadiq
– Problem context:
– AI used in diagnosis without patient awareness; example from the UK where patient data sharing with a tech company violated privacy rules.
– Policy components:
1) Reporting requirement: Regular (monthly/annual) public reports on AI tools used, data inputs, and performance metrics.
2) National registry: Mandatory registration of all hospital AI systems.
– Rationale:
– Build trust and ensure safe, responsible AI while enabling innovation.
G) Kyrgyzstan: AI in Political Communication (Transparency Framework) — Presenter: Aigerim
– Problem context:
– AI-generated disinformation widely diffused via social media; citizens struggle to distinguish real/fake.
– Policy components:
1) Public education/media literacy programs (seminars/online courses for citizens, students, journalists).
2) Platform cooperation to flag suspicious AI content.
3) Mandatory visible watermarking/disclaimers on AI-generated political content.
– Implementation notes:
– Need high-skill personnel to deliver education and oversight.
– Limited leverage over platforms may be a barrier; watermarking feasible with legislative support.
H) Algorithmic Accountability in Kyrgyz Elections — Presenter: Izirek
– Problem context:
– Rapid digitalization (biometric registration, algorithmic systems) with past technical failures and confusion; digital divides; insufficient independent audits of algorithms.
– Policy components:
1) Mandatory independent audits of all election-related digital systems.
2) Create an Algorithmic Transparency & Audit Unit within the Central Election Commission.
3) Partner with OECD, UNDP, and OECD.AI for expert support and training.
4) Public awareness initiatives; technical/ethical capacity building.
– Goal: Ensure fairness, safety, and public trust in digital elections.
I) London/UK: AI Transparency Framework for Political Content — Presenter: Amina
– Problem context:
– Example: Deepfake video falsely announcing a Tory MP’s resignation; research shows AI-generated voices are near-indistinguishable from human voices.
– Policy components:
1) Visible labels and embedded digital watermarks on AI-generated/modified political content.
2) Oversight by the Information Commissioner’s Office (ICO); define technical standards; ensure compliance.
3) Graduated enforcement: Begin with assistance to achieve compliance; penalties only for repeated/intentional violations.
– Framing:
– Enhances transparency without suppressing speech; citizens’ right to know the origin of influence (“principle of publicity”).
J) VAT Fraud Analytics and Explainability (Poland/STIR-like) — Presenter: Bekayim
– Problem context:
– AI (e.g., STIR) flags small businesses for VAT fraud via opaque “black-box” models; NRA can block accounts without explaining reasoning; potential overcollection of personal data unrelated to VAT.
– Policy components:
1) Require “white-box” (explainable) models for consequential decisions.
2) Mandate that NRA issue reasoned decision documents when blocking accounts.
3) Strict data minimization and separation: analyze only VAT-relevant transactions; protect personal data and sensitive purchases/donations.
– Rationale:
– Maintain due process, protect privacy, and limit unjustified account freezes.
5) Closing Logistics and After-Class Consultations
– Thursday plan: Finish remaining five presentations (2 minutes each), then EU guidelines discussion and critique.
– Thanksgiving: No class on Thursday, Nov 27.
– Individual follow-ups:
– Late policy memos require documented excuse and department approval (e.g., Mohamed Omar).
– Extra credit: Instructor likely to offer a small extra credit assignment in early December to help students recover missed work (e.g., for Safi and others).
– Reflection journals: One student’s previously late RJ will be graded as accepted (confirmed via email record); others more than a month late cannot be accepted without proper documentation (e.g., Siddiq).
– Reading distribution: “12 EU Ethics Guidelines” also emailed to at least one student (Elijah).
Actionable Items
Immediate (before Thursday)
– Post the next reflection journal prompt on eCourse (instructor committed to do this “later tonight”).
– Verify screen-sharing permissions are set to “All participants” at start.
– Enforce 2-minute limit and no-Q&A rule to finish the remaining five presentations.
– Ensure all students know to read “12 EU Ethics Guidelines” for Thursday’s discussion and critique.
This week/next week
– Complete grading and return feedback on policy memos next week.
– Confirm and announce again that there is no class on Thu, Nov 27 (Thanksgiving).
– Prepare discussion prompts tying student proposals to the EU guidelines (overlaps, gaps, feasibility critiques).
By end of term
– Final project (speculative narrative): Remind timeline and forthcoming guidance; due end of finals week (~Dec 20).
– Extra credit plan: Publish details in early December for students who missed one or two assignments.
Student-specific follow-ups
– Late submissions:
– Mohamed Omar: Await documentation to request late acceptance of policy memo (department approval required).
– Siddiq: Reflection journal too late to accept without proper institutional justification (already communicated).
– Grading housekeeping:
– Safi: Mark first reflection journal as accepted/graded per email record; remind about possible extra credit to offset two missed journals.
– Communication:
– Reconfirm Elijah received the “12 EU Ethics Guidelines” email attachment.
Homework Instructions:
ASSIGNMENT #1: EU AI Guidelines Reading and Discussion Prep
You will read a short EU document on AI regulation and prepare to discuss and critically evaluate it in relation to the policy ideas we’ve been exploring in class, so you can contribute meaningfully to Thursday’s discussion.
Instructions:
1) Log into eCourse and locate the reading the professor referenced (about 3–4 pages). It was described as the EU guidelines on regulating AI; you may see a title like “EU Ethics Guidelines” or “12 EU Ethics Guidelines.”
2) Read the full document before Thursday’s class.
3) As you read, annotate where the guidelines overlap with the solutions we discussed in class (for example: mandatory disclosure/labeling of AI-generated content, independent audits/monitoring, public transparency reports, media/digital literacy, limits on microtargeting, algorithmic transparency and explainability).
4) Prepare brief notes to bring to class:
– One concrete example from the reading that matches either your policy memo/presentation or a classmate’s.
– One thoughtful criticism or limitation of a guideline you think needs improvement.
– One clarifying question you would like to raise.
5) Bring your notes (digital or paper) and be ready to participate in a focused discussion and light critique on Thursday.
ASSIGNMENT #2: Reflection Journal (next one)
You will complete the next reflection journal to deepen your engagement with the course themes; the professor is spacing these out to reduce workload, and the prompt will be posted for you.
Instructions:
1) Check eCourse later tonight for the posted Reflection Journal prompt and the submission link.
2) Note the due date carefully: it is due not this upcoming Sunday, but the Sunday after.
3) Follow the prompt exactly as posted (length, format, and any required references as specified in the syllabus/prompt).
4) Draft your response, revise for clarity, and proofread before submitting.
5) Submit your journal on eCourse by the stated Sunday deadline.
ASSIGNMENT #3: Lightning-Round Policy Memo Presentation (only for those who have not yet presented)
You will deliver your 2-minute lightning-round policy memo presentation at the start of next class so we can finish all remaining presentations efficiently.
Instructions:
1) Prepare a concise, timed presentation (maximum 2 minutes).
2) Prioritize exactly what the professor asked for:
– Context: briefly present the problem.
– Solution: state the specific policy you propose.
– Rationale: explain why your solution fits the problem.
3) Rehearse with a timer and trim to ensure you stay under 2 minutes.
4) If using slides, keep them to 2–3 simple slides. Have a PDF backup and verify you can share your screen. Plan to turn your camera on when presenting.
5) Be ready to present first thing on Thursday; the remaining presentations will be back-to-back and, as announced, there will be no Q&A.
6) If you anticipate technical issues, have a backup plan (e.g., send your slides to the professor in advance and keep a local copy ready).