Lesson Report:
Title
Shark Tank Policy Pitches: AI, Elections, and Algorithmic Harm
In a Shark Tank–style session, students delivered two-minute policy pitches focused on AI-driven disinformation, electoral integrity, and algorithmic harms, with one proposal on AI governance in education. The objective was to define a problem concisely, specify the jurisdiction, and propose implementable policy solutions while the class practiced targeted peer-critique questioning that would inform a later “investment” vote.
Attendance
– Students noted as absent: 0
Topics Covered
1) Launch and Format: Shark Tank Pitch Day Setup
– Instructor framed the session as a Shark Tank simulation:
– Presenters: 2-minute maximum pitch; camera must be on; PowerPoint optional; focus on problem location + concrete solutions.
– Audience: Active listening with note-taking; each student must ask at least two questions over the entire set of presentations.
– Voting: Emoji reactions in chat at the end; each emoji = $10,000 of hypothetical investment; each student has a total “budget” of $50,000 to allocate for the day; instructor will count and score; winner is the pitch with the most “investment.”
– Process decision: Voting deferred to the end of all pitches (to avoid early budget depletion).
2) Pitch 1: Myanmar—Algorithmic Amplification of Hate Speech During Rohingya Genocide (Presenter: Wu-Ti)
– Problem context:
– Longstanding conflict and Buddhist-majority dominance; online hate speech (notably 2015–2020) intensified during and after the 2017 violence.
– Citation mentioned: Amnesty International 2024—Facebook’s algorithm failed to remove harmful content, contributing to anti-Rohingya violence.
– Policy proposals (four-part):
1) Algorithmic flagging/assessment of fake profiles in conflict contexts (Myanmar as case), given their role in inflaming violence.
2) Context-sensitive confidentiality controls for information circulation on platforms in conflict zones.
3) Algorithmic amplification “space” for marginalized voices to counter majority-favoring dynamics.
4) Harm-reduction design priority in genocide/transformative justice contexts.
– Limitation acknowledged: Platforms’ lack of demographic data impedes identification of marginalized vs majority users.
– Q&A highlights:
– Ensuring safe, equitable visibility for marginalized communities (Imad, Safi): Presenter suggested “intentional spaces” and reducing “sensitive” posts that trigger conflict; peers emphasized equity-oriented engagement and protection from targeting.
– Example raised: Hashtag activism (#BlackLivesMatter) as a visibility mechanism that platforms can algorithmically support.
3) Pitch 2: European Union—EU Electoral Transparency Directive (EETD) (Presenter: Imad)
– Problem:
– AI-generated disinformation (text, image, audio, video) threatens voter trust; example: 2024 Slovak election fake audio circulated pre-vote.
– Existing EU instruments (DSA, AI Act) do not directly regulate AI use in elections.
– Policy package:
1) Mandatory labeling of AI-generated political content.
2) Public disclosure: Campaigns and platforms must report AI tool use for messaging/targeting; publication of transparency information.
3) AI Election Oversight Unit under the European Cooperation Network on Elections to receive complaints and publish reports.
– Implementation:
– Legal basis: TFEU Article 114 for harmonized standards across member states; 12-month platform transition; funding via Citizens, Equality, Rights and Values (CERV) Programme.
– Principle: Transparency ≠ censorship; supports informed choice.
– Q&A highlights:
– Public communication of labeling (Gavin): Advisory + clear labeling; monitoring of election-related posts; nonbinding guidance possible ahead of enforcement.
– Cost concerns (Wuqi): Acknowledged significant compliance costs but argued that the democracy-protection benefits outweigh them; emphasized the role of the oversight body.
4) Pitch 3: United States—FTC-led Federal Framework for AI Transparency in Elections (FSAITE) (Presenter: Barfiya)
– Problem:
– Rapid growth (claimed 300% in 2024) of AI-generated deepfakes and political content undermining voter trust; Zuboff’s point on behavioral manipulation at scale.
– Policy design:
– FTC leadership on:
1) Defining AI-content rules and labeling requirements for political content.
2) Creating an AI disinformation task force.
3) Enforcing penalties for noncompliance.
– Operations: Social media cooperation (info-sharing with regulators; rapid response to remove or fact-check); public awareness campaigns; phased rollout.
– Expected outcomes: Protect truth in elections, rebuild trust, strengthen democratic integrity.
– Q&A highlights:
– Scope of FTC rules vs bans (Elijah): Presenter clarified the goal is to ensure transparency/honesty, not blanket bans; clear disclosure when AI is used.
– Platform cooperation (Elijah): Envisions data/reporting pipelines to agencies and rapid fact-checking/removal during elections, learning from cases like Cambridge Analytica.
5) Pitch 4: United States—Council of AI Screeners + Two-way Bio-authentication (Presenter: Elijah)
– Problem:
– Structural algorithmic oppression driven by biased data, market incentives, and cultural stereotypes encoded in systems; harms compounded for low-income and marginalized groups.
– Policy proposals:
1) Establish a Council of AI Screeners (with expertise in critical race theory and lived experience of algorithmic harm) to oversee election-related AI usage and standards.
2) Implement two-way bio-authentication to distinguish human users from AI-driven disinformation agents online.
– Jurisdiction: 118th U.S. Congress (national-level).
– Q&A highlights:
– Bias and CRT requirement (Gavin): Presenter argued CRT equips screeners to identify systemic bias and prioritize equity over profit/efficiency incentives.
– Cultural stereotypes in algorithms (Banu): Presenter linked media portrayals and skewed datasets to algorithmic bias; emphasized feedback loop between culture, industry incentives, and code/data.
6) Pitch 5: California—AI Transparency and Authenticity Act (Presenter: Gavin Gonzalez; addressed to Gov. Gavin Newsom)
– Problem:
– Californians face AI-authored scripts, fake voices/images/videos during elections; voters can’t reliably separate truth from fabrication; Zuboff on “automating” people; DiResta on persona floods/exhaustion.
– Policy pillars:
1) Disclose it clearly: Mandatory AI labeling for political content.
2) Register it: File AI-generated political material with the Fair Political Practices Commission (FPPC) to ensure traceability.
3) Keep it legitimate: Prohibit fake faces/voices/votes in political content.
– Oversight and integrity:
– Public audits and external oversight councils to monitor committee reviews and deter corruption; access to records for accountability.
– Q&A highlights:
– Identifying deepfakes vs process enforcement (Lillian): The system relies first on registrant self-disclosure; if content is unlabeled and detected, committee reviews and can require labeling; emphasis on auditability rather than perfect technical detection.
– Preventing committee corruption (unidentified student): Proposed public audits and multi-body oversight as checks and balances.
7) Pitch 6: Tajikistan—Ethical AI in Education Framework (Presenter: Safiullah; to the Ministry of Education and Science)
– Problem:
– Growing use of digital/AI systems (grading, platforms, student databases) can create bias (urban/rural divides) and threaten privacy, eroding public trust.
– Theoretical grounding: Zuboff (surveillance/control), Eubanks (automated discrimination), Papacharissi (transparent communication for democracy), Halawa Amir (unregulated algorithms enable misinformation).
– Policy framework (four pillars):
1) Transparency: Independent algorithm audits for any digital system used in schools; publish results.
2) Data protection: Student data code (consent, anonymization, retention limits).
3) Fairness: Pre-deployment bias testing, especially for protected groups/regions.
4) Public oversight: Include teachers, parents, and students in governance of AI use.
– Implementation:
– Led by Ministry’s Digital Transformation Dept.; decree guidelines for audits; train educators/IT staff; publish a public registry of approved tools; partner with UNESCO/OSCE for capacity-building.
– Q&A highlights:
– Request for specificity (Freshta + instructor): Who tests, which tools, and how? Presenter suggested comparing system performance across groups to detect bias; instructor advised detailing the responsible entity and protocols.
– Rural equity and feasibility (Banu): Presenter cited current online grading expansion and external funding (UNESCO/OSCE) to support rural deployments; peer noted realism challenges, especially equitable rollout.
8) Pitch 7: Kyrgyzstan—AI Content Disclosure & Verification Policy for Elections (Presenter: Anousheh)
– Problem:
– AI-enabled fake images, videos, and narratives can mislead voters and undermine confidence in young democracies ahead of the 2025 elections.
– Policy design:
– Via Central Election Commission:
– Require campaigns and online publishers to label and register AI-generated political materials.
– Platforms (e.g., Meta, YouTube, Telegram) must submit transparency reports and maintain public databases of AI-generated political materials.
– Framing: Transparency and accountability—not censorship—to allow citizens to distinguish genuine from artificial content.
– Implementation and challenges:
– Risks: Limited technical capacity and potential misuse.
– Mitigation: Partnership with NGOs/international bodies (e.g., OSCE), open data, potential to position Kyrgyzstan as a regional leader in ethical AI governance.
– Q&A highlights:
– Platform compliance (Ermahan): Presenter suggested making compliance a state-imposed condition for operating in the country, with possible blocking for noncompliance.
– Free speech implications (Gavin): Presenter acknowledged potential infringement and openness to reforming the proposal to better safeguard expression.
– Citizen trust and use (Banu): Plan to include citizens in committees, enable open verification, and leverage international partners for credibility.
9) Closing: Timing, Voting, and Next Steps
– Time management: Only about half the class presented; remaining students will present next class (Tuesday).
– Pitch guidance for next session:
– Hard 2-minute limit.
– Focus on problem, location/jurisdiction, and concrete solutions (implementation details if possible); minimize background theory in the live pitch.
– When using slides, keep the “Policy” slide on screen during Q&A.
– Reading for next class:
– EU frameworks for AI regulation (3–4 pp) to be posted on eCourse; instructor to email directly to Elijah (no eCourse access).
– Voting:
– Investment voting will happen after all presentations; each student has $50,000 total in emoji allocations; instructor will tally.
Actionable Items
Urgent: Before next class
– Send EU AI framework reading to Elijah by email (no eCourse access).
– Post EU AI regulation reading on eCourse for all students.
– Remind presenters: 2-minute cap; camera on; keep policy slide visible during Q&A.
– Verify screen-sharing permissions for all remaining presenters.
– Share concise pitch rubric/checklist (Problem + Location + Solutions) on eCourse.
High priority: During next class
– Use a visible timer to enforce 2-minute limit.
– Track each student’s participation toward the “two questions minimum” requirement.
– Clarify/restate emoji voting rules at session start (per-student $50k cap, timing of vote).
– Prepare a simple tally sheet or tool to count emoji investments efficiently.
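If a quick tool helps with the count, a minimal Python sketch along these lines could tally the chat reactions; the vote format, voter names, and pitch labels below are illustrative assumptions, while the $10,000-per-emoji value and $50,000-per-student cap follow today’s voting rules.

from collections import Counter, defaultdict

EMOJI_VALUE = 10_000          # each emoji reaction = $10,000 of hypothetical investment
BUDGET_PER_STUDENT = 50_000   # per-student budget for the day (per today's rules)

def tally(votes):
    """votes: list of (voter, pitch) pairs, one per emoji reaction (hypothetical format)."""
    spent = defaultdict(int)   # amount each voter has allocated so far
    totals = Counter()         # investment received by each pitch
    for voter, pitch in votes:
        if spent[voter] + EMOJI_VALUE > BUDGET_PER_STUDENT:
            continue           # ignore reactions beyond the voter's $50,000 budget
        spent[voter] += EMOJI_VALUE
        totals[pitch] += EMOJI_VALUE
    return totals

# Example with made-up votes:
votes = [("Banu", "EETD"), ("Banu", "EETD"), ("Freshta", "Myanmar pitch"), ("Banu", "FSAITE")]
for pitch, amount in tally(votes).most_common():
    print(f"{pitch}: ${amount:,}")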
Follow-up
– Record today’s presenters: Wu-Ti, Imad, Barfiya, Elijah, Gavin, Safiullah, Anousheh; schedule the remaining students (e.g., Lillian, Banu, Freshta, Ermahan, others).
– After all pitches, tally emoji investments and announce “Shark Tank” winner(s).
– Consider grouping overlapping policy themes (EU/US election transparency, platform labeling, oversight bodies) for upcoming comparative analysis.
– Plan a segment to probe detection feasibility limits (e.g., deepfakes vs current technical detection capabilities) to connect with the EU reading.
Homework Instructions:
ASSIGNMENT #1: Two-minute “Shark Tank” policy pitch for Tuesday (for those who have not yet presented)
You will deliver a fast, catchy, and memorable two-minute pitch of your policy memo that focuses only on the problem (what and where) and your concrete solution(s), mirroring today’s in-class format; this keeps us on time, enables high-quality Q&A, and prepares classmates to “invest” via emojis at the end.
Instructions:
1) Clarify scope and audience: You are pitching classmates acting as “investors” in a Shark Tank–style setting. Your goal is clarity and persuasion in under two minutes.
2) Define the problem in one sentence: State the issue and location up front (e.g., “AI-generated political deepfakes are undermining voter trust in California”).
3) List 2–3 concrete policy actions: Use plain language. Prioritize labeling, disclosure/transparency, oversight/audits, implementation timelines, and rights safeguards if relevant—these were common themes in today’s Q&A.
4) Write a 200–250 word script: Exclude theory, literature reviews, and citations, as the professor emphasized; focus strictly on the problem and your solutions.
5) Time it: Rehearse until you consistently finish in 1:50–1:55. You must stay within two minutes to receive full credit.
6) Prepare optional slides: If you use slides, include a single, clean “Policy” slide listing your solutions, and plan to leave that slide visible during Q&A (as requested today).
7) Meet tech requirements: Test your camera, mic, and screen share. Your camera must be on during your presentation to receive credit.
8) Anticipate questions: Prepare 2–3 concise, 20–30 second answers on feasibility, costs, free speech, enforcement, rural/urban equity, and how labeling/verification would work—these were recurring class questions today.
9) Bring your best opener/closer: Start with a crisp problem statement; end with a memorable, one-sentence takeaway (e.g., “Transparency is the shield of democracy”).
10) Day-of professionalism: Be ready when called, keep calm, and stick to your time. If you already presented today, be prepared to listen actively, take notes, and ask thoughtful questions of your peers.
ASSIGNMENT #2: Read and annotate the EU AI regulation excerpt (pre-class preparation)
You will review the 3–4 page EU AI regulation excerpt the professor is posting so you can connect your proposal (e.g., labeling, transparency, oversight bodies, audits, education AI) to how a real jurisdiction is approaching similar problems and be ready to reference it in Tuesday’s discussion and Q&A.
Instructions:
1) Access the document: Open the EU AI regulation excerpt posted in our course materials; if you lack platform access, check your email for the file from the professor.
2) Read actively: Highlight where the EU addresses issues we discussed today—mandatory labeling of AI content, public disclosure/transparency, oversight units, transition periods, funding sources, and freedom-of-expression safeguards.
3) Annotate for your memo: Note 2–3 points that support or challenge your proposal. Mark any definitions or mechanisms (e.g., audits, reporting requirements) you could cite to show feasibility.
4) Prepare 1–2 discussion questions: Focus on implementation challenges raised in class (e.g., identifying deepfakes at scale, cost/administrative burdens, corruption/oversight risks, rural equity).
5) Bring your notes to class: Have your takeaways handy so you can reference them briefly during your pitch or Q&A and contribute to the broader discussion.
6) Deadline: Complete this reading and note-taking before the start of our next class on Tuesday.