Lesson Report
Title: AI-Driven Disinformation, Public Trust, and Identity Online
This session examined how automated systems can scale and target online disinformation, the downstream effects on democratic discourse, and trade-offs between identity verification and online anonymity. Students presented automation workflows developed in groups, analyzed the Diresta article’s “Alice Donovan” case, explored the “Liar’s Dividend,” and debated solutions such as government-ID verification versus preserving anonymity.

Attendance
– Students mentioned absent: 0
– Note: One student reported missing the previous session but was present today; several had intermittent connection/breakout-room issues.

Topics Covered
1) Re-entry and framing the problem
– Quick check-in; transition to democracy and information integrity.
– Reminder of prior assignment: design an automated disinformation system to “flood the zone” around the topic of walkable cities. Instructor directed students to:
– Locate any saved plans (e.g., shared Google Docs).
– Take 3 minutes to review in groups and be ready to explain “how you would go from no budget to flooding the internet with varied disinformation.”

2) Group plan recaps: Automating a disinformation pipeline
– Prompting and content variation:
– Strategy: Use LLMs (ChatGPT, Grok, DeepSeek) to generate short-form posts/comments that question walkable cities from multiple angles (cost, current problems, emotional/sarcastic takes), and create 10–20 variations per prompt to avoid repetition.
– Visuals and persona design:
– Use image tools (Stable Diffusion, Midjourney, Canva) to produce infographics, memes, and “citizen” testimonials; craft diverse personas (students, parents, office workers) to simulate a community.
– Scale and automation:
– Goal: 100+ distinct pieces/day via scripting and automation; simple lexical/style edits to diversify outputs; schedule posts with bots and tools (Telegram bots, Zapier, Make, Google Sheets) to multiple platforms (X/Twitter, Facebook, TikTok, Instagram, Telegram, Reddit).
– Distribution tactics:
– Trend hijacking via hashtags; infiltration of subgroups and niche communities; multi-platform API use to broaden reach; consistency of posting cadence to appear organic.
– Instructor note:
– Mentioned a newly announced AI-generated-content social platform concept to illustrate the direction of content ecosystems; emphasized the importance of “subgroups” and creator typologies in platform dynamics.

3) Advanced workflow presentation (Amin/Safi)
– Modular pipeline with quality control:
– Daily trend analysis (e.g., kick off at 6 a.m.) to detect emergent topics; feed insights into prompt generation.
– AI text generation, followed by quality checks using other AI components for coherence, tone, and error filtering.
– Visual/video generation via Stable Diffusion APIs; incorporate error handling; real-time monitoring; scheduled posting.
– Tooling and integration:
– Orchestration via n8n/Zapier; multi-platform APIs (Telegram, Reddit, TikTok, Facebook, Instagram) tailored to each platform’s content norms.
– Target of 100+ unique items/day across personas; content diversification across styles/voices.
– Instructor feedback:
– Praised modularity, QC layers, and trend-detection feeding prompts; highlighted professional clarity and scalability.

4) Transition to the Diresta article: the “Alice Donovan” case
– Who was “Alice Donovan”:
– A fabricated Western-journalist persona aligned with Russian intelligence objectives; early, benign clickbait evolved into overt foreign-policy narratives (example: Turkey’s 2015 downing of a Russian jet).
– Inefficiencies and detection (pre-LLM era):
– Took ~18 months to build reputation; reliance on plagiarism and stolen profile photos left forensic traces; caught by media and investigators.
– Connection to 2016 US election operations:
– Aim was less to elect a specific candidate than to toxify the public sphere—flood debates with conflicting narratives (including pro- and anti- positions across candidates) to discourage participation.

5) Activity: Compare 2016 human-centered model vs. modern AI-automation
– Student-identified advantages of AI systems:
– Scale and speed: 24/7 generation and posting; near-instant output; global reach.
– Cost: Orders of magnitude cheaper per post than human labor; trivial marginal cost at scale.
– Consistency and adaptability: Models can check each other, rapidly A/B test, and pivot to trends.
– Cultural/linguistic mimicry: Fewer telltale foreign-language errors; can deliberately degrade grammar to mimic casual speech when needed.
– Counterpoints/limitations raised:
– Human intuition/context: Prompts risk uniformity; AI may miss sociocultural nuance; can be detected via artifacts or “jailbreak” tells.
– Instructor synthesis: Despite weaknesses, for the purpose of “flooding the zone,” current AI benefits often outweigh costs; trend-detection modules can mitigate some context gaps.

6) Mini-lecture synthesis: Why AI botnets are effective (per Diresta’s framing)
– Scale: Potentially limitless with compute resources.
– Cost: Minimal per-unit content costs; state actors can afford large fleets.
– Speed: Content generation in seconds; no need to “build upâ€� long reputations as in the past, especially on short-form platforms.
– Platform dynamics: TikTok/Reels/Shorts enable new accounts to go viral without established followings.
– Plausibility: AI-generated faces and non-plagiarized text make detection harder than reverse image search/plagiarism checks of earlier eras.
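The “plausibility” point can be made concrete with a toy sketch (a hypothetical illustration, not material from the lesson): the kind of word-shingle overlap check that legacy plagiarism detectors rely on flags copied text, but scores freshly generated paraphrases near zero — which is exactly why non-plagiarized AI text evades it.

```python
# Toy plagiarism-style check: Jaccard similarity over k-word shingles.
# Example texts below are invented for illustration.

def shingles(text: str, k: int = 3) -> set:
    """Return the set of overlapping k-word shingles (word n-grams)."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: str, b: str, k: int = 3) -> float:
    """Jaccard similarity of two texts' shingle sets (0 = disjoint, 1 = identical)."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

source = "walkable cities reduce traffic and improve public health outcomes"
copied = "walkable cities reduce traffic and improve public health outcomes overall"
rewritten = "pedestrian friendly districts cut congestion while boosting community wellbeing"

print(jaccard(source, copied))     # high: copied text shares most shingles
print(jaccard(source, rewritten))  # zero: a full paraphrase shares no shingles
```

The same asymmetry holds for reverse image search versus AI-generated faces: the detector needs a prior copy to match against, and generated content has none.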

7) Poll and discussion: What is the primary danger?
– Prompt: Which is worse, convincing millions of people of specific lies or muddying the waters to create a cynical, disengaged public?
– Student majority: “Muddying the waters” is the larger danger.
– Liar’s Dividend:
– In an environment where truths are indistinguishable from fakes, liars gain deniability (“It’s just AI”), eroding accountability; the public becomes disillusioned and disengages.
– While individual lies can sometimes be corrected with facts, systemic cynicism undermines democratic participation itself.

8) Solutions brainstorm and debate setup: Verification vs anonymity
– Solution pathways discussed:
– Paid verification: Insufficient; trivial for well-resourced actors.
– Phone verification: Weak; bulk numbers are purchasable.
– Biometrics: Not mature for broad web identity; significant privacy risks.
– Government-issued ID verification: More robust but high civil-liberties cost.
– Trade-off framing:
– Identity verification could deter botnets but would curtail anonymity, which is essential for dissenters and citizens in repressive contexts.
– Breakout debate (3 minutes prep; report-back):
– Team Verification: As AI becomes indistinguishable from humans, platforms need robust verification; propose incentives and “perksâ€� for verified users to reduce harms and encourage adoption.
– Team Anonymity: Anonymity is a right; forced ID risks misuse of personal data and state retaliation (examples cited from Central Asia); essential for whistleblowing, opposition, and democratic discourse.

9) Closing and assignments
– Video Journal:
– Due: Thursday by 11:59 p.m.
– New requirement: Watch at least one classmate’s video (from this or the earlier assignment) and respond to a classmate’s idea in your submission.
– Access: Instructor will upload all videos to a shared Google Drive and share the link.
– Accommodations: If responding publicly is uncomfortable, email the instructor to discuss alternatives.
– Policy Memo Journal:
– Due in ~4 weeks; details next Tuesday.
– Consider today’s themes (verification models, incentives, privacy-preserving approaches) as potential solution spaces.

Actionable Items
Urgent (before Thursday)
– Upload and share: Aggregate all student video journals into a Google Drive folder and distribute the link to the class; confirm access for students without eCourse/BPI accounts.
– Clarify assignment: Re-send brief instructions and rubric for the video journal with the peer-response requirement and the Thursday 11:59 p.m. deadline.
– Tech check: Provide simple instructions for accessing breakout rooms and Zoom polls; share poll results summary since students could not view them live.

Next Class (Tuesday)
– Policy memo briefing:
– Provide prompt, scope, evaluation criteria, timeline, and at least two policy-model examples.
– Offer a short reading list on identity verification, content authenticity, and privacy-preserving alternatives.
– Revisit the debate:
– Explore concrete designs for verification with safeguards (e.g., third-party verification, privacy-preserving credentials, zero-knowledge proofs, W3C Verifiable Credentials).
– Discuss harms mitigation for anonymity (reporting/abuse controls) and authenticity signals that don’t expose identity.
– Case materials:
– Share concise notes/slides on the Diresta article, Alice Donovan case, “public sphere” framing, and Liar’s Dividend for reference.
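One way to make the “privacy-preserving credentials” idea above concrete for students is a toy pseudonymous-verification sketch. This is an illustrative assumption, not a production scheme and not something presented in class: a trusted third party checks a government ID once and hands each platform only a derived pseudonym, so the platform can detect duplicate accounts without ever seeing the ID.

```python
# Toy pseudonymous credential (illustrative only, NOT a real protocol):
# a third-party verifier derives a per-platform pseudonym from an ID check.
import hashlib
import hmac
import secrets

VERIFIER_KEY = secrets.token_bytes(32)  # held only by the third-party verifier

def issue_token(national_id: str, platform: str) -> str:
    """Derive a per-platform pseudonym. Same person -> same token on one
    platform (limits sockpuppets); different tokens across platforms
    (prevents cross-site tracking). The raw ID never reaches the platform."""
    msg = f"{platform}:{national_id}".encode()
    return hmac.new(VERIFIER_KEY, msg, hashlib.sha256).hexdigest()

# One person registering twice on the same platform gets the same pseudonym:
t1 = issue_token("ID-12345", "platform-a")
t2 = issue_token("ID-12345", "platform-a")
t3 = issue_token("ID-12345", "platform-b")
print(t1 == t2)  # True:  duplicate accounts are linkable within one platform
print(t1 == t3)  # False: platforms cannot correlate users with each other
```

Real designs (W3C Verifiable Credentials, zero-knowledge proofs) go much further, but this minimal version is enough to show students that “verified” need not mean “identified.”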

Longer-term
– Resource pack: Compile references on AI botnets, platform virality mechanics, content provenance (e.g., C2PA), and detection limitations for students’ policy memos.
– Participation records: Note active contributors (e.g., Safi, Amin, Banu, Elijah, others) for participation tracking; watch for students with recurring connectivity issues to ensure equitable engagement.
– Ethics and safety: Consider a brief module on responsible research practices when designing adversarial/automation workflows to avoid normalizing misuse.

Homework Instructions:
ASSIGNMENT #1: Video Reflection Journal — Respond to a Classmate

You will record your next video reflection, building on today’s discussion of the Diresta article, the “Alice Donovan” case, AI-automated disinformation, the “liar’s dividend,” and the verification vs. anonymity trade-off. The purpose is to deepen your analysis of how AI changes public discourse and to practice scholarly dialogue by engaging directly with one classmate’s prior video.

Instructions:
1) Review today’s core ideas
– Revisit your notes on: the Alice Donovan example; why botnets scale so efficiently (cost, speed, scale, plausibility); Diresta’s main concern (muddying the waters and public disengagement); the “liar’s dividend”; and the class debate on government ID verification vs. anonymity online.
2) Choose one classmate’s video to respond to
– Once the shared class folder link is posted, select any single classmate video from either the first video journal round or the most recent round.
– Watch actively. Identify 1–2 specific claims, examples, or questions you want to respond to.
3) Plan your own video (same format as your previous video journal)
– Begin by briefly summarizing your classmate’s point in your own words and naming the classmate so your audience can follow the exchange.
– Respond substantively: agree, extend, or push back with reasons. Connect your response to today’s themes (e.g., does their claim align with Diresta’s argument about disengagement? How does the “liar’s dividend” complicate their view? What trade-offs surfaced in our verification vs. anonymity debate?).
– Integrate at least one concrete example from class (e.g., Alice Donovan’s plagiarism and profile-photo issues vs. how today’s tools reduce those tells; or how short-form platforms change reach for new accounts).
– Optional: tie in your group’s automation workflow discussion or the “walkable cities” scenario as an illustrative case.
4) Record your video
– Use the same length/format, recording method, and tone you used last time. Ensure clear audio and steady framing.
5) Submit your video by the deadline
– Upload or share the link using the same process you used for the previous video journal.
– In your submission text, include:
a) The name of the classmate whose video you responded to
b) A brief sentence pinpointing the claim you engaged
c) If possible, a link or reference to their video within the shared folder
6) Deadline
– Due Thursday by 11:59 p.m.
7) Access or privacy issues
– If you cannot access the shared folder once the link is posted, or if you have concerns about identifying a classmate by name, email the instructor right away to arrange an accommodation, as discussed in class.
8) Community norms
– Engage respectfully and substantively. Aim to advance the conversation with evidence, clear reasoning, and concrete examples rather than simply agreeing or disagreeing.

ASSIGNMENT #2: Policy Memo Journal (Announced; Full Brief Forthcoming)

You will write a policy memo journal due in about four weeks. The memo will grow out of today’s themes: how AI-augmented disinformation affects democratic discourse, and what policy or design interventions (e.g., verification regimes, detection tools, platform design changes, media literacy, incentives) could mitigate harms without sacrificing essential rights like anonymity.

Instructions:
1) What is known now
– The memo is due in approximately four weeks. The full assignment details will be presented next class on Tuesday.
– Today’s discussion (Diresta’s thesis, the liar’s dividend, and verification vs. anonymity trade-offs) will be central to viable memo topics and solutions.
2) Before Tuesday (recommended pre-work, not required)
– Brainstorm 2–3 problem–solution pairs connected to course themes. Examples:
• Reducing the liar’s dividend while preserving legitimate anonymity
• Identity verification models that protect at-risk users
• Detection and provenance tools for AI-generated content
• Platform design tweaks to reduce “flood the zone” effects
• Media literacy interventions that target disengagement, not just fact-correction
– For each idea, draft:
• One-sentence problem statement (who is harmed, how, and why current practice fails)
• One-sentence policy or design intervention (what lever, by whom, and why it might work)
– Bookmark at least three credible sources you might draw on (policy analyses, research articles, platform policy documents, or credible investigative reporting).
3) Come prepared Tuesday
– Bring your notes so you can refine your topic once the full brief and expectations are provided.
4) Timeline
– Expect the exact due date and full requirements on Tuesday; plan your schedule for a submission roughly four weeks from today.
