Lesson Report:
Title
Democracy in the Age of AI: Framing AI Tools, Risks, and Course Expectations
This opening session oriented students to the course’s dual focus: how AI technologies affect democratic processes and how democratic institutions can govern AI. Students mapped their current uses of AI, learned core categories of AI systems and their risks, and reviewed assignments, policies, and next steps to prepare for deeper conceptual work on AI and democracy.
Attendance
– Explicitly recorded absences: 0 (no specific names recorded as absent)
– Inferred count: Class size ~30; live participant count ~17 during breakout rooms; approximately 13 likely absent
– Notes: Several late arrivals and one temporary disconnection; instructor emphasized camera-on expectations moving forward
Topics Covered
1) Welcome, context, and participation norms (start of class)
– Course introduction and enthusiasm: “Democracy in the Age of Artificial Intelligence”
– Diverse cohort highlighted (political science, economics, data science, software engineering, OSUN fellows); this interdisciplinarity framed as both the promise and the challenge of the topic
– Participation expectations in Zoom:
– Camera-on request as default
– If camera must remain off due to tech/security, students must email the instructor; they may be cold-called more to verify presence
2) Icebreaker: Partner interviews in breakout rooms (approx. 5 minutes)
– Setup:
– Students paired (one group of three) for 5 minutes
– Prompts: Name; where you’re from; major; main use(s) of AI in your life
– Instruction to speak answers aloud to “get the socialization going”
– Reporting back to class with instructor whiteboard capture
– Observed student AI use cases (examples consolidated from share-outs):
– Academic support:
– Proofreading/grammar correction; polishing tone and clarity (e.g., Grammarly; ChatGPT suggestions)
– Summarizing and explaining readings; “understanding text”
– Brainstorming ideas and research leads; topic overviews
– Coding help (recalling formulas, troubleshooting)
– Translation of documents and passages
– Planning and task management:
– Travel suggestions and lists of places to visit
– Scheduling and study plans; to-do/task generation
– General “life assistant” queries (answering everyday questions quickly)
– Creative/administrative work:
– Drafting rules/instructions or outlines for teaching or projects
– Data analysis basics (structuring, preliminary insights)
– Emotional support/mentorship:
– Using AI as a sounding board for motivation or basic guidance
– Information-seeking:
– Quick Q&A, health/nutrition questions, addresses and directions
– Instructor synthesis: These uses reflect both the breadth of tools labeled “AI” and the ambiguity around definitions students bring to class
3) Scoping the term “AI”: Key categories and early risks (lecture-discussion with examples)
– AI as an umbrella term with multiple distinct technologies that share measurable effects:
– Advertising/social media recommendation algorithms:
– Content and ad feeds that optimize for engagement; personalize what you see based on behavior
– Large Language Models (LLMs):
– Examples: GPT (ChatGPT), Gemini, Claude, DeepSeek, others
– Heuristic: “autocorrect on steroids”: a model trained on vast text corpora to predict and generate likely continuations; not conscious or “thinking,” but multi-layer algorithmic text prediction based on training data from the internet, books, and media (see the toy sketch after this list)
– Generative media:
– Image generation (still images); video generation (moving images); audio generation (voice cloning, text-to-speech)
– Modification of existing media (image/video edits) enabling realistic deepfakes
– Democratic and social risks introduced:
– Misinformation/disinformation at scale via synthetic media and deepfakes, potentially undermining trust in elections and institutions
– Bias and inequality replicated from training data distributions (e.g., “generate a family” returning stereotyped demographics), not due to explicit programmer intent but due to skewed data and representation
– Takeaway: The class will evaluate these AI categories by their observable impacts on democratic processes and the regulatory/policy responses available
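A minimal sketch of the “autocorrect on steroids” idea discussed above, using a toy bigram model in Python (an invented illustration, not material from the course or the reading): the model simply counts which word follows which in its training text and then generates likely continuations. Because the toy corpus is deliberately skewed, the generated text reproduces that skew, which is the same mechanism behind the bias examples noted above. The corpus and function names here are hypothetical.

```python
import random
from collections import Counter, defaultdict

# Toy training text, deliberately skewed: "nurse ... she" appears three times,
# "doctor ... he" only once (an invented corpus for illustration).
corpus = (
    "the nurse said she was tired . "
    "the nurse said she was busy . "
    "the nurse said she was late . "
    "the doctor said he was ready ."
).split()

# Count which word follows which -- a bigram "language model".
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(start_word, length=6):
    """Extend a prompt by repeatedly sampling a likely next word."""
    words = [start_word]
    for _ in range(length):
        counts = next_word_counts[words[-1]]
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(next_word_counts["said"])  # Counter({'she': 3, 'he': 1}): the skew in the data
print(generate("the"))           # e.g. "the nurse said she was busy ."
```

Real LLMs replace these simple counts with neural networks trained over billions of parameters, but the core mechanism is the same: predict likely continuations from patterns in the training data, including whatever skew that data contains.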
4) Syllabus walkthrough and major assignments (logistics + pedagogical rationale)
– Access/logistics:
– Syllabus and readings on eCourse; some OSUN students awaiting access; instructor temporarily shared reading via Zoom chat
– Office hours: Tuesdays (exact time to be added to syllabus)
– Enrollment key for eCourse: GPT-Democracy
– Assessment design philosophy:
– Emphasis on authentic thought and voice; minimize incentives to submit AI-generated prose
– Major assignments:
– Critical Reflection Video Journals
– 6 total; one every two weeks; 5–7 minutes each
– Content: Choose a topic from course weeks-to-date and speak candidly; starting with Journal 2, respond to at least one specific point from a classmate’s prior video
– Grading: Engagement, thoughtfulness, and connection to course materials (not “right answers”)
– Anti-script policy: Obvious AI-script reading will lose points; authenticity is required
– First journal due next Thursday (not this week)
– Policy Memo (main analytical writing)
– 4 pages (references do not count toward the 4 pages)
– Due around Week 10 (November)
– Task: Propose an actionable, specific policy intervention for a selected country to address a defined AI-and-democracy problem (e.g., elections, voter registration, civic engagement)
– Speculative Narrative (creative final assignment)
– Project the year 2035; choose a democratic-process theme (e.g., AI in elections, legislatures, accountability/human rights)
– Weave course theory and real-world cases into a narrative envisioning plausible futures
– All major assignments are individual work
– AI and academic integrity policy:
– Direct copy-paste or obvious AI-generated submissions will be treated as plagiarism and receive a zero
– Attendance and camera policy:
– Presence in Zoom with visible name required; camera-on is expected
– If camera must be off, email the instructor explaining the circumstance; be prepared to participate more to demonstrate presence
– Per AUCA policy: 5 absences = course failure unless formally excused (e.g., medical certificate)
5) Next steps and preparation for Thursday
– Learning arc preview:
– Establish shared definitions of “AI” in this course context and map them onto “democracy” (what institutions and processes are affected?)
– Reading assigned (on eCourse and shared via Zoom for those without access):
– Focus concept: “Surveillance capitalism” (students should be prepared to define it and connect it to AI’s economic and political dynamics)
– Not graded/quiz-based for next class; aim for familiarity with terms
Actionable Items
Immediate (before next class)
– Students:
– Enroll in eCourse using key GPT-Democracy
– Download/read the assigned article; prioritize understanding “surveillance capitalism”
– Ensure your camera/mic setup works; email the instructor if you anticipate camera-off needs with reasons
– Note the first Video Journal deadline: next Thursday (5–7 min; pick one course topic to reflect on)
– Instructor:
– Add specific Tuesday office hour times to the syllabus
– Confirm eCourse access for OSUN students; resend reading via email if needed
– Post detailed prompts/rubrics for the three major assignments and set up submission portals
Short-term follow-ups (this week)
– Students:
– Begin brainstorming Video Journal 1 topic based on today’s discussion or the reading
– Instructor:
– Share a concise summary or screenshot of the “AI uses” whiteboard to the course site for reference
– Clarify any boundaries/allowances for tool use (e.g., is grammar-checking permitted?) in writing assignments
Coming weeks
– Students:
– Track examples of AI-democracy intersections from news or personal experience to contribute to discussions
– Instructor:
– Provide exemplar policy memos and speculative narratives to calibrate expectations
– Outline upcoming weekly themes so students can pre-read strategically
Homework Instructions:
ASSIGNMENT #1: Pre-class Reading — Surveillance Capitalism and Core AI Vocabulary
You will read a short piece (about 20 pages) to build a shared vocabulary for Thursday’s session, where we’ll define “AI” and “democracy” in our course context. Focus especially on the term “surveillance capitalism,” which the author coins and which connects directly to our discussion of ad/social-media algorithms, large language models, and image/audio generation.
Instructions:
1) Get the reading:
– Access the file on our course site. If you aren’t enrolled yet, enroll with the key: GPT-Democracy.
– If you still don’t have access, use the PDF shared in the Zoom chat from today’s class.
2) Time-manage the reading:
– We only have two days; you do not have to read every page if you can’t. Prioritize any sections that define or explain “surveillance capitalism.”
3) Read with purpose:
– Underline key definitions and claims.
– In your own words, draft a brief working definition of “surveillance capitalism.”
– Identify 2 concrete examples that link the reading to our class list of AI uses (e.g., content-feeding ad algorithms on social media; LLMs used for research/summarizing/proofreading; image/video generation and deepfakes).
4) Prepare for Thursday:
– Bring 2–3 questions or points of confusion to discuss (no written submission required).
– Be ready to connect the reading to our in-class brainstorm about AI uses and to help establish shared definitions we’ll use moving forward.
ASSIGNMENT #2: Critical Reflection Video Journal #1
You will record a 5–7 minute informal video where you talk authentically about one topic we’ve covered so far (e.g., your own AI use-cases; ad algorithms vs. LLMs; image/video generation and deepfakes; bias from training data; or the reading’s “surveillance capitalism”). This series is meant to center your genuine voice and build engagement across the class.
Instructions:
1) Choose your focus:
– Pick one topic from class so far (examples: proofreading/research/summarizing; coding help; scheduling/emotional support; image/video generation and misinformation; bias in models; key ideas from the reading).
2) Plan briefly—don’t script:
– The instructor emphasized authenticity over polished prose. Do not write or read from an AI-generated script; points will be deducted if it sounds like you are reading ChatGPT output aloud.
3) Record your video (5–7 minutes):
– Speak directly to the camera.
– Share your honest perspective, personal examples, and at least one connection to class material or the reading.
– Aim for clear audio and steady framing; perfection isn’t required.
4) Note the response rule:
– For Journal #1, you do NOT need to respond to a classmate.
– Starting with Journal #2 (and each entry thereafter), you will respond to at least one point from a classmate’s prior video.
5) Submit by the deadline:
– Upload your video to the course site under Video Journals by next Thursday (not this Thursday).
– If your account access isn’t active yet, keep the file ready and upload as soon as your access is enabled.
6) Know how it’s graded:
– Grading is based on engagement, thoughtfulness, and connection to course materials, not on being “right” or using perfect English.
– Obvious AI-written speech or reading from a script will lose points.
7) Remember the cadence:
– You’ll complete six total video journals across the semester (one every two weeks). This is the first in that series.