Lesson Report:
**Title: Reviewing Core Theorists and Story Structures for the Final AI & Democracy Narrative**

This final session closed the course by (1) consolidating key theoretical concepts from Zuboff, Eubanks, DiResta, and Beckman, and (2) scaffolding students’ final narrative assignment so they could leave with a concrete story plan. The class ended with structured reflection on how the course changed students’ views of everyday technologies and on the online course format itself, plus brief one‑on‑one grading logistics.

## Attendance

– Number of students explicitly mentioned as absent: **0**
– Instructor verbally counted: **11–13** students present (“eleven, twelve in the room… we have one more and… count that as good enough”), but no specific absences by name were recorded.

## Topics Covered (Chronological, with Activity Labels)

### 1. Opening, Course Context, and OSUN/GIA21 Survey

– **Framing of the session**
– Instructor notes this is the **final session of the semester**, closing out the course.
– States that **most of today** will be dedicated to ensuring students know **what they will write for their final assignment**.
– **Administrative request: OSUN / GIA21 survey**
– Instructor explains they are “really sorry” but must assign a **feedback task** due to frequent reminders from OSUN / GIA21.
– Posts a **link in the chat** to the official **GIA21 OSUN student survey**.
– Emphasizes:
– Students should provide **honest feedback**, as it helps:
– OSUN / GIA21 collect data.
– The instructor **improve future GIA21 classes**.
– Allotted **~5 minutes of class time** for completion, asking students to:
– Click the link.
– Fill out the survey honestly.
– **Clarifications from students**
– A student asks whether they are indeed supposed to be “filling out something.”
– Instructor confirms:
– Yes, the link in chat is the official GIA21 survey.
– Reiterates its importance to both the network and the instructor’s teaching.

### 2. Concept Review for Final Stories – Four Core Authors

The central segment of the session is a **guided review** of four major readings, explicitly framed as conceptual raw material for students’ final stories. This is done via **numbered questions** and **chat-based responses**, with the instructor prompting elaboration and clarifying key ideas.

#### 2.1 Zuboff – Surveillance Capitalism and Behavioral Data

– **Prompt**:
If your story focuses on **Zuboff**, what is the **primary goal of every corporation** in her account? What do they want?
– **Student responses (refined by instructor):**
– Corporations want to:
– **Collect and analyze users’ behavioral data.**
– Obtain **as much personal data as possible**, but with emphasis on:
– **Behavioral data** (how people act, what they click, how long they look at things), not just static facts like **birthdays** or a grandmother’s middle name.
– Use this data to:
– **Predict** how individuals and **people like them** will behave.
– **Shape** behavior (e.g., through targeted offers/experiences).
– **Monetize** these predictions for **profit**.
– Instructor repeatedly stresses:
– The focus is **behavioral surplus**—data about **actions and tendencies**.
– This data is a **“goldmine for advertisers”**, who pay to target ads people are **more likely to click** rather than ignore.
– **Conceptual takeaway for stories:**
– Any story centered on Zuboff should reflect:
– Platforms extracting **behavioral data**.
– Converting it into **prediction products**.
– Selling those predictions to advertisers or other actors, with **profit as the underlying driver** (see the sketch below).
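
To make this pipeline concrete, here is a minimal Python sketch (a toy illustration of my own, not code from Zuboff or the course): the hypothetical functions `behavioral_features`, `predict_click_probability`, and `ad_price` stand in for the extraction, prediction-product, and monetization stages, with entirely invented weights.

```python
# Toy behavioral-surplus pipeline (illustrative only; all weights are invented).
from collections import Counter

def behavioral_features(events):
    """Reduce a raw interaction log (clicks, dwells, scrolls) to per-type counts:
    the 'behavioral surplus', as opposed to static facts like a birthday."""
    return Counter(e["type"] for e in events)

def predict_click_probability(features):
    """Hand-weighted score standing in for a trained prediction model."""
    score = 0.05 * features["click"] + 0.02 * features["dwell"] + 0.01 * features["scroll"]
    return min(score, 1.0)

def ad_price(click_probability, value_per_click=0.40):
    """Advertisers pay more to reach users who are more likely to click."""
    return round(click_probability * value_per_click, 4)

log = [{"type": "click"}, {"type": "dwell"}, {"type": "dwell"},
       {"type": "scroll"}, {"type": "click"}]
features = behavioral_features(log)
p = predict_click_probability(features)
print(features, p, ad_price(p))  # behavior -> prediction product -> revenue
```

The invented numbers matter less than the flow: revenue is generated from predicted future behavior, not from the static data a user knowingly volunteers.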

#### 2.2 Eubanks – Algorithms, Welfare Systems, and Punishing the Poor

– **Prompt**:
Moving to **Week 7** and **Virginia Eubanks** (Allegheny County algorithm).
If your story focuses on Eubanks, **who** does the algorithm punish, and **why** does it catch them in particular?
– **Who the algorithm punishes:**
– Students correctly identify:
– **Poor and vulnerable populations**.
– **Marginalized groups** and **minorities**.
– People **dependent on social services**.
– **Why it punishes them: conceptual unpacking**
– Instructor pushes them beyond labels like “biased data” and “minorities” to causal mechanisms:
1. **Biased Data and Proxy Variables**
– Algorithms use:
– **Biased training data**.
– **Proxy variables** (e.g., use of public benefits as a proxy for “risk”).
– These proxies track **structural inequality**, not genuine individual risk.
2. **Most Monitored and Most Visible**
– Poor people are:
– The **most monitored**.
– The **most visible** in **government databases**.
– Students articulate that:
– Algorithms draw heavily on **data from public systems**.
– Poor people interact with those systems far more frequently than wealthy people.
3. **Dependence on Public Services**
– The instructor asks:
Why are poor people more visible in government systems than rich people?
– Students (with instructor steering) identify:
– Poor people **rely on public services**, which require:
– Constant **paperwork**, **monitoring**, and **data collection**.
– Rich or majority populations often use **private alternatives** that are **outside** these datasets.
– Examples discussed (largely drawn from Eubanks):
– **Public healthcare / public hospitals**:
– Poor people use low-cost public healthcare.
– Rich people often use **private hospitals** or **expensive insurance**, keeping them **out of public databases**.
– **Public housing** (e.g., Section 8):
– Poor people apply for public housing, generating records.
– Rich people **never appear** in these housing benefit databases.
– **Public transportation** and **public education**:
– Usage data can be logged and integrated.
– **Public psychiatry services**:
– Instructor references a vivid Eubanks example:
– A mother who used a **publicly funded psychiatrist**; her psychiatric records are captured in data used by the child welfare risk algorithm.
– A wealthier person seeing an expensive **private psychiatrist** would not have those records in the same system.
4. **Black-box and Efficiency**
– Students mention:
– Pursuit of **efficiency** as a design value.
– **Black-box models** whose criteria are **opaque**.
– Instructor agrees and links this to:
– Algorithms making **instantaneous decisions** with no transparent route for challenging them.
– **Summary the instructor enforces:**
– Algorithms in Eubanks’ examples **over‑target the poor and minorities** because:
– These groups **must interact** with public systems that **generate the data**.
– Rich and majority citizens use **private systems** that do **not feed into** those databases.
– The algorithm’s vision of “risk” is therefore **skewed** by whose lives are **legible** to the state (see the sketch below).
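
A minimal sketch of this legibility skew (a toy model of my own with invented proxy weights, not the actual Allegheny County system):

```python
# Hypothetical "risk" score built only from contacts with public systems.
# Private alternatives leave no records, so they contribute nothing.
PUBLIC_SYSTEM_WEIGHTS = {
    "public_benefits": 2.0,
    "public_housing": 1.5,
    "public_psychiatry": 2.5,   # cf. the publicly funded psychiatrist example
    "public_hospital": 1.0,
}

def risk_score(records):
    """Sum weighted counts of public-system contacts.
    This is a proxy for poverty and visibility, not for genuine risk."""
    return sum(weight * records.get(system, 0)
               for system, weight in PUBLIC_SYSTEM_WEIGHTS.items())

poor_parent = {"public_benefits": 3, "public_psychiatry": 1, "public_hospital": 2}
rich_parent = {}  # private insurance and a private psychiatrist: no state records

print(risk_score(poor_parent))  # 10.5 -> flagged: heavily legible to the state
print(risk_score(rich_parent))  # 0.0  -> invisible, regardless of actual behavior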

#### 2.3 DiResta – Censorship by Noise and the Liar’s Dividend

– **Prompt**:
With **DiResta** (Week 6), if you want to **censor someone without deleting their posts**, what is the best strategy?
– **Initial student responses**:
– Some mention:
– **Downranking** or **reducing visibility** via algorithms (e.g., making posts appear less often in feeds).
– **Instructor’s constraint & refinement**:
– Adds an important caveat:
– Imagine you **do not control the algorithm** itself (you’re not “Elon Musk on Twitter”).
– You can’t simply reweight or hide their posts from the backend.
– Given that, what can you do?
– **Flood the Zone strategy:**
– Students, led by one strong answer, identify:
– The best strategy is to **“flood the space”** (DiResta’s “flood the zone”):
– Create a massive volume of:
– **Distractions**,
– **Misinformation**,
– **Extremist or polarizing content** on all sides.
– Result:
– The target’s message becomes **buried** under noise.
– Their posts still exist, but are **drowned out** and **socially sidelined**.
– **Introduction of the “Liar’s Dividendâ€�**
– Instructor then pivots to a key principle:
– The **liar’s dividend**, one of the “most important principles” of the course.
– Asks students to explain:
– What is the liar’s dividend?
– How does it connect to flooding the information space?
– **Student/instructor reconstruction of the concept:**
– Definitions consolidated by the instructor:
– When the information environment is saturated with:
– **Fake content**, **deepfakes**, **manipulated media**, and **misinformation**,
– People become:
– **Confused**, unable to distinguish **true from false**.
– **Distrustful** of everything they see online.
– In such an environment, **liars benefit** because:
– They can **deny real evidence** by calling it “fake” or a “deepfake.”
– The public cannot **easily verify** what is true.
– Instructor offers a **politician nightclub example**:
– Suppose a **real** video surfaces of a politician:
– In a nightclub, clearly intoxicated/on drugs (sweaty, dilated pupils).
– In a world full of deepfakes:
– The politician can simply say, **“It’s a deepfake.”**
– Even if analysts confirm authenticity, **the average citizen**:
– Is worn down by constant fake/real disputes.
– Struggles to trust any claim.
– Thus:
– Liars have a built‑in **escape hatch** from accountability.
– **Connection back to DiResta and censorship:**
– Flooding the zone with misinformation:
– **Confuses** the public.
– **Devalues** all online evidence (real and fake).
– When a person or group is targeted (e.g., an emerging politician):
– Opponents can:
– Release a **torrent of deepfakes** portraying them in compromising or extreme ways.
– Force the public into **fatigue**—they won’t check each piece.
– Eventually, the target can label **any new accusation** or evidence as “fake” and remain plausible.
– **Net effect**:
– Censorship happens **indirectly**:
– The target’s **true positions** are **lost** in a polluted discourse.
– Accountability is blunted because **no one trusts anything**—to the advantage of **liars and bad actors** (see the toy simulation below).
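
A toy simulation of this dynamic (my own illustration, not DiResta's code): the target's post is never deleted or downranked; it is simply outnumbered in an engagement-ranked feed.

```python
# Flooding the zone: bury a post under noise instead of removing it.
import random

random.seed(0)  # reproducible toy run

target_post = {"author": "target", "engagement": 50}
noise = [{"author": f"bot{i}", "engagement": random.randint(40, 200)}
         for i in range(500)]  # mass-produced distraction/misinformation posts

# A feed that ranks purely by engagement, which the flooder does not control.
feed = sorted([target_post] + noise, key=lambda p: p["engagement"], reverse=True)
rank = feed.index(target_post) + 1

print(f"Target post still exists, but ranks {rank} of {len(feed)}")
# Most readers never scroll past the first screenful, so the message is
# effectively censored without any deletion or backend access.
```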

#### 2.4 Beckman – AI Judges, Publicity, and Democratic Legitimacy

– **Prompt**:
With **Beckman**, why would an **AI judge** be **undemocratic** even if it were **100% accurate**?
– **Student first pass:**
– AI judge would:
– Remove **human judgment** and **accountability**.
– Be a **black-box model**.
– Provide **no understandable reasons** for decisions.
– Make decisions **unquestionable** and **unappealable**.
– **Instructor’s structured explanation:**
– Introduces **Beckman’s principle of publicity** as the normative anchor:
– For a **public authority’s judgment** to be democratic:
1. **Publicity / Transparency of Reasons**
– Citizens must be able to:
– Access the **reasoning** behind decisions.
– Understand the **criteria applied**.
– AI systems, as currently designed:
– Are **black-box statistical models**.
– Even when they output conclusions, they **cannot genuinely explain** why a particular decision was made.
– The underlying model is **impenetrable** to citizens and often to engineers.
2. **Accessibility and Appeal**
– Democratic systems require:
– Decisions to be **appealable**.
– Citizens to have **procedural routes** to challenge outcomes.
– If citizens **cannot see the reasons**, they **cannot meaningfully appeal**.
3. **Analogy to ChatGPT behavior**
– Instructor gives a meta-example:
– When ChatGPT makes an error and you ask “why did you make this mistake?”, its explanation is **speculative**—the system **doesn’t actually know** its internal reasoning process.
– Similarly, an AI judge:
– Cannot articulate the **true causal chain** behind its outputs.
– Only **appears** to reason, while its underlying process remains inaccessible.
– **Conclusion**:
– Even if an AI judge were hypothetically **100% accurate** in outcomes:
– It would still be **undemocratic** because:
– Its decisions lack **publicly intelligible reasons**.
– Citizens have **no real avenue** to understand or challenge those decisions.
– **Accountability** is displaced onto an opaque system (see the contrast sketch below).
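
A contrast sketch of the publicity principle (hypothetical types and functions of my own, not Beckman's formalism): the black-box judge returns only a verdict, while a publicity-compliant decision carries inspectable reasons and an appeal route.

```python
from dataclasses import dataclass

def black_box_judge(case: dict) -> str:
    """Stand-in for an opaque statistical model: a verdict and nothing else."""
    return "deny"  # no criteria, no reasoning, nothing a citizen can contest

@dataclass
class PublicDecision:
    verdict: str
    reasons: list[str]   # publicity: criteria citizens can actually read
    appeal_route: str    # accessibility: a procedural route to challenge it

def publicity_compliant_judge(case: dict) -> PublicDecision:
    return PublicDecision(
        verdict="deny",
        reasons=["Criterion 4(b): income documentation incomplete"],  # invented
        appeal_route="File appeal form A-12 within 30 days",          # invented
    )

case = {"applicant": "protagonist"}
print(black_box_judge(case))            # maybe accurate, but unintelligible
print(publicity_compliant_judge(case))  # same outcome, contestable process
```

Even a 100% accurate version of the first function would fail the publicity test: accuracy alone supplies neither reasons nor a route of appeal.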

### 3. Structuring the Final Narrative Assignment

After the conceptual review, the instructor shifts to **assignment design** to ensure students can **translate theory into narrative form**.

– **Goal**:
Provide a **clear template** for how students should **organize and outline** their final narrative paper.
– **Instructions for in-class outlining (in shared Google Doc):**
1. **Choose the format of the paper**
– Reminds students:
– There are **four example formats** on eCourse (not detailed in transcript).
– Students may:
– Select one of these.
– Or propose a **different format** if it better suits their idea.
2. **Identify the protagonist**
– Question: **Whose eyes are we looking through?**
– This helps fix **point of view** and emotional anchor.
3. **Specify the focal technology or law**
– What **specific tech** or **policy/law** is central to the story?
– May be connected back to their **policy memo** (e.g., an AI labeling committee or government risk-scoring system).
– Clarify:
– What is the **issue that has gone awry**?
4. **Define the UI hook**
– Recall prior lesson: stories should open with a **concrete interface moment**.
– Examples:
– A **notification** appearing.
– A **screen** layout.
– A **form** to be completed.
– A particular **user experience** that triggers conflict.
– This hook should **start the conflict** for the protagonist.
5. **Articulate the central problem and anchor author**
– What is the **main overarching problem** of the story?
– Which **course author** best explains that conflict?
– E.g., Zuboff (surveillance capitalism), Eubanks (welfare algorithms), DiResta (misinformation), Beckman (AI and democracy), etc.
– **Logistics:**
– Instructor:
– Shares the **Google Doc link** again in chat.
– Asks students to work **individually** in their own sections of the doc for ~**15–20 minutes**.
– For Elijah:
– Instructor notes they will also **send the final assignment instructions** from eCourse, ensuring he has the full brief.

### 4. Student Story Idea Pitches (Whole-Class Sharing)

With about 10 minutes left, the instructor invites students to **verbally pitch** their story outlines based on the template.

#### 4.1 Elijah – UBI Recipient and Biased Automated Election System

– **Format**: Still **brainstorming**, not final; using the exercise to explore ideas.
– **Protagonist**:
– A **citizen on Universal Basic Income (UBI)**.
– Their job (like many others) has been **automated away by AI**, leading to UBI dependence.
– **Focal tech/law**:
– **Automated election systems** combined with:
– **Zoning rules**.
– **Voter inequality**.
– Systemic bias in how the platform presents **candidates and information** to different districts.
– **UI hook**:
– A **voter platform interface**:
– Shows candidates and voting options.
– In **low-income districts**, the interface:
– Presents **limited or skewed data** about candidates.
– Algorithmically **filters and tailors information**, creating bias.
– **Central problem**:
– **Low-income voters are misled**:
– They receive **distorted or incomplete information** about candidate positions.
– The system **nudges them** towards votes that **benefit the ruling class**.
– The platform’s recommendation engine:
– Automatically **targets low-income voters** with messaging that gives a **false impression** of candidates’ goals.
– Conceptual link:
– Fits into **DiResta’s misinformation ecosystem**:
– Algorithmic personalization.
– Asymmetric information across groups.
– **Instructor feedback**:
– Calls it a **great start**, especially around:
– **Voter suppression/manipulation** via technology.
– The realism of election-related algorithmic bias.

#### 4.2 Student 2 – Californian Small Business Owner vs. Opaque Government AI

*(Name partially garbled in transcript; likely one of the regular contributors.)*

– **Format**:
– **Retrospective narrative essay**.
– Explicitly connected to their **policy memo** on:
– Use of **opaque AI in government decision-making**.
– **Protagonist**:
– A **California resident**, likely a **small business owner**.
– Relies on an important **public benefit**, such as **healthcare**.
– **Focal tech/law**:
– A **government AI system** that:
– Automatically **approves/denies** access to public benefits.
– Operates as an **opaque, unexplainable model**.
– **UI hook**:
– Protagonist receives:
– A **notification letter** (physical or online) or platform update:
– “Your application/benefits have been denied. For more information, contact…”
– When they call, they encounter:
– **Robocalls** or frontline staff who can only repeat **stock phrases**, unable to explain the decision.
– Hook is the **frustration** of:
– Hitting a bureaucratic and technological **dead end**.
– Confronting an **unintelligible opaque decision** that deeply affects their life.
– **Central problem & theorist:**
– Problem:
– Loss of **transparency** and **appeal** in public-benefit decisions.
– Experiencing **Eubanks-style welfare automation** in a different jurisdiction.
– Theorist:
– Primarily **Virginia Eubanks**, *Automating Inequality* (Allegheny algorithm).
– **Instructor feedback**:
– Notes it’s highly **realistic**, especially the:
– Robocalls and **inability to appeal** an automated decision.
– Frames it as a strong application of Eubanks’ concepts.

### 5. Final Course Reflection: Technology, Habits, and Pedagogy

The instructor closes the group session with a **meta-reflection activity**:

– **Prompt**:
– Ask students to name:
– A **particular technology** (e.g., phone, Instagram) that this class made them think about **differently**.
– Whether they expect to **use it differently in the future**.
– **Student reflections (selected):**
– **Safi:**
– Most impactful elements:
– **Policy memo** exercise: felt like “a future politician.”
– Learned to be wary of:
– How apps like **Instagram** collect and **sell user data**.
– The operation of AI assistants like **Alexa** (“big old microphone broadcasting to Amazon”).
– Takeaways:
– Be more **selective** about what personal information is shared.
– Pay closer attention to **terms of service** and data policies.
– **Banu:**
– Previously:
– Used **Instagram and other apps** without thinking about **algorithmic shaping**.
– Now:
– Understands how:
– Platforms collect data.
– Predict behavior.
– Subtly shape **emotions**, **opinions**, and **political views**.
– Plans to:
– Be more **careful about sources**.
– Limit how much she **follows algorithmic recommendations**.
– Be cautious with the **amount and kind of personal info** she shares.
– **Barfiya:**
– Describes the course as:
– “One of the best” she has taken.
– Appreciates:
– Learning how AI reshapes **democratic institutions** and **political decision-making**.
– Exposure to both **risks and opportunities** through multiple authors and stories.
– **Journal reflection assignments**, especially recording video reflections on readings.
– Emphasizes this course helped her grasp how modern tech interacts with **democracy and governance**.
– **Niloufar:**
– Focuses on **teaching methodology**:
– Praises:
– Interactive style (visuals, videos, open discussion).
– Assignments that felt like **genuine engagement** rather than just “racing to meet deadlines.”
– Critiques:
– Traditional university courses where students are constantly **running to submit tasks**, undermining the **joy of learning**.
– Says this class allowed them to **experience real learning and curiosity**.
– Explicitly thanks the instructor for:
– Being **knowledgeable** and **committed**.
– Making the process meaningful.
– **Instructor’s self-reflection:**
– Notes this was:
– First **online course** they’ve taught since COVID (prior online teaching was English classes and very different).
– Main challenge:
– Making online sessions **engaging and interactive** without physical co-presence.
– Admits:
– Often wished the course could be taught **in person** for richer interaction.
– Recognizes there are still **shortcomings** in the online design.
– Future plans:
– Will be teaching **online again next year**.
– Views this course as a **“first test”** and wants to:
– Improve interactive structures.
– Incorporate student feedback into future iterations.
– Ends by:
– Thanking students for their **patience and engagement**.
– Stating enjoyment of the topic.
– Reminding them of the **final assignment due date (22nd)**.

### 6. Post-Class 1:1 – Reflection Journal Submission Issue

After most students leave, one student (Barfiya) stays to clarify a **grading concern**:

– **Issue raised:**
– She had:
– Completed **Reflection Journal 3** on time.
– **Emailed** it to the instructor because she had trouble **uploading it to eCourse**.
– Only later noticed:
– eCourse added a new **“I acknowledge this is my own work”** checkbox she may have missed.
– Her gradebook now shows a **big F** for Section/Reflection Journal 3.
– She expresses **anxiety and confusion** about how this is possible given she did the work.
– **Instructor’s response:**
– Confirms:
– He sees the journal was sent via **email**.
– It appears to have been submitted **8 days after the official deadline**.
– Explains policy:
– He **cannot accept assignments submitted via email**; they must be in **eCourse**.
– Offers a **remedy**:
– Her grade is **not lost**.
– He will post on eCourse a **new reflection assignment** worth the **equivalent of two reflection journals**.
– Instructions (verbally outlined):
– She should:
– Go back to her **first reflection journal**.
– Write a new reflection on **what has changed** in her views since then, given everything learned in the course.
– Completing this will **compensate** for the missing journal and avoid permanent penalty.
– Student acknowledges the solution and thanks the instructor.

## Actionable Items

### High Urgency (Before Final Grading)

– **Post and Announce Make-Up Reflection Assignment**
– Create the **new reflection-journal task** on eCourse (worth 2 journals), with:
– Clear instructions (reflect on changes since first journal).
– Explicit **deadline** that works within grading timeline.
– Ensure **Barfiya** (and any other students in similar situations, if applicable) understand:
– That this assignment can **replace/offset** the missed journal.
– That **email submissions** are not accepted for grading.

– **Verify All Students Have Final Assignment Instructions**
– Confirm that:
– **Elijah** has received the promised copy/link of the **final story assignment instructions** from eCourse.
– Optionally:
– Re‑send a brief **class-wide reminder** with:
– Final assignment due date (**22nd**).
– Link to instructions and rubric.

### Medium Urgency (During/Right After Final Submission Period)

– **Review Google Doc Story Outlines**
– Quickly scan students’ **in-class outlines** to:
– Ensure each has:
– A chosen **format**.
– A clear **protagonist**.
– A defined **tech/law focus**.
– A concrete **UI hook**.
– A named **central problem and theorist**.
– Identify any students with **thin or missing outlines** and consider:
– Sending targeted follow-up emails or comments to support them before the final due date.

– **Monitor and Grade Reflection Work**
– Once the **make-up reflection assignment** is submitted:
– Update relevant students’ **journal grades**, particularly **Reflection Journal 3**.
– Verify the gradebook reflects:
– All completed journals.
– Adjustments from the new assignment.

– **Compile and Review OSUN/GIA21 Survey Feedback**
– Once survey data is accessible:
– Review **student feedback** on:
– Course content.
– Online format.
– Assignments and workload.
– Note **recurrent themes** to inform future course design.

### Longer-Term / Next Iteration of the Course

– **Refine Online Pedagogical Design**
– Based on:
– Student comments in this session.
– OSUN/GIA21 survey results.
– Consider:
– Additional **synchronous interaction techniques** (breakout rooms, live polls, structured debates).
– More **guided peer feedback** on story outlines and policy memos.
– Clearer integration of **UI hooks** and narrative craft earlier in the semester.

– **Clarify Technical Submission Procedures**
– Before the next run:
– Add a visible note in the syllabus and eCourse:
– **No email submissions**; only eCourse uploads count.
– Students must ensure they complete all required **submission checkboxes** (e.g., “this is my own work”).
– Possibly provide a short “How to Submit Assignments on eCourse” guide to reduce technical mis-submissions.

– **Further Develop Story Assignment Scaffolding**
– Given the quality of pitches (UBI voter manipulation, opaque AI denial, etc.):
– Preserve and reuse the **five-part outline template** (format, protagonist, tech/law, UI hook, central problem & theorist) in future iterations.
– Consider adding:
– One additional **check-in point** earlier in the term to practice building narratives around a single reading or case before the final project.

This report should give you a reconstructable view of the final session’s flow, the major theoretical consolidations, and the concrete steps students were guided through as they prepared their narratives.

Homework Instructions:
ASSIGNMENT #1: Final Narrative Assignment – Technology, Democracy, and the Future

You will write a narrative piece that uses what you’ve learned in this course to tell a story about how a specific technology or law shapes democracy and people’s lives. You will build on the concepts we reviewed in this last session (Zuboff, Eubanks, DiResta, Beckman, etc.) and, if you wish, connect the story to the policy memo you wrote earlier in the semester.

Instructions:

1. **Review the original assignment prompt and example formats.**
– Re‑read the final assignment instructions posted earlier in the semester.
– Recall that, as mentioned in class, there were *four example formats* you could use (e.g., different narrative structures). You may choose one of those or propose your own narrative format, as long as it remains coherent and purposeful.

2. **Choose your narrative format and commit to it.**
– Decide *how* you want to tell your story (for example: a retrospective personal narrative, a fictional first‑person story, a journalistic feature, etc.).
– In your notes, explicitly write:
– “Format: [your chosen format]”
– Make sure the format you choose fits your goals. For instance, a retrospective essay works well if you want a character to look back on how a technology changed their life, while a day‑in‑the‑life story works well to show how systems quietly shape everyday choices.

3. **Select your protagonist.**
– Decide: **Whose eyes are we looking through?**
– Your protagonist should be someone for whom the technology or law really matters. Examples that came up in class:
– A low‑income parent navigating public services (Eubanks).
– A citizen on universal basic income whose work has been automated (as in Elijah’s example).
– A small business owner dealing with opaque government decisions (as in another student’s example).
– Write down:
– “Protagonist: [who they are, where they live, what their situation is].”
– Make sure your protagonist has something meaningful to lose or gain because of the technology or law.

4. **Choose the specific technology or law at the center of your story.**
– Clearly identify the main system, technology, or regulation you want to focus on. For example:
– A targeted advertising system built on behavioral data (Zuboff).
– A welfare or child‑services risk algorithm drawing on public databases (Eubanks).
– A recommendation/feed algorithm that “floods the zone” with misinformation (DiResta).
– An AI‑powered court decision system or “AI judgeâ€� (Beckman).
– Or a policy tool you designed in your own policy memo.
– In your notes, specify:
– “Central tech/law: [describe it in 1–2 sentences].”
– Be concrete: imagine how this system actually works in your story’s world (who runs it, where its data comes from, how people interact with it).

5. **Define your UI hook (the moment where the story “starts”).**
– As we discussed in class, ground your story in a **user interface or interaction** that your protagonist actually experiences. Examples:
– A notification that their public benefit or healthcare has been denied with no clear explanation.
– A voting app that only shows certain candidates or slants the information given to low‑income voters.
– A message from a platform explaining that their post has been “downranked” or that their appeal has been rejected.
– A judgment screen from an AI court system with no rationale displayed.
– Write:
– “UI hook: [what the protagonist sees/receives/does at the beginning, e.g., ‘You get an SMS saying your housing benefit is terminated’].”
– This UI moment should either *trigger* the central problem or make the protagonist realize that something is badly wrong.

6. **State the central problem/conflict of your story.**
– In 2–3 sentences, answer: **What is the big problem your story is about?**
– Use the frameworks we reviewed:
– **Zuboff**: corporations seeking behavioral data to predict and shape behavior for profit.
– **Eubanks**: algorithms punishing the poor and marginalized because they are over‑represented in public service databases.
– **DiResta**: censorship through *flooding the zone* with misinformation, and the **liar’s dividend** that allows liars to thrive in a polluted information space.
– **Beckman**: highly accurate but **undemocratic** AI decisions when they are black‑box, unexplainable, and unappealable, violating the principle of publicity.
– Explicitly write:
– “Central problem: [describe it].”
– Make clear how the protagonist is caught up in this problem and what is at stake for them personally (health, custody, livelihood, political power, dignity, etc.).

7. **Choose the course author who best explains your conflict.**
– Decide which author’s concepts best illuminate what is going wrong in your story:
– Zuboff, if your focus is on surveillance capitalism and behavioral data extraction.
– Eubanks, if you focus on poverty, welfare systems, and how public services feed punitive algorithms.
– DiResta, if your story is about information disorder, disinformation campaigns, and confusion about what is true.
– Beckman, if you’re dealing with opaque AI decision‑making in courts or other public authorities, and issues of democratic legitimacy.
– Write down:
– “Main theoretical lens: [author’s name + key concept(s) you will use].”
– Plan to **weave this author’s ideas into the story**—not as a mini‑essay you drop in, but through what your characters notice, say, or experience.

8. **Outline your story in scenes.**
– Using your format, protagonist, tech/law, UI hook, and central problem, outline your story as a sequence of scenes. For each scene, note:
1. What happens.
2. How the technology or law shows up (screen, notification, process, interface, rumor, policy, etc.).
3. How the protagonist reacts or what they learn.
4. Which course concepts you are implicitly or explicitly illustrating.
– Aim for a narrative arc:
– Setup → Rising tension → Turning point → Consequences → Resolution (or open‑ended outcome).

9. **Write the full draft of your narrative.**
– Transform your outline into a complete story in your chosen format.
– Keep the focus on *lived experience*: what your protagonist sees, hears, feels, and has to decide.
– Let the theory guide the events, but avoid simply lecturing. Instead, show:
– For example, show how a poor family is constantly monitored through public housing, public hospitals, and social services databases (Eubanks) rather than just stating the theory.
– Or show how endless fake videos and contradictory posts make your protagonist doubt even real evidence (DiResta’s liar’s dividend).

10. **Revise for clarity, coherence, and conceptual accuracy.**
– Reread with these questions:
– Is the **central conflict** clear?
– Does a reader unfamiliar with the course still understand what’s going on?
– Are the course concepts (Zuboff, Eubanks, DiResta, Beckman, etc.) used correctly and meaningfully?
– Is it clear how the technology or law is shaping democratic possibilities or injustices?
– Edit for structure, language, and flow. Check that your protagonist’s motivations and reactions make sense.

11. **Finalize and submit by the agreed deadline.**
– Ensure your narrative is complete, polished, and formatted according to the original final assignment guidelines.
– Submit it by **the 22nd**, as mentioned in class.

ASSIGNMENT #2: Reflection Journal Make‑Up – How Your Thinking Has Changed

This assignment is intended to replace missed reflection journals by having you reflect, in depth, on how your understanding of surveillance capitalism, AI, and democracy has evolved over the course. You will revisit your first reflection and compare your earlier views with what you think now, after engaging with the readings and discussions.

Instructions:

1. **Locate and re‑read your first reflection journal.**
– Find the very first reflection you wrote for this course—especially the one where you gave your early opinions about surveillance capitalism and related technologies.
– Read it slowly and note your initial assumptions, concerns, and areas of uncertainty.

2. **Identify your original key claims and feelings.**
– On a separate document, briefly summarize:
– What you believed about how tech companies use data.
– How you felt about platforms like Instagram, your phone, Alexa/voice assistants, etc.
– What you thought the main democratic risks or benefits of AI and digital systems were.

3. **Reflect on what has changed in your thinking.**
– Now, based on everything you’ve learned since then, ask yourself:
– Which of those beliefs do you still hold?
– Which have shifted, become more nuanced, or reversed?
– Draw directly on course content:
– **Zuboff**: Did you become more aware that companies don’t just want *any* personal data, but specifically **behavioral data** to predict and shape your actions for profit?
– **Eubanks**: Did you come to see how algorithms can systematically **punish the poor and marginalized** because they rely more on public services whose data feeds these systems?
– **DiResta**: Did your view of social media change in light of “flooding the zone” with misinformation and the idea of the **liar’s dividend**—that liars benefit when no one can tell real from fake?
– **Beckman**: Did your sense of “fair” AI change when you considered how a perfectly accurate AI judge could still be **undemocratic** if its reasoning is opaque and unappealable?

4. **Write a structured comparative reflection.**
– Aim for a reflection that is roughly equivalent in depth and effort to **two of your regular journals combined**.
– Organize it into clear sections. For example:
1. **Then** – What I thought at the start of the course.
2. **Now** – How my thinking has changed.
3. **Why** – Which authors, concepts, or class activities influenced this change.
– In each section, use concrete examples:
– For instance, describe how you now think differently about posting personal information, using “free” services, or trusting automated decisions in welfare, courts, or elections.
– You can draw on the class discussion where students mentioned changing how they see Instagram, how they view Alexa as “a big old microphone broadcasting what you say to Amazon,” or how they became more cautious about data sharing in general.

5. **Engage critically with at least two course authors.**
– Choose at least **two** of the following (you may include more): Zuboff, Eubanks, DiResta, Beckman.
– For each author:
– Summarize the key idea(s) you took from them in your own words.
– Explain specifically how this idea changed or deepened your understanding of a technology you use (e.g., social media feeds, recommendation systems, public service algorithms, AI court systems).
– Connect this explicitly back to what you wrote in your first reflection.

6. **Connect to your own practices and future behavior.**
– Discuss any ways you expect to act differently because of what you’ve learned. For example:
– Being more careful about which apps you give data to and what information you share.
– Being more skeptical about “neutral” algorithms, especially in public services or the justice system.
– Paying closer attention to where your news comes from and how your feed may be shaped.
– Be honest about what you *will* and *won’t* realistically change.

7. **Conclude with a meta‑reflection on learning.**
– In a short concluding section, answer:
– What is the single most important insight you’re taking away from this course about technology and democracy?
– How did the **format** of the course (readings, discussions, policy memo, narrative assignment, video reflections, etc.) help you reach that insight?

8. **Revise and submit.**
– Reread your reflection to ensure it is clear, well‑organized, and genuinely comparative (past vs. present).
– Check that you have:
– Referred back to your first reflection.
– Explained how your views have changed.
– Engaged with at least two course authors.
– Submit the completed reflection according to the instructions that will accompany the make‑up assignment posting.
