Lesson Report:
# Title
**Operationalizing Policy Problems: Alternatives, Principal Objectives, and Measurable Indicators**
This session focused on moving students from broad, idealistic policy language toward measurable policy analysis. The instructor reviewed alternatives and principal objectives, then introduced **operationalization** as the core skill for connecting a problem statement to concrete indicators that can be measured, critiqued, and later used in a policy memo and cost matrix.
# Attendance
– **Absent students explicitly mentioned: 1**
– **Kadyralieva Bereke Azamatovna** — likely the student referenced as “Betteke/Bereke” when the attendance sheet was checked.
# Topics Covered
## 1. Opening Review: Alternatives and Principal Objectives in Policy Analysis
– The instructor opened by confirming that students were in their groups and had their Tuesday assignments ready.
– A brief review followed on key terminology from the previous class:
– **Alternative**
– **Kurstanbekova Darina Kurstanbekovna** defined an alternative as **“what we can do instead of doing nothing.”**
– The instructor expanded this by emphasizing that in policy analysis, **doing nothing is always the baseline option**, so every policy solution is an alternative to inaction.
– **Principal objective**
– The class reviewed the idea that the principal objective is **the result or goal the policy is intended to achieve**.
– The instructor framed policy analysis as a pipeline:
1. **Problem statement**
2. **Principal objective**
3. **Alternatives that can get us from the problem to the objective**
– A simple running example was used:
– Problem statement: **Air quality is bad in Bishkek**
– Principal objective: **Air quality should be better**
– The class brainstormed possible alternatives for reaching that objective:
– green energy
– public transportation use
– alternative heating
– license plate restrictions to reduce traffic
– The instructor referenced a point made by **Shamyrbekov Erkhan Shamyrbekovich** on Tuesday: alternatives can be understood as **different routes to the same principal objective**.
– This was explicitly tied to the structure of the eventual **policy memo**:
– define the problem
– define the principal objective
– present 2–3 alternatives
– argue which alternative is most likely to succeed
## 2. Revisiting the Quality of a Problem Statement: Neutrality and Quantifiability
– The instructor returned to the sample statement **“The air quality in Bishkek is bad”** and asked whether it was a strong problem statement.
– Students identified that it was **not a good problem statement**, chiefly because it was **not quantifiable**.
– The instructor emphasized two standards already established earlier in the semester:
– a good problem statement must be **neutral**
– a good problem statement must be **quantifiable**
– The class focused especially on words like **“bad”** and **“good”**, which sound intuitive but are too vague unless they are converted into something measurable.
– This set up the day’s main concept: **operationalization**.
## 3. Introduction to Operationalization: From Abstract Ideas to Measurable Reality
– The instructor introduced **operationalization** as the main skill for the day and noted that students would keep encountering the term throughout policy and political science coursework.
– The class briefly deconstructed the word through the root **“operation.”**
– Students described an operation as an **action** and as something **functional**.
– The instructor then gave the policy-analysis definition:
– operationalization means **taking an abstract concept and explaining how it can be measured in the real world**
– A broader example was used:
– **poverty** is an abstract concept everyone understands intuitively, but it must be defined through indicators before it can be measured
– The instructor explained why this matters for students’ writing:
– the way they operationalize a concept will directly shape both their **problem statement** and their **principal objective**
– weak operationalization creates a mismatch between the stated problem and the objective that is meant to solve it
## 4. Worked Example: “The Education System is Bad”
– The instructor moved to a fuller example:
– **“The education system is bad”**
– Students were asked how such a claim could be measured.
– **Kurstanbekova Darina Kurstanbekovna** proposed **test scores** as one possible way to measure quality of education.
– The instructor used this suggestion to demonstrate the next steps of operationalization:
1. identify an indicator
2. define what counts as “good” and “bad” on that indicator
– The class then operationalized **test scores**:
– **Juya Ali** explained that a good score could mean an **A or B**, roughly **80–99**
– the class defined a failing or bad score as **below 60%**
– The instructor stressed that even after choosing an indicator, students still need to define:
– what the numbers actually mean
– where the threshold is for success/failure (see the minimal sketch just below)
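To make these definitional choices concrete, here is a minimal sketch (not produced in class; the 80-and-above and below-60 cut-offs are the ones the class proposed, and the function name is invented):

```python
def classify_score(score: float) -> str:
    """Classify a test score using the thresholds the class proposed:
    roughly 80+ counts as a good (A/B) score, below 60 as failing."""
    if score >= 80:
        return "good (A/B range)"
    if score < 60:
        return "failing / bad"
    return "in between"

print(classify_score(92))  # good (A/B range)
print(classify_score(55))  # failing / bad
```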
## 5. From Individual Scores to Population-Level Measurement
– The instructor then complicated the education example by asking how to measure **quality of education in the Chuy region**.
– He asked whether it would be valid to use just one student’s test result.
– **Orolova Altynai Sharshenalyevna** was used as the concrete example of a strong student whose score would not represent everyone in the region.
– The class concluded that **one person’s score is not representative**.
– **Kurstanbekova Darina Kurstanbekovna** suggested using an **average**.
– The instructor explained averaging in statistical terms:
– sum all values
– divide by the number of observations
– **Shamyrbekov Erkhan Shamyrbekovich** then raised a representativeness issue:
– even if a school average is known, **one school cannot stand for the whole district**
– The instructor extended this into a discussion of:
– sampling across multiple schools
– why **10 tests** is too small a sample
– how **outliers** can distort averages
– why random selection helps, but not if the sample is still too small
– A comparison was made to **per-capita GDP vs. median income in the United States**, to show how an average can be pulled upward by extreme values (a minimal numeric sketch appears at the end of this section).
– The main takeaway was that operationalization is not just picking an indicator; it also includes:
– how many cases are measured
– how representative the data are
– whether the measurement really captures the concept of interest
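A minimal numeric sketch of the outlier point above, using invented scores rather than class data: in a small sample, one extreme value pulls the mean well away from the median.

```python
import statistics

# Hypothetical scores from one small school; the 99 is a single strong
# outlier of the kind discussed in class (one exceptional student).
scores = [52, 55, 58, 60, 61, 63, 99]

print(statistics.mean(scores))       # 64.0  - pulled upward by the outlier
print(statistics.median(scores))     # 60    - closer to the typical student
print(statistics.mean(scores[:-1]))  # ~58.2 - the mean once the outlier is dropped
```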
## 6. Proxy Measures and the Problem of Bad Operationalization
– The instructor introduced the term **proxy**:
– something used as a **stand-in** for something harder to measure directly
– In this example:
– **quality of education** = abstract concept
– **test scores** = proxy
– The instructor then asked whether test scores are **actually a good proxy** for educational quality.
– Students challenged the validity of the proxy in several ways:
– corruption or bribery could produce high scores without real learning
– proctors or school officials could manipulate outcomes
– nepotism could distort results
– multiple-choice tests allow for guessing
– some students are simply better at the “game” of test-taking than at the underlying subject
– The instructor expanded this into a conceptual point:
– a high score does not always mean good education
– a low score does not always mean bad education
– therefore, **the proxy may not actually represent the abstract concept** (the toy simulation below illustrates this mismatch)
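To illustrate the point in miniature, here is a toy simulation (the model and all numbers are invented, not from the class): scores mix true learning with test-taking skill and guessing luck, so ranking students by the proxy can misrank them on the underlying concept.

```python
import random

random.seed(0)

# Toy model: the proxy (score) mixes the abstract concept (learning)
# with unrelated test-taking skill and multiple-choice guessing luck.
students = []
for i in range(5):
    learning = random.uniform(0, 100)      # the concept of interest
    test_skill = random.uniform(-15, 15)   # being good at the "game"
    luck = random.uniform(-10, 10)         # guessing on multiple choice
    score = max(0.0, min(100.0, learning + test_skill + luck))
    students.append((f"student_{i}", learning, score))

print([name for name, learning, _ in sorted(students, key=lambda s: -s[1])])
print([name for name, _, score in sorted(students, key=lambda s: -s[2])])
# The two rankings will often disagree: a high score does not always
# mean more learning, and vice versa.
```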
## 7. Policy Application Case: No Child Left Behind and “Teaching to the Test”
– To show why bad operationalization matters in real policy, the instructor used the U.S. case of **George W. Bush’s No Child Left Behind**, with later mention of **Common Core**.
– He explained the logic of the policy:
– use standardized tests to measure how well schools are doing
– reward schools with **more funding** when scores improve
– deny or reduce extra support when scores remain low
– The class then connected the earlier critique:
– if test scores are only a weak proxy for educational quality, then the policy is rewarding the wrong thing
– The instructor asked what schools actually ended up trying to maximize.
– Students identified that schools were not necessarily improving education itself; instead, they were improving **test scores**.
– This led to discussion of **teaching to the test**:
– schools trained students to perform on the measured instrument
– this could increase the indicator without improving the broader educational experience
– The instructor identified the central flaw as **bad operationalization**:
– the chosen proxy did not adequately capture the concept it was meant to measure
– An additional student contribution emphasized that one measure alone is often insufficient, and the instructor agreed that more advanced policy work often relies on **multiple indicators** to reduce this problem.
## 8. Levels of Abstraction: High vs. Low
– The instructor paused to define **abstraction**, since he had been using the term repeatedly.
– Students and instructor together characterized abstract ideas as:
– intangible
– not directly touchable, visible, countable, or measurable
– The distinction was clarified:
– **quality of education** = high level of abstraction
– **test scores** = lower level of abstraction
– This became the criterion students were told to apply to their own writing:
– in both the **problem statement** and the **principal objective**, ask whether the terms used can actually be measured
## 9. Individual Reflection: Students Review Their Own Problem Statements and Objectives
– Students were given a short pause to look back at the problem statement and principal objective they had worked on Tuesday.
– The instructor directed them to assess:
– whether the terms were too abstract
– whether the concepts could be counted or measured
– whether the objective was actually linked to the problem statement
## 10. Side Discussion with Juya Ali: Employment as a Possible Proxy
– During the reflection period, **Juya Ali** asked about using **employment after graduation** as another indicator of educational quality.
– The instructor treated this as a useful live example of operationalization:
– “employment” also remains somewhat abstract unless specified
– He worked through several possible ways to operationalize it (sketched in code at the end of this section):
– **binary indicator**: employed / not employed
– specify a time frame, e.g. **one year after graduation**
– include or exclude self-employment
– operationalize through **income level** after graduation
– The key lesson was that even when a new indicator sounds more practical, students must still ask:
– what exactly counts?
– when is it measured?
– does it truly capture the underlying concept?
– This discussion also foreshadowed later work with **cost matrices** and the importance of **time horizons** in policy evaluation.
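A minimal sketch of the operationalization choices from this exchange; the record layout, the 12-month measurement point, and the income threshold are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Graduate:
    # Status measured one year after graduation - the time frame itself
    # is one of the operationalization choices discussed.
    employed_at_12_months: bool
    self_employed_at_12_months: bool
    monthly_income: float  # hypothetical currency units

def employment_indicator(g: Graduate, count_self_employment: bool = True) -> bool:
    """Binary indicator from the discussion: employed / not employed one
    year out. Whether self-employment counts must be decided and stated."""
    if g.employed_at_12_months:
        return True
    return count_self_employment and g.self_employed_at_12_months

def income_indicator(g: Graduate, threshold: float = 30_000.0) -> bool:
    """Alternative operationalization through income level; the
    threshold is an invented example, not a number from class."""
    return g.monthly_income >= threshold
```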
## 11. Random Call-Out Presentations: Testing Students’ Objectives for Measurability
– The instructor used a random number generator to call on students from the attendance list.
### a. Kambarova Adilia Sagynbekovna
– Called first as “Adelia/Adilia.”
– Her example was used to discuss whether a proposed problem statement/objective had moved from a higher level of abstraction toward something more measurable.
– The transcript does not preserve her exact wording, but her contribution served as part of the live diagnosis of abstraction in student work.
### b. Hawton Kyle “Abu Bakr” Jarred
– **Abu Bakr** presented an objective centered on **reducing deforestation to zero**.
– The instructor pushed him to make the concept less abstract:
– “deforestation” is still broad unless defined concretely
– it could be restated as:
– **zero trees cut**
– or **zero square kilometers of forest cut**
– This exchange modeled how even seemingly specific concepts need a measurable unit.
### c. Uncertain student name (possibly **Erikova Aidana Erikovna**, but not fully clear from the transcript)
– Another student was called and shared an objective framed as **reducing environmental damage**.
– The instructor unpacked the phrase step by step:
– **environment** is too broad — does it mean forests, animals, water, air?
– if the focus is **forest**, then what is measurable is **trees**
– **damage** must also be defined — for trees, the example used was **cutting them down**
– **reduce** needs a meaningful threshold: not just 500 trees cut last year versus 498 this year, but a defined percentage or other substantial reduction (see the threshold sketch after this section)
– This became a strong example of how to break down a highly abstract objective into components that can be measured and debated.
– During this exchange, the instructor also answered a student question about whether a principal objective can include **time**; he said **yes**, and noted that time specification would become especially important in the upcoming work on **cost matrices**.
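A minimal threshold sketch of the “reduce” point above (the 10% cut-off is an invented example; the 500-versus-498 numbers are from the class discussion):

```python
def meaningful_reduction(last_year: int, this_year: int, min_pct: float = 10.0) -> bool:
    """Treat a drop as a real reduction only if it clears a defined
    percentage threshold, rather than any tiny year-on-year dip."""
    pct_drop = 100.0 * (last_year - this_year) / last_year
    return pct_drop >= min_pct

print(meaningful_reduction(500, 498))  # False - only a 0.4% drop
print(meaningful_reduction(500, 420))  # True  - a 16% drop
```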
## 12. Group Workshop: Creating Indicators for Abstract Policy Concepts
– With about 15 minutes remaining, the instructor moved into an **operationalization workshop**.
– Students were counted off into four groups and assigned one abstract concept each:
1. **Traffic congestion in the city**
2. **Quality of education**
3. **Equity in healthcare**
4. **Air pollution**
– Instructions:
– work quickly
– produce **at least one indicator** for the assigned abstract concept
– think of the indicator as something that would allow one to see whether the condition is getting better or worse
– Students were first given several minutes to think individually at their seats, then told to physically regroup by number in different parts of the room.
– The next step was **“share and judge time”**:
– each student in each group should share their indicator
– others should challenge it mentally and ask whether it truly measures the abstract concept
## 13. Air Pollution Group Debrief
– The transcript preserves most clearly the instructor’s interaction with the **air pollution** group.
– The group had largely converged on **AQI** as an indicator.
– The instructor challenged that choice by asking:
– what exactly is AQI?
– what does the number actually represent?
– how do emissions from sources like cars become the AQI number?
– This repeated the central lesson of the day:
– even when students name an existing metric, they must still understand what is being measured and how
– One student added **PM2.5** concentration as another possible indicator, making the measurement more concrete than a broad index alone (an illustrative PM2.5-to-AQI conversion sketch follows this section).
– The discussion suggested that students still need practice distinguishing between:
– a label for a metric
– and the actual measurable substance or process behind that metric
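To show what the AQI number actually represents, here is a sketch of the standard piecewise-linear conversion from a raw PM2.5 concentration to an index value. The breakpoints below are based on one published set of U.S. EPA 24-hour PM2.5 breakpoints; the EPA has revised them over time, so treat the table as illustrative rather than current.

```python
# (conc_low, conc_high, aqi_low, aqi_high) bands; illustrative values
# based on one published U.S. EPA table for 24-hour PM2.5 (ug/m3).
PM25_BREAKPOINTS = [
    (0.0, 12.0, 0, 50),        # Good
    (12.1, 35.4, 51, 100),     # Moderate
    (35.5, 55.4, 101, 150),    # Unhealthy for sensitive groups
    (55.5, 150.4, 151, 200),   # Unhealthy
    (150.5, 250.4, 201, 300),  # Very unhealthy
    (250.5, 350.4, 301, 400),  # Hazardous
    (350.5, 500.4, 401, 500),  # Hazardous
]

def pm25_to_aqi(conc: float) -> int:
    """Linear interpolation within the matching band - this is how a
    measured concentration becomes the single AQI number."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if c_lo <= conc <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo)
    raise ValueError("concentration outside the illustrative table")

print(pm25_to_aqi(35.0))   # ~99: Moderate on this table
print(pm25_to_aqi(100.0))  # ~174: Unhealthy on this table
```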
## 14. Closing and Next Steps
– Because time had run out, the instructor stopped the group discussion before full critique could continue.
– To preserve the workshop grouping for next class, he took a **photo of each group** so he could remember who belonged where.
– Final reminders:
– there are **two readings on E-course**
– next class will continue with **operationalization**
– the class will also begin **cost matrices** on Tuesday
# Student Tracker
– **Kurstanbekova Darina Kurstanbekovna** — Defined an alternative as what can be done instead of doing nothing; suggested test scores as an indicator of educational quality; later proposed using averages when measuring education across many students.
– **Shamyrbekov Erkhan Shamyrbekovich** — His earlier point about alternatives as routes to the same objective was referenced; in class he raised the issue that one school’s scores cannot represent a whole district.
– **Juya Ali** — Helped operationalize grades by tying “good” scores to A/B ranges; suggested employment after graduation as a possible proxy for education quality, prompting a detailed discussion of how to operationalize employment.
– **Orolova Altynai Sharshenalyevna** — Used as the example of why one strong student’s result cannot stand in for an entire region’s education quality.
– **Hawton Kyle “Abu Bakr” Jarred** — Shared an objective about reducing deforestation to zero, which the instructor used to demonstrate how to convert a broad concept into measurable units like trees or square kilometers.
– **Kambarova Adilia Sagynbekovna** — Was called on during the random presentation segment to share her problem statement/principal objective for discussion of abstraction and measurability.
– **Uncertain student name** (possibly **Erikova Aidana Erikovna**, not fully clear from the transcript) — Presented an objective about reducing environmental damage, which became a full-class example of breaking down “environment,” “damage,” and “reduce” into measurable terms.
# Actionable Items
## Immediate / Before Next Class
– Students should complete **two readings on E-course**.
– Continue the **group operationalization workshop** on Tuesday using the saved group photo.
– Have students revise **problem statements** and **principal objectives** to reduce abstraction and improve measurability.
– Ask students to define **indicators, thresholds, and units** more explicitly in their policy topics.
– Follow up with **Kadyralieva Bereke Azamatovna** on missed class content and group continuity.
## Next Class Preparation
– Prepare transition from operationalization into **cost matrices**.
– Revisit the importance of **time frames** in principal objectives.
– Consider additional examples of **good vs. weak proxies** beyond test scores and AQI.
– Reinforce that students should challenge whether an indicator **actually measures** the concept it claims to represent.
# Homework Instructions
## ASSIGNMENT #1: Complete the Two Assigned Readings for Tuesday
You need to complete the two readings assigned for next class so you are prepared to continue working on operationalization and begin discussing cost matrices. These readings support today’s lesson on moving from abstract concepts, such as “quality of education,” “air pollution,” or “environmental damage,” to concrete, measurable indicators that you can use in your policy memo.
Instructions:
1. Locate the two readings that have been posted for the next class session.
2. Read both texts in full before Tuesday’s class.
3. As you read, pay special attention to the idea of operationalization, which we discussed in class as the process of taking an abstract concept and explaining how it can be measured in the real world.
4. Review the examples from class as you read:
– a problem statement and a principal objective should be linked to one another,
– abstract terms such as “bad,” “better,” or “environmental damage” need to be made measurable,
– indicators or proxies should actually represent the concept you are trying to measure,
– weak operationalization can lead to weak policy analysis.
5. While reading, think about your own policy topic and ask yourself:
– What are the most abstract terms in your problem statement?
– What are the most abstract terms in your principal objective?
– How could you lower the level of abstraction and make those ideas measurable?
– What indicator could you use to show whether the problem is getting worse or better?
6. Pay particular attention to the second reading, keeping in mind that the class will move into cost matrices on Tuesday, which your professor specifically said will be the next topic.
7. Come to class on Tuesday ready to use the readings in discussion and group work, especially as you continue refining your problem statement, principal objective, and indicators.