Category: Policing

Three lessons in performance accountability from public education

Jared Knowles © www.jaredknowles.com

A lot of people ask me why I think building public accountability measures for police departments is possible. If police aren’t currently held accountable for their performance, why would communities begin to do that? How would we ever gather the data? Where would we begin?

The answer lies in public education.

In this post I share three lessons from education that we can use to inform the construction of a performance-based accountability system for policing. These are:

  1. Demand performance analytics at all levels
  2. Use initial performance metrics to spur demand
  3. Understand that analytics can and should be contested

Let’s look at each of these a little more closely.

Background

Twenty-five years ago schools were relatively unaccountable as well. Standardized tests were for students — not for measuring school performance — and schools reported a wide variety of unstandardized information touting their performance. Graduation rates had no common definition, either of what counted as a graduate or of who should be counted as a potential graduate (i.e. the denominator). There were no ratings systems for schools. And we certainly had much less insight into the different experiences of racial minorities and low-income students — students largely invisible in the education system. So what changed?
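
To see why the denominator matters, here is a minimal sketch in Python, with invented numbers, showing how the same school yields two different “graduation rates” under two defensible definitions of who counts as a potential graduate:

```python
# Minimal sketch with invented numbers: one school, two denominator
# definitions, two different "graduation rates."
graduates = 430          # hypothetical count of students receiving a diploma
seniors_enrolled = 470   # 12th graders enrolled in the final year
entering_cohort = 540    # students who started 9th grade four years earlier

# Definition A: graduates over final-year enrollment (ignores earlier dropouts)
rate_senior_based = graduates / seniors_enrolled

# Definition B: graduates over the entering 9th-grade cohort
rate_cohort_based = graduates / entering_cohort

print(f"Senior-based rate: {rate_senior_based:.1%}")  # 91.5%
print(f"Cohort-based rate: {rate_cohort_based:.1%}")  # 79.6%
```

Neither number is a lie, but without a common definition the two schools reporting them cannot be compared.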

A movement happened. Not a coordinated movement orchestrated by a small group of leaders, but a grassroots demand for a better understanding of our schools and what they were doing. This movement served many people and many different purposes, but its participants had one thing in common: a desire to better understand and contextualize the performance of public schools in order to improve their ability to shape the management of those schools.

Lesson 1: Demand performance analytics at all levels of government

Education researchers and public officials had been clamoring for better data for a long time — it’s what they do — but it wasn’t until school boards began feeling direct pressure from within that things started to change.[1] That local pressure for better data on public schools came from all sides: from community members wanting to reduce property taxes and rein in spending, neighborhood advocates wanting to highlight the deep racial inequalities in their schools, special education advocates demanding more visibility into how schools serve their students, and parents faced with an increasingly complex supply of schooling options to choose from. School boards were challenged to justify the outcomes of their students and found they were often unable to do so — even when they had the will. There were no common measures for comparison. Learning about neighboring and peer school districts was hard. Every district was an island, measuring its performance with its own system of measures.

Advocates, sometimes in concert with local school districts, appealed to higher levels of government to solve this coordination challenge. They sought a wide range of reforms to make performance measurement in education easier: federal reporting requirements and infrastructure investments, state-level standardization of accountability measures and funding for assessments, and tools to compare the outcomes of schools in the community to those of neighboring school districts.

Pressure on all fronts created forward progress. Pressure on each front created positive feedback loops to sustain momentum on other fronts. States coordinated assessment programs for all their districts and the Federal government contributed to funding these assessments. The Federal government invested in data systems and data infrastructure for states, and states created tools to ease reporting burdens on schools and provide comparative information back to them.

But how were all these groups able to build success in their efforts?

Lesson 2: Use initial performance metrics to spur demand

Success in education came from the pragmatic approach of starting simple with available data, and building the case for better metrics from there. To understand why, a brief aside on performance analytics is necessary.

Performance analytics — done well — level the playing field between insiders in the government agency and outsiders consuming or funding the agency’s work. K-12 education in the United States is highly technical work enmeshed in a dense web of expectations; state and federal regulations; and ethical, moral, and political concerns. Lots of people care about it. And those delivering public education — school district employees — have inside knowledge of how well they are doing, where they are struggling, and what their needs are.

Data has a wide appeal because it is an efficient way of transmitting some of this insider knowledge back out into the public sphere. Instead of having to interview and evaluate statements from lots of people inside the agency, we can agree on some pre-set attributes that are important about the agency and measure them quantitatively.

These measures are efficient statements about how the agency is doing. And they can be used by each and every party with an interest in public education to make their case. Because of their power and efficiency, it doesn’t take long for public interest groups to rely on and demand more performance metrics (Stone 2002).

Take a school district with a graduation rate of 87%, above the national average. The school district and supporters can claim that they beat the national average — the school is doing well. One set of critics can point out that, for the cost of education in the school district, the district should be doing better. Another set of critics can point to the graduation rate for low-income students at 78% and demand the school district do better.
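
Each of these claims rests on the same underlying data. A minimal sketch, with invented student counts chosen to match the rates above, shows how one metric feeds several competing arguments:

```python
# Invented student counts chosen to roughly reproduce the rates above.
groups = {
    "low_income":     {"graduates": 156, "cohort": 200},  # 78%
    "not_low_income": {"graduates": 714, "cohort": 800},  # ~89%
}

total_graduates = sum(g["graduates"] for g in groups.values())
total_cohort = sum(g["cohort"] for g in groups.values())

# One data set, several talking points: the district cites the overall
# rate, while critics cite cost or the subgroup gap.
print(f"Overall rate: {total_graduates / total_cohort:.0%}")  # 87%
for name, g in groups.items():
    print(f"{name}: {g['graduates'] / g['cohort']:.0%}")      # 78% vs. 89%
```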

The point is — the metric anchors the ability of the public to participate in performance management on an equal footing with the agency itself.

But even more — agencies themselves come to rely on these public reports because:

  1. Agencies are often flying blind about their own performance on many metrics that matter.
  2. Agencies are deeply curious about who the leaders in their field are and how they can emulate them.
  3. Performance analytics are a powerful tool agency leadership can use to spur reform, identify new strategies, and inspire change.

And the public, in the end, wants performance as well (Heinrich and Marschke 2010).

Lesson 3: Understand that analytics can and should be contested

You may already have found some things to quibble with in the school example. Is a graduation rate the right measure? Don’t graduation standards vary? Should schools produce a high average graduation rate or try to graduate all student groups at the same rate?

The fact is that performance analytics alone are insufficient to hold public services accountable. But the discourse that surrounds them is critical to that accountability (Behn 2003). The analytics themselves become contested definitions that competing interests can discuss and debate, and in so doing, advance our understanding of which performance characteristics we value and how much we value them (Stone 2002).

Debates about measures are not just academic arguments over trivial things — they are debates about who and what counts. As Stone (2002) writes:

“People change the activities that are being measured… The exercise of counting makes them notice things more. Measurers change the way they count because their measures affect how they, not only the measured, are treated. The things being counted become bargaining chips in a strategic relationship between the measurers and the measured, so that at different points in the relationship, there are very different pressures to reveal or conceal. The choice of measures is part of strategic problem definition, and the results of the measures take on their political character only with the costume of interpretive language.”

Analytics reshape the public discourse. They need not dominate it, but, by being part of it, they clarify thinking, help set priorities, and become an important way for communities to decide on what matters to them.

Analytics do not tell us everything we need to know to make our policing better, but they do open the door to a more complex discourse that invigorates the public’s ability to have a say in what matters and why.

How do we bring this to policing?

So how do we get the ball rolling in policing? By using the data we already have to build measures. That’s how it started in education. Inevitably (and probably correctly), new objections will be raised about the data and about the measures. But it is through these initial objections, these debates about what the measures mean, that the paradigm shifts and we move forward.
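
As a concrete starting point, here is a minimal sketch of the kind of first-pass measure a community could compute today. The clearance-rate metric, the file name, and the column names are assumptions made for illustration, not a claim about the right measure; arguing over what “cleared” should mean is exactly the kind of debate we want.

```python
# A first-pass policing measure built only from records most departments
# already keep: the share of reported offenses marked cleared, by offense
# type. The file and column names are hypothetical; real incident exports
# will differ, and that mismatch is itself worth debating.
import csv
from collections import defaultdict

counts = defaultdict(lambda: {"reported": 0, "cleared": 0})

with open("incidents.csv", newline="") as f:  # hypothetical export
    for row in csv.DictReader(f):
        tally = counts[row["offense_type"]]
        tally["reported"] += 1
        if row["clearance_status"] == "cleared":
            tally["cleared"] += 1

for offense, tally in sorted(counts.items()):
    rate = tally["cleared"] / tally["reported"]
    print(f"{offense}: {rate:.0%} cleared of {tally['reported']} reported")
```

A measure this crude will draw objections immediately, and that is the productive part.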

And it is this debate we need to spark in communities across the country.

References

Behn, Robert D. 2003. “Why Measure Performance? Different Purposes Require Different Measures.” Public Administration Review 63 (5): 586–606. https://doi.org/10.1111/1540-6210.00322.

Heinrich, Carolyn J., and Gerald Marschke. 2010. “Incentives and Their Dynamics in Public Sector Performance Management Systems.” Journal of Policy Analysis and Management 29 (1): 183–208. https://doi.org/10/bkgwng.

Stone, Deborah A. 2002. Policy Paradox: The Art of Political Decision Making. Rev. ed. New York: Norton.


[1] This is not my area of expertise, but I wouldn’t be surprised to find out that this grassroots effort was funded and subsidized by philanthropists advancing this agenda. That’s another topic for another day.