CompStat and Campbell’s Law

As you may have seen in the news, the New York City Police Department is conducting a comprehensive review of its crime stats.  Over the past several months, reports have emerged that precinct commanders felt pressured to downgrade serious crimes to less serious ones, both to look good at their CompStat sessions and to keep the overall crime rate from climbing.

This case brings to mind an oft-forgotten idea in public policy called Campbell’s Law.  Campbell’s Law, posited by the American social scientist Donald Campbell, holds that, “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”  As you can imagine, Campbell’s Law has been cited by many others in conversations about high-stakes test scores, but it is important to remember that singular performance indicators drive bad behavior in just about ALL sectors.  Look no further than quarterly profit statements at Enron and WorldCom or loan sales at your favorite mortgage broker (if they are still around).

So did CompStat and the drive to keep crime low in New York City “distort the social processes it was intended to monitor”?  I don’t think we’ll know the answer for a while, but as we’ve begun developing a statewide Stat process for the Race to the Top work in Rhode Island, we’ve been reminded of what the Stat process does in a new environment.  Whether it is CompStat as it began in New York under Bill Bratton or any Stat process we develop with a client, the purpose is twofold.  First, it places the attainment of specific results at the forefront of a manager’s thinking as they make decisions about tactics, strategies, and resource deployment.  Second, it uses the data itself, in many disaggregated forms, to inform and enrich the quality of our decisions and to learn objectively from past hypotheses about what works.  No one would argue that using data in this way is bad management or that it “distorts the process it is intended to monitor.”  But at the end of the day, the use of data in management does not cure an organization of unsavory behavior; it simply changes the leverage points where it can happen.
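
To make the “disaggregated forms” point concrete, here is a toy sketch in Python.  The numbers are fabricated and the precinct and offense names are hypothetical, but it shows how a flat headline total can hide exactly the reclassification a Stat session is meant to surface:

    # Toy sketch: all numbers fabricated, precinct/offense names hypothetical.
    # A flat citywide total can mask a shift between serious and less-serious
    # classifications -- the kind of movement disaggregation is meant to expose.
    from collections import defaultdict

    # (precinct, offense, year) -> reported incidents (made up for illustration)
    reports = {
        ("Pct 1", "felony assault",      2009): 120,
        ("Pct 1", "misdemeanor assault", 2009): 300,
        ("Pct 1", "felony assault",      2010): 80,   # down sharply...
        ("Pct 1", "misdemeanor assault", 2010): 340,  # ...while this rises
    }

    totals = defaultdict(int)
    for (precinct, offense, year), count in reports.items():
        totals[year] += count             # the headline number
        totals[(offense, year)] += count  # the disaggregated view

    print("Citywide total 2009 -> 2010:", totals[2009], "->", totals[2010])
    for offense in ("felony assault", "misdemeanor assault"):
        print(offense + ":", totals[(offense, 2009)], "->", totals[(offense, 2010)])

Run as-is, the headline total reads flat (420 to 420) even as felony assaults fall and misdemeanor assaults rise by the same amount — the aggregate number alone tells you nothing about where to look.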

We’ve also been reminded of the importance of multiple measures.  Whether it is value-added in teacher evaluation, test scores in AYP decisions for schools, or “crime” in CompStat, one measure never tells the whole story.  A good Stat process marries outcome metrics with survey, financial, and observational information to ensure that what gets measured not only gets done, but is what you want. (BR)
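
As a toy illustration of that marriage of measures (again in Python, with fabricated numbers and an arbitrary divergence threshold), a Stat team could routinely check whether the reported-crime trend and an independent measure, such as a victimization survey, tell the same story:

    # Toy sketch: fabricated numbers, hypothetical precinct names.
    # "Multiple measures": compare the reported-crime trend with an
    # independent source and flag precincts where the two diverge.
    reported_change = {"Pct 1": -0.15, "Pct 2": -0.02, "Pct 3": -0.12}
    survey_change   = {"Pct 1":  0.01, "Pct 2": -0.03, "Pct 3": -0.10}

    DIVERGENCE = 0.10  # arbitrary threshold, chosen only for this illustration
    for pct in reported_change:
        gap = abs(reported_change[pct] - survey_change[pct])
        if gap > DIVERGENCE:
            print(f"{pct}: reported {reported_change[pct]:+.0%} vs survey "
                  f"{survey_change[pct]:+.0%} -- worth a closer look")

Where the two measures diverge, the point is not to accuse anyone but to ask better questions at the next Stat session.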

3 Responses to “CompStat and Campbell’s Law”

  1. Rehva Jones Says:

    Corruption and “fudging the data” happen in environments where there is a lack of vulnerability-based trust. Trust is the basis for any environment that depends on people working towards the same goal to achieve success (however success is defined). When a stat process is introduced in an organization that has not achieved a high degree of trust, or one that leans towards a “shame and blame” culture, the tendency will always be to “look good” vs. “do good.” That’s why it is vital to ensure an entity is “ready” culturally to undertake a stat process. Stat impacts an organization’s cultural capacity in addition to its data-gathering and decision-making capacities. Implementing a stat process absent a focused effort to strengthen and build an organization’s culture of trust is, at best, ill-advised.

  2. Zac Morford Says:

    Great article that raises some important issues. I think that part of the problem is that in response to this distortion, people simply retreat from data-driven management rather than working hard to create a more nuanced approach. My thoughts on some of these nuances are below (based on my work in DC Public Schools).

    What I have seen is that the rate of distortion is directly proportional to the stakes associated with the data. Direct accountability tends to increase the incentives to distort while casual conversation to identify needed supports tends to minimize this distortion. This makes it very important to determine at which levels of the organization to apply pressure and at which levels to apply support. It seems that the closer we are to student-teacher interactions in the classroom, the less pressure we want to put around the data.

    The challenge to this is working with managers to not simply transfer the heat that they feel from above to their employees below. As the article mentions, the benefits of data-driven decision-making can be easily undermined by bad managers. We’ve got to do a better job of training program managers to maximize the effectiveness of their teachers or employees through support rather than intense accountability. I don’t think we’ve figured this one out yet. Maybe there are protocols around data conversations that could help improve people’s behavior?

    Even with all of these risks, there are times when the benefits of clarifying priorities, aligning focus, and managing with data outweigh the risk of distorted data. If these risks can be reduced via mechanisms like using multiple measures or creating significant disincentives to alter data, then the choice of where to use it becomes easier.

  3. As important as stats are for reporting discrete and significant accountability measures, they cannot tell the whole story of an organization or point the way towards sustainable change. Numbers show outcomes and are certainly a beginning toward understanding where to focus remediation. Achieving larger, more significant, and sustainable change also requires uncovering and retelling the “story” of the organization. It is only through this deeper level of understanding of organizational culture that the heart of the problem can be revealed. I suggest reading Seymour Sarason’s classic book, “The Culture of the School and the Problem of Change,” for more on this.

    Recognizing the significance of organizational culture has led to bringing this additional focus into the accountability system used by reviewers for the national Office of Head Start (OHS). Reviewers are expected to combine the data that makes up an organization’s “OHS performance profile” with information gathered through focused observations and conversations with key players. OHS reviewers use this combined data to “tell the organizational story.” Recommendations to the grantee for targeted improvements are then made from within the cultural context of its programs.
