Archive for performance management

UPD’s Theory of Performance Management

Posted in Stat on September 22, 2014 by updconsulting

Performance Management seems to be one of the new buzzwords in the marketplace.  Much of this has been driven by the US Department of Education leveraging “Performance Management” as a means of sustaining many of the reforms initially implemented through Race to the Top.  More broadly, performance management is seen as a way to connect the day-to-day work of organizations to the outcomes they are trying to achieve.

So what exactly is performance management?  It is “a process that methodically and routinely, monitors the connection, or lack thereof, between the work that we are doing, and the goals we seek… A process that compares what is to what ought to be.”

So let’s take a moment to reflect.  The latter point, comparing what is to what ought to be, is probably what most people think of when they think of performance management.  As long as they consistently ask themselves “Did we do what we said we were going to do?”, and do so on some kind of regular basis, they consider their performance management process a success.

Our area of expertise is implementation.  But the problems that we look to address are most often adaptive problems rather than technical problems.[1]  Heifetz describes technical problems as issues that have a clear resolution path (e.g., fixing a broken leg), and adaptive problems as those where we are not sure how to solve the problem and may need to try multiple strategies (e.g., fixing Obamacare).


So then what is the difference between Performance Management and Stat?  Stat is UPD’s implementation of performance management: it allows us to measure the progress of our work in an adaptive manner, and it has proven to be an effective way of putting performance management into practice.

[1] http://www.youtube.com/watch?v=UwWylIUIvmo

Can Early Teacher Evaluation Findings Help Change the Debate?

Posted in Race to the Top, Teacher Evaluation System on April 30, 2013 by updconsulting

Over the past few years, states and school districts across the country have devoted significant resources to the design and roll-out of new teacher evaluation systems.  Driven at least in part by requirements attached to Race to the Top funding, the new systems have inspired heated debate over the efficacy of factoring student achievement data into a teacher’s performance assessment. The New York Times recently shared some initial findings from states that have launched new evaluation models, including Michigan, Florida, and Tennessee, reporting that the vast majority of teachers (upwards of 95 percent in all three states) were rated as effective or highly effective. Although the analysis of these numbers has only just begun, the Times reports that some proponents of the new evaluation models admit that the early findings are “worrisome.”  If the trend continues, and the new evaluation systems reveal no significant departure from more traditional methods of evaluation, we can reasonably anticipate that a lot more people will look at the complicated data analysis driving teacher evaluation systems linked to student achievement data and ask, “What’s the point?”

It’s a good question, really, and one that probably hasn’t gotten enough thoughtful attention in the midst of the controversy: what is the point of linking student achievement data to teacher evaluations?  Should we take it for granted that a primary goal, if not the primary goal, of these efforts is to identify and eliminate bad teachers?  If so, then these early findings should be a cause for concern, especially given the time and money being spent to collect and analyze the data.  If replacing bad teachers with good ones is the magic bullet for public education reform, it will take a pretty long time at this rate.

Of course, even opponents of the new evaluation systems would probably admit that the magic bullet theory is an oversimplification. It is also much too early to extrapolate from these numbers any meaningful conclusions about the actual number of ineffective teachers, or even about the accuracy of the evaluations themselves. What these findings might do is allow us to finally broaden the scope of our national conversation about how the linkages between teachers and students could actually drive education reform.  States and school districts implementing new evaluation systems have tried, with varying degrees of success, to communicate that linking student achievement data to teacher practice isn’t just about punitive measures: it also has important implications for improving professional development and teacher preparation programs, by identifying shared practices linked to positive student achievement and replicating those practices in classrooms across the country. But that message is often drowned out by the anxiety surrounding the punitive side of evaluation, an anxiety reinforced by public struggles with local teacher unions. If nothing else, these early findings might create an opening in the current debate for a more thoughtful discussion about the broader possibilities of linking teacher practice to student growth.

-Jacqueline Skapik

The Baltimore Consensus

Posted in Human Capital Management, Performance Measurement, Stat on February 9, 2011 by Julio

In 2008, the Copenhagen Consensus Center asked a group of the world’s top economists to identify optimal social “investments” that could best help reduce malnutrition, broaden educational opportunity, slow global warming, cut air pollution, prevent conflict, fight disease, improve access to water and sanitation, lower trade and immigration barriers, thwart terrorism, and promote gender equality.

The experts, including five Nobel laureates, examined specific measures to spend $75 billion on more than 30 interventions and identified the most cost-effective: increased immunization coverage, initiatives to reduce school dropout rates, community-based nutrition promotion, and micronutrient supplementation.  Besides being resource efficient, some of these measures also carry a very low cost per user: providing Vitamin A for a year costs as little as $1.20 per child, while providing zinc costs as little as $1.

This got us at UPD thinking: what would a Copenhagen Consensus in American K-12 look like?  After all, in an age of severe budget pressures, we need to know the best measures that boards and superintendents can implement to help boost student performance.  And it would be great if those high impact measures were low cost, so we pushed ourselves to find ideas that would not require vast new resources.

Our top nine ideas share two themes: leveraging existing data and technology investments to improve instruction, and enhancing human capital management.  None of our suggestions requires new spending, though all of them require changes in culture and how time is used. Here are our top nine:

  1. Routinely examine formative assessment data with groups of teachers, principals, and curriculum and instruction managers.  Provide the data ahead of time.
  2. Implement human capital reforms that bring mutual consent to all teacher hiring.
  3. Integrate student results into the performance evaluations of teachers.
  4. Establish performance management/accountability processes at all levels of the organization, from central office functions to RTI (Response to Intervention) in classrooms.
  5. Improve targeting of professional development needs and resources in order to make average teachers better.
  6. Decentralize dollars and control to the school level, coupled with changes in how principals are hired and evaluated (more like coaches in professional sports).
  7. Systematically capture data on student, teacher, principal participation in different interventions to effectively discern contributors to high performance.
  8. Leverage technology to automatically provide parents and guardians with content that helps them supplement the scope and pacing of student curriculum.
  9. Use predictive analytics to uncover students with likely future behavioral difficulties very early, and mount high-impact interventions before it becomes too difficult (see the sketch after this list). (JG)
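
To make item 9 concrete, here is a minimal sketch of what such predictive analytics could look like. It trains a simple logistic regression on a few hypothetical historical features (attendance rate, prior incidents, average course mark) and flags current students above a risk threshold. The features, the data, and the 50 percent threshold are all illustrative assumptions, not a validated model.

```python
# Illustrative sketch of idea #9: a toy early-warning model.
# Features, data, and threshold are assumptions, not a validated model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical records: [attendance_rate, prior_incidents, avg_course_mark]
X_train = np.array([
    [0.95, 0, 88],
    [0.70, 3, 61],
    [0.88, 1, 75],
    [0.60, 5, 55],
    [0.92, 0, 81],
    [0.75, 2, 66],
])
# 1 = student later had serious behavioral difficulties, 0 = did not
y_train = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score current students; flag anyone above an assumed 50 percent risk threshold
current = np.array([[0.78, 2, 63], [0.97, 0, 90]])
risk = model.predict_proba(current)[:, 1]
for i, r in enumerate(risk):
    print(f"student {i}: risk={r:.2f} flagged={r >= 0.5}")
```

The point is not the particular model but the routine: score every student early in the year, and route the flagged ones to counselors and interventions before problems compound.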

What are your picks?

Predicting Crime with CompStat?

Posted in Performance Measurement, Race to the Top, Stat on January 25, 2011 by updconsulting

A great article in Slate from Christopher Beam highlights a CompStat program in Los Angeles that will begin to use predictive statistics alongside traditional CompStat figures.  CompStat traditionally tracks a slate of common crime stats for each precinct commander every two weeks, focusing that commander on the results of his or her tactics over that period.  This data normally includes statistics on crime incidents like robberies, assaults, and homicides, as well as crime-related measures like complaints and arrests.  The idea is to diagnose why crime seems to have happened and to deploy police resources to mitigate those factors.

But as the article points out, the process looks backwards.  In Los Angeles and Santa Cruz, statisticians have crunched the numbers and learned that certain events predict the occurrence of crime with some regularity.  A home robbery ups the odds that a repeat robbery will happen in the area.  A gang shooting increases the odds of a reprisal.  And as research continues, the LA police are bound to find other predictors that precinct commanders can use to strategically deploy their forces and keep their communities safer.

Who knew policing would take some cues from education after all these years of CompStat inspiring SchoolStat? Since 2007, we’ve seen similar predictive work with the use of early warning indicators to predict the risk of students dropping out of high school.  Based on research from the Consortium on Chicago School Research (CCSR), a high school student’s course performance is the single most predictive factor in whether that student will complete high school.  Specifically, CCSR concluded that Chicago students who finish ninth grade with at least ten semester credits (five full-year course credits) and no more than one semester F in a core course are nearly four times more likely to graduate than those who do not.  CCSR used this finding to create an On-Track to Graduate indicator that flags students who complete ninth grade with five credits in core courses and no more than one semester F.  Principals use this data to deploy counseling resources.
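
As a rough sketch, the CCSR rule reduces to a simple boolean check on a ninth grader’s transcript. The record layout below is an assumption for illustration; CCSR’s actual data model is certainly richer.

```python
# Sketch of the CCSR On-Track to Graduate rule for a ninth grader:
# on track = at least five full-year core-course credits earned
# (ten semester credits) AND no more than one semester F in a core
# course. The record layout is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class NinthGradeRecord:
    core_credits_earned: float  # full-year course credits in core subjects
    core_semester_fs: int       # count of semester Fs in core courses

def on_track(record: NinthGradeRecord) -> bool:
    return record.core_credits_earned >= 5 and record.core_semester_fs <= 1

print(on_track(NinthGradeRecord(core_credits_earned=5.0, core_semester_fs=1)))  # True
print(on_track(NinthGradeRecord(core_credits_earned=4.5, core_semester_fs=0)))  # False
```

The power of an indicator like this lies in its simplicity: it runs off data every district already collects, which is what makes it cheap to deploy at scale.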

Rhode Island has a statewide early warning indicator planned for its Race to the Top program to address dropouts as well, and it will prove a great addition to the EdStat process driving the state’s RTT reforms.  Where else can you see predictive indicators taking hold? (BR)

CitiStat and Law and Order

Posted in Performance Measurement, Stat on December 18, 2010 by updconsulting

Did you know that the Law and Order guy, Sam Waterston, did a video on CitiStat?  Check out this video with real footage of meetings and interviews with the originators: then-Mayor Martin O’Malley, Michael Enright, and Matt Gallagher.

Don’t Cross the Streams

Posted in Human Capital Management, Performance Measurement, Race to the Top, States on December 10, 2010 by updconsulting

A quick report from the trenches. We’ve spent the past few weeks in Rhode Island helping stand up the more complex projects in its Race to the Top plan, and helping the state performance-manage the many complex streams of work that will run in parallel.

Of note, we’ve been helping develop Rhode Island’s educator evaluation program, which will bring all teachers in the state onto a common evaluation platform incorporating observations, goal attainment, and multiple measures of student growth. As the state has worked to include the maximum array of grades and subjects in the process, we’ve run into a challenge that I am sure many states will see as well.

If a state or district uses value-added in a teacher’s evaluation and the state test feeds the model, the grades and teachers the value-added model can cover are limited to roughly 15 to 20 percent of teachers. That leaves a lot of teachers out of the program. In an effort to include more grades and subjects, many states are scrambling to find more assessments to feed into the model. In this search, some states and districts are considering the use of formative and interim assessments that track student progress against state standards or curriculum throughout the year.

There is a big problem with this. Formative data is held separate from summative data for very good reasons. Summative data is designed to tell you which students met academic standards for AYP (Adequate Yearly Progress) designations. When students take summative tests, teachers teach them “test-taking strategies” to help them do the best they can on items where they are not completely sure.

The opposite is true of formative assessments. Teachers use formative assessments to understand the connection, or lack of connection, between what they are teaching and what their students are learning. It is meant to be honest and accurate. If a student does not know the answer, the teacher tells them not to guess. The result is a more accurate picture of the specific areas of strength and weakness where the teacher can re-tool instruction.

So imagine for a second that these states and districts incorporate formative and interim data into a teacher’s evaluation. Yes, it might give a good picture of what a teacher’s students know, but you have just upended the purpose of the formative assessment and destroyed its value. If teachers know a formative or interim assessment will be part of their evaluation, they will coach their students to represent that they know things they do not, and the teachers will lose a powerful tool for meeting the very goals the summative test is trying to measure.  Rhode Island realized this early on.

It’s a lose-lose proposition, and states and districts looking to incorporate student data into non-tested grades and subjects should resist the temptation to cross the streams. (BR)

You should have been a doctor!

Posted in Human Capital Management, Performance Measurement on November 11, 2010 by updconsulting

My doctor friends are always amused when education folks hold up the medical field as the exemplary profession. But medical envy is rampant across classrooms, offices, and the blogosphere, where the frustrated opine: “If only our society held teachers in the same regard as doctors…” and “If only teachers were paid like doctors…” and the politically-charged, “If only teachers had self-organized as professional associations, rather than adopt the industrial union model…” (see Rotherham’s blog post for a harsh snippet comparing the AMA and the NEA).

Another example – Dr. Atul Gawande’s book Better: A Surgeon’s Notes on Performance, which describes the challenges of increasing performance in the medical field, is now required reading in some graduate-level education policy classes.

On the teacher-prep side, a recent National Council on Teacher Quality (NCTQ) study compares teacher prep programs in Illinois, with pretty sobering results. The NCTQ work is similar to an early-20th-century study of medical schools, a study which contributed to the eventual shuttering of almost half of all medical schools due to abysmal performance.* Ben Carey also has an interesting take on preparation programs across sectors.

And coming up next month over in the data-driven part of town, Education Sector will tell us what can be learned from the medical field (not to mention Google and FarmVille!) about data collection and use. Their seminar, Next Decade of Education Data, takes place Dec. 7 in Washington, DC.

And so, readers out there – do you agree that the education field has much to learn from the medical field, especially around performance, preparation programs, and data?

*UPD is actually working with the NCTQ to bring this study national. More on that to come. (JF)