Archive for the Performance Measurement Category

Congratulations NCTQ on your Teacher Prep Review in US News and World Report!

Posted in Human Capital Management, Performance Measurement, Stat, Teacher Evaluation System on June 18, 2013 by updconsulting

Hopefully on your drive to work today, you heard NPR's story on the Teacher Prep Review just released by the National Council on Teacher Quality. US News and World Report will publish the results in its next issue. Just like US News's college and grad school rankings, this study rates how well our nation's educator prep programs prepare teachers for the 21st-century classroom.

UPD supported NCTQ in this project by helping them develop RevStat, a performance management process to stay on track and continuously monitor the quality of their analysis. You can read the report here and learn more about UPD’s stat process here. (BR)

Pure as the Driven Data

Posted in Human Capital Management, Performance Measurement, Stat on September 25, 2012 by updconsulting

I like numbers. Numbers are facts. My weight scale reading for today: 165 lbs. Numbers are objective and free of emotion. My pedometer tells me that I ran three miles today. However, as objective and factual as numbers may be, we still inject meaning into them. The weight scale reading, for example, although 10 pounds lighter than last month's, still crosses the threshold of "overweight." And that four-mile hike I took around Lake Montebello meant a cherry-flavored slushy at Rita's!

Which brings me to the school reform effort centered on numbers. Yes, I am talking about data-driven instruction: a way of making teaching less subjective and more objective, less experience-based and more scientific. In this era of increased accountability, nearly every principal has begun using data to help drive instructional practices, and principals in rapidly improving schools often cite data-driven instruction as one of the most important practices contributing to their success.

Data-driven decision making requires an important paradigm shift for teachers—a shift from day-to-day instruction that emphasizes process and delivery in the classroom to a teaching culture that is dedicated to the achievement of results. Educational practices are evaluated in light of their direct impacts on student learning. School organizations that are new to the focused, intentional analysis of student and school outcome data quickly find that most teachers and other instructional support staff are unprepared to adopt data-driven approaches without extensive professional development and training.

If educators constantly analyze what they do and adjust to get better, student learning will improve (Schmoker, M., 1999). By focusing initially on small, rapid improvements and then building upon those toward an ongoing process of continuous reflection about classroom instruction and student learning outcomes, teachers across the country are significantly impacting student achievement. When these teachers are also able to participate in professional learning communities and collaboratively identify and implement effective, strategic instructional interventions, their schools are not only surviving this new wave of accountability but indeed thriving in it.

CR

Managing for Mastery

Posted in Human Capital Management, Performance Measurement, Race to the Top, Value-added and growth models on October 25, 2011 by updconsulting

We have blogged about the topic of that last video post before, including a reference to Herzberg's classic "One More Time: How Do You Motivate Employees?" And just like Herzberg, Daniel Pink points out that the three biggest factors that motivate people once the money is right are Autonomy (the desire to be self-directed), Mastery (the desire to get better at something), and Purpose (the desire to do something good). I ran across another article the other day about how Google does human capital management, and the same dynamic came through. Doing a good job seems to be the thing that we want. Companies that align their work and their purpose are flourishing. (Can you say "Skype, Apple, and Whole Foods"?)

Given that our work is education, I am sure you can guess where this is all going. Race to the Top, the Gates Foundation, and a stalwart group of economists within the education reform sphere keep trying to incentivize high-performing teachers (as measured by student growth) with bonus pay. We've talked about this before so we won't belabor the point, but there is no evidence that pay motivates higher performance when you're talking about complex work that requires thought, and if you've watched the video in our last post, you now have another data point.

But what DOES seem to be motivating? Mastery, Autonomy, and Purpose. Education has at least one of these going for it right out of the gate: Purpose. And if you talk to teachers and principals like we do, you know that there is nothing more demotivating than having the "instructional coach" or "state observer" come into your classroom, watch your instruction for five minutes, and tell you what you should be doing better. The autonomy variable is definitely at play there. To us, the trick in education, and with principals and teachers specifically, is this: how do we foster Mastery through our management?

Here is what we have seen: when student assessment data or classroom observation data is presented in a disaggregated way (rather than as summary ratings) and is turned around quickly after collection (no more than one week), educators are much more likely to see the value of the data as a way to get better (or gain mastery). But when the turnaround of the same data is slow, or the emphasis is on an aggregated "rating," it becomes deeply demotivating, and in many cases fuels the political fire to slow down or stop the district's or state's reform efforts.
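
To make the contrast concrete, here is a minimal sketch, using an invented rubric and made-up scores, of the same observation data presented both ways:

```python
# Minimal sketch of aggregated vs. disaggregated feedback.
# The rubric domains and scores are hypothetical.
import pandas as pd

observations = pd.DataFrame({
    "domain": ["questioning", "pacing", "checks_for_understanding", "climate"],
    "score":  [3.5, 2.0, 2.5, 3.8],   # 1-4 rubric scale
})

# Aggregated view: one summary rating -- easy to report, hard to act on.
print("Overall rating:", round(observations["score"].mean(), 2))

# Disaggregated view: the same data broken out by domain, which tells
# the teacher *where* to focus (pacing, in this toy example).
print(observations.sort_values("score").to_string(index=False))
```

The first view invites a judgment; the second invites a conversation about practice, which is exactly the difference we see in the field.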

If purpose, mastery, and autonomy yield higher performance among teachers and principals, what does this then mean for the work of managers at the district level? And for the program designers at the state? We’d love to hear your opinion. (BR)

Motivation Animation

Posted in Human Capital Management, Performance Measurement, Race to the Top, States, Value-added and growth models on October 19, 2011 by updconsulting

Every once in a while, that friend who sends you three forwards a day hits on something interesting.  The other day, I received a link to a YouTube video from RSA: a very entertaining visual walk-through by Daniel Pink of the point we made on this blog about a year ago.  Enjoy! (BR)

Value-Added Data and Special Education

Posted in Human Capital Management, Performance Measurement, States, Uncategorized, Value-added and growth models on May 13, 2011 by updconsulting

At a gala for the American Association of People with Disabilities in March, Education Secretary Arne Duncan affirmed the current administration’s commitment to maintaining high expectations for special education populations, noting that “students with disabilities should be judged with the same accountability system as everyone else.” While most educators would readily support this goal, they would also probably tell you that achieving it is a lot easier said than done—especially when it comes to using student achievement data as a factor in evaluating special education teachers.

In an education reform landscape that seems saturated with increasingly complex questions about accountability systems (particularly around the use of value-added models in educator evaluation), determining where special education students and teachers fit into those systems poses some of the most complex questions of all. So what progress have we made in determining how value-added data should be used to measure achievement for special education students? The answer seems to be…not that much.

There are plenty of obvious reasons why value-added models pose fundamental problems in the special education world. One potentially insurmountable obstacle is the lack of standardized test scores: most value-added models require at least two years' worth of test data for each student, which makes it nearly impossible to compute value-added estimates for students with severe cognitive disabilities who qualify for their state's alternate assessment. Alternate assessments, which were mandated as part of the reauthorization of IDEA in 1997, are scored on completely different scales than the state standardized tests. While some states have attempted to rescale the scores and create comparable data for value-added analysis, most have chosen to exclude this group of students completely.
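
For readers who have not seen the mechanics, here is a deliberately bare-bones sketch of a value-added calculation, with made-up scores and none of the covariates or shrinkage adjustments real models include. It also shows why a student without a prior-year score on the same scale simply falls out of the analysis:

```python
# Bare-bones value-added sketch with hypothetical data. Real models add
# student, classroom, and school covariates plus statistical shrinkage.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "teacher":     ["A", "A", "B", "B", "B"],
    "score_prior": [610, 640, 590, 630, 615],   # year t-1 scale score
    "score_now":   [640, 665, 600, 650, 628],   # year t scale score
})

# Predict this year's score from last year's: the "expected" result.
slope, intercept = np.polyfit(df["score_prior"], df["score_now"], 1)
df["expected"] = intercept + slope * df["score_prior"]

# A teacher's "value added" is the average residual of her students.
df["residual"] = df["score_now"] - df["expected"]
print(df.groupby("teacher")["residual"].mean())

# A student with no prior-year score on the same scale (for example, an
# alternate-assessment taker) has no "expected" value and drops out.
```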

Assessment experts have also pointed out that the results alternate assessments yield lack the "fine tuning" needed to complete value-added calculations with confidence. Although there is a strong push by the US Department of Education to substantially reduce the number of students with disabilities taking the alternate assessment (a push expected to be backed by the reauthorization of the Elementary and Secondary Education Act coming next fall), it will be years before states even have the option of including students from this group in their value-added calculations.

The challenges aren’t limited to using value-added data to measure progress for special education students who are taking the alternate assessment. A report by the National Comprehensive Center for Teacher Quality issued last July identified a number of obstacles that impact a wider group of students, including the fact that researchers have yet to identify an appropriate way to account for the impact of testing accommodations on test scores of special education students who take the regular state test.

Without a way to control for the impact of testing accommodations on student performance, the testing data from this group of students is difficult (if not impossible) to use for drawing precise conclusions about the "value" added by special education teachers. Although states continue to work tirelessly to develop educator evaluation systems that incorporate value-added data, efforts to find precise measures of student achievement for use in special educators' evaluations seem to be lagging behind. And while the challenges listed above (among a host of others) may be valid reasons why standard value-added models do not work with special education data, there is important work to be done in developing other precise measures of progress for special education students.

This is not to say that special education teachers are excluded from the emerging high-stakes evaluation models; they certainly aren't. States have developed a variety of alternatives to using value-added data for evaluating special education teachers, but the accuracy and precision of the information those alternatives provide has far less research backing than the models applied to general education populations. If the measures used to determine the effectiveness of special education teachers aren't as precise as those used for general education teachers, states and districts will be limited in their ability to use that data to drive meaningful professional development and support.

In a field that is historically lacking in quality professional development, it seems that states are missing a valuable opportunity to use their evaluation systems to make vast improvements in the quality of support special educators are afforded. If we aren’t doing enough to determine how to measure progress accurately for special education students, it means that we aren’t doing enough to support special education teachers in becoming more effective. (JS)

Why Naysayers on Teacher Pay for Performance are Missing the Mark

Posted in Human Capital Management, Performance Measurement on March 10, 2011 by updconsulting

It seems simple, right?  Offer bonuses to teachers who bring big gains in student achievement, and you'll get better performance out of them.  But a pack of studies over this past year seems to have rained on the teacher performance pay parade.  Back in June, a study from Mathematica on an initiative in Chicago found "no evidence that the program raised student test scores."  This study, like many of its type, compared the "value added" of teachers participating in the performance pay program against that of teachers who did not, as measured by student test scores.

In September, in one of the most comprehensive studies of its kind, the National Center for Performance Incentives at Vanderbilt concluded a three-year study of a performance pay program in Nashville and found that "students of teachers randomly assigned to the treatment group (eligible for bonuses) did not outperform students whose teachers were assigned to the control group (not eligible for bonuses)."

Just today, Ed Week reported that a study by Harvard economist Roland Fryer on a teacher pay program at more than two hundred schools in New York City found "no evidence that teacher incentives increase student performance, attendance, or graduation, nor do I find any evidence that the incentives change student or teacher behavior. If anything, teacher incentives may decrease student achievement, especially in larger schools."

Wow, that’s a lot of smart people supported by big research budgets saying that the education reform cabal wants to throw money in a hole.  Unfortunately, these studies missed the point and confused the policy question.  And, a simple look at management research on motivation over the last century would have confirmed the results of these studies before they began.

Each of these studies was constructed to test the notion that paying teachers who reach higher academic gains will incentivize those teachers to focus and work in a way that they otherwise would not.  The studies frame additional payment for performance as a motivator.  If it works in the private sector, it must work in education, right?

Except that a significant body of research has shown it doesn't actually work in the private sector.  Or any sector, for that matter.  In one of the original and classic works of organizational psychology, "One More Time: How Do You Motivate Employees?" (1968), Frederick Herzberg combined dozens of his own studies (spanning several sectors of the economy) with 16 other studies from around the world on what motivates people in the workplace.  Herzberg found the strongest motivational factors across the board to be "a sense of achievement," "recognition," and the "work itself."  Where was pay on this list?  Pay was never actually found to be a motivator.  In fact, the combined results of these studies showed money to be more demotivating than motivating, and on the scale of comparison with other factors, it sat at the low end of influence in either direction.

Unsurprisingly, the Chicago, Nashville, and New York studies found results consistent with Herzberg's.  That does not mean, however, that pay for performance is bad human capital policy in education.  Where performance pay will ultimately prove effective is not as a motivator but as a simple factor in labor economics.  Performance pay will not lead a teacher (who likely did not take the job under the promise of performance incentives in the first place) to suddenly leave her B game at home and bring her A game.  But higher pay will attract talented people into the teaching workforce who might otherwise not consider it.  Performance pay will make competition for the average teacher vacancy more intense, and that competition would be good for students.  And by framing pay for performance around the attainment of better-than-average student achievement goals, the performance bonus sends a market signal specifically to those who think they can hit the target. (BR)

The Baltimore Consensus

Posted in Human Capital Management, Performance Measurement, Stat on February 9, 2011 by Julio

In 2008, the Copenhagen Consensus Center asked a group of the world’s top economists to identify optimal social “investments” that could best help reduce malnutrition, broaden educational opportunity, slow global warming, cut air pollution, prevent conflict, fight disease, improve access to water and sanitation, lower trade and immigration barriers, thwart terrorism, and promote gender equality.

The experts, including five Nobel laureates, examined specific ways to spend $75 billion across more than 30 interventions and identified the most cost-effective: increased immunization coverage, initiatives to reduce school dropout rates, community-based nutrition promotion, and micronutrient supplementation.  Besides being resource efficient, some of these measures also carry a very low cost per user; with micronutrient supplementation, for example, providing Vitamin A for a year costs as little as $1.20 per child, while providing zinc costs as little as $1.

This got us at UPD thinking: what would a Copenhagen Consensus in American K-12 look like?  After all, in an age of severe budget pressures, we need to know the best measures that boards and superintendents can implement to help boost student performance.  And it would be great if those high impact measures were low cost, so we pushed ourselves to find ideas that would not require vast new resources.

Our top nine ideas share two themes: leveraging existing data and technology investments to improve instruction and enhancing human capital management.  None of our suggestions require new expenses, though they will require changes in culture and time use. Here’s our top nine:

  1. Routinely examine formative assessment data with groups of teachers, principals, and curriculum and instruction managers.  Provide the data ahead of time.
  2. Implement human capital reforms that bring mutual consent to all teacher hiring.
  3. Integrate student results into the performance evaluations of teachers.
  4. Establish performance management/accountability processes at all levels of the organization, from central office functions to RTI in classrooms.
  5. Improve targeting of professional development needs and resources in order to make average teachers better.
  6. Decentralize dollars and control to the school level, coupled with changes in how principals are hired and evaluated (more like coaches in professional sports).
  7. Systematically capture data on student, teacher, principal participation in different interventions to effectively discern contributors to high performance.
  8. Leverage technology to automatically provide parents and guardians with content that helps them supplement the scope and pacing of student curriculum.
  9. Use predictive analytics to uncover students with likely future behavioral difficulties very early and mount high-impact interventions before it's too difficult (a minimal sketch follows below) (JG).

What are your picks?
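
On item 9, here is a minimal sketch, with invented feature names and toy data, of what a predictive early-warning model could look like; a real implementation would need far richer (and real) historical data:

```python
# Hypothetical early-warning sketch: a simple logistic regression that
# flags at-risk students. All field names and numbers are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression

FEATURES = ["absences_q1", "core_course_fs", "suspensions"]

history = pd.DataFrame({
    "absences_q1":    [2, 14, 5, 20, 1, 9],
    "core_course_fs": [0, 2, 0, 3, 0, 1],
    "suspensions":    [0, 1, 0, 2, 0, 0],
    "had_difficulty": [0, 1, 0, 1, 0, 1],   # outcome observed later
})

model = LogisticRegression().fit(history[FEATURES], history["had_difficulty"])

# Score current students and surface the highest-risk first, so that
# interventions can be mounted before problems escalate.
current = pd.DataFrame({"student":        ["s1", "s2"],
                        "absences_q1":    [12, 3],
                        "core_course_fs": [1, 0],
                        "suspensions":    [1, 0]})
current["risk"] = model.predict_proba(current[FEATURES])[:, 1]
print(current.sort_values("risk", ascending=False))
```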

Predicting Crime with CompStat?

Posted in Performance Measurement, Race to the Top, Stat on January 25, 2011 by updconsulting

A great article in Slate by Christopher Beam highlights a CompStat program in Los Angeles that will begin to use predictive statistics alongside traditional CompStat figures.  CompStat traditionally tracks a slate of common crime stats for each precinct commander every two weeks to help focus that commander on the results of his or her tactics over the period.  The data normally includes statistics on crime incidents like robberies, assaults, and homicides, as well as crime-related measures like complaints and arrests.  The idea is to diagnose why crime seems to have happened and to deploy police resources to mitigate those factors.

But as the article points out, that process looks backwards.  In Los Angeles and Santa Cruz, statisticians have crunched the numbers and learned that certain events predict the occurrence of crime with some regularity.  A home robbery ups the odds of a repeat robbery in the area.  A gang shooting increases the odds of reprisal.  And as the research continues, the LAPD is bound to find other predictors that precinct commanders can use to strategically deploy their forces and keep their communities safer.

Who knew policing would take some cues from education after all these years of CompStat inspiring SchoolStat? Since 2007, we've seen similar predictive work in the use of early warning indicators to predict the risk of students dropping out of high school.  Based on research from the Consortium on Chicago School Research (CCSR), a high school student's course performance is the single most predictive factor in whether that student will complete high school.  Specifically, CCSR concluded that Chicago students who finish ninth grade with at least ten semester credits (five full-year course credits) and no more than one semester F in a core course are nearly four times more likely to graduate than those who do not.  CCSR used this finding to create an On-Track to Graduate indicator for current students who complete ninth grade with five credits in core courses and no more than one semester F.  Principals use this data to deploy counseling resources.
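
The CCSR rule is simple enough to write down directly; here is our paraphrase as code, with a hypothetical record layout but the thresholds from the research above:

```python
# The CCSR on-track rule described above, written as a function.
# Field names are hypothetical; thresholds come from the cited research.
def on_track_to_graduate(semester_credits: float, core_semester_fs: int) -> bool:
    """End-of-ninth-grade check: at least 10 semester credits
    (i.e., 5 full-year courses) and at most one semester F in a core course."""
    return semester_credits >= 10 and core_semester_fs <= 1

print(on_track_to_graduate(11, 2))   # False: too many core-course Fs
print(on_track_to_graduate(10, 1))   # True: on track
```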

Rhode Island has a statewide early warning indicator planned for its Race to the Top program to address dropouts as well, and it should prove a great addition to the EdStat process driving the state's RTT reforms.  Where else can you see predictive indicators taking hold? (BR)

CompStat and Campbell’s Law

Posted in Performance Measurement, Race to the Top, Stat on January 11, 2011 by updconsulting

As you may have seen in the news, the New York City Police Department is conducting a comprehensive review of its crime stats.  Over the past few months, reports have emerged that precinct commanders felt pressured to downgrade serious crimes to less serious ones, both to look good at their CompStat sessions and to ensure that overall crime rates did not climb.

This case brings to mind an oft-forgotten idea in public policy called Campbell's Law, posited by American social scientist Donald Campbell: "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."  As you can imagine, Campbell's Law has been cited by many others in conversations about high-stakes test scores, but it is important to remember that singular performance indicators drive bad behavior in just about ALL sectors.  Think no further than quarterly profit statements at Enron and WorldCom or loan sales at your favorite mortgage brokers (if they are still around).

So did CompStat and the drive to keep crime low in New York City "distort the social processes it was intended to monitor"?  I don't think we'll know the answer for a while, but as we've begun developing a statewide Stat process for the Race to the Top work in Rhode Island, we've been reminded of what a Stat process does in a new environment.  Whether it was CompStat in New York when it began under Bill Bratton or any Stat process we develop with a client, the purpose is twofold.  First, it places the attainment of specific results at the forefront of a manager's thinking as he or she makes decisions about tactics, strategies, and resource deployment.  Second, it uses the data itself, in many disaggregated forms, to inform and enrich the quality of decisions and to learn objectively from past hypotheses about what works.  No one would argue that using data in this way is bad management or that it "distorts the process it is intended to monitor."  But at the end of the day, the use of data in management does not cure an organization of unsavory behavior; it simply changes the leverage points where it can happen.

We’ve also been reminded of the importance of multiple measures.  Whether it be value added in teacher evaluation, test scores in AYP decisions for schools, or “crime” in CompStat, one measure never tells the whole story.  A good Stat process marries outcome metrics with survey, financial, and observational information to ensure that what gets measures not only gets done, but is what you want (BR)

CitiStat and Law and Order

Posted in Performance Measurement, Stat on December 18, 2010 by updconsulting

Did you know that Law and Order's Sam Waterston did a video on CitiStat?  Check out this video, with real shots of meetings and interviews with the originators: then-Mayor Martin O'Malley, Michael Enright, and Matt Gallagher.