Archive for the Human Capital Management Category

Taking a Closer Look at Value Added

Posted in Human Capital Management, Teacher Evaluation System, Uncategorized, Value-added and growth models on June 20, 2014 by updconsulting

Last month I joined a team of UPD-ers and traveled around the state of Oklahoma training district-level trainers on value added.  During one of the sessions, a participant raised his hand and asked our team how value added could be relied upon as a valid measure of teacher effectiveness when districts like the Houston Independent School District[1] are currently involved in lawsuits over the legitimacy of their value-added model, and the American Statistical Association (ASA) has released a statement[2] that has been described as "slamming the high-stakes 'value-added method' (VAM) of evaluating teachers."  Although we were familiar with both the Houston lawsuits and the ASA statement, this question created an opportunity to take a closer look at recent articles and information opposing (or seeming to oppose) value added.

 

First, a little background:  According to our partners at Mathematica Policy Research, "Value-added methods (sometimes described as student growth models) measure school and teacher effectiveness as the contribution of a school or teacher to students' academic growth. The methods account for students' prior achievement levels and other background characteristics."  Value added does this via a statistical model that is built on educational data from the given state or district and uses standardized test scores to evaluate teachers' contributions to student achievement. Although value added and similar measures of student growth had been used in various places in the United States without much opposition, criticism peaked around 2010 when districts such as Chicago, New York City, and Washington, DC began incorporating value added into high-stakes teacher evaluation models.  Since then, various individuals and organizations have published their views on the merits or pitfalls of value added, including, most recently, the ASA.
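To make that description concrete, here is a minimal sketch of the idea behind a value-added regression. Everything in it is hypothetical: the data are simulated, and real models (including the ones Mathematica builds) control for many more background characteristics and use more sophisticated estimation. The sketch simply regresses current scores on prior scores plus teacher indicators.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 300 students split across 3 teachers.
n = 300
teacher = rng.integers(0, 3, size=n)          # teacher assignment
prior = rng.normal(500, 50, size=n)           # prior-year scale score
true_effect = np.array([0.0, 8.0, -5.0])      # teacher contributions (vs. teacher 0)
score = 100 + 0.8 * prior + true_effect[teacher] + rng.normal(0, 10, size=n)

# Design matrix: intercept, prior achievement, and dummies for teachers 1 and 2
# (teacher 0 is the reference category).
X = np.column_stack([
    np.ones(n),
    prior,
    (teacher == 1).astype(float),
    (teacher == 2).astype(float),
])

# Ordinary least squares: the dummy coefficients are each teacher's estimated
# contribution to achievement, net of prior achievement.
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print({"teacher_1_va": round(beta[2], 1), "teacher_2_va": round(beta[3], 1)})
```

The coefficients on the teacher dummies are each teacher's estimated contribution relative to the reference teacher, net of prior achievement, which is the basic quantity a value-added model reports.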

 

The ASA statement has garnered considerable attention because, as described by Sean McComb, 2014 National Teacher of the Year, "… I thought that they are experts in statistics far more than I am. So I thought there was some wisdom in their perspective on the matter."[3] As statistical experts, the ASA sheds some light on what can and cannot reasonably be expected from the use of value-added measures, but here are a few ways we can address parts of their statement that may be misunderstood:

  • The ASA mentions that value-added models "are complex statistical models, and high-level statistical expertise is needed to develop the models and interpret their results. Estimates from VAMs should always be accompanied by measures of precision and a discussion of the assumptions and possible limitations of the model."  Although it is true that the models themselves are complex and require advanced statistical expertise to compute, we would argue that people without this level of expertise can be trained on the concepts behind how the models work and on how results should be interpreted.  In Oklahoma, part of the training we provide is designed to help teachers build a conceptual understanding of the statistics behind value added.  Although we do not look at the regression formula itself, we define the components of the measure, including how it is developed and how precise it is, so that teachers can better understand how value added provides additional data to inform their instruction.
  • In the report, the ASA cautions that since value added is based on standardized test scores, and other student outcomes are predicted only to the extent that they correlate with test scores, it does not adequately capture all aspects of a teacher's effectiveness: "A teacher's efforts to encourage students' creativity or help colleagues improve their instruction, for example, are not explicitly recognized in VAMs."  This statement is true, and it is one that we are quick to highlight when we train on value added.  Value-added models are not designed to measure teacher effectiveness in isolation; they only tell part of the story.  When used as part of an evaluation system with multiple measures (such as classroom observations and student surveys), a more complete and stable picture becomes available.
  • Finally, the ASA clearly states that "VAM scores are calculated using a statistical model, and all estimates have standard errors. VAM scores should always be reported with associated measures of their precision, as well as discussion of possible sources of biases."[4] Although we are always transparent about the fact that all value-added estimates have confidence intervals, this is almost always something that trips people up during training sessions.  Many will say, "If there is a margin of error, then how can this measure be trusted enough to include in an educator evaluation system?"  What is easy to forget is that all measures, statistical or not, come with some level of uncertainty.  This includes more traditional methods of teacher evaluation such as classroom observations.  Although efforts should be made to limit or decrease the margin of error where possible, there will never be a way to completely eliminate all error from something as wide and deep as teacher effectiveness. This does not mean, however, that value added should not be used to evaluate teachers; as mentioned previously, it should be considered alongside other measures.
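The point about standard errors can also be made concrete. The sketch below is illustrative only, with simulated data and a deliberately oversimplified one-teacher model: it computes the OLS standard error of a teacher's estimated contribution and the 95% confidence interval that, per the ASA, should accompany any reported score.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: one teacher's 25 students vs. a comparison group of 175.
n = 200
is_teacher = np.zeros(n)
is_teacher[:25] = 1.0
prior = rng.normal(500, 50, size=n)
score = 100 + 0.8 * prior + 6.0 * is_teacher + rng.normal(0, 12, size=n)

# OLS fit: intercept, prior achievement, teacher indicator.
X = np.column_stack([np.ones(n), prior, is_teacher])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

# Standard error of the teacher coefficient from the OLS covariance matrix.
resid = score - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)
se = np.sqrt(cov[2, 2])

# The 95% confidence interval: the estimate should be reported with this
# range attached, never as a bare point value.
low, high = beta[2] - 1.96 * se, beta[2] + 1.96 * se
print(f"estimate {beta[2]:.1f}, 95% CI [{low:.1f}, {high:.1f}]")
```

Notice that with only 25 students the interval is wide; this is exactly the uncertainty that trips people up in training, and why value added should sit alongside other measures rather than stand alone.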

 

By Titilola Williams-Davies, a consultant at UPD Consulting.

 

 

 

[1] Strauss, Valerie. April 30, 2014. "Houston teachers' lawsuit against the Houston Independent School District." Washington Post. http://apps.washingtonpost.com/g/documents/local/houston-teachers-lawsuit-against-the-houston-independent-school-district/967/

 

[2]American Statistical Association. April 8, 2014. “ASA Statement on Using Value-Added Models for Educational Assessment.” http://www.amstat.org/policy/pdfs/ASA_VAM_Statement.pdf

 

[3] Strauss, Valerie. April 30, 2014. "2014 National Teacher of the Year: Let's stop scapegoating teachers." Washington Post. http://www.washingtonpost.com/blogs/answer-sheet/wp/2014/04/30/2014-national-teacher-of-the-year-lets-stop-scapegoating-teachers/?tid=up_next

 

[4] American Statistical Association. April 8, 2014. “ASA Statement on Using Value-Added Models for Educational Assessment.” http://www.amstat.org/policy/pdfs/ASA_VAM_Statement.pdf

 

Congratulations NCTQ on your Teacher Prep Review in US News and World Report!

Posted in Human Capital Management, Performance Measurement, Stat, Teacher Evaluation System on June 18, 2013 by updconsulting

Hopefully on your drive to work today, you heard NPR's story on the Teacher Prep Review just released by the National Council on Teacher Quality.  US News and World Report will publish the results in its next issue. Just like US News's college and grad school rankings, this study rates how well our nation's educator prep programs prepare teachers for the 21st-century classroom.

UPD supported NCTQ in this project by helping them develop RevStat, a performance management process to stay on track and continuously monitor the quality of their analysis. You can read the report here and learn more about UPD’s stat process here. (BR)

How to Run a Computer-Based Training Session: Three Indispensable Techniques

Posted in Data Systems, Human Capital Management, Management Consulting on March 13, 2013 by updconsulting


This week I’m really delighted to introduce Frank Nichols a talented consultant from our strategic partners at Strategic Urban Solutions. Strategic Urban Solutions will be guest posting for us from time to time, and this week will be sharing a training post with us.  

At Strategic Urban we tend to do a lot of work with large institutions: cities, non-profits, schools, etc. Typically, these institutions need to move on from their old paper-based methods of doing business and adopt an organizational system. Let's face it, this is usually long overdue and necessary.  When an organization's staff need training on these new systems, it can be both rewarding and challenging to be in the position of the trainer. I will be honest and say that I have not always been good at this. In fact, I wouldn't be able to offer up any of this wisdom if I hadn't been thoroughly beaten up along the way. After many years and nearly 100 training sessions, I'd like to offer up three techniques that I have found indispensable.

1. Don’t Be a Policy Middleman

Many times when you are introducing a new system or process, it is due to big changes in an organization. It is inevitable that you, as a trainer, will be seen as the middleman between staff and management. In order to prepare staff for the new system, you might have to give them an overview of recent policy changes. Make sure they also understand your role and purpose: to help them adopt new technology. Don’t let your training session become a place for the airing of grievances. Negativity about an organization’s changes can carry over to negativity about the technology that you are introducing.

If you are consulting for an organization, and are not management yourself, you can position yourself as an advocate on the staff's behalf. Show sympathy for the staff while also maintaining a positive representation of management. One way to avoid becoming the policy middleman is to have the contact information of the policy expert(s) on hand. Inform the staff that they can direct specific questions to that contact so that you don't get off track. Better yet, if a policy expert is available to address the policy implications in person during the introduction, you'll be free to focus on technology for the rest of the session.

2. Positives Before Challenges

Showing staff a new system or process and then asking for questions can sometimes, understandably, lead to a wave of complaints. If one person comes up with a complaint, the rest of the staff in the room might feel compelled to pile on. This is why it is important to take a few breaks throughout the session to discuss Positives and Challenges. I always start with Positives by asking, "Now that you have seen some of the system features, what do you like most? Why is this system an improvement on what you have done in the past?" You'll want to discuss Challenges as well…but hold those Challenges hostage: I won't move on to them until someone can offer up something positive about the system.

For Challenges, I like to ask, "Do you anticipate any challenges in applying this system to your work?" When you frame it this way, you'll get thoughtful anecdotes from the staff instead of complaints. Their answers will help you understand what they are dealing with when they go back to work, and you'll be better prepared to use that context for the rest of the session.

3. Demo Before Practice

If you have a room full of staff with a computer in front of each of them, good luck getting their attention. I've been in the front of many training sessions, but I've also been in the back. A computer is not only an invitation to check email and social media, but also an invitation to explore the system ahead of the instruction. Getting ahead of the class in a focused computer training session sometimes means getting lost. Each section of the system comes with explanations, demonstrations, and discussions…all of which will be missed by someone who is staring at their computer and going on their own personal journey. How many times have you tried to get through an entire demonstration, only to be interrupted at various stages because someone is trying to click on this or that and it is not working? The solution: clearly state when you are demonstrating and that the opportunity to practice is coming up next. Demo before practice.

Before you introduce a part of the system, explain that you are going to first do a demonstration. More eyes will be on you (More, not all…I’m realistic, you can’t get everybody) and those staff will clearly see the current system component, they will hear your explanations and guidance, and will have an opportunity to ask questions. THEN, you can put them on a mission: “Now that you have seen how this component works, go ahead and complete this step on your own.” The beauty of this is that you can free yourself up to walk around and help people individually, before you command their attention on the next demonstration.

I hope you find these techniques valuable and that you experience the reward of a successful training session. Happy training!

–Frank Nichols is a guest blogger from our friends at Strategic Urban Solutions

Securing Our Schools in the Wake of the Sandy Hook Elementary Tragedy — Pt I

Posted in Human Capital Management, Uncategorized on December 20, 2012 by updconsulting

The shooting last week at Sandy Hook Elementary School has prompted a great deal of debate across the country about gun control and access to mental health services.  The incident has also prompted increased scrutiny of school safety practices.  Of course, it is critical that schools review their lockdown procedures and other security measures on an ongoing basis, and ensure that staff members are well trained in those protocols.  School safety experts generally agree, however, that the security measures in place at Sandy Hook were appropriate and reasonable, and indeed saved lives.  Of course, all systems have limitations.  A criminal intent on breaking in at any cost will be difficult for any institution (other than a maximum security prison) to stop.  In fact, children are far safer in school than in other public places such as shopping malls, movie theaters, parks, playgrounds, etc.  And they are exponentially more likely to be killed in an auto accident than in an incident like the one that took place at Sandy Hook Elementary School.

Schools could increase police presence on campus.  Research indicates, however, that seeing armed police officers roaming the school can be scary for young children and undermine their feeling of safety and security. Moreover, those who criticize districts for spending too much on administrative as opposed to classroom expenses should be aware that school security, including on-campus police officers, is an administrative expense (which for many districts, is not insignificant).

Some, including Texas Governor Rick Perry, believe that allowing school personnel to bring guns to school is a valid solution.  They claim that a school employee with a gun who was properly trained could have stopped the perpetrator of the Sandy Hook shooting before he was able to kill so many people.   Statistically, however, it is far more likely that a legally purchased gun will be used not in defense of but against its owner or a member of his or her household (and by analogy the school it is intended to protect).

Even if this hypothetical gun-wielding employee turned out to be the James Bond in Governor Perry’s fantasy, i.e. capable of exercising good judgment and perfect accuracy under extreme pressure, allowing employees to bring guns into the workplace, and especially into schools, is a very bad idea.  The chance that most schools will ever experience anything like what happened at Sandy Hook is extremely slight.  Most schools, however, do experience some incidents of violence each year.  Add guns to this environment, regardless of who owns them, and the outcomes of those incidents are likely to be far worse.  It is a travesty that the perpetrator of the Sandy Hook shootings was able to gain access to legally purchased guns.  Locating more guns on-site and making them even more accessible will only escalate violence in our schools.  Moreover, if seeing police officers with guns on campus undermines children’s sense of safety and security, imagine what it would do to a child’s sense of security to receive a poor score on a homework assignment from a teacher packing heat.

If we want to invest in making our schools safer, we need to look at the areas of greatest risk to our students.  Fortunately or unfortunately, the greatest risk to students does not come from the outside.  The greatest risk comes from individuals students encounter on campus with a colorable reason for being there.  That said, the one area in which many public school systems could be doing better is in conducting background checks of school employees, volunteers, contractors, and others who come into contact with students on campus.  That subject, however, warrants a separate, more detailed discussion.  Accordingly, stay tuned for Part II, which will examine the ways in which some states' and districts' policies concerning background checks could be amended and/or supplemented to better protect students.  As for the adequacy of existing school security measures, and the suggestion that teachers be allowed to carry guns to school, please let me know what you think.

The UPD blogger, Kim Clark is a senior consultant with UPD.  Prior to working with UPD, Kim served as the General Counsel for the Scottsdale Unified School District in Scottsdale, Arizona, as well as a labor and employment attorney at Steptoe & Johnson, LLP.

Pure as the Driven Data

Posted in Human Capital Management, Performance Measurement, Stat on September 25, 2012 by updconsulting

I like numbers. Numbers are facts. My weight scale reading for today: 165 lbs. Numbers are objective and free of emotion. My pedometer tells me that I ran for three miles today. However, as objective and factual as numbers may be, we still inject meaning into them. The weight scale reading, for example, although 10 pounds lighter than I was last month, still crosses the threshold of “overweight”. And that four-mile hike I took around Lake Montebello meant a cherry-flavored slushy at Rita’s!

Which brings me to the school reform effort centered on numbers. Yes, I am talking about data-driven instruction—a way of making teaching less subjective, more objective, less experience-based, more scientific. In this era of increased accountability, nearly every principal has begun using data to help drive instructional practices. Principals in rapidly improving schools often cite data-driven instruction as one of the most important practices contributing to their success.

Data-driven decision making requires an important paradigm shift for teachers—a shift from day-to-day instruction that emphasizes process and delivery in the classroom to a teaching culture that is dedicated to the achievement of results. Educational practices are evaluated in light of their direct impacts on student learning. School organizations that are new to the focused, intentional analysis of student and school outcome data quickly find that most teachers and other instructional support staff are unprepared to adopt data-driven approaches without extensive professional development and training.

If educators constantly analyze what they do and adjust to get better, student learning will improve (Schmoker, M., 1999). By focusing initially on small, rapid improvements and then building upon those toward an ongoing process of continuous reflection about classroom instruction and student learning outcomes, teachers across the country are significantly impacting student achievement. When these teachers are also able to participate in professional learning communities and collaboratively identify and implement effective, strategic instructional interventions, their schools are not only surviving this new wave of accountability but indeed thriving in it.

CR

Do States Lack the Capacity for Reform?

Posted in Human Capital Management, Race to the Top, States, Uncategorized on May 23, 2012 by updconsulting

Michael Usdan and Arthur Sheekey just wrote a great commentary on the complex and evolving relationship between federal policy, the State Education Agency, and the human capacity to get it all done.  In their essay "States Lack the Capacity for Reform" over in Education Week, Usdan and Sheekey argue that, "In essence, most state education departments remain almost wholly owned federal subsidiaries, with well over half their budgets emanating from federal funds."  Because of this, many states under-fund State Education Agencies (just as we have seen local governments under-fund their own school districts when the district is largely funded by the state—like here in Baltimore).  Add declining budgets and the huge push to reinvent state standards through the Common Core, implement new teacher evaluation systems, and develop new data tools, and you have a mountain to move.

Usdan and Sheekey point out the structural and organizational changes that Delaware and Tennessee are making in response to these pressures.  This is absolutely needed. But I can't help thinking that the brand of the poor state education bureaucrat needs some scrubbing as well.  After all, the success or failure of all education reform today rests on the weary shoulders of a few talented managers in the states and districts taking it on.  These managers live and die by what I call "the burden of being useful" in districts and SEAs.  The "burden" afflicts talented managers who are found to possess the unique ability to carry water on difficult projects and deliver time and time again.  Drowning in complex new challenges, districts and SEAs give these people not only the hardest and most difficult projects, but every other project they can throw at them as well.  These stars burn bright, but they usually burn out in two to three years.  This has to change if we expect the current wave of reform to last.

If you have had the luck of working in an SEA or district taking on the reform challenge, you know it is a mix of politics, bridge building, data crunching, sweat, organizational psychology, and managing a to-do list a mile long.  On the worst days, it feels like hell. But, by and large, shouldering the work of education reform feels to me like what I imagine it must have been like in Silicon Valley in the 80s.  We are writing history as we go.  And the possibility that we will build a fundamentally better system of education for our nation's kids is before us.  If you are coming out of your MBA or MPA program, your TFA or TNTP class, or are tired of your middle-manager job in corporate America, this is the most exciting place to be in America.  And your talents will grow substantially by pressing your shoulder against this plow.

To complete and sustain education reform, we need talented managers in School Districts and SEAs.  And to attract these talents to education and relieve the burden that they currently feel, we can start by recasting the story of what it means and what it is like to work for school districts and State Education Agencies (BR).

Managing for Mastery

Posted in Human Capital Management, Performance Measurement, Race to the Top, Value-added and growth models on October 25, 2011 by updconsulting

We have blogged about the topic of that last video post before, including a reference to Herzberg's classic "One More Time: How Do You Motivate Employees?" And just like Herzberg, Daniel Pink points out that the three biggest factors that motivate people once the money is right are Autonomy (the desire to be self-directed), Mastery (the desire to get better at something), and Purpose (the desire to do something good). I ran across another article the other day about how they do human capital management at Google, and the same dynamic came through. Doing a good job seems to be the thing that we want. Companies that align their work and their purpose are flourishing. (Can you say, "Skype, Apple, and Whole Foods"?)

Given that our work is education, I am sure you can guess where this is all going. Race to the Top, the Gates Foundation, and a stalwart group of economists within the education reform sphere keep trying to incentivize high performing teachers (as measured by student growth) with bonus pay. We’ve talked about this before so we won’t belabor the point, but there is no evidence that pay motivates higher performance when you’re talking about complex work that requires thought, and if you’ve watched yesterday’s video, you now have another data point.

But what DOES seem to be motivating? Mastery, Autonomy, and Purpose. Education has at least one of these going for it right out of the gate: Purpose. And if you talk to teachers and principals like we do, you know that there is nothing more demotivating than having the “instructional coach” or “state observer” come into your classroom to watch your instruction for five minutes to tell you what you should be doing better. The autonomy variable is definitely at play here. To us, the trick in education, and with principals and teachers specifically, is how do we foster Mastery through our management?

Here is what we have seen: When student assessment data or classroom observation data is presented in a disaggregated way (vs. summarizations) and is turned around in a quick time frame after collecting the data (no more than one week), educators are much more likely to see the value of the data as a way to get better (or gain mastery). But when the turnaround of the same data is slow or the emphasis is on an aggregated “rating,” it becomes deeply demotivating, and in many cases fuels the political fire to slow down or stop the district or state’s reform efforts.

If purpose, mastery, and autonomy yield higher performance among teachers and principals, what does this then mean for the work of managers at the district level? And for the program designers at the state? We’d love to hear your opinion. (BR)

Motivation Animation

Posted in Human Capital Management, Performance Measurement, Race to the Top, States, Value-added and growth models on October 19, 2011 by updconsulting

Every once in a while, the friend who sends you three forwards a day hits on something interesting.  The other day, I received a link to a YouTube video from RSA that is a very entertaining visual walkthrough by Daniel Pink of the point we made on this blog about a year ago.  Enjoy! (BR)

Value-Added Data and Special Education

Posted in Human Capital Management, Performance Measurement, States, Uncategorized, Value-added and growth models on May 13, 2011 by updconsulting

At a gala for the American Association of People with Disabilities in March, Education Secretary Arne Duncan affirmed the current administration’s commitment to maintaining high expectations for special education populations, noting that “students with disabilities should be judged with the same accountability system as everyone else.” While most educators would readily support this goal, they would also probably tell you that achieving it is a lot easier said than done—especially when it comes to using student achievement data as a factor in evaluating special education teachers.

In an education reform landscape that seems saturated with increasingly complex questions about accountability systems (particularly around the use of value-added models in educator evaluation), determining where special education students and teachers fit into those systems poses some of the most complex questions of all. So what progress have we made in determining how value-added data should be used to measure achievement for special education students? The answer seems to be…not that much.

There are plenty of fairly obvious reasons why value-added models pose fundamental problems in the special education world. One potentially insurmountable obstacle is the lack of standardized test scores. Most value-added models require at least two years' worth of test data for each student. This makes it nearly impossible to calculate value-added data for students with severe cognitive disabilities who qualify for their state's alternate assessment. Alternate assessments, which were mandated as part of the reauthorization of IDEA in 1997, are scored on completely different scales than the state standardized tests. While some states have attempted to scale the scores and create comparable data for completing value-added analysis, most have chosen to exclude this group of students completely.
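The two-years-of-data requirement amounts to a simple eligibility filter. The records below are invented purely for illustration:

```python
# Hypothetical score records: (student_id, year) -> scale score.
scores = {
    ("s1", 2013): 480, ("s1", 2014): 510,   # two years of data -> usable
    ("s2", 2014): 495,                       # missing prior year -> excluded
    ("s3", 2013): 520, ("s3", 2014): 530,
}

# A value-added model needs both a prior score and a current score
# for each student; anyone missing a year drops out of the calculation.
usable = [
    sid for sid in {s for s, _ in scores}
    if (sid, 2013) in scores and (sid, 2014) in scores
]
print(sorted(usable))  # → ['s1', 's3']
```

Students on an alternate assessment fail this filter in an even deeper way: even when scores exist for both years, they sit on a different scale and cannot be plugged into the same model.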

Assessment experts have also pointed out that the results that alternate assessments yield lack the "fine tuning" needed to complete value-added calculations with confidence. Although there is a strong push by the US Department of Education to substantially reduce the number of students with disabilities taking the alternate assessment (a push expected to be backed by the reauthorization of the Elementary and Secondary Education Act coming next fall), it will be years before states even have the option of including students from this group in their value-added calculations.

The challenges aren’t limited to using value-added data to measure progress for special education students who are taking the alternate assessment. A report by the National Comprehensive Center for Teacher Quality issued last July identified a number of obstacles that impact a wider group of students, including the fact that researchers have yet to identify an appropriate way to account for the impact of testing accommodations on test scores of special education students who take the regular state test.

Without a way to control for the impact of testing accommodations on student performance, the testing data from this group of students is difficult (if not impossible) to use to draw precise conclusions about the “value” added by special education teachers. Although states continue to work tirelessly to develop educator evaluation systems that incorporate value-added data, efforts to find new ways to incorporate precise measures that capture student achievement in the context of special educators’ evaluations seem to be lagging behind. While the challenges listed above (among a host of others) may represent valid reasons why standard value-added models may not work with special education data, there is important work to be done in developing other means for determining precise measures of progress for special education students.

This is not to say that special education teachers are excluded from the emerging high-stakes evaluation models—they certainly aren't. States have developed a variety of alternatives to using value-added data for evaluating special education teachers, but the accuracy and precision of the information they provide has far less backing by research than the models applied to general education populations. If the measures used to determine the effectiveness of special education teachers aren't as precise as those used for general education teachers, states and districts will be limited in their ability to use that data to drive meaningful professional development and support.

In a field that is historically lacking in quality professional development, it seems that states are missing a valuable opportunity to use their evaluation systems to make vast improvements in the quality of support special educators are afforded. If we aren’t doing enough to determine how to measure progress accurately for special education students, it means that we aren’t doing enough to support special education teachers in becoming more effective. (JS)

Why Naysayers on Teacher Pay for Performance are Missing the Mark

Posted in Human Capital Management, Performance Measurement on March 10, 2011 by updconsulting

It seems simple, right?  Offer bonuses to teachers who bring big gains in student achievement, and you'll get better performance out of your teachers.  But a pack of studies over this past year seems to have rained on the teacher performance pay parade.  Back in June, a study from Mathematica on an initiative in Chicago found "no evidence that the program raised student test scores."  This study, like many of its type, compared the "value-added" of teachers participating in the performance pay program against those who did not, as measured by student test scores.

In September, in one of the most comprehensive studies of its kind, the National Center for Performance Incentives at Vanderbilt concluded a three-year study of a performance pay program in Nashville and found that "students of teachers randomly assigned to the treatment group (eligible for bonuses) did not outperform students whose teachers were assigned to the control group (not eligible for bonuses)."

Just today, Ed Week reported that a study by Harvard Economist Ronald Fryer on a teacher pay program at over two-hundred schools in New York City found, “no evidence that teacher incentives increase student performance, attendance, or graduation, nor do I find any evidence that the incentives change student or teacher behavior. If anything, teacher incentives may decrease student achievement, especially in larger schools.”

Wow, that’s a lot of smart people supported by big research budgets saying that the education reform cabal wants to throw money in a hole.  Unfortunately, these studies missed the point and confused the policy question.  And, a simple look at management research on motivation over the last century would have confirmed the results of these studies before they began.

Each of these studies was constructed to test the notion that paying teachers who reach higher academic gains will incentivize those teachers to focus and work in a way that they otherwise would not.  The studies frame additional payment for performance as a motivator.  If it works in the private sector, it must work in education, right?

Except that a significant body of research has shown it actually doesn't work in the private sector.  Or any sector, for that matter.  In one of the original and classic works of organizational psychology, "One More Time: How Do You Motivate Employees?" (1968), Frederick Herzberg combined dozens of his own studies (spanning several sectors of the economy) with 16 other studies from all over the world on what motivates people in the workplace.  Herzberg found the strongest motivational factors across the board to be "a sense of achievement," "recognition," and the "work itself."  Where was pay on this list?  Pay was never actually found to be a motivator.  In fact, the combined results of these studies showed money to be more de-motivating than motivating.  And on the scale of comparison with other factors, it was on the low end of influence in either direction.

Unsurprisingly, the Chicago, Nashville, and New York studies found results consistent with Herzberg's.  However, that does not mean that pay for performance is bad human capital policy in education.  Where performance pay will ultimately prove effective is not as a motivator, but as a simple factor in labor economics.  Performance pay will not bring a teacher (who likely did not take their teaching job under the promise of performance incentives in the first place) to suddenly leave their B game at home and bring their A game.  But higher pay will attract more talented people into the teaching workforce who might otherwise not consider it.  Performance pay will make competition around the average teacher vacancy more intense.  That competition would be good for students.  And, by framing pay for performance around the attainment of better-than-average student achievement goals, the performance bonus provides a market signal specifically to those who think they can hit the target. (BR)