Archive for value added

Taking a Closer Look at Value Added

Posted in Human Capital Management, Teacher Evaluation System, Uncategorized, Value-added and growth models on June 20, 2014 by updconsulting

Last month I joined a team of UPD-ers and traveled around the state of Oklahoma training district-level trainers on value-added. During one of the sessions, a participant raised his hand and asked our team how value added could be relied upon as a valid measure of teacher effectiveness when districts like the Houston Independent School District[1] are currently involved in lawsuits over the legitimacy of their value-added model, and the American Statistical Association (ASA) has released a statement[2] described as "slamming the high-stakes 'value-added method' (VAM) of evaluating teachers." Although we were familiar with both the Houston lawsuit and the ASA statement, the question created an opportunity to take a closer look at recent articles and information opposing (or seeming to oppose) value added.

 

First, a little background: According to our partners at Mathematica Policy Research, "Value-added methods (sometimes described as student growth models) measure school and teacher effectiveness as the contribution of a school or teacher to students' academic growth. The methods account for students' prior achievement levels and other background characteristics." Value added does this via a statistical model built on educational data from the given state or district, using standardized test scores to estimate each teacher's contribution to student achievement. Although value added and similar measures of student growth had been used in various places in the United States without much opposition, criticism peaked around 2010 when districts such as Chicago, New York City, and Washington, DC, began incorporating value-added into high-stakes teacher evaluation systems. Since then, various individuals and organizations have published their views on the merits or pitfalls of value added, including, most recently, the ASA.
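To make that description concrete, here is a minimal sketch of the core logic in Python. This is an illustrative toy, not any state's actual model: real VAMs use multiple years of data, richer covariates, and mixed-effects or shrinkage estimators, and every variable name and number below (prior_score, frl, the four teachers) is a made-up assumption.

```python
# A toy value-added sketch: predict each student's score from prior
# achievement and a background characteristic, then treat the average
# amount by which a teacher's students beat (or miss) that prediction
# as the teacher's estimated contribution.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "prior_score": rng.normal(300, 40, n),          # last year's test score
    "frl": rng.integers(0, 2, n),                   # a background covariate
    "teacher": rng.choice(["A", "B", "C", "D"], n),
})
# Build synthetic current-year scores with known teacher effects,
# so the sketch has signal to recover.
true_effect = df["teacher"].map({"A": 5.0, "B": 0.0, "C": -5.0, "D": 2.0})
df["score"] = (0.9 * df["prior_score"] - 3 * df["frl"]
               + true_effect + rng.normal(0, 15, n))

# Step 1: model expected scores from prior achievement and background.
X = df[["prior_score", "frl"]]
model = LinearRegression().fit(X, df["score"])

# Step 2: a teacher's value-added is the mean residual of that teacher's students.
df["residual"] = df["score"] - model.predict(X)
print(df.groupby("teacher")["residual"].mean().round(1))
```

The point of the sketch is the logic, not the machinery: control for where students started, then ask how far each teacher's students moved relative to the prediction.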

 

The ASA statement has garnered considerable attention. As Sean McComb, the 2014 National Teacher of the Year, put it: "… I thought that they are experts in statistics far more than I am. So I thought there was some wisdom in their perspective on the matter."[3] As statistical experts, the ASA sheds light on what can and cannot reasonably be expected from value-added measures, but here are a few ways to address parts of their statement that may be misunderstood:

  • The ASA mentions that value-added models "are complex statistical models, and high-level statistical expertise is needed to develop the models and interpret their results. Estimates from VAMs should always be accompanied by measures of precision and a discussion of the assumptions and possible limitations of the model." Although it is true that the models themselves are complex and require advanced statistical expertise to build, we would argue that people without this level of expertise can be trained on the concepts behind how the models work and on how results should be interpreted. In Oklahoma, part of the training we provide is designed to help teachers build a conceptual understanding of the statistics behind value added. Although we do not look at the regression formula itself, we define the components of the measure, including how it is developed and how precise it is, so that teachers can better understand how value added provides additional data to inform their instruction.
  • In the report, the ASA cautions that since value added is based on standardized test scores, and other student outcomes are predicted only to the extent that they correlate with test scores, it does not adequately capture all aspects of a teacher's effectiveness: "A teacher's efforts to encourage students' creativity or help colleagues improve their instruction, for example, are not explicitly recognized in VAMs." This is true, and it is something we are quick to highlight when we train on value added. Value-added models are not designed to measure teacher effectiveness in isolation; they tell only part of the story. When used as part of an evaluation system with multiple measures (such as classroom observations and student surveys), a more complete and stable picture emerges.
  • Finally, the ASA clearly states that "VAM scores are calculated using a statistical model, and all estimates have standard errors. VAM scores should always be reported with associated measures of their precision, as well as discussion of possible sources of biases."[4] Because we are always transparent about the fact that all value-added estimates have confidence intervals, this is the point that most often trips people up during training sessions. Many will say, "If there is a margin of error, then how can this measure be trusted enough to include in an educator evaluation system?" What is easy to forget is that all measures, statistical or not, come with some level of uncertainty. This includes more traditional methods of teacher evaluation, such as classroom observations. Although efforts should be made to limit the margin of error where possible, there will never be a way to completely eliminate all error from something as wide and deep as teacher effectiveness. That does not mean value added should not be used to evaluate teachers; as mentioned previously, it should simply be considered alongside other measures. (A minimal illustration of reporting an estimate with its precision follows this list.)
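To illustrate the ASA's reporting recommendation from the last bullet, here is a minimal sketch of what presenting a value-added estimate together with its precision might look like. The estimate and standard error are invented numbers for illustration, not output from any real model.

```python
# Hypothetical numbers, for illustration only.
estimate = 4.2   # a teacher's value-added estimate, in scale-score points
std_error = 2.5  # the standard error of that estimate, from the model

# Half-width of an approximate 95% confidence interval.
margin = 1.96 * std_error
low, high = estimate - margin, estimate + margin

print(f"Value-added: {estimate:+.1f} points (95% CI: {low:+.1f} to {high:+.1f})")
if low <= 0 <= high:
    print("The interval includes zero: this estimate cannot be "
          "distinguished from the average teacher's.")
```

Reported this way, the margin of error stops being a gotcha and becomes part of the measure: it tells evaluators how much weight a single estimate can bear.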

 

By Titilola Williams-Davies, a consultant at UPD Consulting.

 

 

 

[1] Strauss, Valerie. April 30, 2014. "Houston Teachers' Lawsuit Against the Houston Independent School District." Washington Post. http://apps.washingtonpost.com/g/documents/local/houston-teachers-lawsuit-against-the-houston-independent-school-district/967/

 

[2]American Statistical Association. April 8, 2014. “ASA Statement on Using Value-Added Models for Educational Assessment.” http://www.amstat.org/policy/pdfs/ASA_VAM_Statement.pdf

 

[3] Strauss, Valerie. April 30, 2014. "2014 National Teacher of the Year: Let's Stop Scapegoating Teachers." Washington Post. http://www.washingtonpost.com/blogs/answer-sheet/wp/2014/04/30/2014-national-teacher-of-the-year-lets-stop-scapegoating-teachers/?tid=up_next

 

[4] American Statistical Association. April 8, 2014. “ASA Statement on Using Value-Added Models for Educational Assessment.” http://www.amstat.org/policy/pdfs/ASA_VAM_Statement.pdf

 

The Follower’s Manifesto

Posted in Interesting Non-Sequiturs, Uncategorized on December 18, 2012 by updconsulting

In my six years of teaching, I had plenty of colleagues who carried on non-stop private conversations through every faculty and department meeting they attended. The very educators who brought down the wrath of God on misbehaving or inattentive students became pouty, apathetic, or downright antagonistic when another adult had the gall to suggest that there was something these individuals needed to know or had yet to learn.

I know this mindset well, as I possessed it for a time:

“What does Vice Principal Smith know? He hasn’t been a teacher for 10 years…”

“I wish they’d let me get back to my classroom—I have so much to do and this is useless.”

“How could a consultant, who has never taught, possibly give me any advice about education?”

To be sure, some of this anger and indifference is well founded. I cannot count the number of faculty meetings I sat through where the principal read aloud (verbatim) from a schedule that affected 1/10th of the school’s population. But to focus on this smaller point is to obscure a larger one: as much as we often hear that we lack good leaders in the education world, I believe the bigger problem is that we lack good followers.

Very few people have the privilege of holding a role in life in which they are consistently leaders, always laying out an agenda to be executed by those around them. Instead, most of us hold a more nebulous position: we are leaders of some and followers of others, and these roles change over time. Teachers are a perfect example of this. Student achievement in the classroom requires great leadership on their part, but that leadership must be informed and supported by following administrative guidance, research-based standards of practice, community desires, and expert advice. Yet while educational literature is rife with treatises on leadership (one of my primary introductory packets from Teach For America in 2004 was called Teaching as Leadership), there is little talk of following.

So what are the characteristics of a good follower, and how will they make a difference in education? With the help of the comparatively sparse followership literature[1], I’ve compiled this non-comprehensive list:

  1. Good Followers are Open-minded. Too often in education, we assume that the best ideas for student achievement are contained in our own heads, or at the very least within our own dogma. We must be willing to adjust our approaches based on the advice, feedback, and new sources of information we receive.
  2. Good Followers Disagree and Commit. Even good leaders will make decisions that their followers do not always agree with. This is perfectly reasonable, and followers should feel free to communicate that disagreement to leaders. However, once a decision has been finalized, followers must commit to act upon it as if it were their own. Refusal to act on a decision prevents evaluation of its effects further down the road. This is the piece that my colleagues and I most often struggled with as teachers. It was easier to pooh-pooh a new administrative initiative about backwards planning for a million little reasons than it was to buy into the initiative and change our ways.
  3. Good Followers are Active Listeners and Collaborators. Listening to and participating in a conversation requires full attention and critical, collaborative thinking. The non-stop responsibilities of most jobs (especially teaching) can function as excuses to mentally (or even physically) check out of one's listening responsibilities: grading takes precedence over listening to a department head; lesson planning replaces one-on-one time with a mentor. I know; I've been there. But I also know that listening and participating in collaborative opportunities is an important part of creating school culture and promoting practices that improve student achievement. It is through this collaboration that decisions are made and tested, and that leadership is held accountable.

With support from UPD’s Bob Pipik, Nick Goding, and (former employee) Dustin Odham, Highland Park High School in Topeka, KS has taken advantage of a federal grant to install a collaborative process of student and classroom data evaluation. Every progress report and grading period, teacher teams meet to examine trends in student attendance, grades, behavior, and test scores, both within their classroom and throughout the team. Students who are at risk are identified and intervened with as a team or individually using a “Student Tracker” created and molded through an iterative process of teacher and administrative feedback. This approach has led to a narrowing of the achievement gap between African American and White students, and has improved student test scores overall by almost 10 percentage points. And all of this has come as a direct result of attentive and excellent followership. It is true that school administration wrote the grant and initiated the data evaluation process (and for that they should be praised), but it was the school’s teachers who approached the process with an open mind, contributed to its functioning through collaboration with leadership, with outsiders (UPD), and among themselves during the teacher team meetings, and they have remained committed to its functioning for the past two and a half years.

It should be obvious that we can’t all be leaders all of the time, but that doesn’t mean we must resign ourselves to lives as desk jockeys, pushing paper for the man.  While my examples throughout this blog are based at the school level, the call for good followers is a universal one in the field of education (and beyond). Equity and excellence in public education will require that most of us make a commitment not just to lead, but to follow. From teachers to bureaucrats to consultants, we can shape and challenge our leaders, and the world around us, through our openness, our commitment, our action, our honesty. It’s time that “follower” stopped being a dirty word.



[1] See Kellerman, Barbara. Followership: How Followers Are Creating and Changing Leaders. Harvard Business School Press, 2008, as a prime example of the emerging field.

–Tim Marlowe

Why We Still Can’t Understand Value-Added

Posted in Human Capital Management on January 18, 2011 by updconsulting

Analogies are important. While often imperfect, they help us make connections and better understand our world.

That’s why I think it’s a shame the education field has yet to come up with good analogies for value-add data and models, especially in the teacher evaluation context. For example:

  • Is value-add like a high-school GPA, providing a threshold for certain decisions (you can't get into College ABC without a certain GPA) but best used in conjunction with other factors (Were you a varsity athlete? Do you do community service? How was your admissions essay?)?
  • Is value-add like a batting average, telling us how good teachers are at some skills (hitting), but not others (fielding)?
  • Is value-add like a credit score, highly dependent on input qualities (did the bank get everything right?) with the potential to change over time (decreasing when you took on student loans, increasing as your credit history grows)?

The inability of education reformers to clearly explain value-add has confused the conversation. For example, yesterday I went to a terrific conference sponsored by the National Center for the Analysis of Longitudinal Data in Education Research (a mouthful, so they go by CALDER). The brightest minds in education research presented papers on value-add. But it was clear throughout the day that the non-researchers in the audience struggled to grasp the policy implications. In a somewhat tense moment (as these things go), an audience member suggested that because value-add doesn't take situation X into consideration, it can't be used by itself to assess teachers, to which an exasperated panel member replied, "I just heard [my colleague] say that!" Something was clearly lost in translation; an analogy would have been useful (JF).

What analogies would you offer for value-add?

Don’t Cross the Streams

Posted in Human Capital Management, Performance Measurement, Race to the Top, States on December 10, 2010 by updconsulting

A quick report from the trenches. We've spent the past few weeks in Rhode Island helping stand up the state's more complex Race to the Top projects and helping it performance-manage the many complex streams of work that will happen in parallel.

Of note, we’ve been helping develop their educator evaluation program which will bring all teachers in the state onto a common evaluation platform that incorporates observations, goal attainment, and multiple measures of student growth. As the state has worked to include the maximum array of grades and subjects into the process, we’ve run into a challenge that I am sure many states will see as well.

If a state or district uses value-added in a teacher's evaluation and the state test is the only feed for the model, the value-added model can cover only about 15 to 20 percent of teachers. That leaves a lot of teachers out of the program. In an effort to include more grades and subjects, many states are scrambling to find more assessments to inject into the model. In this search, some states and districts are considering using formative and interim assessments that track student progress against state standards or curricula throughout the year.

There is a big problem with this. Formative data is held separate from summative data for very good reasons. Summative data is designed to tell you which students met academic standards for AYP designations. When students take summative tests, teachers teach them "test-taking strategies" to help them do their best on items where they are not completely sure.

The opposite is true of formative assessments. Teachers use formative assessments to understand the connection, or lack of connection, between what they are teaching and what their students are learning. The data is meant to be honest and accurate: if a student does not know an answer, the teacher tells the student not to guess. The result is a more accurate picture of specific areas of strength and weakness, which the teacher can use to re-tool instruction.

So imagine for a second that these states and districts incorporate formative and interim data into a teacher's evaluation. Yes, it might give a good picture of what a teacher's students know, but you have just upended the purpose of the formative assessment and destroyed its value. If teachers know a formative or interim assessment will be part of their evaluation, they will coach their students to appear to know things they do not, and they will lose a powerful tool for meeting the very goals the summative test is trying to measure. Rhode Island realized this early on.

It's a lose-lose proposition, and states and districts looking to incorporate student data for non-tested grades and subjects should resist the temptation to cross the streams (BR).