Archive for school reform

inBloom, Train Wrecks, and Ed-Fi

Posted in Data Systems, Stat on May 16, 2014 by updconsulting


As I sat down to write this entry, my day was interrupted in a most unusual way.  Doug texted me the picture to the left.  The caption said simply, “Say hello to 26th Street and the railroad track.”  In the picture I saw the same view I see every work morning from the “Big Table” here at UPD where many of us sit.  After more than four inches of rain over 36 hours, the ground right outside our office gave way, taking more than a dozen cars and half the street with it.  If you watch the video (found below) of the ground collapsing underneath the cars, you will see that it left the wall with nothing to hold, and it fell under its own weight.  The stories on the news have since revealed that the neighborhood had known this was a problem for years, but their complaints and concerns fell on deaf ears at the city and with the rail company.

 

It’s hard to see such a calamity and not think metaphorically about my originally intended subject: the collapse of inBloom. inBloom was, in lieu of a more boring technical description, a cloud-based data integration technology that would enable districts and states to connect their data to an “app store” of programs and dashboards that could sit on top.  The vision was a seamless and less expensive way for teachers and principals to gain easy access to data about their students.

 

inBloom was a very big deal.  From its start in 2011, several big funders and education heavies devoted their credibility and more than $100 million to trying to make it successful.  Their efforts succeeded in garnering several state and district partners.  But since its inception, consumer groups, parents, and privacy advocates worried that placing their students’ data in the hands of a third party would not be safe.  Or worse, that inBloom might “sell” their students’ data to the highest bidder.  Then came Edward Snowden, and what had been a niche news story went prime time.

 

If you look at the technology within inBloom that transfers and stores data in the cloud, the critics did not have much of a leg to stand on. inBloom’s data protection technology was as good as or better than that of just about any existing state or district system.  If you look at inBloom’s license agreement, parents and privacy advocates had more explicit protections than they have now with many student data systems.  What caused inBloom to collapse as quickly as the wall outside my window was more fundamental: trust.  As citizens, we trust districts and states with our students’ data.  And for all of inBloom’s technical explanations of the security of the data, it never made the case that we could trust it as an organization.  With the withdrawal of Louisiana, New York, Colorado, and several districts, nothing could hold inBloom up.

 

Over the past year at UPD, we’ve done a lot of work with the Ed-Fi data integration and dashboard suite.  We successfully rolled out the system for the entire State of South Carolina in about nine months (public dashboards here) and are very excited to start work with the Cleveland Metropolitan School District to implement Ed-Fi there.  Ed-Fi is very different from inBloom, even though they both utilize the same underlying data model.  Based on extensive research into what teachers and principals say they need, Ed-Fi provides a set of powerful data integration and dashboard tools that a district or state can download for free.  Rather than shooting data up into the cloud, Ed-Fi lives where most people already place their trust: in the data centers of districts and states.  Nineteen states and more than 7,000 districts have licensed Ed-Fi.

 

The tragedy of inBloom is that it was a great idea ahead of its time and stood to do a lot of good in education.  But the protectors of the status quo should see no victory in its collapse.  Teachers and principals are clamoring for better information to help their students.  Ed-Fi seems ready to pick up where inBloom left off, and do so with the trust this work requires.

———————————

This blog was written by Bryan Richardson. Bryan is a Partner at UPD Consulting and brings over thirteen years of experience in private and public sector management. Bryan is a nationally recognized expert in performance management, data systems, and complex project implementation.

Reinventing The Wheel

Posted in Management Consulting, Uncategorized on May 21, 2013 by updconsulting

For years, many cities have undertaken the task of developing a citywide plan, agenda, or set of goals around children and youth development and success.  In most cases, this work is a collaboration among multiple organizations, including the school district, city agencies (parks and recreation, libraries), city-funded agencies, and community-based nonprofits.  While the core values that these organizations hold around youth success are common, bringing these organizations together to discuss and arrive at a common mission and set of goals, objectives, standards, and measures to work towards can take years to accomplish.  Examples of this type of work are the Nashville Children and Youth Master Plan, Milwaukee Succeeds, the Grand Rapids Youth Master Plan, the Minneapolis Youth Coordinating Board, the Chicago Out-of-School Time Project, and Ready by 21 Austin, among many others.  Even more examples are included here, on the National League of Cities site.

A sampling of some of these plans is included in a table below.  Even doing a quick scan of these initiatives reveals many common threads in the goals and objectives that were the result of the months/years of collaborative work: youth/children are prepared for school, succeed academically, are healthy, are supported by caring adults, and contribute to the community.

In a recent conversation about how to start this type of work, the question was raised, “Why don’t we just use what has already been done?”  So why spend years redoing the work when it has already been done?

Reinvent the wheel: to waste time trying to develop products or systems that you think are original when in fact they have already been done before. (Cambridge Idioms Dictionary, 2nd ed., Copyright © Cambridge University Press 2006)

The reason for spending the time, effort, and resources is that participation in this type of process is as important as, or more important than, the output.  Bringing together leaders across the city who may or may not have worked well together in the past to discuss not only their own organizations, but also how as a city they can work towards a common set of goals and objectives, is incredibly powerful.  Building these relationships and knowledge of each other’s work should increase the chances of success in working towards the common goals.

Even though there is a lot in common with the outputs (master plan, goals/objectives) from each of these efforts, they also each have a unique aspect to them.  Each of the efforts involved a unique set of people and organizations who have their own perspectives about priorities in their city and communities.  These citywide plans and goals are something that (hopefully) these organizations will be working together on for a long time to come, so it should be something that they each feel a connection with – something that they helped create.

Of course, this does not mean that efforts like this should happen in isolation, when there are clearly good examples of what worked well (and what didn’t work well) in the past.  These types of resources should be used to learn from, but not to cut out any of the important work in developing the end product.

At the same time, “reinventing the wheel” is an important tool in the instruction of complex ideas: “Rather than providing students simply with a list of known facts and techniques and expecting them to incorporate these ideas perfectly and rapidly, the instructor instead will build up the material anew, leaving the student to work out those key steps which embody the reasoning characteristic of the field.”

Questions like this continually come up in the work we do.  Why spend months developing a particular school district process with participation from unions, principals, teachers, parents, etc. when there are good examples that have already been developed using this same type of process in other districts?  Why hold another community meeting or  focus group session if you think you already know what people think about a particular topic?  Because the process of “inventing” is as important as the “invention.”

 

This blog was written by Cari Reddick. Cari is a Project Manager at UPD Consulting and has over 12 years of project management experience.

 

Samples of Citywide Youth Master Plans

Nashville
- All children and youth will have a safe and stable home and a supportive, engaged family
- All children and youth will have safe places in the community, where they are welcomed and supported by positive adult relationships
- All children and youth will develop valuable life skills, social competencies, positive values and become law-abiding, productive citizens
- All children and youth will have confidence in themselves and in their future
- All children and youth will have opportunities to have their voice heard and positively impact their community
- All children and youth will experience social equity regarding access to opportunities, resources and information that are critical to their success in the 21st century
- All children and youth will experience a safe and caring school environment that supports social, emotional and academic development
- All children and youth will achieve academically through high quality, engaging educational opportunities that address the strengths and needs of the individual
- All children and youth will be physically healthy
- All children and youth will learn and practice healthy habits and have access to the resources that support these habits
- All children and youth will be mentally healthy and emotionally well
- All children and youth will have access to and participate in quality programs during out-of-school time
- All children and youth will have safe outdoor spaces in their neighborhood that provide opportunities for play and recreational activities
- All children and youth will have safe transportation options that allow them to engage in activities, and access services and supports that the community has to offer

Milwaukee
- All children are prepared to enter school
- All children succeed academically and graduate prepared for meaningful work and/or college
- All young people utilize post-secondary education or training to advance their opportunities beyond high school and prepare for a successful career
- Recognizing the difficult economic realities facing our families, all children and young people are healthy, supported socially and emotionally, and contribute responsibly to the success of the Milwaukee community

Grand Rapids
- Early childhood development, life-long learning & education
- Employment & financial independence
- Basic, physical & psychological needs
- Mentoring, afterschool, cultural activities & strategic planning
- Civic engagement, training & leadership

Minneapolis
- All Minneapolis children enter kindergarten ready to learn
- All Minneapolis children and youth succeed in school
- All Minneapolis young people have access to quality out-of-school opportunities
- All Minneapolis children and youth have opportunities to prepare themselves for the responsibilities of an active civic life

 

 

Can Early Teacher Evaluation Findings Help Change the Debate?

Posted in Race to the Top, Teacher Evaluation System on April 30, 2013 by updconsulting

Over the past few years, states and school districts across the country have devoted significant resources to the design and roll-out of new teacher evaluation systems.  Driven at least in part by requirements attached to Race to the Top funding, the new systems have inspired heated debate over the efficacy of factoring student achievement data into a teacher’s performance assessment. The New York Times recently shared some initial findings from states that have launched new evaluation models, including Michigan, Florida, and Tennessee, reporting that the vast majority of teachers (upwards of 95 percent in all three) were rated as effective or highly effective. Although the analysis of these numbers has only just begun, the Times reports that some proponents of the new evaluation models admit that the early findings are “worrisome.”  And even though it is still early, we can reasonably anticipate that if the trend continues, and the findings from the new evaluation systems reveal no significant departure from more traditional methods of evaluation, we may start to have a lot more people looking at the complicated data analysis driving teacher evaluation systems linked to student achievement data and asking, “What’s the point?”

It’s a good question, really, and one that probably hasn’t gotten enough thoughtful attention in the midst of the controversy surrounding these systems: what is the point of linking student achievement data to teacher evaluations?  Should we take it for granted that a primary goal, if not the primary goal, of these efforts is to identify and eliminate bad teachers?  If so, then these early findings should be a cause for concern, especially given the time and money being spent to collect and analyze the data.  If replacing bad teachers with good ones is the magic bullet for public education reform, it will take a pretty long time at this rate.

Of course, even opponents of the new evaluation systems would probably admit that the magic bullet theory is an oversimplification. Furthermore, it’s much too early to look at these numbers and extrapolate any meaningful conclusions about the actual number of ineffective teachers or even the accuracy of the evaluations themselves. Hopefully what these findings might do is allow us to finally begin to broaden the scope of our national conversation about how the linkages between teachers and students could actually drive education reform.  States and school districts implementing new evaluation systems have tried with varying degrees of success to communicate the message that linking student achievement data to teacher practice isn’t just about punitive measures; it also has important implications for improving professional development and teacher preparation programs by identifying shared practices linked to positive student achievement and replicating those practices in classrooms across the country. But that message is often overshadowed by the anxiety surrounding the punitive side of evaluation and underscored by public struggles with local teacher unions. If nothing else, these early findings might create an opening in the current debate for a more thoughtful discussion about the broader possibilities for linking teacher practice to student growth.

-Jacqueline Skapik

Education Reform and Counter Insurgency

Posted in Race to the Top, States on October 29, 2010 by updconsulting

Our good friend Justin Cohen over at the “Turnaround Challenge” hit it spot on in an entry on the relationship between good policy and good execution.  Justin mentions a Matt Yglesias quote on (of all things to compare to education reform) counterinsurgency strategy.  Yglesias says,

“… you can’t initiate a large complicated undertaking that involves coordinated action by hundreds of thousands of individual human beings and then make success contingent on perfect implementation.”

Fresh from a day of pondering state Race to the Top strategy, Justin notes, “I’m increasingly frustrated by the extent to which [education] policy discussions are execution-agnostic.”  We’ve been helping three states implement their Race to the Top plans, and we’ve seen the same thing from the front line.

Think about it.  An RTT winner now has to coordinate at least 20 separate new and interwoven (not to mention politically risky) projects internally AND monitor and support the progress of around 10 projects at each of the school districts participating in its program (which could be as few as 55 or as many as more than 700, depending on the state).  This is a management super-lift for organizations that have rarely been rewarded for, or capable of, managing large complicated projects on their own.  Yet when we look at any state’s application or at a district’s scope of work, we see work plans written as if they weren’t doing anything else, there were no angry teachers’ union waiting for them to mess up, and they had a bench of Harvard MBAs.  They are assuming near-perfect implementation.

Our advice to these states has been to design themselves around the inevitability of imperfect implementation.  In education reform generally, and in RTT specifically, there is no recipe or checklist that we can follow for it to work.  We must instead live in a constant cycle: hypothesize the best path forward, execute in earnest, reflect frequently on our progress, correct mid-course, and repeat.

We’ll get into this in more detail in the weeks ahead. (BR)

For Whom the Bell Curves

Posted in Human Capital Management, Stat on October 25, 2010 by updconsulting

The debate, if you can call it that—“jibber jabber” might be a better term (thanks, Mr. T!)—over linking student achievement data to teacher and principal performance started on the wrong foot and seems to be stuck hopping around on it. Using performance data to identify and reward rock star teachers and “weed out” the ones who probably should be in another profession only tinkers at the margins.

Take a typical bell curve for teachers, with student achievement outcomes as the measure of performance. Now, I don’t know what the exact shape of this curve looks like for public school teachers across the country, but notwithstanding the sad fact that “The Widget Effect” study conducted by The New Teacher Project found 99 percent of all public school teachers get rated as “satisfactory” or better by district evaluation systems, let’s assume, for argument’s sake, that the true distribution actually looks something like a classic bell curve, with a small tail of poor performers on the left and a small tail of stars on the right.

If the part of the curve that represents the poor performers is 10 percent of the total number of teachers, and if we were able to get rid of them all overnight, the next morning we’d still be left with 3.2 million of the same teachers who teach in public school classrooms every day. And while it often sounds like it (especially if you listen to the most strident promoters of achievement-based teacher evaluations), I don’t think anyone is actually advocating getting rid of, say, the lower half of the bell curve and trying to replace nearly two million teachers. There simply isn’t a pipeline of quality teachers who could fill those classrooms.

At the other end of the curve, the notion that paying high performers more will incent average teachers to step up their game is flawed on its face (as we’ve noted in previous blog postings here). But even if it weren’t, the additional money alone doesn’t help the average teachers know how they need to change their day-to-day actions to join the high performing ranks. Just knowing that great teachers can make a lot more money doesn’t tell you what successful teachers actually do.

In reality, the only way to truly improve overall teacher performance—and thereby improve student outcomes—is to move the entire bell curve to the right; essentially getting the overwhelming number of teachers in the “average” range to perform better.
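The arithmetic behind “moving the curve” can be sketched in a few lines. This is an illustration, not real data: the 3.6 million total is a rough figure chosen only to be consistent with the 3.2 million cited above, and the effectiveness scores are simulated from a standard normal distribution.

```python
# Illustrative sketch: culling the bottom decile vs. shifting the whole curve.
# All figures are hypothetical; scores are simulated, not real teacher data.
import random
import statistics

random.seed(0)
TOTAL_TEACHERS = 3_600_000  # rough illustrative count of US public school teachers

# Removing the bottom 10% overnight still leaves the vast majority in place.
remaining = TOTAL_TEACHERS * 9 // 10
print(f"Teachers remaining after cutting the bottom decile: {remaining:,}")
# -> Teachers remaining after cutting the bottom decile: 3,240,000

# Simulate effectiveness as a bell curve (mean 0, sd 1) on a sample.
scores = [random.gauss(0, 1) for _ in range(100_000)]

# Strategy 1: drop the bottom decile and keep everyone else as they are.
cutoff = sorted(scores)[len(scores) // 10]
culled = [s for s in scores if s >= cutoff]

# Strategy 2: improve everyone a little, shifting the whole curve right 0.25 sd.
shifted = [s + 0.25 for s in scores]

print(f"Mean after culling the bottom 10%: {statistics.mean(culled):.3f}")
print(f"Mean after shifting everyone:      {statistics.mean(shifted):.3f}")
```

With these stipulated numbers, a modest quarter-standard-deviation improvement for everyone raises average performance by at least as much as firing the bottom tenth outright, and it does so without needing a pipeline of millions of replacements.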

If that’s the goal, then emphasizing the use of student performance data for bonuses and firings is really missing the point. Instead, the data should be used first and foremost to manage performance: feedback to teachers to help them understand whether or not what they’re doing in the classroom is working; feedback to principals to help them target limited school-based resources (e.g., master teachers, mentors, observations, teaching assistants); and feedback to central office administrators to help them identify professional development that actually makes a difference and target it toward those who truly need it.

If districts and states stressed these and other less-threatening uses of performance data and spent the first two or three years of a reform initiative getting their teachers and principals comfortable with the necessity and helpfulness of performance data for managing and improving their behaviors and actions, there would be a lot less resistance—and a lot less jibber-jabber—when districts and states eventually start applying the data to high-stakes performance evaluations and merit pay. (DA)

Conflation Frustration

Posted in Human Capital Management on October 11, 2010 by Julio

Data-driven does not mean value-added metrics (VAM).  And VAM is not the same thing as merit pay.

The last few weeks have been an exciting time for K-12.  There’s all the media attention being showered on Waiting for Superman, as well as the release of the Nashville merit pay study.  Unfortunately, I’ve detected a consistent conflation of different concepts (data-driven decision-making, value-added metrics, and merit pay) that threatens to undermine the education and reform communities’ ability to make steady improvements to instructional and human capital operations.

As part of a very compelling and critical review of Waiting for Superman, Dana Goldstein quotes LA teacher and social justice unionist Alex Caputo-Pearl.  In the process of advancing an exciting vision of what teachers unions could be in the future, Caputo-Pearl says:

“Data! There’s a good term out there,” he says with a laugh. “There are all sorts of problems with standardized tests, but that doesn’t mean you don’t look at them as one small tool to inform instruction. You do. The problem with value-added, on top of its severe lack of reliability and validity, is that if you use it in a high-stakes way where teachers are constantly thinking about it in relationship to their evaluations, you will smother a lot of the beautiful instincts that drive the inside of a school, with teachers talking to each other, collaborating and teaming up to support students.”

What I find distressing about his quote is the conflation of the different data concepts, a pattern that I see often in the current discussion about the direction of education reform.  Just look at the comments section of any online piece about the Nashville study and you’ll get a sense of what I mean.

This conflation is very problematic.  Let’s return to the Caputo-Pearl quote to examine why.

For starters, it’s unclear what exactly Caputo-Pearl means by “data.”  Data and data-driven decision-making in K-12 are about a lot more than Bill Sanders and his value-added methodology.  At the most basic level, it’s about helping teachers make good instructional decisions by providing them assessment data, either in printouts or through a web portal.  At most districts, this is the real labor-intensive “data” work: teasing out insights from assessments to improve instruction.  It is also an area that few (if any) districts master at every campus under their jurisdiction. Such data-driven work is distinct from VAM or merit pay, and quite important to core instruction.

Further, Caputo-Pearl prefers a limited use of VAM in high-stakes decisions, including evaluation. This is perhaps the strangest forced dichotomy in media coverage, and I was annoyed that someone with Goldstein’s reporting chops didn’t press him on what proportion he favors.

In the real world, every practitioner I have ever talked to realizes that it’s not 100% either/or but rather a share of each. For example, in real-world districts with VAM, there is usually a (disjointed) mix of quantitative and qualitative information that goes into teacher evaluation. And teacher evaluation is just one of the high-stakes decisions.  VAM information can be helpful in targeting teacher recruitment, in creating optimal mixes of teachers in schedules, and in uncovering the impact of specific professional development.  I thought the national discussion had moved on to figuring out the optimal mix depending on local contexts, but in the national coverage, such as NBC’s Education Nation, it is still made to sound like a binary that it simply is not when actually implemented.

Moreover, the idea that there is some robot-like district that gets a VAM score for teachers and immediately fires up a well-oiled performance-based termination machine is incorrect.  There is no such district.  In real-world districts, there are countless veto points to teacher termination even after an improvement plan fails.  Some veto points are good (parent support, campus support) and some bad (nepotism, patronage).  VAMs are useful in that mix to complement other information. It’s not deterministic now and is unlikely ever to be.  It’s not perfect, but it is valuable, and the criticisms that can be made of VAM apply even more so to subjective observation.  Anybody who’s ever tried to help a district create an instructional observation tool that can be database-backed knows the havoc that poor inter-rater reliability, social influence, and unfriendly interfaces can wreak on even the most concrete and well-thought-out qualitative tools.
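The inter-rater reliability problem in that last point is measurable. Below is a minimal sketch of Cohen’s kappa, a standard chance-corrected agreement statistic; the ratings are invented for illustration, not drawn from any real observation tool.

```python
# Cohen's kappa: agreement between two raters, corrected for chance.
# Ratings below are hypothetical scores on a 1-4 observation rubric.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Return Cohen's kappa for two equal-length lists of categorical ratings."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items the raters scored identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal rating frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two observers score the same ten lessons; they agree on six of them.
obs_1 = [3, 3, 4, 2, 3, 3, 4, 3, 2, 3]
obs_2 = [3, 4, 4, 2, 2, 3, 3, 3, 3, 3]
print(f"kappa = {cohens_kappa(obs_1, obs_2):.2f}")
```

Raw agreement here is 60 percent, but kappa is only about 0.29 once chance agreement is stripped out, which is why a rubric that looks fine in piloting can still be unreliable at scale.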

Finally, let’s move to the high-stakes decision that is most often conflated with the push for more data-driven decision-making and the creation of VAMs: merit pay.  For starters, any discussion that sticks with the monolithic idea of “merit pay” is useless.  In the real world, there are very specific and greatly varied compensation arrangements and implementations.  Scholarly work always points this out, but this doesn’t seem to filter through to the larger public discussion.  There is no one merit pay scheme.  And differences in their design and implementation matter.

Critics of data-driven decision-making, value-added metrics, and merit pay not only conflate them, but often try to tar them with the aura of corporate irresponsibility.  For example, in The Death and Life of the Great American School System, high-visibility reform critic Diane Ravitch often lumps the aforementioned ideas together under the banner of “business,” or “free-market” or “privatization.”  Goldstein herself refers to the reform movement as “free-market”.

The problem is that the conflation is about normative judgments, not about advancing knowledge. Normative language and conflation are shutting down conversations that can yield incremental, steady improvements that are win-wins for administrators, teachers, and advocates.

For example, we need to get better at helping school districts scale support for teachers in understanding and mining assessment data.  We need to replace the black-box model of VAM calculation with open-source, crowd-verified models that are the result of multi-district consortiums.  Critics of VAM and merit pay raise many great points; a lot of their criticism is useful feedback in determining the optimal way of calculating value-added.  We also need a national focus on data quality, both for the quantitative data that fuels VAMs and for the qualitative data behind classroom observations.  And we need to continue trying to determine the optimal components of compensation policies to improve teacher quality.

But we can’t have those conversations if we use polarizing language that reifies binary thinking. What we should focus on instead is learning from the evidence and having empathy for each other.  Simplistic statements like “charters are better” or “merit pay doesn’t work” tell us more about the ideological blinders of those making them than about how we can use the expanding body of knowledge to optimize our school systems for student outcomes. (JG)

BEHAVE!

Posted in Human Capital Management on October 1, 2010 by updconsulting

Anyone who is involved in establishing pay-for-performance compensation models for teachers and principals should spend a little time in advance reading up on Dan Ariely (The Upside of Irrationality), Daniel Pink (Drive) and other behavioral economists before embarking on a pay system based on the conventional wisdom about what motivates people. Unfortunately, most people, including several prominent superintendents, believe that money—in the form of better pay and performance bonuses—is the key to attracting higher quality teachers to the profession and motivating them to perform better.

I do believe that higher pay would attract more people to teaching, though the relatively low pay for teachers compared to other professions is probably not as big a barrier to a better teacher talent pool as the filter of requiring a degree from a teachers college. (But that’s a topic for another blog.) Yet the premise that, once someone decides to become a teacher, we still need to provide some sort of bonus structure to ensure that they bring their “A” game to the classroom is flawed for many reasons; I’ll just tackle two of them.

The first has to do with why people get into the teaching profession in the first place, and it is not to make a lot of money. Ask any teacher why he or she became a teacher and the answer is typically about being inspired by a favorite teacher they had, wanting to give back to the community, an intellectual fascination with a particular subject area, or a desire to work with children and help them learn. If you got into teaching for any of these reasons or their many variations, you don’t turn it off because you’re not paid enough. There are deeper drivers at play, and if we don’t pay attention to the intrinsic motivations our teachers bring with them, we could actually do more harm than good when we set up pay-for-performance systems. As noted in Pink’s book Drive:

“Careful consideration of reward effects reported in 128 experiments lead to the conclusion that tangible rewards tend to have a substantially negative effect on intrinsic motivation. When institutions…focus on the short-term and opt for controlling people’s behavior, they do considerable long-term damage.”

So, if you inadvertently chip away at the intrinsic rewards teachers get from teaching—which is the main reason they enter the profession in the first place—how is that likely to impact classroom outcomes? (That’s a rhetorical question, in case you were wondering.)

The second flaw in the pay-for-performance premise has to do with how bonuses linked to high-stakes outcomes might negatively affect a teacher’s performance. In The Upside of Irrationality, Ariely describes several experiments that got at this issue. His conclusion is that moderate and high bonuses work well for tasks that are mundane, require little creativity or problem-solving, and are largely within one’s control. So, an incentive for a professional basketball player to make a higher percentage of free throws might work well, as might a bonus for higher productivity on an assembly line. But those types of tasks don’t come close to relating to what a teacher does in the classroom. And for tasks that require innovation, creativity and problem-solving, moderate and high level incentives actually make performance drop. As Ariely notes:

“[W]hen the incentive level is very high, it can command too much attention and thereby distract the person’s mind with thoughts about the reward. This can create stress and ultimately reduce the level of performance.”

None of this is to say that we shouldn’t link pay and bonuses for teachers and principals to performance. No one who has ever worked with us can think that we don’t support such accountability. But there needs to be much more nuance in setting them up than is occurring in most states and districts that are trying it. And what is likely to happen when they fail is that, by association, all pay-for-performance models will be tainted by their failure. (DA)