
Evidence +/vs Innovation

Paul Carttar has an interesting post up over at the Bridgespan Group’s blog entitled Evidence and Innovation – Friend or Foe?

Carttar frames the discussion with an anecdote:

…during a recent discussion about what makes a nonprofit organization “high-performance.” One participant nominated innovation as a critical factor. To my astonishment, this stirred an impassioned dissent from another participant, a recognized and vocal proponent of evidence and accountability, who argued that in the nonprofit world the word “innovation” typically implies the generation of exciting new ideas, apparently free of any bothersome, killjoy demands for validation of merit.

Carttar talks about how this is nothing new – that during his time running the White House Social Innovation Fund, he often heard complaints that evaluation stifles innovation. And I’ve certainly seen numerous innovative approaches shut down or left un(der)funded because they’re not “evidence-based” – but Carttar makes two important distinctions: 1) Innovation is less about “something new” and more about “something better,” and 2) “hard evidence of relative performance is the most legitimate, productive way to determine what actually is better.”

Carttar then goes on to discuss the varying types of “hard evidence,” clearly stating that not all types are appropriate for all efforts. He makes the crucial distinction between startup and mid-stage enterprises, and what type of evaluation and “evidence” makes sense for each.

At its best, evidence serves as innovation’s good friend by stimulating continued improvement and providing potential beneficiaries, funders and other stakeholders an objective basis for determining whom to turn to and to support. In this way, evidence can not only “cull the herd” but actually propel the growth and scaling of the best innovations, enabling them over time to become the prevailing practice. In fact, that’s the hopeful theory underlying the SIF.

To be sure, there are plenty of opportunities for conflict between evidence and innovation, which must be diligently managed. Potential funders may demand unrealistically rigorous standards of evidence to assess relatively immature, still-evolving programs—potentially stifling the development of promising solutions. Ill-timed, poorly executed, or inaccurately interpreted evaluation studies can also prematurely choke off development. Or backers of a program with a robust empirical basis may hesitate to invest in further improvements (that is, continued innovation) for fear of undermining the program’s evidentiary support and perceived competitive advantage.

The discussion continues in the comments, and is worth reading for its thoughtfulness and appreciation of nuance.

Performance Measurement vs Impact Evaluation

Bridgespan’s e-newsletter pointed me to an article in the Nonprofit Times on performance measurement. It starts by saying that we often treat performance measurement like a math test – checking whether we got the right or wrong answer – when we should treat it like an essay, with multiple drafts, each working toward improvement. As the authors say,

the primary question is not “are we doing it right?”, but rather “is this useful?”

The article goes on to discuss the difference between performance measurement (a managerial tool for continuous improvement) and impact evaluation (an effectiveness tool most often used by external funders):

Evaluation grows out of social science, an academic discipline with peer-reviewed methodologies. Specifically, impact evaluation often seeks to attribute causation (e.g., ‘did this specific program cause that outcome?’), and often is intended for an audience beyond the nonprofit program being evaluated (policy-makers, funders, academics, practitioners more broadly). Impact evaluations tend to make more sense for well-established program models, not experiments and start-ups.

Performance Measurement, in contrast, is a management discipline, closely related to continuous improvement and organizational learning. It seeks rapid, incremental improvements in programs and their execution, and thereby outcomes for participants.

The article goes on to list some examples of the types of performance measurement, and some concrete steps for moving forward (including logic model development and a pilot evaluation process). The tools themselves were familiar to me, but the framing was not, and I found it powerful. I particularly liked the closing sentence: “performance measurement implies a burden of action—not a burden of proof—to learn and improve.”

The article mentioned a couple of additional resources that look at performance measurement and impact evaluation: “Measurement as Learning” by Jeri Eckhart-Queenan and Matt Forti, and Working Hard & Working Well by David Hunter. The latter is listed as a companion to Mario Morino’s Leap of Reason, which has been sitting on my office bookshelf unread for too long.