Category Archives: nonprofit

Evaluating Complex Initiatives

Srik Gopal, who co-leads the Strategic Learning and Evaluation practice at FSG, a social sector consulting firm, recently blogged at SSIR about evaluating complex social initiatives. He discusses two funders of large-scale, complex initiatives and their attempts to change the way they evaluate success, because as he says:

They are building on the recognition that a traditional approach to evaluation—assessing specific effects of a defined program according to a set of pre-determined outcomes, often in a way that connects those outcomes back to the initiative—is increasingly falling short.

He continues, stating that because complex systems are always changing, evaluation tools need to be “adaptive, flexible and iterative.” I’m a big fan of the idea that we need to get beyond simplistic “cause and effect” models of evaluation, which often ignore context. This is particularly true when dealing with complex initiatives and with initiatives launched in unstable, changing, or multiple environments – it’s important to go beyond whether something works in each location and get into why it does or doesn’t work.

The blog post gives a good feel for the work FSG has done to recognize this complexity and deploy tools that capture the needed information – evaluations that can adapt to changing circumstances and capture not just outcomes but also relationships and system dynamics. It includes a chart showing 3 of their 9 propositions for evaluating complexity, alongside what those propositions mean in the real world and some existing tools that can be used to capture that information.

However, the blog post doesn’t go beyond giving that flavor. It’s worth clicking the link offered and going through the free signup process to see the full 30+ page report. The report includes all 9 propositions, both in snapshot form (page 5) and with full descriptions and case studies. There is also a chart similar to the one in the blog post on pp. 31-32, showing each proposition alongside a brief description and some helpful evaluation tools/methods. Beyond the charts and descriptions, there are also 3 case studies.

This looks to be a great resource for those designing a thoughtful evaluation process for complex initiatives, as well as a way to rethink what organizations may hope to learn and capture in even simpler evaluation practices.

The evolution of strategic thinking in nonprofits

Jed Emerson’s Twitter feed (@blendedvalue) pointed me to this SSIR article by Barbara Kibbe entitled Five Things Strategy Isn’t, which Emerson describes in the comments as “just a really nice framing of how we got here and where we’re headed.”

The first half of the article is just that – a great summary of how “the dynamic duo of strategy and evaluation” has evolved from flip charts that simply record discussions, to logic models, to SROI and beyond. I’m oversimplifying what Kibbe herself calls an oversimplification, but that’s because it’s worth reading the article itself if you’re not familiar with the evolution of evaluation processes, tools and thinking from the 80s to now. My own nonprofit journey didn’t start until the 90s, but working in small local nonprofits, we certainly used 80s tools and thinking in the 90s (and sometimes still do today).

The second half of the article opens by saying that the current debate over the value of strategic philanthropy is healthy, but in order to have that debate we should be careful in defining our terms. And Kibbe starts that definitional discussion by pointing out five times when what we call strategic thinking isn’t actually strategic:

  • when it’s fixed – good strategy is never fixed, nor is it a single tool (or a pair of tools). Kibbe quotes Rosabeth Moss Kanter of HBS: “Strategy is a lot like improvisation—setting themes, destinations, directions, and then improvising around those themes.”
  • when it’s insulated – context is key, and if strategy and evaluation are not considered together then both will suffer
  • when it doesn’t consider people – strategy needs to be flexible enough to deal with the complexities of human beings
  • when it’s old, hidden or boring – strategy needs to be current, compelling and shared in order for others to understand it and buy in
  • when we are too attached to it – here Kibbe quotes Independent Sector founder John Gardner saying “Philanthropy is the only source of truly flexible capital for the social good,” then follows up with how important it is for foundations to listen to and support good ideas from the field.

Kibbe closes with her hopes for the future, including this quote that I really liked:

When we look strategy (and evaluation) in the eye, we will see a useful and evolving suite of tools—no more, no less. Practiced well, and in tandem, they will continue to be powerful aids for decision-making but never substitutes for judgment.


Performance Measurement vs Impact Evaluation

Bridgespan’s e-newsletter pointed me to an article in the Nonprofit Times on performance measurement. It starts by saying that we often treat performance measurement like a math test, looking to see if we got the right or wrong answer, when we should be treating it like an essay, with multiple drafts each working toward improvement. As the authors say,

the primary question is not “are we doing it right?” but rather “is this useful?”

The article goes on to discuss the difference between performance measurement (a managerial tool for continuous improvement) and impact evaluation (an effectiveness tool most often used by external funders):

Evaluation grows out of social science, an academic discipline with peer-reviewed methodologies. Specifically, impact evaluation often seeks to attribute causation (e.g., ‘did this specific program cause that outcome?’), and often is intended for an audience beyond the nonprofit program being evaluated (policy-makers, funders, academics, practitioners more broadly).  Impact evaluations tend to make more sense for well-established program models, not experiments and start-ups.


Performance Measurement, in contrast, is a management discipline, closely related to continuous improvement and organizational learning. It seeks rapid, incremental improvements in programs and their execution, and thereby outcomes for participants.

The article goes on to list some examples of different types of performance measurement, along with some concrete steps for moving forward (including logic model development and a pilot evaluation process). The tools themselves were familiar to me, but the framing was not, and I found it powerful. I particularly liked the closing sentence: “performance measurement implies a burden of action—not a burden of proof—to learn and improve.”

The article mentions a couple of additional resources that look at performance measurement and impact evaluation: “Measurement as Learning” by Jeri Eckhart-Queenan and Matt Forti, and Working Hard & Working Well by David Hunter. The latter is listed as a companion to Mario Morino’s Leap of Reason, which has been sitting on my office bookshelf unread for too long.

Nonprofit Seeks Acquirer

I’ve been involved in a number of nonprofit mergers and acquisitions (and potential ones that did not happen) at the board and executive staff level. It’s always an interesting process, with a great deal of discussion around why the prospective parties would or would not fit together, potential synergies, etc. However, this is the first time I’ve seen a public call go out from a nonprofit/project seeking a sponsor. Usually, a project seeking sponsorship has a few larger organizations in mind and approaches those folks individually and quietly. Is this more public approach by Social Actions indicative of the social web’s tendency toward openness, transparency and inclusiveness? A recognition that casting a wider net might bring unexpected partners to the table? Simply a commitment to using some of the tools that the organization helps others to utilize? Or just the founders’ unfamiliarity with “how things are normally done” allowing for fresh thinking on how to do them?

Whatever the answers (most likely a combination of all of the above), the slide show that they’ve put together (below) is great – clear and concise, giving both the “what’s-in-it-for-me” for the future fiscal sponsor and the “what-we’re-looking-for” from the organization itself. I’ll be very interested to see how this plays out.

Money CAN buy happiness…

…if it is given away.  At least according to research by HBS professor Michael I. Norton and colleagues Elizabeth W. Dunn and Lara B. Aknin, published in the journal Science.

How money is spent seems to influence personal happiness more than how much money is made.  Great news for charitable organizations, and perhaps a reason for social entrepreneurs to rethink language about social impact investments as opposed to charitable gifts.