Tag Archives: measurement

Evidence +/vs Innovation

Paul Carttar has an interesting post up over at the Bridgespan Group’s blog entitled Evidence and Innovation – Friend or Foe?

Carttar frames the discussion with an anecdote:

…during a recent discussion about what makes a nonprofit organization “high-performance.” One participant nominated innovation as a critical factor. To my astonishment, this stirred an impassioned dissent from another participant, a recognized and vocal proponent of evidence and accountability, who argued that in the nonprofit world the word “innovation” typically implies the generation of exciting new ideas, apparently free of any bothersome, killjoy demands for validation of merit.

Carttar talks about how this is nothing new – that during his time running the White House Social Innovation Fund, he often heard complaints that evaluation stifles innovation. And I’ve certainly seen numerous innovative approaches shut down or left un(der)funded because they’re not “evidence based” – but Carttar makes two important distinctions: 1) innovation is less about “something new” and more about “something better,” and 2) “hard evidence of relative performance is the most legitimate, productive way to determine what actually is better.”

Carttar then goes on to discuss the varying types of “hard evidence,” clearly stating that not all types are appropriate for all efforts. He makes the crucial distinction between startup and mid-stage enterprises, and what type of evaluation and “evidence” makes sense for each.

At its best, evidence serves as innovation’s good friend by stimulating continued improvement and providing potential beneficiaries, funders and other stakeholders an objective basis for determining whom to turn to and to support. In this way, evidence can not only “cull the herd” but actually propel the growth and scaling of the best innovations, enabling them over time to become the prevailing practice. In fact, that’s the hopeful theory underlying the SIF.

To be sure, there are plenty of opportunities for conflict between evidence and innovation, which must be diligently managed. Potential funders may demand unrealistically rigorous standards of evidence to assess relatively immature, still-evolving programs—potentially stifling the development of promising solutions. Ill-timed, poorly executed, or inaccurately interpreted evaluation studies can also prematurely choke off development. Or backers of a program with a robust empirical basis may hesitate to invest in further improvements (that is, continued innovation) for fear of undermining the program’s evidentiary support and perceived competitive advantage.

The discussion continues in the comments, and is worth reading for its thoughtfulness and appreciation of nuance.


GPM Calculator

In June, I blogged about Fuqua professors Rick Larrick and Jack Soll and their push to improve fuel efficiency and consumer behavior by simply changing the measurement from MPG to GPM. Today, Duke Research Advantage blogged that this work was featured in the New York Times Magazine’s “Year in Ideas” issue. They’ve also launched a new GPM calculator to find your current GPM, compare cars, or see the GPM for all 2009 cars. More information about this research, including an interactive fuel-efficiency quiz and a video of Larrick and Soll discussing their work, is available at mpgillusion.com.
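The arithmetic behind the MPG-vs-GPM switch is worth seeing directly: fuel burned over a fixed distance is linear in gallons per mile but hyperbolic in MPG, which is why equal-sized MPG gains at the low end save far more fuel than at the high end. Here’s a minimal sketch of that math (my own illustration, not the code behind Larrick and Soll’s calculator):

```python
# Sketch of the MPG-to-GPM conversion behind the "MPG illusion".
# Fuel used over a fixed distance is linear in GPM, so a 10 -> 20 MPG
# upgrade saves more gas than a 25 -> 50 MPG upgrade, despite the
# smaller absolute MPG jump.

def gpm_per_100_miles(mpg: float) -> float:
    """Convert miles per gallon to gallons per 100 miles."""
    return 100.0 / mpg

def gallons_saved(mpg_old: float, mpg_new: float, miles: float = 10_000) -> float:
    """Fuel saved over `miles` by switching from mpg_old to mpg_new."""
    return miles / mpg_old - miles / mpg_new

# Over 10,000 miles, going from 10 to 20 MPG saves 500 gallons...
print(gallons_saved(10, 20))   # 500.0
# ...while doubling 25 to 50 MPG saves only 200.
print(gallons_saved(25, 50))   # 200.0
```

In GPM terms the comparison is transparent (10 gal/100mi vs. 5, against 4 vs. 2), which is exactly the point of the reframing.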

Conversation: The Future of Social Enterprise

Harvard Business School professors V. Kasturi Rangan and Susan McDonald are hosting a conversation based on their recent paper, The Future of Social Enterprise. Click here to read a summary of their findings and join in the conversation.

The questions posed center on social sector evolution and on measuring ROI and social impact – the conversation started today and already has some interesting posts. These web forum conversations generally last only a week or two, so check it out now if you want to participate!

Marketing Metrics

Catching up on some of my HBS Working Knowledge newsletter reading, I found an interesting Q&A with Professor Gail McGovern. She discusses some of the major changes in marketing strategy over the past decade, particularly the rise of CRM. This point is especially salient:

“Indeed, popular metrics such as customer satisfaction, acquisition, and retention have turned out to be very poor indicators of customers’ true perceptions or the success of marketing activities. Often, they’re downright misleading. High overall customer satisfaction scores, for example, often mask narrow but important pain points—areas of major dissatisfaction—such as unhappiness with poor customer service or long wait times.”

She then goes on to promote the executive dashboard – a concept that seems to be all the rage lately. When our team evaluated software vendors at the Museum, a comprehensive yet user-friendly dashboard giving an overview of the Museum’s current business position was a key requirement. Of course, as with any data-driven tool, a dashboard is only as good as the data behind it (the old programming mantra: “garbage in, garbage out”). The key is knowing which measures actually drive the business and which are merely distractions. And the best way to figure that out is the lesson hammered into my team as we defeated the competition in our Marketing Strategy simulation in business school: listen to your customers.