Mind the Gap

Posted by: Neil Reeder | Theme: Social Innovation & Investment

Stronger ties between academic evaluators and social innovators would hugely benefit both sides, argues Young Foundation fellow Neil Reeder in a blog entry for the Stanford Social Innovation Review.

Evaluation and assessment seem an unlikely topic for passionate debate. But as I found out in developing the recently published report “Strengthening Social Innovation in Europe: Journey to Effective Assessment and Metrics”, there are widely different views on what assessment should be for and how it should be undertaken.

When I shared some interim findings on the topic for Social Innovation Europe’s Sharing Insight, Shaping Action conference in Poland, it sparked a lively follow-up discussion. The innovators were most interested in quick, cheap methods to identify how to improve their services, gain more funding, and avoid paperwork. The policymakers wanted clear signifiers of program success or failure. The academics opted for a patient build-up of robust knowledge and were as interested in methodology as they were in implications.

Such divergences run deep. Yet imagine how much more dynamic our approach to gaining knowledge would be if we combined innovators’ responsiveness, policymakers’ broad views, and academics’ processes of scrutiny. How to achieve that was a key issue for the interviews, discussions, and case studies that fed into the final report.

There are four ways that social innovators can and should build up systematic knowledge on their performance and how to improve it:

1. Introduce the “wisdom of crowds” to spotlight the outcome metrics that are useful and cost-effective, and those that are not.

Thousands of metrics exist, but rarely is there clear signposting to the ones that are simple to use, relevant, and robust. This hampers benchmarking and learning. If social innovators could more readily see user ratings (say, “2 out of 5 for usability”) and comments (“the survey took far too long for our teens”), prospects for consensus would much improve. Libraries of indicators—such as the matrix of tools for measuring outcomes for young people commissioned by the UK’s Department for Education—are a basis for conversations, but only a start. The introduction of web 2.0 principles to metric design is long overdue.

2. Do more continuous learning on whether a scheme’s approach “clicks” and goes with the grain of how beneficiaries think and feel.

Waiting years before asking, “Did it work?” feels archaic when we have the tools to track feedback promptly. One simple but powerful approach was adopted by Nesta’s Neighbourhood Challenge. Over the course of a year, this provided funding and networking support to seventeen schemes in the UK working to reinvigorate their local community. Instead of regular performance reports back to Nesta, each month an open blog from each group set out what had been done, what had been learned, and what achievement its team was most proud of.

A more sophisticated but equally time-conscious route was taken by Cedar, a program helping children recover from domestic abuse in Scotland. Through web surveys, data monitoring, interviews with participants, peer reflection groups, and exchanges of learning, the organization achieved a strong culture of evidence to drive year-on-year improvement.

3. Shift to more formal processes of assessment and evaluation as and when you can.

Different stages of innovation have different knowledge and evaluation needs. A light-touch approach is best for the early stages, according to practitioners in our discussion in Poland. However, when projects are looking to expand, studies on the use of evidence suggest that time spent finding out a program’s true impact and its key drivers often provides valuable insights for strategy; it also bolsters funders’ confidence.

4. Press for better access to wider bodies of research and data.

Many innovators lack the time and money that corporations have to access big data, or trawl through libraries of evaluations. Instead, they need to work in partnership, either with peers or government, to bridge that shortfall. One example comes from the Netherlands, where a project in the city of Almere brings together companies, knowledge, and facilities to capture, analyze, and share data such as health risks. And the Socially Integrated City approach to renewing Germany’s North Rhine-Westphalia region is also oriented to partnerships, involving resident workshops, roundtables, self-evaluation, social context indicators, and interdisciplinary business planning.

Social innovators sometimes feel more comfortable with small-scale pilots that they can tweak using intuition. Policymakers sometimes settle for what’s simplest to report, not what’s most insightful. Academic evaluators are sometimes more comfortable collecting data, building spreadsheets, and writing slow-but-thorough reports. But ultimately all share the goal of improving understanding. Harnessing the strengths of the different groups will certainly not be easy, but I believe it can be done.

Neil Reeder is director of Head and Heart Economics, a fellow of the Young Foundation, and a researcher at the London School of Economics.

“Strengthening Social Innovation in Europe: Journey to Effective Assessment and Metrics” was written by Neil Reeder and Carmel O’Sullivan, and published by the European Commission late in 2012.

This article was originally published on 25 February 2013 on the Stanford Social Innovation Review blog. 
