Guest Blog: Why ‘What’s Our Impact?’ is the Wrong Question – and a Better Question


by Caroline Fiennes, Giving Evidence

Since there are so many ways that charities and funders can use their finite resources, they must make choices: choices between competing goals (reduce homelessness or reduce child poverty), between competing approaches (reduce homelessness by providing more housing or by lobbying), and between competing target groups (provide more housing in Hull or in Halifax?).

So understanding the impact (or even potential impact[i]) of various programmes or organisations is fundamentally a comparative exercise. The important question is: ‘what do the impact and associated risk of Programme A, relative to those of Programme B, suggest about how we should allocate our limited resources?’

The normal question of ‘what is the impact of Programme A?’ is pretty unhelpful. Often this question is in principle impossible to answer[ii], but even when it isn’t, it’s not much use. Let’s suppose that Programme A is one whose impact is known: it’s an education programme whose effect shows up in children’s scores in year-end tests, and it increases those scores by 10% on average. Let’s further suppose that we know the unit cost, which is £50 per child. (By the way, this is a great deal more clarity on impact than is normally available.)

Well great, but this still doesn’t help, because the choice was never between doing Programme A and doing nothing[iii]. The choice is always between competing alternatives, so at a minimum you need data on two alternatives. The key question for impact-istas is ‘how does our impact (or that of a programme we might run/fund) compare to what our impact could be if we did something else? That is, are we taking the best approach, or are we leaving value on the table?’ So we need to work out what information we need in order to see whether we’re on the best track or whether we could improve.
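
To make that concrete, here is a minimal sketch in Python of the comparison that actually informs an allocation decision. Programme A’s figures (a 10% average score gain at £50 per child) are the ones from the example above; Programme B’s figures are entirely hypothetical, invented purely for illustration.

```python
# A minimal sketch of the comparative question. Programme A's figures come from
# the example above; Programme B's are hypothetical, invented for illustration.

programmes = {
    "Programme A": {"score_gain_pct": 10.0, "cost_per_child_gbp": 50.0},
    "Programme B": {"score_gain_pct": 6.0, "cost_per_child_gbp": 20.0},  # hypothetical
}

for name, p in programmes.items():
    # Cost of 'buying' one percentage point of test-score improvement per child.
    cost_per_point = p["cost_per_child_gbp"] / p["score_gain_pct"]
    print(f"{name}: £{cost_per_point:.2f} per percentage point of score gain per child")
```

With these partly invented numbers, Programme B buys a percentage point of improvement for £3.33 against Programme A’s £5.00. It is that comparison, not either figure on its own, which tells you whether you are leaving value on the table.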

Like most good insights, this sounds rather obvious – but you wouldn’t guess it from much of the discourse about impact.

For instance, I talk to many charities and donors who want to know what their total impact is. Suppose they knew, and let’s say that they prevented 274 people from becoming homeless, which cost £10,000 per person. So what? What does that suggest about where they could improve, which parts of their brilliant model they should share with others, and which parts of their work are superfluous (or even counterproductive) and should be dropped?

Nothing. It’s operationally useless. There is not one management decision for which those impact data are useful.

[I often wonder why people ask this question. They sometimes imply that they want to report to some external entity – a funder or board, perhaps. But either that entity is going to use the information, in which case it needs to be compared with something for the reasons discussed, or it isn’t, in which case reporting on it is pointless. Maybe I’m missing something.]

How shall I compare thee…?

The trick is comparisons – between options which are comparable(!) Do other programmes prevent homelessness among comparable people more cheaply? Can other approaches improve students’ test scores more cheaply or quickly? These are useful questions because they can feed into management decisions: they address the central question above by showing whether we’re on the best track or, if not, where we could improve. Hence ‘multi-arm studies’, which compare the effects of several programmes against each other, are common in several spheres – such as international development, health and education.

The comparison could just be between one organisation’s performance over time. But again, it’s not useful simply to know whether impact is rising or falling: rather, what’s useful is to know how a particular change which the organisation made (changing its operational model, for instance) affected impact, because this informs the decision about whether to continue with that change or to revert.
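
As a rough sketch of that over-time comparison: all of the figures below are invented for illustration, and attributing any difference to the change would need a proper evaluation design rather than a bare before/after comparison, but the useful output is how cost per person helped moved across the change, not either year’s figure on its own.

```python
# Did a specific operational change improve cost-effectiveness? All figures are
# invented for illustration; attributing the difference to the change would need
# a proper evaluation design, not just a before/after comparison.

before_change = {"people_helped": 250, "total_cost_gbp": 2_600_000}  # year before the change
after_change = {"people_helped": 274, "total_cost_gbp": 2_740_000}   # year after the change

def cost_per_person(period: dict) -> float:
    """Cost per person helped, in pounds."""
    return period["total_cost_gbp"] / period["people_helped"]

print(f"Before the change: £{cost_per_person(before_change):,.0f} per person helped")
print(f"After the change:  £{cost_per_person(after_change):,.0f} per person helped")
```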

In many spheres, much information about effects, comparative effects and comparative costs of competing approaches is available – publicly and freely – for instance from academics or other independent sources[iv]. Charities and funders looking to see whether they’re on the best track, and if/where they could improve, would do well to use it.

Take me to your ruler

The problem, of course, is that options are often not comparable. The units of improved education are totally different to those of better air quality, preserved heritage or good medical research. Hence we cannot use the same ‘ruler’ for them all: each social impact ‘ruler’ only works for a few activities. [The various attempts to squash multiple dimensions into single units – to develop a ruler for any sector – are incoherent, I think[v].]

So for an organisation which runs or funds multiple types of work, the answer to ‘what’s our impact?’ will normally include several pieces in different units.

This should be no surprise. Consider the question ‘how’s the economy doing?’ You’d expect the answer to include various quantitative measures – GDP per capita, growth, unemployment, and perhaps social mobility, inequality and even happiness – as well as some narrative assessment, much as we find in most newspapers on most days.
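
As a small illustration, with invented figures, an impact summary for a multi-programme organisation is naturally a list of measures, each carrying its own unit; the measures can sit side by side, but they cannot honestly be totalled into a single number.

```python
# A sketch of a multi-programme impact summary. All figures are invented;
# the point is the shape of the data: each outcome carries its own unit,
# so the measures can sit side by side but cannot be added into one number.

impact_report = [
    {"programme": "Tutoring",       "outcome": 10.0, "unit": "% average test-score gain"},
    {"programme": "Housing advice", "outcome": 274,  "unit": "people prevented from becoming homeless"},
    {"programme": "Clean air",      "outcome": 3.5,  "unit": "µg/m³ reduction in local PM2.5"},
]

for row in impact_report:
    print(f"{row['programme']}: {row['outcome']} {row['unit']}")

# Totalling the 'outcome' column would be meaningless (10% + 274 people has no
# unit), which is the sense in which each 'ruler' only works for a few activities.
```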

As impact-istas, we need better rulers and better ways of measuring. But measuring one line on its own shows neither whether it is longer than another nor how it could be made longer over time. Understanding impact may be possible in isolation; increasing and maximising impact is fundamentally a comparative exercise.

 

Caroline Fiennes is Director of Giving Evidence and author of “It Ain’t What You Give, It’s The Way That You Give It”, which covers these issues in more detail.

The question of how charities and funders can see whether they’re on the best track and if/where they could improve has several other aspects, which will be explored in future articles.


[i] Sometimes, charities and funders consider work whose impact is unknown because it’s innovative and/or simply hasn’t been done before. This can include some of the highest-impact options – because they may unearth some wide-reaching breakthrough finding – and often falls to charitable money, since commercial and public money may be unable to take the risks involved.

[ii] For instance, consider the work that the Wellcome Trust did to ensure that the code for the human genome is freely available to anybody. How many discoveries will that enable? How many patients will those discoveries help? How many Quality-Adjusted Life Years do those discoveries add? We’ve no idea, since the effect may be felt for the rest of human history. It’s probably not meaningful to calculate. Should the uncertainty have deterred Wellcome from funding that? Hardly: it’s probably the single highest impact use of charitable money anywhere, ever.

[iii] Weirdly, clinical trials in medicine often do compare a particular intervention with doing nothing. This is unhelpful for the reasons given here, and hence is blasted by Dr Ben Goldacre in Bad Pharma, which catalogues systemic problems in the collection and use of evidence in medicine.

[iv] J-PAL provides this for some international development areas, for instance. See http://www.povertyactionlab.org/policy-lessons/health/child-diarrhea and http://www.povertyactionlab.org/policy-lessons/cost-effectiveness

[v] This is discussed in detail in It Ain’t What You Give, Chapter 16 on Charities’ Results.

Post published: Thursday, January 17th, 2013.

4 Responses to “Guest Blog: Why ‘What’s Our Impact?’ is the Wrong Question – and a Better Question”

  1. [...] This article was first published by the Society of Impact Assessment Analysts [...]

  2. Good piece Caroline. The fundamental value of evaluation is, or at least should be, to identify opportunities for improvement. I would argue that there is value in assessing impact in monetary terms, as it does tend to be something that funders and potential funders respond to. You make the argument well, however, that the cost of entering into this process should always be taken into account, and organisations should not be caught up in the contemporary fascination with measurement for its own sake. Identifying opportunities for improvement will lead to more practical outcomes than any measurement in its own right.

  3. [...] In a guest post featured on Social Impact Analysts Association (SIAA) blog, Caroline Fiennes reviews social impact metrics and their central question, “What’s our impact?” [...]

  4. sian edwards says:

    Absolutely! So, there is value in having commonality in outcome measures across sectors, with a focus on key indicators that relate directly to the problems we have quantified to justify the need in the first place! I for one was disappointed when the Every Child Matters framework was scrapped (seemingly!), as this was a start on that process, at least for all people working with children. At a time when we are all faced with difficult decisions in the allocation of funds, perhaps we have to compromise on depth and variety in order to support a better understanding of progress on fewer indicators of wellbeing.
