Guest Blog: Easy ways for philanthropic donors to see if they’re doing well

by Caroline Fiennes, Giving Evidence

Some skiers are better than others. Some singers are better than others. The same goes for teaching, nursing and curling. So it seems reasonable to suppose that some people are better at supporting charities than others.

But how do you tell? Curlers can easily see if they beat their opponents, and surgeons see whether patients live or die, but success in philanthropy is less evident. Whereas businesses get feedback immediately and constantly – unpopular or over-priced products don’t sell – donors don’t. They can’t rely on charities telling them, since charities daren’t bite the hand that feeds them. Steve Jobs cited the difficulty of knowing whether you’re giving well or badly as deterring him from giving much at all.

Happily, it is possible – and not terribly hard. Giving Evidence, a consultancy and campaign which helps donors to give well by using sound evidence, has found various tools which help almost any donor to understand their performance. They’re collated in a new white paper, and they are simple: they may even seem rather obvious, but they have proven useful to individuals, companies, and foundations that give. They are:

  • Monitoring the ‘success rate’: the proportion of your gifts which succeed, which do alright, and which fail. Though clearly the definition of success varies between grants, presumably each one is made with some purpose: this tool simply asks how many succeed in their own terms. It’s unashamedly a basic measure, but then it’s hard to argue that a funder is succeeding if barely any of its grants succeed. We’re not saying that every grant should succeed: many funders sensibly support experimental or exploratory work and, like venture capitalists, should expect some failures. But they should have some system for noticing which grants fail, to enable learning from the patterns (a simple sketch of this tally follows this list). The Shell Foundation (attached to the energy company) used this measure to triple its success rate.
  • Tracking whether ‘the patient is getting better’: whether biodiversity is increasing around the lake, or whether malaria is becoming less prevalent. This of course indicates nothing about why anything is changing, nor about the donor’s contribution. Nonetheless, it’s imperative to know if the problem is worsening – in which case we might redouble our efforts or invite other funders in – or if it has gone away. Often data from public or commercial sources shows progress on a funder’s goals.
  • Measuring the costs created for charities (and others) by the funder’s application and reporting processes. These can be huge: as a charity CEO myself, I had some grants where the donor’s processes consumed 90% of the grant given. It can be even worse: a physicist at Columbia University calculates that some grants leave his lab worse off, and we’ve heard stories of application processes which cost twice the amount eventually given. Grantees may make great progress despite a meddlesome funder. The avoidable costs from application and reporting processes in the UK alone are estimated at about £400m every year. BBC Children in Need has examined its process and found ways to make savings, and other large donors can too.
  • Hearing what your grantees think. When I ran a charity, I often saw ways that donors could be more helpful but never told them, because the stakes were too high: charities can’t risk offending people whose help they may need in future. So the learning is lost. Yet listening to grantees and beneficiaries has brought great benefits in medicine and social services – and to many philanthropic donors.
  • Lastly, clarifying what you’re learning, and telling others. Funders do publish, but mainly about their successes. ‘Publication bias’ in medicine – whereby positive stories are disproportionately likely to be shared – means that ‘the true effects of loads of prescribed medicines are essentially unknown’, according to epidemiologist Dr Ben Goldacre. Philanthropy can avoid the same fate. We’re currently working with a foundation to clarify and publish the ‘whole truth’ about how an innovative programme fared. Tales of failure and challenges, however inglorious, teach us a great deal.
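The first of these tools needs nothing more than a consistent record of each grant’s outcome and some basic arithmetic. Here is a minimal sketch in Python; the grant records and outcome labels are invented for illustration, not Giving Evidence’s actual methodology:

```python
from collections import Counter

# Hypothetical grant records: (grantee, amount in £, outcome judged
# against the grant's own stated purpose). All values are invented.
grants = [
    ("Lake Biodiversity Trust",  50_000, "succeeded"),
    ("Malaria Net Fund",        120_000, "succeeded"),
    ("Pilot Arts Lab",           20_000, "failed"),
    ("Youth Mentoring Scheme",   75_000, "did alright"),
]

tally = Counter(outcome for _, _, outcome in grants)
total = len(grants)

# The 'success rate' is simply the share of grants in each bucket.
for outcome in ("succeeded", "did alright", "failed"):
    count = tally.get(outcome, 0)
    print(f"{outcome}: {count}/{total} ({count / total:.0%})")
```

Tracked year on year, the same simple tally shows whether the success rate is moving – the measure the Shell Foundation used.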

Perhaps ‘measuring impact’ is too hard and too off-putting, and we should all instead talk about ‘understanding performance’. The tools in this white paper help with that. Giving Evidence is working with donors on several of them, and will happily talk to anybody about them.


Caroline Fiennes is director of Giving Evidence, a consultancy and campaign promoting charitable giving based on evidence. She is author of It Ain’t What You Give, It’s The Way That You Give It, and serves on the boards of The Cochrane Collaboration, the US Center for Effective Philanthropy, and Charity Navigator, the world’s largest charity ratings agency. Follow @carolinefiennes on Twitter.



Beyond Measurement

by Kate Ruff, PhD Candidate in Accounting at Schulich School of Business, York University, Canada

Echoing the views of Tris (read them here), I think we do ourselves a disservice when we frame discussions about social impact as discussions about social impact measurement.

I’ve noticed a tendency to view social impact measures like Hogwarts Sorting Hats. We talk about social impact measurement as if the resulting data (qualitative and quantitative) will unambiguously tell us whether the program was effective, where to invest or donate our money, or which strategy to deploy to reduce poverty. When measures fail to deliver unambiguous answers, we seek better measures, more standardized measures, more rigorous measures, even more expert measurers.

I wonder if this emphasis comes at a cost. What might we see if we look beyond measurement? What if we took the emphasis off measuring and turned our attention toward the difficult task of making careful, nuanced decisions based on imperfect measures? I’m looking forward to SIAA’s 2013 conference, Beyond Measurement, to start exploring how we can get better at using measures to improve social impact.


For more information about SIAA’s 2013 Annual Conference, Beyond Measurement, click here.

Guest Blog: Selecting the Appropriate Measurement System

by Simon Anderson, Social Asset Measurements

Socially oriented organizations – whether non-profits, foundations, or social purpose businesses – seek, now more than ever, to articulate how they create social change. Understanding the change that is created is important: it helps staff understand their organization’s performance and where adjustments can be made, and it enables them to showcase and promote their work to the public.

In order to develop the required assessment framework, an organization must consider its core goals and objectives. This process involves identifying the organization’s key activities and intended areas of influence. Another important stage is to determine what kind of indicators to track: input, output, or impact-oriented indicators. This is an interesting decision because there is no “one size fits all” approach; the correct framework depends on several internal factors. The following discusses this decision-making process, drawing on Social Asset Measurements’ experience working with organizations across Canada.

Given the current interest in measuring impact, organizations are often drawn to developing an impact assessment framework. While measuring impact ensures that the organization captures the change it is creating, developing and implementing a credible system demands significant internal resources. This highlights the first consideration when developing a measurement system: organizational capacity. While it would be ideal for all organizations to have dedicated staff for developing and implementing an impact assessment framework, in reality this is not always the case. Therefore, if an organization recognizes that it does not have the resources for impact assessment, an alternative is to develop a framework that captures the outputs from its programs. While this does not provide information on the efficacy of the program, it may be a better option than pursuing an impact assessment that is implemented without adequate resources.

Another key consideration is the structure of the program in question. In some cases an organization may maintain strong connections with program participants after the program is complete; if so, capturing data on outcomes that materialize in the future is possible. In other contexts, however, organizations may lose contact with participants, and while the infrastructure can be adjusted to facilitate ongoing connection, until that is achieved it may be difficult to capture outcomes that arise after the program is completed. If this is the case, indicators which capture change that occurs within the duration of the program should be selected.

The final element I will speak to is program type. This affects whether an impact analysis can be extended to a social return on investment (SROI) analysis, in which a financial value is applied to some of the indicators to enable the calculation of a benefit-cost ratio. For some programs, such as employment training or education promotion, there are well-defined financial proxies that can be used to assign a financial value to the indicators. However, in other situations, such as an after-school art program, it is more difficult to identify a financial value. In these cases it is better to focus on the impact rather than force the assessment into an SROI analysis, because using poor financial proxies taints the credibility of the assessment at large. To this end, when determining whether an SROI analysis is applicable, the nature of an organization’s programs needs to be considered.
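To make the benefit-cost arithmetic concrete, here is a minimal sketch of the calculation behind an SROI ratio. The programme, indicators, and financial proxies below are invented for illustration; as the paragraph above argues, the resulting ratio is only as credible as the proxies fed into it:

```python
# A minimal SROI-style benefit-cost calculation. The programme,
# indicators, and financial proxies are all hypothetical examples.
outcomes = {
    # indicator: (units achieved, financial proxy per unit in $)
    "participants placed in jobs":      (30, 8_000),  # e.g. wage gains
    "reduced social-assistance claims": (30, 3_500),  # e.g. public savings
}

investment = 150_000  # total programme cost in $

total_benefit = sum(units * proxy for units, proxy in outcomes.values())
sroi_ratio = total_benefit / investment

print(f"Total monetised benefit: ${total_benefit:,}")
print(f"SROI ratio: {sroi_ratio:.2f} : 1")  # benefit per $1 invested
```

For a programme like the after-school art example, there would be no defensible numbers to put in the proxy column – which is exactly why stopping at impact is the better choice in such cases.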

It is important for organizations to capture data on their work to better understand their programming and how they create change. In doing so, it is imperative to develop the system most appropriate for the organization, as this ensures the organization has the capacity and infrastructure required to use the framework, and that potential data-collection opportunities are not left unused. What are your thoughts regarding the selection of a measurement framework? What needs to be considered? What have been your experiences? It would be great to hear your comments below!



Simon is the manager for Western Canada at Social Asset Measurements (SAM), where he works with non-profits and foundations to develop and implement impact measurement frameworks. Prior to joining SAM, he worked with Human Resources and Skills Development Canada, where he focused on social finance policy issues.

Simon holds a Master of Public Policy (University of Toronto) and B.A. in International Development and Economics (University of King’s College).