Listen More, Measure Less — Uncover the Real Impact of your Non-Profit
7 questions non-profits must ask the people they serve
Imagine judging the success of a new restaurant without talking to people who’ve eaten there. You could look at the menu, speak to staff and food critics, analyse the accounts, and draw on a host of other sources. But without the views of real customers, your assessment would miss all the juicy details: Does the food tingle diners’ tastebuds? Was it a memorable experience? Will they come again? Without this information, and the actions that follow from it, the business is unlikely to survive.
Now suppose this restaurant does not operate for profit. It’s a non-profit that provides free meals to the homeless. Should it be any less concerned about the views and experiences of the people who eat there? No. And here’s why.
In fact, the absence of a financial transaction should compel this non-profit to build deep engagement with the people receiving those meals. Only then can the organisation truly understand its own impact.
Honest feedback is the most valuable thing non-paying customers can give non-profits. To be worthy of it, organisations must ask sincerely, put aside their egos, and take feedback to heart by translating it into action.
The value of gathering direct feedback is well recognised by non-profits but not widely practised. A recent survey of 1,986 US non-profit organisations found that about half elicit feedback only sporadically or seldom. When asked why, 65% cited time and resources as the key limitation, and 10% said it was too complicated.
The irony is that eliciting feedback can be relatively quick and cost-effective. The rise of mobile phones and other remote technologies makes integrating participant views into day-to-day programme operations a real possibility. SMS, interactive voice response (IVR), and call centres enable continuous and cost-effective data collection across diverse contexts and geographies. Despite this, many non-profit programs continue to rely on easy-to-count “take-up” or “reach” data to gauge how useful people are finding their product or service. In the example above, that might be the number of people served, or the nutritional content of meals. While such metrics are important, using them to demonstrate the difference the service is making in people’s lives is misleading. For one, we don’t know whether other alternatives are available to people. Nor can we tell how queuing up for free food makes them feel. Without knowledge of these experiences (which are not easy to count), non-profits can’t tell whether the cost of delivering a program is justified.
“Not everything that counts can be counted, and not everything that can be counted, counts” — Albert Einstein
Measuring What’s Not Easy to Count
So how do you measure what can’t be counted? The conventional, and by far most dominant, approach is to commission external evaluations, usually towards the end of a program. The perspectives and experiences of program recipients typically flow into such studies and can reveal important insights. However, these evaluations can be time-intensive, technically dense, and expensive. Most significantly, key findings often come to light too late to allow for any meaningful change.
One way to circumvent this problem is to seek feedback early on, and on an ongoing basis. Doing so helps inform and focus more rigorous studies later. Together, these findings paint a rich and textured picture of the impact of your non-profit program.
Below are 7 feedback questions I find particularly well-suited to recipients of non-profit programs. Is your organisation able to answer them?
Q1. Does the [product/service] provided by [organisation] ease a problem you experience?
The sense in asking this seemingly obvious question stems from an inherent non-profit challenge: Distinguishing passive recipients of a program from those who actually benefit from it. (Q5 addresses those who could be negatively impacted).
If most people answer “No” or “Partly”, it can only be because:
(i) The problem you set out to solve is not really (or no longer) a problem,
(ii) The problem is being experienced but not by those you are asking,
(iii) The problem is being experienced but not solved by your intervention.
The next step, of course, is to understand what combination of (i), (ii), and (iii) is at play. Only then can you get down to the work of directing your organisation’s resources to more effective and efficient use.
Note, the only answer that warrants celebration here is “Yes”. If that is the case, your program has reached the right people with an effective solution. But even then, the solution may not be sustainable, it may be causing harm elsewhere, and you will still want a more holistic understanding of how it works. That’s why we need to ask at least 6 more questions.
Q2. What alternative(s) to [product/service provided by non-profit] are available to you?
Many non-profit programs aim to improve access to something, and a frequently heard claim is that their work reaches marginalised, hard-to-reach places and populations. This question, or a variation of it such as “What were you using before […]?”, sheds light on the real size of the gap between demand for what the program provides and its supply. Other like-minded non-profits or public agencies may be operating in the same area. It’s also possible that the existence of a free alternative has crowded out private-sector operators.
This question allows you to gain insight into the wider context of your intervention. With this knowledge, a non-profit can optimise its offering by considering the entire landscape of possibilities, enabling better use of resources. The question can also reveal something about how risky or innovative your program is, assuming the need for it has already been established (refer to the answers to Q1).
Q3. How long have you been receiving [product/service provided by non-profit]?
The amount of time someone has been on the receiving end of your program can be interpreted in different ways. If a high number of respondents have only been using it for a short time, and the program has been running for some years, drop-out rates are high. This could be because the program successfully solves their problem, or because the cost of participation outweighs the benefits. Again, checking these answers against responses to Q1 can help identify which is more likely.
On the other hand, if most respondents have been part of the program for several years, this could imply that their underlying problem is not being effectively addressed.
Q4. How likely are you to recommend [product/service provided by non-profit] to a friend or colleague?
Widely used in the corporate world, this question is known as the Net Promoter Score (NPS), and is a popular way to measure customer loyalty.
People are asked to respond on a scale from 0 to 10, from not at all likely to very likely. Those answering 9 or 10 are classed as “promoters”, 7 or 8 as “passives”, and 0 to 6 as “detractors”. The percentage of detractors is then subtracted from the percentage of promoters to give the NPS (passives are ignored). A positive number indicates potential for success; a negative one points to impending failure.
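The scoring described above can be sketched in a few lines of code. This is a minimal illustration only; the function name and the sample ratings are hypothetical.

```python
def net_promoter_score(ratings):
    """Compute the Net Promoter Score from a list of 0-10 ratings.

    Promoters score 9-10, detractors 0-6; passives (7-8) count towards
    the total but are otherwise ignored. Returns the percentage of
    promoters minus the percentage of detractors.
    """
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)


# Hypothetical survey responses: 3 promoters, 2 passives, 2 detractors.
ratings = [10, 9, 8, 7, 6, 3, 10]
print(round(net_promoter_score(ratings), 1))  # more promoters than detractors, so positive
```

A survey where everyone answers 9 or 10 scores +100; one where everyone answers 0 to 6 scores -100.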
Whilst non-profits may not primarily be concerned with loyalty, they do care about how well their program is working. If a program has more promoters than detractors, it implies demand for it will grow just through word of mouth. Knowing this can help inform discussions on scalability of the intervention, which in turn help budgeting and fundraising decisions. Beyond this, NPS is a convenient way of tracking whether changes or supposed improvements to the program are actually making a difference.
A word of caution, however: NPS must be used in conjunction with other data (answers to Q6 and Q7 are essential), as the score on its own doesn’t provide actionable insight. Moreover, it can easily become a vanity metric, especially as recipients can be incentivised to become promoters. Indeed, the value of NPS only surfaces when it is used continuously, as an operational tool. NPS can nurture experimentation, learning, and improvement, but it must be used in that spirit — not as a measurement tool for M&E.
Q5. Has anything changed in your household since receiving [product/service provided by non-profit]?
Individuals operate within social structures, most commonly households. If a program has made any kind of meaningful impact, negative or positive, it’s likely to manifest in intra-household dynamics. This question therefore goes to the core of impact: it identifies whether behaviour has changed, and if so, how.
This question can also hint at the complexity of what may be considered a simple intervention. For example, a program that provides nutritional advice to women may change household decisions on crop planting, or how much money is spent on food. Provided such changes are significant and not short-lived, they will emerge naturally in answers to this question.
Q6. What do you like/appreciate most about [product/service provided by non-profit]?
Q7. How can we improve?
These last two questions give non-profits the chance to build on their successes. I’m still astounded by how often I hear sentences like “I think we’ll get better results if we just invest in X or promote Y” — assumptions that, more often than not, are incorrect. Q6 and Q7 steer organisations towards tackling issues with proven significance for end users.
Using digital tools to listen more and measure less doesn’t just work in favour of the people non-profits serve, it offers organisations themselves the chance to work smarter, faster, and cheaper.
A Side Note on the Pros and Cons of Remote Data Collection
I’ve often heard people say that information collected remotely (via SMS, phone calls, or the internet) is less reliable than information collected face to face. There’s a legitimate concern that the data may be biased, as it only captures responses from those who have access to the technology and the time to respond. There’s also a belief that people are less likely to provide false information when they have eye contact with the person asking.
Whilst there are situations in which this may be the case, it’s important to note that similar risks apply to face-to-face data collection. Collective interviews, such as focus group discussions, can profoundly influence participants’ answers: people tend to concur with the views of the most vocal or of the majority. Here too, only the views of those who have the time to attend are captured.
Apart from cost advantages, remote data collection allows questions to be answered privately and anonymously — particularly important if you’re asking sensitive questions and seeking honest feedback. Digital tools also enable better timing of data collection, for example aligning the exercise to when the information will be best recalled. Lastly, the downsides of remote data collection can be mitigated by (i) testing questions beforehand, observing how respondents react to and interpret them before rolling out a survey, and (ii) validating a sub-sample of responses with face-to-face interviews. Both of these measures can help improve and check the credibility of the data being collected.
Most methods of data collection are susceptible to bias. (See this article for a complete analysis.) Yet listening to the voices, opinions, and experiences of the people using a non-profit’s offering is an irreplaceable source of authentic data. The key is to be 100% sure the answers you receive are at least 90% correct. Get it the other way around and you’re in trouble.