THE DOS AND DON’TS OF SOCIAL IMPACT MEASUREMENT #2
INTERVIEW WITH DATA MANAGEMENT EXPERT MEGAN O’NEIL-RENAUD
Originally published on WhatAMission.com on 13th August 2017 - 9 min read
What do you think about qualitative vs. quantitative feedback and how do you harmonise the two?
I’m super quantitative, so I think people buy quantitative data much faster and more easily than they buy qualitative. So for example, disadvantaged people getting free laundry. Now they can have clean clothes and they’re not afraid to go out of their house – what does that mean? We’re back to unpacking. If I were doing the impact measurement on that, I would unpack that into dollars and cents (or whichever currency). Now that person is looking for a job, that person is going to shorten their job search by x number of months – so what does that mean for government services? And then I’d take it farther, I’d keep on unpacking it. That means better health, less burden on other systems like healthcare, education, generational poverty etc. Keep unpacking until you can’t go any further. But I know qualitative is so important. How confident did you feel coming into this, how confident did you feel going out? Do you feel better equipped to look for a job? 50% said yes. Well, that’s a great statistic. This is how you quantify feelings. But of course hearing the storytelling is what gets people. When people hear stories, the community really gets behind you.
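To make the “unpacking into dollars and cents” concrete, here is a minimal sketch of that arithmetic. Every figure below is an illustrative assumption invented for this example – the interview deliberately leaves the real numbers unspecified:

```python
# Hypothetical "unpacking" of a soft outcome (clean clothes -> faster job
# search) into a dollar figure. All values are illustrative assumptions.
monthly_support_cost = 1200.0  # assumed monthly cost of income support per person
months_saved = 3               # assumed reduction in job-search length
people_helped = 40             # assumed number of programme participants

# Estimated saving to government services if each participant's job
# search shortens by the assumed number of months.
savings = monthly_support_cost * months_saved * people_helped
print(f"Estimated savings to government services: ${savings:,.0f}")
```

The point is not the specific numbers but the structure: each soft outcome gets translated into a unit cost, a duration, and a headcount, which can then be multiplied out and defended line by line in a grant application.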
In terms of thinking about grant applications (because obviously it’s very important to demonstrate your impact) what do you think that grantmakers want to hear, see, and know when reading a grant application?
The first thing I’ll say is plain language. You don’t have to get all fancy – don’t pull out your English degree. Keep it plain; a lot of the time your grant judges are just ordinary citizens, and you don’t want to lose them. Tell the story. Absolutely start with the story if there’s an opportunity. They want to see numbers. For example, in a current project of mine we used a milestones chart which ended up being our outputs and outcomes measurements, but it started out as just our milestones. So developing that creates your project plan plus an outcomes chart at the same time. It’s really simple: we’re going to hold a workshop and 40 people are going to attend, then we’re going to hold another workshop and 10 people are going to come. So it was a really simple, laid-out project plan called a milestones chart, and then at the bottom we had five outcomes that we thought all the things in this chart would lead to. So because we’re doing this workshop, now we think more people will know about social enterprise. It was actually brilliant, I loved how it became more than just a chart.
And again, always include those bums-in-seats measurements – they want to know, because governments all the way up the chain haven’t done systems change at the top. They’re thinking about how to justify how they’re spending taxpayers’ money by proving how many people are benefitting – so, how many people went to that workshop? It’s not a great system of measurement, but that’s the way it is right now.
Keep your outcomes measurements clear and simple – don’t go too big, like ‘our impact is going to help a million people!’ Even if it’s true, don’t do it unless you have hard, solid proof to back it up. Temper your expectations.
How can the average social innovator justify the more relational work they’re doing where they’re linking networks and groups and stakeholders and then trying to scale that impact through those relationships? How can they justify the systems work they’re doing to grantmakers for example?
Collaboration is the buzzword right now. There was a big shift a couple of years ago where suddenly collaboration became everything. For grant applications, I always have a page full of organisations that I have already contacted to say ‘hey, can we collaborate?’. Most often my email says ‘you don’t have to do any work, but I want to do this and I want to tap into this resource that you have’. And that justifies you saying: we’re trying to create change and we can’t be the sole engine; we want to work with other community players because we want your grant dollars to stretch as far as they can. Collaboration is it now. It’s about breaking away from that competitiveness. I find there are so many agencies who feel like they have to compete and won’t work with another agency because they don’t want to give up control, or they think they’re going to do it better, or that they won’t get the grant if they join forces with others. This may or may not be true, but if your goal is impact then your goal should be that impact, no matter how. That’s a really hard ego piece to step away from, because this is the superhero complex social innovators have of ‘I’m going to change the world’ – instead of volunteering for an organisation that really is changing the world and/or bringing that thought or idea to them.
Do you see any space emerging for collaboration of organisations on grant applications?
Yes. Every one of the last four grants I’ve written has been for a collaborative network of organisations. I’d say go for it, as long as you articulate it, meet those milestones, and have a fairly good agreement between you and your partners. You need a lead – someone to head up the committee. Other than that, go for it.
When I’m making surveys for example for feedback, I worry about leading questions. Do you have any advice for people to help them think outside of themselves so we’re not asking biased questions in our impact measurement?
I would ask the question in multiple forms. So draft it once and keep it, and then below your first try, ask it in different ways. Try the most positive, upbeat way and then the most leading, negative way, test the other versions against those two extremes, and see how other people read it. But ultimately the question is whether you want to avoid leading questions at all. In terms of pure impact or pure research, yes, you want it neutral, but then you need multiple testing questions, and in that case you’re going to find an agency to gather proper surveys. You know, it’s 50 questions and questions 1 to 10 are just tested multiple times throughout to test the knowledge. A survey is typically ten questions in 50 forms. Everything else is a questionnaire. If you’re not a researcher and you’re looking for grant results and you’re trying to encourage behaviour change, I think a leading question is probably OK. You’re going to game the system a little tiny bit, because then you want to lead with guilt (laughs). You want to create that change by asking about it – the very act of them answering that leading question will create that behaviour to a certain extent, especially if you say ‘can I follow up with you next year?’. Then you follow up and say, hey, remember last year we chatted when you came to our workshop – what have you done since? You want them to think ‘Oh gosh!’. So when you’re in pure research you don’t want to lead, and that’s when you need a proper survey. There’s a big difference between a survey and a questionnaire, and if I’m filling out a grant I almost never say survey – I’ll always say questionnaire. I don’t want to have to create a proper research database unless I’m acting as a researcher. If I’m managing a project or programme, a questionnaire is easier, and it might actually lead to research and long-term research funding. Mixing up questionnaires and surveys is a very common mistake.
What would you say to people measuring impact who don’t see themselves as a numbers person?
Gather the qualitative. Most people aren’t numbers people, most people are qualitative. So gather the storytelling, sticky notes, dot charts. Dot charts are fantastic. We’ve had a lot of success getting people to fill out stuff because they don’t have to fill anything out, everybody loves stickers, and you just put your stickers on your favourite thing.
Then you don’t have to worry about numbers because it’s automatically quantified. Get people to stick dots on charts, and then you can say 100 people showed up to the event and 75 people (75%) said ‘this’… then you’re not a numbers person – all you have to do is count dots, and if you don’t like counting dots, find someone who does (laughs). The other option is those smiley faces that rate the cleanliness of airport toilets, for example. You have four choices: a really angry face, a ‘meh’ face, a happy face, and an overjoyed face, and people can just fill that out and there’s your ranking. You can say ‘people coming in were “meh” at the workshop and when they left they were overjoyed – that’s an x percent increase!’
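If counting dots still feels daunting, the whole exercise is a few lines of arithmetic. Here is a minimal sketch – the options and dot counts are invented for illustration, not taken from a real event:

```python
# Hypothetical dot-chart tallies from an event with 100 attendees:
# each option maps to the number of stickers (dots) placed on it.
attendees = 100
dot_counts = {"more workshops": 75, "mentoring": 15, "funding info": 10}

# Convert raw dot counts into percentages of attendees.
percentages = {option: 100 * count / attendees
               for option, count in dot_counts.items()}

# Report options from most to least popular.
for option, pct in sorted(percentages.items(), key=lambda kv: -kv[1]):
    print(f"{pct:.0f}% of attendees chose '{option}'")
```

That turns sticker counts directly into the “75 people (75%) said this” style of statement described above, with no numbers skill required beyond counting.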
Megan O’Neil-Renaud is the manager of Social Enterprise and Social Finance at Pillar Nonprofit. She helps social enterprises move beyond the idea stage and into launch, then sustainability. She leads data management for the social enterprise and finance clusters while working on the front line with some pretty incredible social entrepreneurs. You can get in touch with her via LinkedIn.