How to improve things for your community - and your not-for-profit - by measuring outcomes
Key points addressed in this guide
- Outcomes measurement is an achievable and necessary activity for not-for-profits
- The process is now well-established and expected by many funders and government agencies
- Don’t get lost in the definitions: get started and learn along the way
- There are risks for organisations that fail to measure outcomes and rewards for those that do
- Best-practice practitioners should aim to allocate 10–20% of their program budget to outcomes measurement
- Not-for-profits should seek to appoint a dedicated outcomes specialist or a staff member with time dedicated to the role
- Resources, training and peer support are available for organisations prepared to do their homework (don't miss the list at the end of our guide!)
Not-for-profits shouldn’t be afraid of data collection and outcomes measurement, says Our Community’s chief impact officer, and the benefits are well worth the investment.
Data and evaluation specialist Jen Riley said too many organisations were “paralysed” by fear of the unfamiliar field and struggled to even get started.
Her view was that “you’ve got to start somewhere”, and that not-for-profit leaders should be confident they will be able to learn by doing.
Not-for-profits could not afford to be left behind by the global trend, she said, adding that outcomes measurement must be considered an essential operational cost.
Without good measurement, she said, not-for-profits risked losing grants, contracts, and stakeholder and donor support, and worst of all, being incapable of demonstrating they were achieving their mission.
“They need to take on the challenge, because how do they know if what they're doing is making a difference if they're not measuring outcomes that could contribute to long-term change?”
She said there was a great deal of support available to not-for-profits ready to take up the challenge, including affordable training, resources, professional networks and expert consultants ready to help organisations get to the next level.
Our Community is committed to helping organisations measure their worth, and has built on this commitment with Ms Riley’s appointment, the free book Measuring What Matters, resources and advice from the Innovation Lab, the development of tech tools such as SmartyFile, the Outcomes Engine and the social sector taxonomy CLASSIE, and help sheets including this one. The organisation has also developed strong partnerships with thought leaders in the field, such as the Social Impact Measurement Network Australia (SIMNA).
What is outcomes measurement? And how does it relate to outputs, impact and evaluation?
First, don’t be afraid of these and other terms you’ll discover in this field.
As the Community Directors Council’s Sonja Hood explained at the Institute of Community Directors’ Practical Impact conference in 2019: “We spend a lot of time tying ourselves in little knots trying to understand the difference between outputs, outcomes and impacts. I mix them up all the time. But … you know a lot more than you think you do.”
She said not-for-profits understood that good data gave them the knowledge they needed to achieve their goals.
According to the Centre for Social Impact’s The Compass: Guide to Social Impact Measurement and the Productivity Commission’s 2010 report into NFPs, which addressed evaluation, key terms you should understand include:
- inputs, the resources that go into a program
- activities, the services and “interventions” you deliver, such as running workshops, rescuing animals, counselling teens or planting trees
- outputs, the direct products or services resulting from your program or interventions
- outcomes, the changes in attitudes, values, behaviours or conditions resulting from your program or interventions
- impact, the longer-term social changes generated by a program
- evaluation, the assessment of a program’s effectiveness using one of a variety of methods.
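If it helps to see how those terms fit together, here is a minimal sketch in Python, using an entirely hypothetical tree-planting program (the names and numbers are ours, for illustration only, and are not drawn from any real program):

```python
# A hypothetical tree-planting program, expressed using the terms above.
# All details are illustrative only.
program = {
    "name": "Community Tree Planting",
    "inputs": ["$20,000 grant", "2 part-time staff", "30 volunteers"],
    "activities": ["run planting days", "train volunteers"],
    "outputs": {"trees_planted": 1500, "volunteers_trained": 30},
    "outcomes": ["improved soil health on treated sites",
                 "volunteers report stronger community connection"],
    "impact": "healthier local ecosystem over the long term",
}

for term, value in program.items():
    print(f"{term}: {value}")
```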
According to the Social Impact Measurement Network Australia (SIMNA), a membership body aimed at not-for-profits, measurement can help an organisation understand:
- how successful and effective it has been in delivering its social purpose or mission
- which aspects of its work could be improved
- whether resources could have been applied in another way to achieve its desired outcomes
- how well it is communicating its outcomes to stakeholders
- how social impact measurement can drive decision making.
Why measure outcomes? And what are the risks of not doing it?
Not-for-profits have always reported to governments, funders, supporters and their boards on their contribution to the social good, but recent decades have seen the field of measurement growing, changing, and becoming increasingly sophisticated, powered by better technology.
As Ms Riley points out, “If you’re not measuring outcomes and reporting on what is or isn’t working, you should expect negative implications for your funding or income, especially over the longer term.
“I also think it’s reckless and irresponsible to continue to spend money not knowing whether what you’re doing is making a difference.”
She said organisations had a moral and ethical obligation to their stakeholders and to funders. “It’s about transparency, and it’s about social justice.”
She said organisations were already “intuitively and constantly evaluating themselves”, and outcomes measurement helped formalise that process.
Ms Riley said a milestone for the sector’s evolution in outcomes measurement was the 2010 publication of the Productivity Commission’s landmark report Contribution of the Not-for-Profit Sector.
That report recommended that Australian governments:
- adopt a common framework for measuring the contribution of the sector
- ensure reporting and evaluation were best practice
- fund reporting and evaluation by NFPs
- establish a Centre for Community Service Effectiveness.
While those recommendations are yet to be fully implemented, the appetite for their adoption is growing, and Australian not-for-profits should be ready for the inevitable transition.
In late 2021, the Federal Government confirmed its support for good outcomes measurement with $6.7 million earmarked for an outcomes measurement initiative to further build the sector’s capacity to define, measure and communicate outcomes.
The initiative would include a website to provide a central source of “credible and practical information on outcomes measurement” as part of a wider agenda of social impact investing.
How do we get started? And which method should we use?
Ms Riley said outcomes measurement should answer the question that is essential to all not-for-profits: “Did we make a difference?”
In simple terms, outcomes measurement entails the following steps:
- Design a project using a “program logic” model or a more complex “theory of change” model
- Establish appropriate measurements and a related framework
- Collect and analyse your data
- Produce your reports.
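Put another way, those four steps can be treated as a simple checklist to track for each program. Here is a minimal sketch (purely illustrative; the step wording follows the list above):

```python
# The four broad steps above, as a simple per-program checklist.
# Purely illustrative.
steps = [
    "Design the project with a program logic (or theory of change) model",
    "Establish measurements and a measurement framework",
    "Collect and analyse the data",
    "Produce the reports",
]

completed = {step: False for step in steps}
completed[steps[0]] = True  # e.g. the logic model is done

for step, done in completed.items():
    print(f"[{'x' if done else ' '}] {step}")
```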
Ms Riley accepted that in the still-evolving outcomes field, “it's easy to get overwhelmed” by the multitude of models and methods, and the lack of standardisation.
“Often there's no absolute right or wrong, it's just about learning and giving something a go. People are not going to learn this until they try it. It often takes a couple of cycles … like any change process. You’ve got to prototype it, test it, learn, and improve on it.”
She said that when NFP leaders provide measurement data to their board, CEO or funder, they will soon know whether that data is doing what’s needed.
“They're going to look at it and say, ‘This tells me what I need to know,’ or it doesn't.”
She suggested novice organisations should keep things simple, and just “pick one approach and stick with it”.
“I would just pick a program logic framework and get comfortable working out outcomes for your project. There are lots of program logic frameworks online which look very similar,” she said.
Hold on, what’s ‘program logic’?
Our Community’s book Measuring What Matters gives a good summary.
“A logic model explains how particular activities lead to particular outcomes – if we do this, it will (we believe) result in that.
“A logic model is a useful way for you (and your funders and stakeholders) to test your assumptions about a project (or about anything, really). It provides a way for you to plot a causal chain from what you are proposing to do, all the way to your eventual goals.”
A program logic model typically uses a flowchart structure to map out desired outcomes.
Here’s how that might look in an example also taken from our book, in which you set up an anti-smoking campaign in schools (note that this example is entirely fictional and not based on evidence):
- You run an anti-smoking campaign in schools.
- And as a result: Campaign materials reach migrant schoolchildren.
- And as a result: Knowledge of smoking-related conditions among migrant schoolchildren increases.
- And as a result: Rates of smoking among migrant schoolchildren decrease.
- And as a result: Rates of smoking-related disease among these children when they grow up decrease.
- And as a result: Rates of smoking-related disease in the state decrease.
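For groups that like to keep such chains in a structured form, here is a sketch of the same fictional chain as data, with a hypothetical indicator attached to each step (the indicators are our assumptions, not part of the book’s example):

```python
# The fictional anti-smoking causal chain from above, paired with
# hypothetical indicators an organisation might use to measure each step.
causal_chain = [
    ("Campaign materials reach migrant schoolchildren",
     "number of schools distributing materials"),
    ("Knowledge of smoking-related conditions increases",
     "pre- and post-campaign quiz scores"),
    ("Rates of smoking among migrant schoolchildren decrease",
     "self-reported smoking rates in an annual survey"),
    ("Rates of smoking-related disease decrease later in life",
     "long-term public health statistics"),
]

for step, (outcome, indicator) in enumerate(causal_chain, start=1):
    print(f"Step {step}: {outcome}")
    print(f"  Possible measure: {indicator}")
```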
Know your goals before you develop your measures
Ms Riley said organisations must clearly understand their long-term aims to achieve the outcomes they seek.
Examples of long-term aims (or goals) might include improving the health of a particular cohort, getting them into jobs, or helping parents get their kids ready for school, she said.
Ms Riley said organisations could “work backwards from there” to understand the progression of short-, medium- and long-term outcomes required to reach those long-term goals.
Once those goals and outcomes are established, organisations are well placed to identify the measures that will demonstrate success in those areas, and thereby build a measurement framework.
Effectively, then, a logic model involves your organisation articulating its goals and what it wants to change, and then going back to the start and documenting the inputs – such as funding, labour, skills and other resources – that you believe will lead towards those goals.
That means thinking through and articulating how you will know you're progressing towards your goals.
This might include identifying the outputs, such as the number of work-readiness classes hosted, or the number of trees planted. Each gives you an indication of progress.
From that you might establish outcomes measures, such as the number of people who participate in a work-readiness class who report a positive experience, or who go on to get a job. Another might be a positive change in soil health following tree planting activities.
While outcomes measures can be tricky to develop, when done well, they provide the most meaningful indication of your progress towards your goals.
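To make that concrete, here is an entirely hypothetical sketch of how such outcome measures reduce to simple calculations over program records (the field names and figures below are invented for illustration):

```python
# Hypothetical participant records from a work-readiness program.
participants = [
    {"name": "A", "rated_positive": True,  "got_job": True},
    {"name": "B", "rated_positive": True,  "got_job": False},
    {"name": "C", "rated_positive": False, "got_job": True},
    {"name": "D", "rated_positive": True,  "got_job": True},
]

total = len(participants)
# Output measure: how many people the program reached.
print(f"Participants: {total}")

# Outcome measures: what changed for them.
positive = sum(p["rated_positive"] for p in participants) / total
employed = sum(p["got_job"] for p in participants) / total
print(f"Reported a positive experience: {positive:.0%}")
print(f"Went on to get a job: {employed:.0%}")
```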
Ms Riley said many organisations are collecting anecdotal evidence already and are good at marketing their work.
“Organisations love to tell stories about the positive results of their projects, and that traditionally has been the way that they demonstrate their effectiveness. But obviously they pick out the winners, because it helps generate funds and donations. A robust and holistic approach to measurement is often what is missing.”
Ms Riley advocates a “mixed methods” approach that combines a variety of qualitative methods and quantitative data to create a rigorous data set. This includes using a mix of “open” and “closed” questions in investigations to uncover unintended impacts as well as intended ones.
Qualitative information might come from surveys, interviews and polls, while quantitative information might include internal administrative data, she said.
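As a sketch of how a single survey record can capture both kinds of information, here is a minimal example pairing a “closed” rating with an “open” free-text question (the ratings and comments are invented):

```python
# A minimal mixed-methods survey record: a "closed" rating (quantitative)
# paired with an "open" free-text question (qualitative). Hypothetical data.
responses = [
    {"rating": 8, "comment": "Loved the sessions, but the venue was hard to reach."},
    {"rating": 4, "comment": "Timing clashed with school pick-up."},
    {"rating": 9, "comment": "The mentor made all the difference."},
]

average = sum(r["rating"] for r in responses) / len(responses)
print(f"Average rating (closed question): {average:.1f}/10")

print("Comments to review for unintended impacts (open question):")
for r in responses:
    print(f"  - {r['comment']}")
```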
What’s the difference between a logic model, a ‘theory of change’ and other outcomes terms?
Groups developing measurement methods are likely to come across a related methodology known as “theory of change”, but Ms Riley urged new practitioners not to be put off.
“What not-for-profits need to know is that theory of change, program logic, logic model, impact map, causal chains and many other terms are used interchangeably in the sector here and abroad.
“While this can lead to confusion, what all these labels have in common is that they are visual schematics or drawings that attempt to describe the relationships between inputs, outputs and outcomes.”
Those models are often hypothetical, and sometimes evidence-based, but all models aim to visually depict proposed activities, outcomes and achievements.
Ms Riley said that theory of change “purists” would argue that the schematic should also include a narrative that explained such things as the “meta theories” at work, such as social modelling, and should also contain causal links and other documentation that explained the assumptions involved.
"There are a lot of courses and much is written about how to formulate a program logic or theory of change, but the key is that they be developed collaboratively with stakeholders.”
The consensus at Clear Horizon (her former employer) was that there was no “magic methodology” or “one size fits all” approach to creating a theory of change or program logic model, but that it’s crucial to reach a common understanding about whichever model you adopt.
A panel of experts from Clear Horizon suggested that understanding should cover:
- what the theory of change or program logic model is going to be used for
- who needs to be engaged in building it
- the key issue or problem that the intervention aims to address
- how the desired change can occur.
“The bottom line is that for most people, a logic model and a ‘theory of change’ mean the same thing. Many evaluators can argue why they are different, but most would agree that what matters is the process of engaging people and the thinking that goes with that.”
Allocate resources and seek help for better outcomes measurement
Ms Riley said that early in the process, organisations should seek peer feedback about their selected model and methods, and if possible, secure the help of an experienced mentor to provide guidance.
“You need to bounce off others and learn from others about what's worked, and whether what you’re doing is fit for purpose,” Ms Riley said.
To build a peer group, organisations could tap into existing expertise and networks such as the Social Impact Measurement Network Australia (SIMNA) or the Australian Evaluation Society (AES).
Whatever support you might want or need, Ms Riley said all organisations must commit time and resources to outcomes measurement and should appoint someone to adopt it as a specialist responsibility.
“There’s a reason we have accountants, marketers and other specialists with deep knowledge of their field,” she said.
You’ll need someone who is “interested and excited about it … somebody in your team who has a bee in their bonnet about it” and is able to follow the process “from end to end”.
While there is a plethora of relevant software platforms available for advanced users, Ms Riley said newer entrants to the field should ignore the promises of easy software solutions.
“Consider outcomes measurement software applications later down the track once you've got some confidence about what you need as an organisation,” Ms Riley said.
On the other hand, she said, expert consultants could be worth the investment for early projects, but only if they took a collaborative approach and helped organisations design processes they could use and own in future iterations.
“They need to not just write a framework and dump it on you, but should co-build it with you, show you what they’re doing, share skills and build your capacity.”
“Of course, not every organisation can afford this, and that’s where building networks and training courses can help.”
How much should we spend on outcomes measurement?
Ms Riley said a good rule of thumb was to spend 10–20% of the cost of a project on outcomes measurement.
At the top of that range, that equates to one in five staff on a project being dedicated to proving that you’re achieving your stated objectives.
People should be thinking of this as a “necessary program cost”, she said.
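As a quick worked example of that rule of thumb (a sketch only; the project budget is invented):

```python
# Applying the suggested 10-20% rule of thumb to a hypothetical
# $250,000 project. Figures are illustrative only.
project_budget = 250_000
low, high = 0.10, 0.20

print(f"Outcomes measurement allocation: "
      f"${project_budget * low:,.0f} to ${project_budget * high:,.0f}")
# Prints: Outcomes measurement allocation: $25,000 to $50,000
```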
Ms Riley said the United Nations, global non-government organisations (NGOs), community organisations of all kinds and even multinational corporations were spending a similar amount.
She cited the efforts of companies such as Google and Meta (formerly Facebook), which are “completely driven by data”.
“They’re measuring performance, looking for efficiencies and making sure their outcomes are achieved,” she said.
In a world where “data is gold” and “knowledge is power”, she said there was an enormous opportunity for the social sector to deploy data well.
While some funders, donors and stakeholders are obsessed with minimising overheads, Ms Riley said measurement meant knowing resources were properly allocated.
“Wouldn't you rather spend eight dollars out of that 10 and know it works versus spending 10 and having no idea?”
“This overheads conversation is the wrong conversation,” she said.
How can you tell whether an evaluation is any good?
To assess whether an existing evaluation has been worthwhile, Ms Riley suggests asking the following questions:
- Were key stakeholders involved in the evaluation?
- Will the findings be used?
- Does the evaluation answer the key questions for the key stakeholders?
“When producing any outcomes or evaluation report there should be an audience with an information need.
“For example, community elders may want to know whether an early reading program is improving connection to culture and language.
“If it isn’t, that might require improvements to a program, or a different initiative entirely.”
As well as being the right kind of measure, outcomes measurements should be realistic, appropriate, applicable, timely, accessible and understandable, Ms Riley said.
Use technology to speed up your assessments
The demand for useable evaluations has meant a shift away from “big 200-page evaluation reports” and towards insight reports, dashboards, and brief information hits.
“We need data points that cut through and learning points that are useful and applicable,” Ms Riley said.
Forget about spending months on a large report and then dropping it “after everyone has left the building”. Evaluations should be more like “feedback loops” that can provide information as a program is being implemented.
Real-time feedback – using phones and simple online applications such as spreadsheets – can allow organisations to quickly change course if a program is not working, Ms Riley said.
An example might be a long-term sports program in which a lead organisation could text participants and request short responses after training, providing an easy 1–10 scale to rate their experience.
“It’s better to check in along the way rather than wait until the end of three years to find out that no one turned up, everyone hated the coach, and they felt there was a gender bias,” Ms Riley said.
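As a rough sketch of such a feedback loop (all numbers and the alert threshold below are invented for illustration), an organisation could aggregate each week’s 1–10 ratings and flag a problem as soon as the average drops:

```python
# Hypothetical weekly 1-10 ratings texted back by sports program participants.
weekly_ratings = {
    "week 1": [8, 9, 7, 8],
    "week 2": [7, 8, 6, 7],
    "week 3": [4, 5, 3, 4],  # something has gone wrong here
}

ALERT_THRESHOLD = 6  # assumption: below this average, check in with participants

for week, ratings in weekly_ratings.items():
    avg = sum(ratings) / len(ratings)
    flag = "  <-- investigate now, don't wait three years" if avg < ALERT_THRESHOLD else ""
    print(f"{week}: average {avg:.1f}/10{flag}")
```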
What can we learn from others about success and failure?
Examples of effective evaluations can be seen in the social impact measurement work of the 2021 crop of winners of the SIMNA Awards, which recognised:
- Excellence: Jointly awarded to Hireup and Uniting NSW.ACT
- Innovation: Uniting NSW.ACT for its 360-degree outcomes measurement
- Outstanding collaboration: Barang Regional Alliance for the Ngiyan Wayama Indigenous data sovereignty project
- Leading Funder: Equity Trustees for the Greater Shepparton Lighthouse Project
Ms Riley said failed projects were often more instructive than successful ones, but conceded most organisations were reluctant to share those experiences externally.
A notable exception is Engineers Without Borders in Canada, which routinely publishes a Failure Report; the Australian chapter of the organisation sometimes does the same. New Philanthropy Capital has compiled a handful of other examples.
Where can I get help?
Ms Riley said many organisations provide outcomes measurement resources and training, including her former employer Clear Horizon, which runs the Clear Horizon Academy.
She also recommended visiting the web pages of the Melbourne-based Better Evaluation, a registered charity which hosts resources and blogs and explains many of the key themes and approaches used in the field.
The Social Impact Measurement Network Australia (SIMNA) is another useful resource, with a stated purpose of fostering social impact measurement, and it encourages networks, training, consistency in method and professionalism.
As well as sponsoring recent SIMNA awards, Our Community has hosted a webinar on methods for “managing for outcomes”, a process of “defining organisational goals, rigorously measuring performance against those goals, and then continuously managing the organisation in line with those goals and measures”.
Ms Riley said while organisations naturally placed a high value on results for clients, members and supporters, they must also invest in systematic measurement, analysis and reporting of outcomes.
"There’s good reason for the adage, ‘We measure what we value and value what we measure’.
“Organisations should not be afraid to put more effort into outcomes measurement. Those that do this the best will be the most successful in achieving their mission, attracting funding and winning the plaudits they deserve.”
More information and resources
Ten questions every director needs to ask about measuring outcomes
Watch now: Free intro to outcomes webinar from Jen Riley
Free PDF book: Measuring What Matters
Webinar: Managing for outcomes (a director’s guide with the help of SIMNA)
Innovation Lab: Developing Data Capability in Your Not-for-Profit
Help sheet: Five steps to becoming impact led in the new environment
Case studies: Social impact measurement award winners
Social Value International: The Principles of Social Value
Social Ventures Australia: A Guide to Social Impact Measurement
Centre for Social Impact: Roadmap to Social Impact (A step-by-step guide to planning, measuring and communicating social impact)
Clear Horizon: Developing a theory of change – is there a right way?
Spark Strategy: How to demonstrate your impact (PDF download) | Your starting point
Conference summary: How directors can measure their impact better
Funding perspective: How are you measuring social impact?
Centre for Evidence and Implementation: Research, policy and practice expertise