Measuring impact without ideological bias

Posted on 10 Nov 2025

By Jen Riley

JEN RILEY on "Navigating complexity & building resilience" in Radical Moderate.


Adopting a Radical Moderate approach to grantmaking analysis can help us recognise bias, manage it transparently, and design more inclusive, ethical, and useful measurement systems, writes JEN RILEY, former Chief Impact Officer, SmartyGrants.

In the grantmaking world, impact measurement has become an increasingly central concern.

Funders want to know if their investments are making a difference. Boards want evidence of outcomes. Grantees want space to tell authentic stories of change.

Yet across my work at SmartyGrants, supporting hundreds of grantmakers and not-for-profit organisations, I saw how easily impact measurement can become distorted by hidden ideological biases.


At a time when public discourse is often polarised, and when data itself is increasingly politicised, distorted, or disregarded, there is a particular responsibility on those of us working in evaluation and measurement to take a balanced and inclusive approach.

We must resist the pull of both extremes: the temptation to treat data as an unquestioned arbiter of truth, and the impulse to reject evidence entirely in favour of narrative alone. The Radical Moderate path in this work is one of principled pragmatism, striving to honour diverse perspectives while building robust systems for understanding impact.

The risk is that we end up privileging what is easiest to quantify or what looks impressive, at the expense of the deeper, more complex changes that matter most.

The goal is not to eliminate bias, as that is impossible.

Rather, we should aim to recognise it, manage it, and design more balanced, inclusive, and ethical measurement systems. This is particularly important for community directors and leaders, whose decisions shape what gets valued and whose voices get heard.

Across the grantmaking and community sectors, five key ideological tensions often shape the way we measure impact:

  • accountability vs. learning
  • positivism vs. constructivism
  • external evaluator vs. community ownership
  • rigour vs. relevance
  • data-driven vs. human-centred.

These tensions have been well documented by thought leaders such as Michael Quinn Patton and Patricia Rogers. They remind us that evaluation is never neutral, and that choices about what we measure and how we interpret evidence always reflect underlying values.

Can we truly measure impact without ideological bias? In short, no. All measurement is shaped by values: what we choose to measure, how we collect evidence, and whose perspectives we privilege. However, while we cannot eliminate bias, we can strive to recognise it, manage it transparently, and design measurement systems that are more inclusive, ethical, and useful.

Based on my work with the SmartyGrants community and broader sector practice, I offer several reflections on how we can navigate this challenge in practice.

First, we must be transparent about values. Evaluation is inherently value-driven, so we should clearly state the worldviews and priorities behind what we measure. For example, a grant program might prioritise community-defined outcomes over standard KPIs. Being explicit allows funders, boards and communities to understand and question these choices, rather than accept them as neutral.

Next, we need to value multiple forms of evidence. A pluralist approach to evidence, combining numbers with stories, community insights, and reflective practice, offers a richer and more authentic picture of impact. Funders must guard against privileging quantitative evidence at the expense of other ways of knowing.

A program that builds trust and cultural safety may not shift short-term numbers but may lay the groundwork for long-term change. This perspective is supported by Thomas Schwandt’s work on practical hermeneutics, which reminds us that meaning is co-constructed and that qualitative and interpretive approaches are essential for understanding complex social change.

Crucially, we need to co-design what success looks like. Too often, grantees are asked to report against predefined funder outcomes that do not reflect community priorities.

In one grant program I reviewed, several First Nations organisations were asked to report on employment outcomes when their actual priority was cultural strengthening and language revival. Where possible, we must support grantees and affected communities to define what success looks like in their own terms.

There is a growing global movement toward decolonising evaluation, led by Indigenous and Global South scholars and practitioners such as Kataraina Pipi Wehipeihana (Aotearoa), Bagele Chilisa (Botswana), Fiona Cram (Aotearoa), and Zenda Ofir (South Africa). This movement promotes Indigenous-led and community-driven approaches and challenges dominant Western paradigms of what constitutes valid evidence and success.

If we are to take these shifts seriously, we must also embed reflexivity in the evaluation process. Evaluation should never be mechanical. We must encourage evaluators, funders, and boards to reflect on their own assumptions and biases, and to ask whose voices are heard, and whose are missing.


Across the SmartyGrants community, I saw growing interest in embedding reflexive practice into grant reporting cycles. This aligns with Donna Mertens' work on transformative evaluation, which centres reflexivity, power awareness, and the pursuit of social justice through evaluation practice.

At the same time, we must strike the right balance between rigour and usefulness. Evaluation should be fit for purpose, designed to match the decisions that need to be made and the complexity of the program. High standards of rigour are important, but not if they render findings irrelevant or unusable.

In one large government-funded program, I saw grantees producing 80-page evaluation reports to meet funder requirements, with little uptake or learning. In contrast, short, user-friendly insight reports created in partnership with grantees were read and acted upon by board members and policy teams. Patricia Rogers’ advocacy for fit-for-purpose evaluation offers a valuable lens here.

Finally, we must keep the focus ethical. It is not enough to ask whether outcomes are achieved; we must also ask how, for whom, and with what consequences. This demands a willingness to disaggregate data to examine equity impacts and to challenge models that produce positive aggregate outcomes while leaving some groups behind.

These ideas are already being explored in practice. In one national grant program, I saw grantees under pressure to report only positive outcomes. The result? Final reports were so "polished" that they read as implausibly positive, and were unusable as genuine learning tools. Separating learning-focused reflection from accountability reporting would have supported a more honest and useful process.

In another program, community organisations were asked to report against standard funder outcomes that did not align with local priorities. Following strong advocacy from grantees, the funder shifted to a dual reporting model, tracking both standard metrics and community-defined outcomes, a small but important step toward a more balanced approach.

Encouragingly, several local government partners across the SmartyGrants community are now embedding participatory workshops to co-design evaluation frameworks for their social impact grants. This process builds trust, surfaces community priorities, and leads to a richer, more meaningful understanding of impact.

Can we measure impact without ideological bias? No, and claiming otherwise would only obscure the values already shaping our work. We can, however, strive to recognise bias, manage it transparently, and design more inclusive, ethical, and useful measurement systems. Building on the work of leaders such as Michael Quinn Patton, Patricia Rogers, Kataraina Pipi Wehipeihana, Donna Mertens, Thomas Schwandt, and many others, we can move toward practices that strengthen community voice and support genuine learning: evaluation approaches that honour nuance, balance evidence with lived experience, and contribute to more just and equitable outcomes.

In doing so, we enact the very spirit of Radical Moderate leadership, holding space for nuance, fostering dialogue across differences, and building resilient organisations equipped to navigate the complexities of our time.

That is an ongoing journey, and one worth taking.

Jen Riley is the former Chief Impact Officer at SmartyGrants. She works across government, philanthropy, and the community sector to help grantmakers and social purpose organisations design more ethical, inclusive, and practical approaches to measuring impact. Jen is a longstanding advocate for pluralist evaluation and community voice.
