DAN LALOR on "The future of radical moderation" in Radical Moderate.
Posted on 10 Nov 2025
By Dan Lalor
We need leaders who are prepared to take a brave, but moderate approach when it comes to navigating the risks and rewards offered by AI, says DAN LALOR, Director of Client Success at Home Made Digital.
I still remember vividly the first time I tried an AI image generator. It was early 2023 and Twitter was humming with excitement about the latest release of a tool called MidJourney. I was initially sceptical. "Surely AI can't produce anything genuinely useful for my day-to-day work," I thought.
As a fan of AI artists such as Claire Silver and Orkhan Mammadov, I had been exposed to AI-generated creations earlier than most and was certainly open-minded about the technology's use cases. So a tool like MidJourney, which I might be able to use myself without any design skills, was enticing despite my scepticism.

Prompting was easy enough; however, every image just seemed "off" in some way. We were deep in the "uncanny valley" of outputs, where every human had seven fingers, faces blurred into backgrounds, and prompts didn't quite translate into the finer details.
I felt validated. "This is cool but isn't replacing designers any time soon." I cancelled the MidJourney subscription and shut the laptop. Many people did the same and stopped there.
For me, though, each MidJourney update, alongside the launch of large language models (LLMs) such as ChatGPT and Claude, had me resubscribing and repeating the same testing process, this time with copywriting task prompts included as well:
Prompt a few things, be unhappy with the output quality, shut the laptop.
Each time I came away a little more deflated by the gap between the hype and the reality – but always with the nagging awareness that this was the worst the technology would ever be, given how fast it was improving.
This process continued over the following years until, recently, the laptop didn't shut. I was suddenly in awe of the output. It was harder to tell that the people in the images were AI-generated. The copywriting was more contextual and refined. Details sharpened. My prompts translated with surprising accuracy. I felt more at the wheel of it all.
"AI isn’t coming; it has firmly arrived."
My most tangible experience with AI had been in this creative space, but the technology was already transforming many other fields too: from medical diagnosis and drug discovery in healthcare, to contract reviews and policy simulations in law; from fraud-detection tools in finance, to personalised learning plans and automated grading in education.
It was at that moment, when my own tangible experience with AI had improved so greatly in such a short amount of time, that the future became clearer to me. We are living through a period that may prove as transformative as the Industrial Revolution – a time when the nature of work, value, and human contribution was irrevocably changed.
AI isn’t coming; it has firmly arrived.
Like the steam engine before it, AI is poised to reshape industries, redefine professions, and rewire our relationship with labour and creativity. Yet, many people, workplaces, and entire sectors – the charity sector included – remain fundamentally unprepared.

In my professional life working with nonprofits, this technological shift is landing at a time when charity fundraising is getting tougher each year. Recent benchmarking data shows donor numbers remain flat, with overall revenue for social impact much the same.
The sector seems fixated on the looming intergenerational transfer of wealth – and what it means to be "ready" for it – but little attention is being paid to how emerging technologies like AI will interface with donors, reshape fundraising teams, and surface new risks and opportunities to consider.
In this context of stagnant donor numbers and an intensified focus on building lasting relationships, the question becomes: what role will AI play in finding and engaging donors, and who sets the ethical guardrails for its use?
The temptation, of course, is to rush in. When budgets are tight, it’s easy to see a new tool like AI as a silver bullet. And to be fair, the appeal is real. One of the most enticing promises of AI in fundraising is its ability to target with near-surgical precision. Predictive modelling, AI-powered donor scoring, algorithm-driven copywriting, and machine-learning-optimised messaging all offer the potential to lift conversion rates while reducing cost. Huzzah!
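For readers curious about what that looks like under the hood, here is a minimal sketch of donor scoring in Python. The data and column names are invented for illustration – this is not any real CRM schema or production system, just the shape of the idea: fit a model to past giving behaviour, then rank donors by their predicted likelihood of giving again.

```python
# A minimal, illustrative donor-scoring sketch. Synthetic data and
# invented column names, not a real CRM schema or production system.
import pandas as pd
from sklearn.linear_model import LogisticRegression

donors = pd.DataFrame({
    "gifts_last_3y":     [0, 1, 4, 2, 7, 0, 3, 5, 1, 6],
    "avg_gift_aud":      [0, 25, 60, 40, 120, 0, 55, 90, 30, 150],
    "months_since_gift": [36, 14, 2, 8, 1, 30, 5, 3, 20, 2],
    "gave_again":        [0, 0, 1, 1, 1, 0, 1, 1, 0, 1],
})

X = donors.drop(columns="gave_again")
y = donors["gave_again"]

# Fit a simple model to past behaviour, then rank donors by their
# predicted probability of giving again (a "propensity score").
model = LogisticRegression(max_iter=1000).fit(X, y)
donors["score"] = model.predict_proba(X)[:, 1]
print(donors.sort_values("score", ascending=False))
```

Even in a toy like this, the design choices (which behaviours count as features, whose data is included) are ethical decisions as much as technical ones.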
Surely these developments aren’t inherently bad? In fact, from a donor perspective, more relevant and timely communications might be a welcome improvement. The benefits alone, however, don’t guarantee responsible use of these new tools.
AI isn’t inherently unethical – but it is undeniably powerful.
And in a system optimised for metrics such as clicks, revenue and conversion, there's a risk we unintentionally reward manipulative or exclusionary strategies.
Even when donors benefit, they may not have given true informed consent. AI is often bundled into broader customer relationship management (CRM) systems without transparent opt-ins or clear privacy settings. We are left trusting that the model's training data is unbiased – an assumption that can quietly distort targeting strategies over time.
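What might a guardrail look like in practice? One option, sketched below with hypothetical field names, is a hard consent gate: no record reaches a scoring model unless an explicit, dated opt-in is on file.

```python
# A sketch of a consent gate. Column names ("marketing_opt_in",
# "consent_recorded_at") are invented for illustration.
import pandas as pd

crm = pd.DataFrame({
    "donor_id":            [101, 102, 103],
    "marketing_opt_in":    [True, False, True],
    "consent_recorded_at": ["2024-03-01", None, "2025-06-15"],
})

# Only donors with an explicit, dated opt-in may be scored or targeted.
eligible = crm[crm["marketing_opt_in"] & crm["consent_recorded_at"].notna()]
print(eligible["donor_id"].tolist())  # [101, 103]
```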
There's also the real risk of data privacy breaches, where input material containing sensitive information is absorbed into a model's training data and can later resurface for anyone to see.
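One common precaution here, shown as a rough Python sketch with simple regular expressions standing in for a proper de-identification tool, is to strip obvious personal details from any text before it is pasted into an external AI service.

```python
# A rough redaction sketch. The patterns are illustrative only and will
# miss plenty of real-world cases; a proper de-identification tool is
# the safer choice.
import re

def redact(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)  # email addresses
    text = re.sub(r"\+?\d[\d\s-]{7,}\d", "[PHONE]", text)       # phone-like numbers
    return text

note = "Donor Jane Citizen (jane@example.org, 0400 123 456) disclosed a bequest."
print(redact(note))
# Donor Jane Citizen ([EMAIL], [PHONE]) disclosed a bequest.
```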
In this context, those rushing to be radically innovative might, in fact, need to be reined in.
What's needed is a radically moderate approach. Perhaps the most ethical – and truly radical – thing a fundraising leader can do right now is to pause the use of AI, allowing time to build their own (and their team's) AI literacy and to make conscious decisions about which AI systems they will (and won't) use, and for what purpose.
At a recent fundraising conference I attended, a simple show of hands made it clear that most not-for-profits have started experimenting with AI tools in some way, without an overarching organisational AI policy, training or conversation to guide them on the risks and opportunities.
"Importantly, we need leaders who are neither tech evangelists nor technophobes. Leaders who take a considered, moderate approach, open to both the promise and the pitfalls. Not a middle ground, but a higher ground."
In a sector built on public trust, our obligation is clear: every fundraising activity, AI-powered or not, must be conducted with honesty, dignity, and transparency. Supporting your team’s education in this new technology starts to provide the appropriate guardrails for its use.
Simply put, understanding how AI tools work is no longer optional. It is a core aspect of professional due diligence. Leaders must ensure that their use of data-driven targeting and content generation upholds the values enshrined in the Fundraising Institute Australia (FIA) Code: dignity, respect for privacy, and a clear, informed choice for donors. AI doesn't absolve responsibility; it amplifies the need for thoughtful governance.
Good governance within our own organisations, though, is just the beginning.
The choices we make as a sector ripple outward, impacting how the public experiences giving and how trust is built or eroded. That’s why ethical AI in fundraising is not just a sector issue – it’s a public trust issue. Governments, tech companies, philanthropists, donors and society generally, all have a stake in shaping the norms that will define this new era.
What we need are open standards, shared frameworks, and deep transparency across the tools we use.
Importantly, we need leaders who are neither tech evangelists nor technophobes. Leaders who take a considered, moderate approach, open to both the promise and the pitfalls. Not a middle ground, but a higher ground.
A space where we embrace the tools of the future that help inspire people to give, without ever losing touch with the raw humanity that giving represents.
Dan Lalor has more than 15 years' experience in the not-for-profit industry, in various leadership roles with some of Queensland and Australia’s leading charities. He is committed to the advancement of the fundraising profession and sits on several non-profit Boards. The views expressed in the article are Dan’s alone.