Posted on 14 Aug 2025
By Nina Laitala, training lead, Institute of Community Directors Australia
Artificial intelligence (AI) has existed for decades, but generative AI models (such as ChatGPT, Gemini and Claude) have entered the community and not-for-profit (NFP) sector only in recent years – and fast. As adoption accelerates, so do the ethical risks.
Community Directors has developed an AI governance framework to help NFPs implement AI safely and ethically. However, one of the biggest challenges is not implementing AI, but thinking through its ethical implications for the organisation.

Boards play a critical role in ensuring these conversations about ethics happen early and involve all the right people, before tools are deployed or decisions are made. The following questions can help shape a strategic and ethical approach to AI in your organisation.
Could client or organisational data be exposed through AI tools?
This is a risk, especially if staff use publicly available tools without understanding how the data might be stored or reused. Organisations should set clear rules about data input and avoid entering confidential or personal information into unsecured platforms.
How will we be transparent about our use of AI?
A plain-language data and AI use statement should be developed and made available to clients, staff and partners. This should outline which tools are in use, for what purpose, and any associated risks.
What happens to the information we enter into AI tools?
Many AI tools learn from user input. If you’re not using a product with strong data protections that meets professional standards, there is a risk that sensitive or proprietary content could be stored or repurposed in ways beyond your control.

What is the environmental impact of our AI use?
Generative AI models consume large amounts of energy, particularly when hosted in cloud data centres. Boards should ask for information on environmental impacts and factor this into procurement decisions.
Are there more sustainable AI options?
Yes. These include smaller models, offline tools, and tools hosted in green data centres. Organisations can include sustainability guidelines in their AI policy.
"AI should support, not replace, human insight."
How do we weigh the benefits of AI against its environmental costs?
This requires a values-based conversation. Boards should consider the climate impacts of AI use in the context of who bears the cost and who benefits from the gains.
How will we ensure that human judgement and experience remain central to decisions involving AI?
AI should support, not replace, human insight. Staff should be trained to critically review AI-generated content and be empowered to override or question its outputs.
Are we using AI to fill gaps that should be filled by people? If so, why?
Where AI is being used to reduce staff numbers or delay hiring, boards should question whether this aligns with the organisation’s purpose and values. The use of AI should be people-centred, not simply efficiency-driven.
Are we considering how AI might exclude or disadvantage certain groups?
Bias in AI systems can result in harmful or exclusionary outcomes. Boards should expect regular reviews of how AI is affecting different communities and clients.
How will we ensure that marginalised voices are part of the conversation about AI use?
Include people with diverse lived experience in planning and governance processes. Ensure cultural safety, accessibility and equity are embedded in how AI is implemented and reviewed.
How will we ensure staff and volunteers use AI responsibly?
Training should be provided to build understanding of what AI can and cannot do, the risks involved and how to use it ethically. The organisation should enable experimentation and learning.
What if some staff are unwilling or unable to use AI tools?
Adoption should not be mandatory. Alternative workflows and support should be offered where possible, recognising that different people have different comfort levels and needs.
Will any teams or roles be disproportionately affected by AI use?
Roles involving writing, communications and admin tasks will be particularly affected. Boards should ensure these impacts are assessed and addressed through planning and consultation.
How do we introduce AI ethically without overburdening already stretched teams?
AI should be introduced gradually, with clear boundaries and a focus on reducing workload. Implementation should be monitored to ensure it supports rather than pressures staff.
Do we have an acceptable use policy or AI governance framework in place?
If not, one should be developed and approved by the board. It should define appropriate use, responsibilities, risk management and review processes.
Are there realistic and sustainable options for not using AI?
It is becoming difficult to avoid AI completely, particularly as it is embedded in everyday platforms such as Microsoft 365 and Google Workspace. If the organisation decides not to adopt AI, this should be a clear and coordinated decision that is reviewed regularly.
If government or funding bodies are using AI, do we understand how that affects us?
Boards should request transparency from partners and understand how third-party AI use could impact clients, compliance or data integrity. This should form part of partnership and risk assessments.
The ethical use of AI is not just a technical or operational matter. It is a governance issue that goes to the heart of an organisation’s values, purpose and accountability.
Boards have a critical role to play in asking the right questions, setting clear expectations, and ensuring that any use of AI supports rather than undermines the people and communities the organisation exists to serve. Starting with thoughtful, inclusive conversations and a practical framework will help your organisation navigate AI with integrity and confidence.
Community Directors offers a range of resources to support your organisation in deploying AI ethically, sustainably and effectively.