Should you trust artificial intelligence? Ethics and governance issues for not-for-profits

Posted on 21 Nov 2023

By Matthew Schulz, journalist, Institute of Community Directors Australia

DALL·E prompt: "Illustrate a humanoid robot with a blindfold"

Science fiction writer Isaac Asimov first conceived of the three laws of robotics in a 1942 short story.

I, Robot, author Isaac Asimov's seminal work.

Those fictional laws for robots proposed:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.

User error: a common thread in artificial intelligence failures

While those fictional laws were prescient and a great plot device, in reality the trouble usually comes from the humans designing the robots.

Philosopher Peter Singer pointed out the contradiction in Asimov’s laws years ago: that most computer-assisted weapons are designed specifically to hurt us.

It wasn’t long ago that Australians were horrified by the former government’s “robodebt” scheme, which used AI systems to automatically – and wrongly – issue people with crippling debt notices. In the worst cases, these led to suicides. A similar system of welfare fraud surveillance in the Netherlands was scrapped by a court ruling.

Artificial intelligence has also been linked to:

The UK-based Tech.Co is among the many watchers collating a fast-growing list of AI failures.

The common thread linking these failures is that humans are apt to use AIs recklessly if unchecked.

ICDA members know better than most that governance failings are behind many poor decisions. It was alarming, then, to see the board of OpenAI, the company behind ChatGPT, embroiled in drama after sacking – and then reinstating – its CEO this week.

Good programming won’t be enough to meet the ethical and governance dilemmas facing us in the artificial intelligence age.

{ "title": "Australian AI Ethics Principles", "description": "The eight Australian AI Ethics Principles are voluntary, and designed to ensure AI is safe, secure and reliable. By applying the principles and committing to...", "url": "https:\/\/www.youtube.com\/watch?v=TihWPgUVCKw", "type": "video", "tags": [ "video", "sharing", "camera phone", "video phone", "free", "upload" ], "feeds": [], "images": [ { "url": "https:\/\/i.ytimg.com\/vi\/TihWPgUVCKw\/hqdefault.jpg", "width": 480, "height": 360, "size": 172800, "mime": "image\/jpeg" }, { "url": "https:\/\/i.ytimg.com\/vi\/TihWPgUVCKw\/maxresdefault.jpg?sqp=-oaymwEmCIAKENAF8quKqQMa8AEB-AHUBoAC4AOKAgwIABABGGUgZShlMA8%3D&rs=AOn4CLCa6gKia6XeIPTwuVuJiSXbWExwOw", "width": 1280, "height": 720, "size": 921600, "mime": "image\/jpeg" } ], "image": "https:\/\/i.ytimg.com\/vi\/TihWPgUVCKw\/hqdefault.jpg", "imageWidth": 480, "imageHeight": 360, "code": "<iframe width=\"1920\" height=\"1080\" src=\"https:\/\/www.youtube.com\/embed\/TihWPgUVCKw?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen title=\"Australian AI Ethics Principles\"><\/iframe>", "width": 1920, "height": 1080, "aspectRatio": 56.25, "authorName": "Department of Industry, Science and Resources", "authorUrl": "https:\/\/www.youtube.com\/@IndustryGovAu", "providerIcons": [ { "url": "https:\/\/www.youtube.com\/favicon.ico", "width": 16, "height": 16, "size": 256, "mime": "image\/x-icon" }, { "url": "https:\/\/www.youtube.com\/s\/desktop\/842f5576\/img\/favicon.ico", "width": 16, "height": 16, "size": 256, "mime": "image\/x-icon" }, { "url": "https:\/\/www.youtube.com\/s\/desktop\/842f5576\/img\/favicon_32x32.png", "width": 32, "height": 32, "size": 1024, "mime": "image\/png" }, { "url": "https:\/\/www.youtube.com\/s\/desktop\/842f5576\/img\/favicon_48x48.png", "width": 48, "height": 48, "size": 2304, "mime": "image\/png" }, { "url": "https:\/\/www.youtube.com\/s\/desktop\/842f5576\/img\/favicon_96x96.png", "width": 96, "height": 96, "size": 9216, "mime": "image\/png" }, { "url": "https:\/\/www.youtube.com\/s\/desktop\/842f5576\/img\/favicon_144x144.png", "width": 144, "height": 144, "size": 20736, "mime": "image\/png" } ], "providerIcon": "https:\/\/www.youtube.com\/s\/desktop\/842f5576\/img\/favicon_144x144.png", "providerName": "YouTube", "providerUrl": "https:\/\/www.youtube.com\/", "publishedTime": "2021-07-05T16:13:55-07:00", "license": null }

Governments want AI boundaries, NFPs will need them

Governments have realised guardrails are needed to control the use of artificial intelligence.

Australia’s Artificial Intelligence Ethics Framework, the voluntary guide aimed at making organisations’ use of AI safe, reliable and fair, is now four years old. The principles in the framework propose AI systems should:

  • benefit individuals, society and the environment
  • respect human rights, diversity and the autonomy of individuals
  • be inclusive and accessible, and not involve or result in unfair discrimination against individuals, communities or groups
  • respect and uphold privacy rights and data protection, and ensure the security of data
  • reliably operate in accordance with their intended purpose
  • involve transparent and responsible disclosure, so people know when they are being significantly affected by AI and when an AI system is engaging with them
  • enable “contestability”, such that when an AI system [causes negative impacts] there is a timely process to allow people to challenge the use or outcomes of the system
  • be subject to accountability, including human oversight.

The federal government is now considering further action in line with responses to its discussion paper Supporting Responsible AI. Nearly 450 of those responses, including some from not-for-profits, are available to read online.

Attendees at the AI Safety Summit at Bletchley Park, including UN Secretary-General António Guterres, pose for a photo. Credit: UN/Alba García Ruiz

Just this month, 28 countries and the EU agreed to the Bletchley Declaration on AI safety, which establishes “a shared understanding of the opportunities and risks posed by frontier AI”. Countries agreed to “the urgent need to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community.”

The United Nations Secretary-General, António Guterres, also said that any principles of AI governance should be based on the UN Charter and the Universal Declaration of Human Rights.

Why ethics in AI are so important for not-for-profits

As a not-for-profit or charity leader guiding a mission for good, you have a greater responsibility than others for taking robust measures to prevent your robots from running amok.

Our Community data scientist Nathan Mifsud said not-for-profit leaders should familiarise themselves with both the capabilities and the limitations of ChatGPT and other AI models.

He warned that users could be seduced into thinking ChatGPT was more capable than it was.

“Because ChatGPT is so fluent with language, which we consider a fundamentally human characteristic, we tend to infer that it has a range of cognitive abilities, a flexible rather than fixed intelligence.”

Instead, users should remember that ChatGPT and similar tools are simply trained to predict words or images based on prompts. He said this could be “remarkably useful” for some tasks, but he stressed the need to cross-check material with trusted sources, and to be aware of ethical issues involved in AI, such as the use of poorly paid human labour to filter “toxic responses”, the threat of bias, and the domination of technology by corporations.
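
To make that point concrete, here is a toy, purely illustrative Python sketch of what "predicting the next word" means. It is emphatically not how ChatGPT is built (the example is our own simplification, not anything drawn from the sources above), but it shows the core idea: the system scores possible continuations of a prompt and picks the most likely one, with no sense of whether the output is true.

    # Toy next-word predictor, for illustration only. Real models use vast
    # training data and neural networks, but the principle is similar:
    # choose a likely continuation, not a verified fact.
    from collections import Counter, defaultdict

    training_text = "the board approved the budget . the board reviewed the policy ."
    words = training_text.split()

    # Count which word tends to follow each word in the training text.
    next_word_counts = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

    def predict_next(word):
        """Return the most frequent follower of `word`, or a placeholder."""
        followers = next_word_counts.get(word)
        return followers.most_common(1)[0][0] if followers else "<unknown>"

    print(predict_next("the"))    # -> "board", the most common follower in the toy data
    print(predict_next("board"))  # -> "approved" (seen first; ties are broken by insertion order)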

He also suggested that the introduction of these tools continued a “trajectory of automating away labour and care” in society, and that the way NFPs, especially frontline service organisations, approached AI could model how to “safeguard the human elements we want to keep” in the future.

DALL·E prompt: "Create an image inspired by a science-fiction film featuring a board meeting with a middle-aged female chairperson with cybernetic enhancements and diverse board members".

What not-for-profits can do now to act ethically with AI

UK charity support coalition Catalyst this month published a guide to the ethical use of artificial intelligence. We’ve adapted some of that guide here, under its Creative Commons licence.

The Catalyst guide urges users to consider what data they’ll be “giving” to ChatGPT and other tools, to prioritise accuracy and human checking, to weigh the impact AI will have on jobs and society, and to carefully assess the suitability of AI for any particular workflow.
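
On the first of those points, the sketch below shows one way an organisation might reduce what it "gives" to an external tool. It is our own illustration, not part of the Catalyst guide: the patterns are deliberately rough examples that strip obvious email addresses and phone numbers from text before it is pasted into a third-party AI service, and they are no substitute for checking the tool's privacy policy or for proper data governance.

    # Illustrative sketch only: remove obvious personal details from text before
    # sharing it with an external AI tool. These patterns are rough examples and
    # will not catch names, addresses or case details.
    import re

    REDACTIONS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[email removed]"),
        (re.compile(r"\b(?:\+?61|0)[\d \-]{8,12}\b"), "[phone removed]"),  # rough Australian phone pattern
    ]

    def redact(text):
        """Replace likely emails and phone numbers with placeholders."""
        for pattern, placeholder in REDACTIONS:
            text = pattern.sub(placeholder, text)
        return text

    draft = "Summarise this case note: contact jane.citizen@example.org or 0412 345 678."
    print(redact(draft))  # contact details are replaced before the text leaves the organisation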

It highlights these as key risks and limitations of AI systems:

  • data security and privacy issues
  • worsening bias and discrimination
  • fake content
  • opaque details about data collection and AI decisions
  • reputational risk from inaccurate information
  • environmental impact of the vast energy use by AI
  • job losses and other social impacts
  • regulatory and legal issues.

And Catalyst suggests asking yourself these questions:

  • How might AI help you to achieve your goals?
  • Do you understand how AI might affect your organisation?
  • Does the use of AI tools align with your mission and values?
  • How might AI affect the people you support?
  • How might you use AI for service delivery, communications, fundraising, data analysis and more?
  • What’s your organisation’s level of digital maturity?
  • What’s the potential for harm from AI for your organisation, staff and users?

It also refers to the Civic AI Handbook, aimed at digital leaders in community organisations. The handbook proposes that organisations should be open about when (and how) they are using AI, should always use a human to review AI content, and should avoid sharing any private or sensitive information with third-party AI tools without first checking their privacy policies.

The Catalyst guide also recommends a number of existing resources, such as Zoe Amar Digital’s AI Checklist, which provides scores of questions to help you audit what your organisation is already doing and how it can adapt to the AI environment.

Many of those questions align with the principles in Australia’s AI ethics framework.

The UK’s Charity Excellence Framework has also produced a guide titled Charity AI Governance and Ethics, which incorporates “limited input” from ChatGPT.

The policies and policy adjustments your organisation may need to consider include these:

Civic AI (UK) suggests organisations might also need to consider issuing public statements about their position on AI, consider their responsible use and development of generative AI, and consider adopting a policy covering written submissions from others.

Learn from others’ mistakes

Canada’s Institute on Governance this year published Towards a Considered Use of AI Technologies in Government, a literature review comprising case studies, risk analyses and summaries of best-practice approaches in governments around the world.

It provides a comprehensive analysis of some of the biggest failings and most controversial practices of governments around the world, including an investigation into Australia’s robodebt scandal.

It found that AI rollouts by governments “have had significant challenges … particularly when they have been introduced into very sensitive contexts that could impact vulnerable populations”.

It found that failures were largely attributable to technical errors, organisations ignoring the law or facing a “governance vacuum”, opaque AI systems, situations in which organisations changed policies to accommodate new tech, and “sensitive deployment contexts”.

It says its 24 case studies highlight how to use AI in ways that are “consistent with legal norms”, allow both the benefits and the harms to be tracked, and “enable public participation in oversight”.

DALL·E prompt: "Illustrate a medieval traveller, depicted as a woman, wearing a short riding hood and sitting on a horse with saddlebags, arrives at a crossing on a dark forest path".

At the crossroads: Carefully tend your AI operations

Well-regarded thought leaders writing for the Stanford Social Innovation Review in September argued that ethical adoption of AI tools requires a “framework for ongoing exploration, experimentation and growth”.

In short, the authors – Beth Kanter, Allison Fine and Philip Deng – suggested that not-for-profit leaders must:

  • be knowledgeable
  • face up to anxieties about the tech from staff and volunteers
  • remain “human-centred” in their use of AI, and pledge that AI tools should not make organisational decisions
  • use data safely
  • mitigate risks
  • use AI for the right problems, such as for repetitive and time-consuming tasks
  • pilot AI tools before deploying them at scale
  • consider how AI may lead to job changes, which may require additional training.

The authors mirror the views of community sector leaders across the world that AIs are fundamentally changing the way organisations work, and that one of the most important things that leaders can do is to implement “robust ethical and responsible use policies right now”.

If they get this right, organisations will reap many benefits while keeping their operations human.
