The Ethical AI Checklist for Charities and Social Enterprises

How to make sure your use of AI reflects your values, protects your communities, and strengthens trust.

AI is helping charities and social enterprises work smarter — saving time, reducing admin, and supporting service delivery.

But with great potential comes real responsibility.

In this phase of your AI journey, it’s time to go beyond what’s possible with AI and ask what’s right. That means thinking through how your organisation adopts AI in a way that’s safe, fair, transparent, and aligned with your mission.

This blog is a practical guide to doing just that — with a clear, values-led checklist you can use before adopting any new AI tool.

Why Ethics Matter More in the Third Sector

Charities and social enterprises don’t exist to maximise profit or efficiency. You exist to build trust, deliver impact, and serve people — often those facing inequality, exclusion, or risk.

That means your technology choices carry unique responsibilities.

  • Trust: You’re stewards of sensitive data and deep community relationships

  • Equity: Your mission often focuses on fairness — your tools should reflect that

  • Transparency: You’re accountable to supporters, funders, and the public

  • Care: You have a duty to avoid harm, especially to those most vulnerable

Responsible AI use isn’t just a tech issue — it’s a governance, values, and accountability issue.

The Ethical AI Checklist

Before you adopt, pilot, or scale any AI tool, use this checklist to guide team discussions, trustee decisions, and community conversations.

1. Mission Alignment

  • Does this tool clearly support your charitable purpose?

  • Are you adopting it to serve your mission — or just to “keep up”?

  • Could it unintentionally undermine trust, dignity, or equity?

⚠️ Watch out for tools that prioritise efficiency over empathy or replace meaningful human contact with automation.

2. Data Ethics and Privacy

  • What kind of data will this AI use — and do you have permission?

  • Have you considered GDPR, consent, and data minimisation?

  • Could this data be misused or misunderstood if something goes wrong?

✅ Use anonymised or aggregate data where possible. Choose providers that offer clear, compliant data handling practices. Never use sensitive personal data (e.g. safeguarding records) without robust protections in place.
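If your team does experiment with AI tools, even a lightweight redaction step can help put data minimisation into practice. The sketch below is illustrative only — the `summarise_with_ai` call is a stand-in for whichever tool your organisation has approved, and real safeguarding data needs far more rigorous controls than a couple of regular expressions.

```python
import re

def minimise_text(text: str) -> str:
    """Strip obvious personal identifiers before text is shared with an AI tool.

    A minimal illustration of data minimisation — not a substitute for proper
    anonymisation or a data protection impact assessment.
    """
    # Replace email addresses and UK-style phone numbers with placeholders.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[email removed]", text)
    text = re.sub(r"\b(?:\+44\s?|0)\d{4}[\s-]?\d{6}\b", "[phone removed]", text)
    return text

# Hypothetical example: redact a case note before asking an AI tool to summarise it.
note = "Client Jo Bloggs (jo.bloggs@example.org, 07123 456789) asked about housing support."
safe_note = minimise_text(note)
# summarise_with_ai(safe_note)  # whichever tool your organisation has approved
print(safe_note)
```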

3. Bias and Fairness

  • Could this tool make decisions that disadvantage certain groups?

  • Was the AI trained on data that reflects your communities?

  • Are marginalised voices involved in your evaluation process?

⚠️ Be cautious of AI trained mainly on data from the world's most privileged regions and groups. Even unintentional bias can reinforce inequality.

4. Transparency

  • Can you explain how this AI tool works — in plain English?

  • Are beneficiaries aware that AI is being used?

  • Can people opt out or raise concerns?

✅ Be open with supporters, users, and staff about where AI is used and why. Choose tools that offer explainability over “black box” models.

5. Human Oversight

  • Is there always a human in the loop — especially for high-impact decisions?

  • Can your team override AI outputs if needed?

  • Have you planned for failure — what happens if the AI gets it wrong?

👥 Never fully automate decisions that affect someone’s access to services, funding, or support. Use AI as a tool — not a judge.
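One way to make "human in the loop" concrete is to treat every AI output as a recommendation that a named person must sign off before it affects anyone. Here is a rough sketch of that idea, assuming a hypothetical record where the AI's suggestion is produced upstream:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    applicant: str
    ai_recommendation: str          # e.g. "approve" or "refer"
    reviewed_by: Optional[str] = None
    final_outcome: Optional[str] = None

def record_human_decision(decision: Decision, reviewer: str, outcome: str) -> Decision:
    """Require a named reviewer to set the final outcome.

    The AI suggestion is kept for the audit trail, but it never becomes the
    outcome on its own: a person can accept it, change it, or escalate.
    """
    decision.reviewed_by = reviewer
    decision.final_outcome = outcome
    return decision

# Hypothetical usage: the AI suggests "refer", and a caseworker overrides it.
case = Decision(applicant="A. Example", ai_recommendation="refer")
case = record_human_decision(case, reviewer="caseworker_1", outcome="approve")
print(case)
```

The structure matters more than the code: the AI's suggestion and the human's final decision are recorded separately, so you can always show who decided what, and why.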

6. Community Impact

  • Have you considered how this AI might affect the people you serve?

  • Could it widen digital divides or exclude certain groups?

  • Are you supporting long-term empowerment — or short-term convenience?

🌍 Involve communities early. Build tools that work for the least digitally connected, not just the most tech-literate.

Create Your Own Ethical AI Policy

You don’t need a long document — just a clear, values-based approach.

Start with:

  • Your principles: What values guide your use of AI?

  • Approved uses: What’s OK — and what’s not?

  • Oversight: Who reviews, approves, and checks AI tools?

  • Review points: How often do you check if it’s working as intended?

This keeps your organisation in control, even as the tools evolve.

Know Where to Draw the Line

Some AI uses are simply not worth the risk. For example:

  • Tools that fully automate decisions about access to services

  • Systems that monitor or track people without clear consent and purpose

  • Models you can’t explain to the people they affect

  • AI that replaces essential human care, trust, or connection

These are red lines — and your organisation deserves the clarity to say no.

Practical Steps to Get Started

This week:

  • Use this checklist to review any AI tools you already use

  • Schedule a team discussion about responsible AI adoption

  • Start drafting your organisation’s AI principles

This month:

  • Involve trustees and service users in conversations about values and risks

  • Create a simple AI ethics policy

  • Set up a basic approval and review process

This quarter:

  • Train staff on what responsible AI use means in your context

  • Revisit your existing tools and processes through an ethical lens

  • Share your learning with others in the sector

Final Word: Let Your Mission Drive Your Technology

You don’t have to be perfect — but you do have to be intentional.

Responsible AI adoption is about asking the right questions, listening to your community, and putting your values into practice. It’s about building trust as you build tools.

Because the people you serve aren’t just users of your technology — they’re the reason it exists in the first place.



We’re helping charities and social enterprises to use AI in an ethical, equitable and responsible way — one that puts mission first. Sign up to learn more.
