Getting Your Team on Board with AI: A Practical Guide for Charity Leaders

Moving from personal experimentation to team-wide confidence with AI

You've tried ChatGPT. You've seen the potential. Maybe you've even started using AI for some of your own tasks – drafting emails, summarising reports, or brainstorming ideas for your next campaign.

But when you mention AI to your team, you get a mixed response. Some people are curious but nervous. Others are sceptical or worried. A few might be secretly experimenting already, whilst others seem determined to avoid it entirely.

Sound familiar?

This is the reality for most charity leaders right now. The technology isn't the barrier anymore – it's getting your people comfortable and confident with using it.

Why Your Team's Hesitation Makes Perfect Sense

Before we dive into solutions, let's acknowledge something important: your team's caution about AI is actually quite reasonable.

Unlike corporate employees who might be motivated by competitive advantage or personal career advancement, charity staff are driven by mission, values, and genuine care for the communities they serve. They're right to ask questions like:

  • "Will this compromise the personal touch our clients expect?"

  • "How do we know this is ethical and these companies won’t misuse our data?"

  • "What if it makes mistakes with something important?"

  • "Are we just jumping on a bandwagon instead of focusing on what matters?"

These aren't barriers to overcome – they're valuable perspectives to embrace. The goal isn't to convince everyone to love AI. It's to create an environment where people feel safe to explore whether it might help them do their jobs better.

Understanding Your Team's Different Reactions

Research on technology adoption suggests that people typically fall into one of four broad groups. Understanding these patterns can help you tailor your approach:

🚀 The Enthusiasts (10-15% of your team)

These are the people who are already using AI tools, probably without telling you. They're excited about the possibilities and might be frustrated that your organisation isn't moving faster.

How to support them: Channel their energy productively. Ask them to share what they've learned, but also set clear boundaries around experimenting with sensitive data or external-facing communications.

🤔 The Cautious Optimists (30-40% of your team)

They're interested in AI but worried about doing it wrong. They need permission and guidance more than they need convincing. These people often become your strongest advocates once they feel supported.

How to support them: Provide clear guidelines and safe spaces to experiment. Pair them with enthusiasts for peer learning. Celebrate their questions and concerns as valuable contributions.

😟 The Sceptics (30-40% of your team)

They have genuine concerns about AI – ethical, practical, or mission-related. Often, they're protecting something important about your organisation's values or way of working.

How to support them: Listen to their concerns seriously. Many of their objections are valid and worth addressing. Show them how AI can strengthen rather than compromise the things they care about.

🚫 The Resisters (10-20% of your team)

They're actively opposed to AI or dismiss it as hype. Sometimes this comes from deep expertise that makes them protective of established ways of working.

How to support them: Don't force it. Focus on the willing participants first. Often, resisters become more open once they see successful, values-aligned use by their colleagues.

What Actually Works: Simple Techniques for Building Confidence

The most effective approach isn't training sessions or mandates – it's creating opportunities for voluntary experimentation with proper support.

🎯 Start with the Willing

Don't aim for organisation-wide adoption straight away. Begin with the people who are already curious. When they start sharing positive experiences, others will naturally become more interested.

💡 Try this: At your next team meeting, ask: "Who's interested in spending 15 minutes experimenting with AI tools?" Start with just those people.

🤝 Create Peer Learning Opportunities

People often learn better from colleagues than from external experts. Pair enthusiasts with cautious optimists. Let them discover together what works and what doesn't.

💡 Try this: Set up "AI buddy partnerships" where two people commit to trying one AI task together each week for a month.

πŸ›‘οΈ Make It Safe to Fail

Create an environment where people feel comfortable sharing both successes and failures. Often, the failures are more instructive than the successes.

💡 Try this: Start team meetings with a quick "AI experiment update" where people share what they tried and what they learned – good or bad.

🎯 Connect Everything to Mission

Every AI experiment should connect clearly to your organisation's mission and values. This helps sceptics see the relevance and maintains your values-first approach.

💡 Try this: Before trying any AI task, ask: "How might this help us serve our clients better or achieve our mission more effectively?"

⏰ Keep It Small and Optional

Don't create additional pressure in already busy workloads. Make AI exploration feel like a helpful option, not another obligation.

💡 Try this: Introduce "15-minute Fridays" – optional sessions where people can try AI tools for small tasks with no pressure to report back.

Handling Common Pushback

Here are the most frequent concerns you'll hear, and simple ways to address them:

"We don't have time to learn new tools"

Response: "Let's start with 5-minute experiments that might save us time later. If they don't work, we'll stop."

"This feels impersonal and against our values"

Response: "What if AI could handle the admin work so we have more time for the personal, values-driven parts of our jobs?"

"What if we do it wrong or make mistakes?"

Response: "We'll start with low-stakes tasks and always review AI outputs before using them. Making mistakes is part of learning."

"Our data isn't good enough for AI"

Response: "Let's start with tasks that don't need perfect data β€” like drafting emails or brainstorming ideas."

"I don't trust AI to make decisions about our work"

Response: "Neither do we. We're using AI to help us think and work, not to make decisions for us."

Your First Steps This Week

You don't need a comprehensive strategy to begin building team confidence with AI. Just start with these simple steps:

This Week:

  • Have an informal chat with your team about their thoughts on AI

  • Identify 2-3 people who seem curious or willing to experiment

  • Choose one simple, low-risk task to try together (like summarising meeting notes or brainstorming event ideas)

Next Week:

  • Ask your early experimenters to share what they learned with the rest of the team

  • Address any concerns or questions that come up

  • Invite more people to try, but don't pressure anyone

The Week After:

  • Look for patterns in what's working and what isn't

  • Adjust your approach based on what you're learning

  • Start thinking about which tasks might benefit most from AI support

Building Momentum Without Pressure

Remember, this isn't about getting everyone to use AI immediately. It's about creating a culture where people feel safe to explore whether AI might help them do their jobs better.

Some people will become regular users. Others might only use it occasionally. Some might never use it at all – and that's okay. The goal is informed choice, not universal adoption.

What matters is that your team feels supported to explore new tools that might help them achieve your mission more effectively, whilst maintaining the values and approach that make your organisation special.

The Next Step: From Confidence to Capability

Once your team is comfortable experimenting with AI, you'll naturally start to see patterns in what works best. Some tasks will clearly benefit from AI support, whilst others won't. Some team members will discover they have a knack for using these tools effectively.

That's when you're ready to move from general experimentation to focused implementation – identifying the specific tasks where AI can make the biggest difference to your team's effectiveness and impact.

Ready to help your team explore AI with confidence? Join our growing community of charity leaders who are taking a thoughtful, values-first approach to AI adoption.
