
AI Workflows That Actually Work for Nonprofit Leaders

· 8 min read
nonprofit leadership · ai for good · tech strategy

Nonprofits are already using AI, and many are seeing real efficiency gains. The question now is how to adopt it in ways that are practical, responsible, and repeatable. Here are the workflows that are working right now, along with the guardrails that make them safe to scale.

Five years into the AI revolution, I still attend presentations that do nothing but warn nonprofit leaders away from the technology. The same graphics about biased image generation. The same cautions about hallucination. The same implicit conclusion: do not touch it.

Meanwhile, according to the 3 Sided Cube AI for Good Adoption Report 2025, 89% of mission-aligned organizations are already using AI, and 77% report noticeable improvements in efficiency. We are so afraid of AI being wrong that we will not even examine if it could be right. We will not risk success for fear of failure.

Nonprofits deserve best-of-breed tools, and AI is no exception. If public-good organizations do not benefit early from these tools, we have misallocated the upside. The organizations doing the hardest work with the fewest resources should be leading adoption, not trailing it.

So let’s talk about what is actually working.

The Cost of Waiting

The opportunity cost of inaction is real, even if it does not feel urgent. We talk constantly about the burden of doing more with less, but when a genuine tool for doing more arrives, many organizations sit passively.

The concrete risks are mounting. Fundraisers who adopt AI ethically and well are beginning to excel, and donor dollars will increasingly favor those organizations.

On the oversight side, government agencies and major funders are exploring and piloting AI to accelerate review-oriented workflows, including compliance checks, evaluation support, and audit-adjacent analysis. Many agencies have also been publishing AI governance plans, inventories, and use cases, including in oversight and review functions. As documentation cycles accelerate and compliance expectations tighten, organizations without AI fluency will struggle to keep pace.

The gap between early adopters and holdouts will widen, and it will not close on its own.

Risk Management as Part of the Workflow

Responsible adoption is not about avoiding AI. It is about using it thoughtfully.

Start with non-sensitive documents and public information while you are learning. Do not upload personally identifiable information (PII), protected health information (PHI), or confidential client data into consumer tools. Use organization-approved or enterprise platforms for anything sensitive. And remember that humans remain accountable. AI drafts, summarizes, and suggests. It does not make decisions.
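For teams with a little technical capacity, even a lightweight automated pass can catch obvious sensitive details before text is pasted into a consumer tool. Here is a minimal sketch in Python using rough, illustrative regex patterns for emails and US-style phone numbers. The patterns and placeholder labels are assumptions for illustration only; a real scrub should follow your organization's own data policy and is no substitute for using approved platforms.

```python
import re

# Rough, illustrative patterns only -- your data policy should define
# what actually counts as sensitive for your organization.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def scrub(text: str) -> str:
    """Replace obvious emails and US-style phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

note = "Contact Jane at jane.doe@example.org or 555-123-4567 about the grant."
print(scrub(note))
# -> Contact Jane at [EMAIL] or [PHONE] about the grant.
```

A pass like this will miss plenty (names, addresses, case details), which is exactly why it belongs alongside human review, not in place of it.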

If you want one practical move that makes everything easier, create a one-page cheat sheet that answers two questions: which tools are approved, and what data is allowed in each. Share it widely. That single page prevents a lot of avoidable mistakes.

These are not obstacles to getting started. They are the habits that let you scale confidently once you have proven value.

Start With Your Story

Before diving into specific workflows, the single best thing you can do as a leader is organize your organizational information into easily accessible, explainable guides and stories.

This sounds like common sense. It is. But I am consistently surprised at how many organizations do not have an accessible document describing their organizational history, mission and vision, brand guide, writing style guide, success stories, and baseline material. This information is useful for development and communication work. It is doubly so for AI work.

Here is why. Every prompt you write otherwise requires you to introduce your organization to the AI all over again. Having those documents on hand so you can upload them and go is powerful. It is critical to consistent output.

I have seen what happens when organizations do this work. Early in the GPT-3 era, I worked with a nonprofit that had a large number of corporate sponsors, each with slightly different giving priorities and impact goals. The executive director carefully gathered impact metrics, anonymized success stories, community reach data, mission, and vision, all organized into effective seed documents.

The year before, about half of their individualized impact reports were never produced. The feedback on the reports that did go out was consistent: too generic for giving officers to use when reporting up to their corporate leadership. There was significant reputation risk, and sponsors were openly discussing shifting their dollars elsewhere.

With those seed documents in place, the ED could feed in an impact report template along with each corporate partner’s specific impact goals and have AI help produce individualized reports tailored to each partner. The corporate reaction was immediate. Giving increased 20% the following year.

That is the leverage good preparation creates.

One more note. If you are going to use AI to help edit grant applications or other fundraising materials, have your strongest grant application ready as a reference. It helps the AI understand your version of quality.

Accelerating Communications

AI is powerful for streamlining communication and outreach. But not in the way most people assume.

AI is best at editing and structuring your thinking. It is less reliable at originating strategy, voice, or positioning.

So lean into the parts of your mission that you own. Bring your insights to the page first, even if they are rough. Then ask AI to help you merge those insights with your organizational documents and tighten the result into something that can survive scrutiny.

One technique that works well is to ask AI to take a skeptic’s position. Ask what questions a funder might raise. Answer those questions in your own voice, then ask AI to help you sharpen the structure and clarity again. You will spend more time engaging with your ideas than juggling grammar, and you will often produce work that used to take weeks in a matter of hours.

Data Analytics for the Rest of Us

We talk consistently about techniques like donor segmentation, giving profiles, and predictive analytics. We rarely get to them.

AI can quickly wrangle data into something useful, especially for early discovery. Yes, there are privacy considerations when uploading organizational data to any platform. My previous guides on this topic can help you pick the right data, the right provider, or both to stay safe. Once you have addressed those concerns, start with de-identified or aggregated data and ask an advanced AI model questions about segmentation, cost analysis, or gap analysis.
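Aggregation is often the simplest way to de-identify before you share anything. Here is a minimal sketch, assuming a hypothetical donor CSV export (the column names are illustrative, not a standard schema), that rolls row-level gifts up into channel-level totals so no individual donor record ever leaves your systems:

```python
import csv
import io
from collections import defaultdict

# Hypothetical export -- column names are assumptions for illustration.
raw = """donor_id,gift_amount,channel
1001,50,email
1002,250,event
1003,50,email
1004,1200,major_gifts
"""

# Roll individual gifts up by channel so only aggregates are shared.
totals = defaultdict(lambda: {"gifts": 0, "dollars": 0})
for row in csv.DictReader(io.StringIO(raw)):
    bucket = totals[row["channel"]]
    bucket["gifts"] += 1
    bucket["dollars"] += int(row["gift_amount"])

for channel, stats in sorted(totals.items()):
    print(f"{channel}: {stats['gifts']} gifts, ${stats['dollars']}")
```

A summary like this is the kind of artifact you can safely paste into an AI conversation and ask, for example, where the segmentation opportunities might be.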

I would not take the results at face value. The value is in surfacing promising hypotheses quickly, then confirming them with traditional analysis.

For many organizations, even the question “what important questions can I answer with this data?” is valuable. Asking AI to help you through discovery can save weeks of false starts and point your team toward the highest-impact deep dive.

Grounded Answers With NotebookLM

NotebookLM deserves far more attention in the nonprofit space.

It provides a place to upload documents and an AI that will answer questions using only those documents as sources. This is ideal for organizational knowledge bases, onboarding materials, and making sense of rapidly changing policy environments.

This past year, federal regulations have changed faster than any year I can remember. HUD issued new guidelines. Multiple agencies restructured programs. The new policies were often confusing and sometimes contradictory.

I have worked with organizations navigating these waters by uploading relevant policy documents and using AI to clarify specific elements, identify gaps, and generate actionable questions for their compliance teams. It does not replace expert judgment, but it dramatically accelerates understanding.

NotebookLM is free for nonprofits through Google Workspace for Nonprofits and available to almost everyone with a Google account.

The Path Forward

The consensus among practitioners is clear. Start small, prove value in human terms, pick one workflow, measure what changed, and scale what works.

Most nonprofits that succeed with AI do not begin with enterprise-wide implementations. They start with a 30-day pilot focused on a specific pain point. They track concrete metrics. They share early wins internally and with funders, which builds momentum for scaling.

This is not about replacing anyone. AI should not make decisions. It should make the humans who make decisions more effective.

Here is what I want you to do after reading this.

If you are a leader who is still “looking at AI” but has not acted, the key is to do something. Anything. Open an organization-approved tool or create an account on a major platform. Upload a non-sensitive document and ask it a question. The barrier is psychological, not technical.

If you have already started experimenting, pick one specific problem and choose a single action that can get you past it. Not a transformation initiative. One workflow. One measurable outcome. Build from there.

If you want three safe places to start, try this. First, take a messy draft of a grant narrative and use AI to tighten clarity and structure, then ask it to act as a skeptical reviewer and point out what is missing. Second, take a sponsor or donor update template and use AI to tailor it using approved, non-sensitive impact language from your own seed documents. Third, take one new policy document, upload it into a grounded tool like NotebookLM, and ask it to produce a short “what changed, what it affects, what questions we should ask” brief for your team.

The cost of experimentation has collapsed. The tools are accessible. The question is not whether AI will affect your organization, but whether you will guide that change deliberately or scramble to catch up later.

I would love to hear what is working for your organization. What workflows have you tried? What surprised you?