IT leaders can train and educate employees by offering clear use cases for how to adopt AI technologies. Managers need to stay up to date on AI’s capabilities and limitations, as not every task will be enhanced with AI. For example, if a nonprofit is looking to communicate with donors and raise awareness of a new initiative, a human connection can be more effective than a bot-written text message.
RELATED: See the nonprofit IT solutions and services that can accelerate your mission.
2. Stay Human-Centered
Ideally, nonprofits should set clear parameters about how AI will be used. IT leaders should sit down with key stakeholders and teams and agree to a contract. What specific tasks will AI automate? What workflows will be impacted? That way, teams will see how the technology will augment certain functions rather than alter the organization’s core mission.
“Before adopting AI, nonprofits should create a written pledge explaining that AI will be used only in human-centered ways,” write Beth Kanter, Allison Fine and Phillip Deng in the Stanford Social Innovation Review. “It should state that people will always oversee the technology and make final decisions on its use, in ways that don’t create or exacerbate biases.”
Amy Sample Ward, CEO of the Nonprofit Technology Enterprise Network, says it’s important to clarify that AI “should not make decisions.”
This is also important for maintaining a people-first work culture. “I don’t want to get lunch with a robot,” Fine, a nonprofit leader and president of Every.org, tells The Chronicle of Philanthropy. “If we use AI badly and we make people feel less connected to other human beings, it will be a tragedy.”
DIVE DEEPER: Data governance strategies help foster responsible artificial intelligence use.
3. Avoid Unfair Risks and Bias
AI models learn what they are taught: how a model is trained largely determines its output. That cuts both ways. If AI models are trained on biased data, they are prone to reproduce those biases in their reports and other results. This makes it imperative for nonprofits to ensure the AI models they work with run on quality, clean data.
It also requires that humans audit algorithms regularly to identify and mitigate any biases. IT leaders can also employ a risk-based planning approach, where threat models are run in advance and an AI model is piloted on a small scale before wider use.
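One common audit technique is to compare a model's decision rates across groups and flag disparities, for example using the "four-fifths rule" from fair-hiring practice. The sketch below assumes a pilot AI model whose decisions are logged alongside a protected attribute; the group labels and log format are illustrative, not part of any specific tool.

```python
def selection_rates(records):
    """Approval rate per group from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below the four-fifths
    rule relative to the most-favored group."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical grant-screening decisions logged during a small pilot
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
rates = selection_rates(log)
flags = disparate_impact_flags(rates)  # group B is flagged for review
```

A flagged group is a signal for human review, not an automatic verdict; the audit should feed back into the risk-based planning process described above.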
“If AI is to be adopted widely in the nonprofit sector, the problem of AI bias must be addressed, as it is of paramount importance given that nonprofits enjoy a greater level of trust from their constituents than most other sectors — trust that can easily erode if their decisions are premised on skewed or biased data,” writes philanthropy leader Jo Carcedo.
4. Prioritize Data Privacy and Security
Without the proper guardrails, AI tools can use data that they shouldn’t, such as copyrighted and other protected information from disparate corners of the internet. That’s why it’s important for nonprofits to ensure that compliance and security protections are in place.
AI models should only access information “by authors who have consented to be included in the data set,” according to the Stanford Social Innovation Review. Any collection of data must also follow General Data Protection Regulation and California Consumer Privacy Act guidelines.
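One practical guardrail is to scrub personally identifiable information from donor data before it ever reaches an external AI service. The sketch below is a minimal example of that idea; the regex patterns are illustrative and far from exhaustive, and a production system would pair them with a vetted data-loss-prevention tool.

```python
import re

# Illustrative patterns for common PII; real deployments need broader coverage
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text):
    """Replace emails and phone numbers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

msg = "Contact Jane at jane@example.org or 555-123-4567."
clean = redact(msg)  # "Contact Jane at [EMAIL] or [PHONE]."
```

Redacting at the boundary keeps sensitive fields out of third-party training data and makes GDPR and CCPA compliance easier to demonstrate.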