Jul 10 2024

How Nonprofits Can Avoid Unintentional Bias with Artificial Intelligence

Diverse data sets, regular audits and transparency are among the keys to success.

As more and more nonprofit organizations use artificial intelligence, they’ll need to be intentional in their policies. “We cannot unleash the bots on the world without human supervision, and we have to always stay deeply human-centered in this work,” Allison Fine, president of Every.org, said in a panel discussion with The Chronicle of Philanthropy.

IT leaders must remember that any AI software is probably biased, she noted: “Chances are that it was programmed by a white man and then tested on historic data sets that already exist, which tend to benefit white people. So you’ve got a double whammy there that by the time you take a product for workflow improvement or hiring or providing services to communities, it’s likely biased both against people of color and women.”

Here are some best practices that nonprofit leaders can follow to avoid AI bias.


IT Leaders Need to Interrogate Their Data

Nonprofits don’t have to accept AI technology as it comes out of the box. In fact, they shouldn’t. Fine recommends that IT leaders ask “good, hard questions” of the companies behind these tools about the assumptions built into them. In short, interrogate the data skeptically and critically. This also extends to how the data was collected: For example, can users opt out to protect their privacy?

IT staffers should get answers to these questions so they know what they are dealing with ahead of time. If teams can identify potential biases, it’s easier to prevent them from growing.

Add in AI Gradually Through Pilot Programs

Integrating AI on a small scale can help mitigate risk. Gradual change can also be less jarring to nonprofit staff, donors and other stakeholders, and it will leave more time for teams to educate themselves on potential use cases.

Fine suggests running “tiny” AI pilots with data that is clean, sorted and complete. Doing this helps organizations avoid AI hallucinations. And since the integration takes place slowly, IT teams are able to eliminate biases before they can negatively affect public sentiment and donor trust.


Train AI on Diverse Data Sets

The more diverse and representative the data set an AI model is trained on, the more accurate and equitable its outputs are likely to be.

“You have to think and be thinking about the amplification of bias,” Rodger Devine, president of the research organization Apra, tells The Chronicle of Philanthropy. “All AI-powered tools are subject to their training data, and garbage in, garbage out.”

This is why, for better insights, nonprofit IT teams should train their AI models on data sets that accurately represent the demographics of the populations they serve. This involves diversity not just in ethnicity, gender and age, but also in factors such as socioeconomic status and geographic location.

As IT leaders prepare their pilot programs, they need to keep feeding their model updated data and study the quality of the outputs to promote AI fairness. Working with a tech partner such as CDW can also help organizations feel confident that they are working with quality data.
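As a rough illustration of what “studying the quality of the outputs” can look like in practice, the sketch below compares each demographic group’s share of a training data set against its share of the population the nonprofit serves. The group names, population shares, and 5 percent tolerance are all hypothetical placeholders; a real check would draw on program or census data.

```python
from collections import Counter

# Hypothetical shares of the served population; real figures would come
# from program records or census data.
POPULATION_SHARES = {"group_a": 0.45, "group_b": 0.35, "group_c": 0.20}

def representation_gaps(records, group_key, population_shares, tolerance=0.05):
    """Compare each group's share of the training data against its share
    of the served population, and flag groups outside the tolerance."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = {"expected": expected, "actual": round(actual, 3)}
    return gaps

# Example: group_c is clearly underrepresented relative to the population.
records = (
    [{"group": "group_a"}] * 48
    + [{"group": "group_b"}] * 38
    + [{"group": "group_c"}] * 14
)
print(representation_gaps(records, "group", POPULATION_SHARES))
```

A check like this can run each time the model is retrained, so drift in the training data surfaces before it shapes the model’s behavior.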

Regularly Perform Audits to Prevent Bias

AI systems are iterative, so IT leaders shouldn’t have a “set it and forget it” mentality. Instead, think of AI systems as living, breathing technology that requires regular monitoring and bias audits. These tests, typically conducted by a third party, can assess whether certain groups are unfairly advantaged or disadvantaged.
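One common way an audit assesses whether groups are unfairly advantaged or disadvantaged is to compare selection rates across groups. The sketch below applies the “four-fifths rule,” a widely used heuristic (not something prescribed by the sources quoted here): it flags any group whose selection rate falls below 80 percent of the highest group’s rate. The group labels and decision data are hypothetical.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs from a model's output,
    e.g. grant screening or volunteer matching decisions."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag any group whose selection rate is below `threshold` times
    the highest group's rate (the four-fifths rule heuristic)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}

# Hypothetical audit sample: group_a approved 60% of the time, group_b 40%.
decisions = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40
    + [("group_b", True)] * 40 + [("group_b", False)] * 60
)
print(disparate_impact_flags(decisions))
```

Here group_b’s rate is only about two-thirds of group_a’s, so it is flagged for review. A real audit would add statistical significance testing and, as the article notes, is often better performed by a third party.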


Increase Trust and Transparency in AI Algorithms

Finally, be transparent about the AI use cases employees are testing within your organization, and share updates with stakeholders about how your pilot programs, audits and trainings are going. This can dispel any doubt or speculation and help to bolster trust.

“Responsible and transparent use of AI will help you foster an engaged base of support that’s confident in your organization’s abilities,” writes Sarah Tedesco, executive vice president of DonorSearch. “These benefits can strengthen your overall image and reputation with donors, making it easier to forge new connections, reach new donors, and secure grant funding.”

