For example, AI technology can scour databases like LinkedIn for keywords to find candidates who match job descriptions. Given the right data, AI can help identify candidates whose personalities are good matches for a company’s culture, or those who seem more open to leaving their current jobs.
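As a minimal sketch of the keyword-matching idea (entirely illustrative; the job text, profiles and function names here are invented, and real recruiting systems use far richer signals), a candidate can be scored by how many job-description terms appear in their profile:

```python
# Illustrative sketch of keyword-based candidate matching.
# Real systems are far more sophisticated than raw keyword overlap.

def extract_keywords(text: str) -> set[str]:
    """Lowercase the text and keep words longer than three characters."""
    return {w.strip(".,;:()").lower() for w in text.split() if len(w) > 3}

def match_score(job_description: str, profile: str) -> float:
    """Fraction of job-description keywords found in a candidate profile."""
    wanted = extract_keywords(job_description)
    return len(wanted & extract_keywords(profile)) / len(wanted) if wanted else 0.0

job = "Senior Python developer with Kubernetes and PostgreSQL experience"
profiles = {
    "A": "Python developer, maintains Kubernetes clusters and PostgreSQL databases",
    "B": "Java engineer focused on Android development",
}
for name, profile in profiles.items():
    print(name, round(match_score(job, profile), 2))  # A: 0.57, B: 0.0
```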
Capabilities like these have proved beneficial to companies, HR staff and recruiters. They have, however, come at the expense of another group of people.
How Does AI Bias Happen?
AI itself has no inherent concept of race or gender, but it does look for patterns, statistical regularities and trends. That can lead to AI bias when pervasive societal and institutional inequality shapes the data it learns from.
For example, where people receive their education can influence, among other things, their writing style, and AI can pick up on that. A system trained to favor what it considers a “winning” writing style might screen out candidates who are otherwise perfectly qualified but never had the opportunities that lead to the kind of education that produces it.
The problem of AI bias can be seen clearly in the IT industry, where, according to one estimate, more than 77 percent of professionals are men and 59 percent are white. Feed an AI system every resume that resulted in a successful hire for a particular IT role, let it learn from that database, and it will seek out the same sort of candidates.
Using those biased criteria, a hiring AI can shortlist some candidates and remove others from diverse backgrounds before a human ever sees their resumes, reinforcing a biased status quo instead of supporting efforts toward diversity, equity and inclusion.
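A toy sketch can make the mechanism concrete (the data below is entirely synthetic, and this is not any real vendor’s model): a system that scores applicants by how often their resume terms appeared among past hires will reward any term that correlates with the historical majority group, even when that term says nothing about job skill.

```python
# Synthetic demonstration: learning "what past hires looked like"
# rewards proxy terms correlated with the historically dominant group.
from collections import Counter

# Resume terms of past *hired* candidates. The historical pool skews male,
# so a male-correlated term ("mens_rugby_club") dominates the data.
past_hires = [
    {"python", "debugging", "mens_rugby_club"},
    {"python", "linux", "mens_rugby_club"},
    {"java", "debugging", "mens_rugby_club"},
    {"python", "linux", "womens_chess_club"},
]

term_counts = Counter(t for resume in past_hires for t in resume)

def score(resume: set[str]) -> int:
    """Score a candidate by how often their terms appeared in past hires."""
    return sum(term_counts[t] for t in resume)

# Two equally skilled applicants who differ only in one gender-correlated term:
applicant_a = {"python", "debugging", "mens_rugby_club"}
applicant_b = {"python", "debugging", "womens_chess_club"}
print(score(applicant_a))  # 8: ranked higher purely because of the proxy term
print(score(applicant_b))  # 6
```

Both applicants share identical technical terms; the gap in their scores comes entirely from the term that proxies for gender, which is exactly how a biased status quo gets reinforced.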