Dell Technologies World 2019: Why Artificial Intelligence Still Needs Human Intellect
Despite the promise of artificial intelligence, algorithmic bias continues to plague the technology and to challenge organizations looking to scale it to solve business problems effectively.
To move forward, the industry will need to solve issues of discrimination that pop up in technologies that rely on AI, such as facial recognition. Thus far, however, this has been a sticking point for companies everywhere.
“Every day, we hear reports of facial recognition technology discriminating against people, especially people of color, and that’s a problem,” said Rana el Kaliouby, co-founder and CEO of Affectiva, speaking Monday at a session on emerging technologies at Dell Technologies World 2019.
Diversity in All Aspects of AI Is Key to Developing Ethical Tools
How can companies work to correct algorithmic biases? Kaliouby offered several ways to solve the problem.
“First, we need to ensure that our data is diverse. Diverse from a gender perspective and ethnic perspective, but also content,” explained Kaliouby. That means gathering data of varying quality as well.
When it comes to training the algorithm, Kaliouby recommended that companies sample their data in a way that accurately reflects that diversity of population and range of quality.
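To make that idea concrete, here is a minimal sketch in Python of what sampling along demographic and quality lines can look like. The dataset, column names and target shares are illustrative assumptions, not a description of Affectiva's actual pipeline.

```python
import pandas as pd

# Hypothetical labeled dataset with demographic and quality metadata attached
# to every training example (file and column names are illustrative).
data = pd.read_csv("training_examples.csv")  # columns: gender, ethnicity, quality, label, ...

# Target share of the training set for each demographic group. In practice,
# these would reflect the population the product is meant to serve.
target_shares = {
    ("female", "group_a"): 0.25,
    ("female", "group_b"): 0.25,
    ("male", "group_a"): 0.25,
    ("male", "group_b"): 0.25,
}

train_size = 10_000
samples = []
for (gender, ethnicity), share in target_shares.items():
    pool = data[(data["gender"] == gender) & (data["ethnicity"] == ethnicity)]
    n = int(train_size * share)
    # Sample with replacement only if the pool is too small to fill its quota.
    samples.append(pool.sample(n=n, replace=len(pool) < n, random_state=42))

# Shuffle the combined sample and check the resulting group/quality breakdown.
balanced = pd.concat(samples).sample(frac=1.0, random_state=42)
print(balanced.groupby(["gender", "ethnicity", "quality"]).size())
```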
“It’s not perfect yet, but at least we’re thinking about it and crafting data in a way that ensures we’re not accidentally discriminating against any one population,” she said, stressing that it’s important for companies to test solutions for algorithmic bias so they can course-correct if necessary.
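Testing for that kind of bias often starts with breaking evaluation metrics out by group. The sketch below is a hypothetical illustration (the file, column names and the five-point threshold are assumptions): it compares each group's accuracy with the overall figure and flags groups that lag, which would be the cue to course-correct.

```python
import pandas as pd

# Hypothetical evaluation results: one row per test example with the model's
# prediction, the true label, and the subject's demographic group.
results = pd.read_csv("evaluation_results.csv")  # columns: group, label, prediction

overall_accuracy = (results["label"] == results["prediction"]).mean()
per_group = (
    results.assign(correct=results["label"] == results["prediction"])
           .groupby("group")["correct"]
           .mean()
)

# Flag groups whose accuracy trails the overall number by more than 5 points;
# the threshold is arbitrary and would be set by the team's own fairness criteria.
for group, accuracy in per_group.items():
    if accuracy < overall_accuracy - 0.05:
        print(f"Potential bias: accuracy for {group} is {accuracy:.1%} "
              f"vs. {overall_accuracy:.1%} overall")
```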
She also called diverse workforces critical to creating and training ethical AI tools.
“If we’re designing these AI systems to be deployed globally in multiple scenarios or use cases, we need people with different backgrounds to weigh in on how we design and deploy these systems,” she said.
Creating Trust Between Humans and Machines
Kaliouby also argued that intelligence alone isn’t enough for AI to effectively understand people’s needs; to evolve, the technology requires a touch of humanity.
“How do humans trust each other?” she asked, noting that we make millions of decisions based on trust each day. “Most often, trust is actually implicit. It’s manifested in nonverbal communication that we exchange with one another — in tone of voice and gestures and our facial expressions.”
Kaliouby is working on technology that trains AI algorithms to identify human expressions. And as AI evolves, she wants the industry to think of AI not just as a tool but as one side of a reciprocal relationship between people and machines.
“We often talk about the mistrust in AI. But can AI trust in humans?” asked Kaliouby. “We need a new social contract between humans and machines.”
Keep this page bookmarked for more stories from the BizTech team. Follow @BizTechMagazine on Twitter, or the official conference Twitter account @DellTechWorld, and join the conversation using the hashtag #DellTechWorld.