Microsoft Ignite 2019: How to Curb Ethics Concerns Around AI and Machine Learning

As the use of artificial intelligence and machine learning rises, so do the concerns about potentially negative impacts.

Organizations around the world are using artificial intelligence to become more efficient, more secure, and to achieve their business goals. But as adoption grows, many still worry that the technology is being used irresponsibly.

Sarah Bird, an artificial intelligence researcher at Microsoft, said that the immense capabilities of the technology do force those building it to consider how it could be used.

“This raises a host of questions about what kind of impact you want to have,” Bird said during a session on ethical AI at Microsoft Ignite.

It’s a sentiment that’s carried across industries. Bird cited a recent survey that found nine out of 10 organizations worldwide were struggling with ethical questions around using AI. And while many may think of AI ethics in terms of faraway problems like killer robots, those aren’t the greatest concern.

“When we’re talking about responsible AI, we’re talking about the problems that people have right now,” Bird said, such as fairness, appropriateness, and authenticity. All three must be considered when building the technology.

What AI Impacts Should Be Considered Before Building

The first step toward building fairer machine learning and AI is to think about people in real-world situations right at conception. Bird said that’s counterintuitive to the way tech professionals are trained to think.

“In technology, we have naturally been trained to abstract away different layers so that we can focus on our little contained problem,” Bird said. “But when we think about responsible AI, one of the things we need to do is lift our head up, and think about ‘what are the implications of the technology we’re creating?’”

This often requires thinking through a cultural lens, to make sure any models being built don’t hold inherent biases that will tilt the balance of fairness or produce inaccurate results. That could mean different things from one application to another, one country to another, or even how a particular company views something.

MORE FROM BIZTECH: Read more about the upsides, risks, and policy implications of AI.

Tools to Use to Build Trust in Machine Learning

In order to build trust in artificial intelligence and machine learning, Bird said that there are three necessary steps: build on a platform that’s trustworthy, make sure the process is reliable and replicable, and think deeper about the model itself.

One tool to achieve this is Azure Machine Learning, which has the enterprise and security capabilities needed to lay a good foundation. It allows users to build predictive models, test those models, and adjust them as needed. The data, including what went into the model and what it’s for, is all in one place.

“This is really important for us to be able to understand if there’s an issue in the model,” Bird said, “we can go back and reproduce it.” It’s also integrated with Azure DevOps, so users can require that a model be reviewed by a human before it’s deployed.

Azure Machine Learning also has a customizable data monitoring capability, so if the data appears to be skewing, it can alert the user so that the model can be adjusted.
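Conceptually, this kind of monitoring compares incoming data against the training baseline and raises an alert when the distribution shifts past a threshold. A minimal, library-agnostic sketch of the idea (the feature values and threshold here are illustrative, not Azure Machine Learning’s dataset-monitor API):

```python
# Minimal sketch of data-drift alerting: compare the mean of an incoming
# feature batch against the training baseline, measured in baseline
# standard deviations (a z-score). Illustrative only -- not Azure ML's API.
from statistics import mean, stdev

def drift_alert(baseline, incoming, z_threshold=3.0):
    """Return True if the incoming batch mean drifts beyond the threshold."""
    mu, sigma = mean(baseline), stdev(baseline)
    z = abs(mean(incoming) - mu) / sigma
    return z > z_threshold

# Hypothetical feature values seen at training time vs. in production.
baseline = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]
stable   = [10.1, 9.9, 10.3]
skewed   = [16.0, 17.5, 16.8]

print(drift_alert(baseline, stable))   # False: within normal variation
print(drift_alert(baseline, skewed))   # True: the distribution has shifted
```

In a real deployment, the alert would feed back into the review process Bird describes, prompting retraining or adjustment rather than silently degrading.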

“Even if you’ve done all the things right up front, the world is very dynamic,” said Bird, “and so it might be that after you deploy your model, things change.”

How to Incorporate Fairness Into Artificial Intelligence

A lack of fairness in this process can do harm in a couple of ways. First, it could withhold an opportunity or resource from someone, such as a bank customer applying for a loan, or it could misrepresent people. Addressing these harms is the aim of a new toolkit from Microsoft called Fairlearn.

The tool can do a fairness assessment of the model, change the model after it’s deployed, and look at how accurate the model is for particular sets of people. For example, it can compare the outcomes of men and women separately to see if there are any disparities. Then, it can re-weight the model to get closer to a fair outcome.
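The core of the assessment Bird describes is computing an outcome metric separately for each group and comparing the results. Fairlearn automates this; the underlying idea can be sketched without the library (the loan decisions and group labels below are hypothetical):

```python
# Conceptual sketch of a group-fairness check: compare selection rates
# (the fraction of positive outcomes) across groups. This standalone
# version just illustrates the idea behind Fairlearn's assessment.
from collections import defaultdict

def selection_rates(outcomes, groups):
    """Map each group label to its rate of positive (1) outcomes."""
    totals, positives = defaultdict(int), defaultdict(int)
    for y, g in zip(outcomes, groups):
        totals[g] += 1
        positives[g] += y
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical loan decisions (1 = approved) with an associated group label.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups   = ["men", "women", "men", "men", "women", "men", "women", "women"]

rates = selection_rates(outcomes, groups)
disparity = max(rates.values()) - min(rates.values())
print(rates)       # per-group approval rates
print(disparity)   # gap between best- and worst-treated groups
```

A large gap between groups is the signal that mitigation, such as the re-weighting Bird mentions, may be needed to move the model toward a fairer outcome.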

“Fairness in particular is a socio-technical challenge,” Bird said. “We have to take into account many factors, not just model performance.”

Find more of BizTech's coverage of Microsoft Ignite 2019 here.

Nov 06 2019
