What AI Impacts Should Be Considered Before Building
The first step toward building fairer machine learning and AI is to think about people in real-world situations right at conception. Bird said that's counterintuitive to the way tech professionals are trained to think.
“In technology, we have naturally been trained to abstract away different layers so that we can focus on our little contained problem,” Bird said. “But when we think about responsible AI, one of the things we need to do is lift our head up, and think about ‘what are the implications of the technology we’re creating?’”
This often requires thinking through a cultural lens to make sure the models being built don't hold inherent biases that tilt the balance of fairness or produce inaccurate results. What counts as fair could mean different things from one application to another, one country to another, or even one company to another.
Tools to Use to Build Trust in Machine Learning
To build trust in artificial intelligence and machine learning, Bird said, there are three necessary steps: build on a trustworthy platform, make sure the process is reliable and replicable, and think more deeply about the model itself.
One tool to achieve this is Azure Machine Learning, which has the enterprise and security capabilities needed to lay a good foundation. It allows users to build predictive models, test those models, and adjust them as needed. The data, including what went into the model and what the model is for, is kept in one place.
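For illustration, here is a minimal sketch of that tracking-and-registration workflow using the Azure Machine Learning Python SDK v1 (azureml-core). It assumes an existing workspace with a local config.json; the experiment name, metrics, and paths are hypothetical placeholders, not values from Bird's talk.

```python
# Minimal sketch, assuming the Azure ML Python SDK v1 (azureml-core) and a
# config.json for an existing workspace. Names like "loan-approval" and the
# logged metrics are hypothetical placeholders.
from azureml.core import Workspace, Experiment
from azureml.core.model import Model

ws = Workspace.from_config()  # connect to the workspace

# Track a training run so the inputs and metrics stay reproducible.
experiment = Experiment(workspace=ws, name="loan-approval")
run = experiment.start_logging()
run.log("training_rows", 120_000)  # record what went into the model
run.log("validation_auc", 0.87)
run.complete()

# Register the trained model with metadata on what it is for, keeping the
# model and its provenance together in one place.
model = Model.register(
    workspace=ws,
    model_path="outputs/model.pkl",  # hypothetical local artifact path
    model_name="loan-approval",
    tags={"purpose": "consumer-loan scoring", "run_id": run.id},
)
```

Because the run's data and metrics are logged alongside the registered model, a problematic prediction can be traced back to the exact run that produced it.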
“This is really important for us,” Bird said. “To be able to understand if there’s an issue in the model, we can go back and reproduce it.” It’s also integrated with Azure DevOps, so users can require that a model pass human review before it’s deployed.
Azure Machine Learning also has a customizable data monitoring capability: if the incoming data appears to be drifting from what the model was trained on, it can alert the user so the model can be adjusted.
“Even if you’ve done all the things right up front, the world is very dynamic,” said Bird, “and so it might be that after you deploy your model, things change.”
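As a generic illustration of the kind of check such a monitor automates (this is not Azure's own API), the sketch below flags features whose recent distribution has shifted away from the training baseline; the significance threshold and function names are assumptions.

```python
# Generic drift-check sketch; the alpha threshold and names are illustrative.
import numpy as np
from scipy.stats import ks_2samp

def check_drift(baseline: np.ndarray, recent: np.ndarray,
                feature_names: list[str], alpha: float = 0.01) -> list[str]:
    """Return the features whose recent values no longer match the baseline."""
    drifted = []
    for i, name in enumerate(feature_names):
        # Two-sample Kolmogorov-Smirnov test per feature: a small p-value
        # means the two samples likely come from different distributions.
        _, p_value = ks_2samp(baseline[:, i], recent[:, i])
        if p_value < alpha:
            drifted.append(name)
    return drifted

# Usage: run on a schedule and alert if anything drifts, e.g.
#   if (drifted := check_drift(X_train, X_last_week, names)):
#       send_alert(drifted)  # hypothetical notification hook
```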
How to Incorporate Fairness Into Artificial Intelligence
A lack of fairness in this process can do harm in a couple of ways. First, it could withhold an opportunity or resource from someone, such as an applicant for a bank loan; second, it could misrepresent people. Addressing these harms is the aim of a new toolkit from Microsoft called Fairlearn.
The tool can run a fairness assessment on a model, change the model after it's deployed, and examine how accurate the model is for particular groups of people. For example, it can compare outcomes for men and women separately to see whether there are disparities. Then, it can re-weight the model to get closer to a fair outcome.
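A minimal, self-contained sketch of that assess-and-mitigate loop using Fairlearn's current Python API is below. The synthetic loan data is purely illustrative, and ExponentiatedGradient is Fairlearn's reductions-based mitigator, which iteratively re-weights training examples to satisfy a fairness constraint.

```python
# Sketch of Fairlearn's assessment + mitigation workflow on synthetic data.
import numpy as np
from fairlearn.metrics import MetricFrame, selection_rate
from fairlearn.reductions import DemographicParity, ExponentiatedGradient
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Illustrative data: outcomes deliberately correlated with the "sex" column.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
sex = rng.choice(["female", "male"], size=2000)
y = (X[:, 0] + 0.5 * (sex == "male") + rng.normal(0, 0.5, 2000) > 0).astype(int)
X_tr, X_te, y_tr, y_te, sex_tr, sex_te = train_test_split(X, y, sex, random_state=0)

# Assess: compute accuracy and selection rate for men and women separately.
model = LogisticRegression().fit(X_tr, y_tr)
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_te,
    y_pred=model.predict(X_te),
    sensitive_features=sex_te,
)
print(frame.by_group)      # per-group metrics expose disparities
print(frame.difference())  # largest gap between groups, per metric

# Mitigate: retrain under a demographic-parity constraint; the reduction
# re-weights training examples to shrink the gap between groups.
mitigator = ExponentiatedGradient(LogisticRegression(), constraints=DemographicParity())
mitigator.fit(X_tr, y_tr, sensitive_features=sex_tr)
frame_fair = MetricFrame(
    metrics={"selection_rate": selection_rate},
    y_true=y_te,
    y_pred=mitigator.predict(X_te),
    sensitive_features=sex_te,
)
print(frame_fair.difference())  # the selection-rate disparity should shrink
```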
“Fairness in particular is a socio-technical challenge,” Bird said. “We have to take into account many factors, not just model performance.”