The Real Risks of Artificial Intelligence
While sci-fi scenarios of rogue robots dominate popular worries about AI, the technology presents more realistic risks, noted Aaron Cooper, vice president for global policy at BSA.
Cooper pointed to two chief risks of AI. First, he said, AI tools can introduce discrimination, or perpetuate existing biases, through faulty design or flawed training data; a simple disparity check of the kind sketched below can surface some of these problems. Second, Cooper said, industries must prepare to transition workers into other roles as automation takes over tasks that have historically been performed by humans.
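The first risk is concrete enough to test for. Below is a minimal sketch of one common screening step: comparing a model's approval rates across groups against the "four-fifths" rule of thumb used in disparate-impact analysis. The data, group labels, approval rates, and threshold are hypothetical illustrations, not anything Cooper or BSA prescribes.

```python
# Minimal sketch of a disparate-impact screen on model decisions.
# All data here is simulated; the 0.8 threshold is the common
# "four-fifths" rule of thumb, used only for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical protected attribute for 1,000 applicants.
group = rng.choice(["A", "B"], size=1000)

# Simulated model decisions that quietly favor group A.
approved = np.where(group == "A",
                    rng.random(1000) < 0.60,
                    rng.random(1000) < 0.40)

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"approval A: {rate_a:.2f}  B: {rate_b:.2f}  ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("Potential disparate impact -- audit training data and features.")
```

A check like this only flags a disparity; deciding whether it reflects faulty design, skewed data, or a legitimate factor still requires a human audit.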
“As jobs change, people who didn’t use technology are all of a sudden going to need to interface with technology, and we need to get people trained to be able to do that,” he said.
Kusnezov pointed out that relatively small changes to input data can dramatically skew the accuracy of AI tools. A simple Post-it note, he said, can trick an algorithm into mistaking a stop sign for a yield sign. Likewise, altering just two pixels on a radiology image (a change invisible to a doctor) is enough for an AI program to mistake a malignant tumor for a benign one.
“Basically, you can get any outcome you want with an image that looks effectively identical,” Kusnezov said. “AI is fragile today. There are many ways — increasingly sophisticated — to fool it. You have to be careful.”
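To make the fragility Kusnezov describes concrete, here is a minimal sketch of a fast-gradient-sign-style adversarial perturbation against a toy linear classifier. Everything in it is a hypothetical stand-in: the 28x28 "scan," the randomly drawn weights, and the per-pixel budget of 0.02 illustrate the principle, not a real radiology model or a real attack.

```python
# Sketch of an FGSM-style adversarial perturbation: nudge every pixel
# slightly in the direction that most changes the model's score.
# Toy linear "classifier" and synthetic "image" only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained weights for a benign-vs-malignant linear model.
w = rng.normal(size=28 * 28)
b = 0.0

def predict(x):
    """Return P(malignant) for a flattened 28x28 image x."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = rng.uniform(0.0, 1.0, size=28 * 28)  # stand-in "scan"
p_before = predict(x)

# For a linear model the gradient w.r.t. the input is just w, so the
# worst-case small perturbation is epsilon * sign(w), pushed in the
# direction that flips the current prediction.
epsilon = 0.02  # imperceptibly small change per pixel
x_adv = np.clip(x - epsilon * np.sign(w) * np.sign(p_before - 0.5), 0.0, 1.0)
p_after = predict(x_adv)

print(f"P(malignant) before: {p_before:.3f}  after: {p_after:.3f}")
# Each pixel moves by at most 0.02, yet the logit shifts by up to
# epsilon * sum(|w|) -- easily enough to cross the decision boundary.
```

The images look effectively identical to a human, which is exactly the point of Kusnezov's warning: the perturbation exploits the model's geometry, not anything a person would notice.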
Public Policy May Shape AI Adoption
Cooper noted that the White House recently held a summit on AI, focused on both government adoption of the technology and workforce-related issues. “It’s an indication of the government being serious about trying to create a national strategy to both promote the development of AI and also the adoption of AI,” he said.
Cooper also noted that AI is not “one thing,” but rather an umbrella term for a wide array of use cases, many of which will require separate regulatory consideration. “Trying to regulate AI as AI doesn’t make a lot of sense,” he said. “But trying to think about the individual contexts in which AI is being deployed is what’s important.”
Kusnezov predicted that government will play a critical role in bringing together different stakeholders who need to collaborate on AI solutions. “With AI, the interesting thing is no one really owns all of it,” he said.
“It is necessarily a partnership. We talk with academics who have a lot of ideas but have no data and are stuck doing mundane things with open, publicly available data sets. We talk with startup companies that are developing new chips, really remarkable processors that are tailored to inference and different kinds of learning, but they have no data to exercise it and tune it.”
“There is a need to somehow bring these groups together,” Kusnezov added. “And I think there is a value and a role for government in some form as a trusted agent, as a convener, as something to bring together the different entities that you’re going to need to make any substantive progress. I have a hard time seeing that it will simply evolve by itself without a bit of help.”