Computer scientist Mahadev Satyanarayanan says the future of AI lies in assistive technology.

May 31, 2018

Q&A: Mahadev Satyanarayanan on How Assistive Technology Will Shape the Future

Artificial intelligence that acts like an “angel on your shoulder” is where AI is headed, says a prominent computer scientist.

Long at the forefront of technology’s most promising advancements, from voice recognition to edge computing, computer scientist Mahadev Satyanarayanan, known as “Satya,” has been thinking a lot lately about artificial intelligence. In particular: How will AI develop over the next several years? How will businesses apply it most effectively? And should humans be worried about our obsolescence?

Satya is the principal investigator of Carnegie Mellon University’s Gabriel, an edge-computing platform that uses wearable vision technology and may someday help humans do everything from performing surgery to assembling furniture. In a conversation with BizTech, Satya explained why he sees AI evolving not as a threat to human relevance, but as an angel on our shoulders.


BIZTECH: What is the branch of AI known as assistive technology?

Satya: The goal of assistive technology is to help humans improve. One example is a GPS navigation system that says, “Take the next exit,” and then the person takes the action. That’s an example of assistive AI, if you want to think of it that way, and it has been used successfully by millions of people.

That kind of fusion between humans and assistive technology is in store in every domain of human activity. Take a teacher trying to teach algebra to a struggling student. Already, my colleagues at Carnegie Mellon have created learning systems that model how people learn, so that by looking at mistakes that many students have made, they’re able to infer what is going on in a student’s mind, and therefore they can correct the mistake, not only by saying “This is wrong,” but also by saying, “Here is likely why you went wrong.”


BIZTECH: What industries are likely to benefit the most from assistive technology?

Satya: The lowest-hanging fruit might be elder care. If you look at why people get admitted to nursing homes, for at least a small percentage it’s not because they’re physically unable to function, but because they’re forgetful or they make mistakes or they’re somehow not functioning as well cognitively as they need to be. So there’s a case where an AI assistant could be an everyday guide for them: remind them to take their medication, remind them to do the things that are part of everyday life. They might be able to live at home for another six months. And that’s a win for everyone.

Here’s another example. Recently, I was on a Google website because I purchased a Pixel 2 smartphone, and I hadn’t received my refund for the trade-in. So the website said, “Would you like to chat?” It took about five minutes of my interacting in this chat for me to realize I was interacting with a piece of AI. It wasn’t a human on the other side. And when I got far enough and deep enough into my problem, it finally said, “I have to go consult with —” it didn’t say, “a human,” it said, “my supervisor.”

Today, the trend is to use voice-based AI systems, but it’s still difficult to get voice and intonation exactly right. People are still quite able to detect the difference, which is why so many organizations use offshore call centers. But chat systems are a different matter. It took me five minutes to figure out that it was an AI system and not a human. A different person might never have known.

BIZTECH: Can you speak about the Gabriel Project at Carnegie Mellon and its importance?

Satya: Gabriel is a platform for edge computing to create the kind of assistive AI applications that we’ve been talking about. We named it Gabriel because Gabriel is an angel. You can think of these assistive applications as essentially functioning as an angel on your shoulder. It is there to watch what you’re doing and to help you if you’re about to make a mistake, to gently guide you in real time if you seem to have forgotten what to do, to guide you step by step if you seem to be lost.

For example, if you buy something from Ikea, it comes with little printed assembly instructions. It doesn’t even have words. You have to follow the pictures to put together whatever you bought. There are so many areas of American life where the user has to contribute to the assembly. In every one of those areas, you could have specific guidance: an Ikea app that you download for the specific product you bought, one that can tell you, “Sorry, not that screw. The long one.” That’s a kind of assistive technology that’s very valuable. You can see how it would help in healthcare, in engineering and in many other industries.
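The guidance loop described here is easy to picture in code: compare what the camera sees against the expected step, stay silent when things go right and speak up on a mismatch. Below is a minimal, hypothetical Python sketch; the step list, the part names and the detect_part() function are all invented for illustration and are not Gabriel’s actual code.

```python
# Hypothetical step-by-step assembly assistant. The step list, part
# names and detect_part() are invented for illustration; this is not
# the Gabriel implementation.

ASSEMBLY_STEPS = [
    {"expect": "long_screw", "hint": "Use the long screw for the base."},
    {"expect": "cam_lock",   "hint": "Insert the cam lock into the side panel."},
]

def detect_part(frame):
    """Placeholder for a computer-vision model that classifies the
    part currently in the user's hand."""
    raise NotImplementedError

def guide(frames):
    """Yield a correction only when the wearer picks the wrong part."""
    step = 0
    for frame in frames:
        if step == len(ASSEMBLY_STEPS):
            yield "All done!"
            return
        seen = detect_part(frame)
        if seen == ASSEMBLY_STEPS[step]["expect"]:
            step += 1  # right part: advance silently
        elif seen is not None:
            yield f"Sorry, not that one. {ASSEMBLY_STEPS[step]['hint']}"
```

The design point is that the assistant interrupts only on an error, which matches the “gently guide you” behavior Satya describes.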

This is the vision of the Gabriel project: Just like the iTunes store or Google Play, where you have millions of apps that enrich the ecosystem, we believe that creating an ecosystem for just-in-time guidance for many different tasks is exactly what Gabriel can help to do.

BIZTECH: What is edge computing and what is its role in AI development?

Satya: That’s a good question. Most people don’t realize initially that the two are intimately connected. Where it starts getting interesting is after you start deploying AI in the real world. Consider this question: How is an AI system going to communicate with a human being the way people do in face-to-face communication, which includes facial expressions, body language, vocal inflection and more, all conveying as much information as the words being spoken, or more? It’s all very subtle and very complex, but you also have to do it fast. Human beings are amazing in terms of their speed of understanding: a human being can recognize a face as familiar in 300 milliseconds, and can recognize a sound as a human sound in just four milliseconds.


Edge computing is needed because all of AI’s underlying technologies — computer vision, natural language translation, speech recognition — need to happen in an extraordinarily short period of time. If I have only 100 milliseconds to do something, and 17 of those 100 milliseconds go just to network latency, I have lost before I started. The value of the edge is going to come from providing cloud-like compute power very close to the point of application. That’s the essence of edge computing. Instead of sending all that data — the video, the microphone data and so on — to the cloud, processing it there and sending the results back, you do it very close to where you capture it.
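To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. The 100-millisecond budget and the 17-millisecond network figure come from the interview; the cloudlet round-trip time is an assumption added for contrast.

```python
# Back-of-the-envelope latency budget. The 100 ms budget and the
# 17 ms network figure come from the interview; the cloudlet figure
# is an assumption for contrast.

BUDGET_MS = 100  # end-to-end time for a human-speed response

def compute_time_left(budget_ms, network_ms):
    """Milliseconds remaining for vision/speech processing after
    the round trip to wherever the computation runs."""
    return budget_ms - network_ms

for site, network_ms in [("distant cloud (interview figure)", 17),
                         ("nearby cloudlet (assumed)", 2)]:
    left = compute_time_left(BUDGET_MS, network_ms)
    print(f"{site}: {network_ms} ms on the wire, {left} ms left to compute")
```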

There are challenges there. A wearable device like Google Glass weighs only 36 grams; it does not have the compute power to run computer vision on the video stream it captures. You could send the data to the cloud, but that will be slow. Sending it instead to a “cloudlet” that is very close by, and letting the cloudlet do the computer vision right there, is a very practical use case for edge computing. And there are many use cases like this that others are exploring.
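One way to picture that pipeline is a small offload loop: the wearable captures frames and ships them to the nearby cloudlet, which runs the heavyweight vision models and sends back guidance. The sketch below is hypothetical; CLOUDLET_ADDR, the length-prefixed framing and the reply format are assumptions for illustration, not Gabriel’s actual protocol.

```python
# Hypothetical offload loop: a lightweight wearable ships each frame
# to a nearby cloudlet for heavyweight computer vision. The address,
# framing and reply format are assumed, not Gabriel's own protocol.

import socket
import struct

CLOUDLET_ADDR = ("cloudlet.local", 9099)  # assumed nearby edge server

def recv_exact(sock, n):
    """Read exactly n bytes (a single recv may return fewer)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("cloudlet closed the connection")
        buf += chunk
    return buf

def send_frame(sock, jpeg_bytes):
    """Length-prefixed frame upload; the cloudlet replies with a
    short guidance string from its vision pipeline."""
    sock.sendall(struct.pack("!I", len(jpeg_bytes)) + jpeg_bytes)
    reply_len = struct.unpack("!I", recv_exact(sock, 4))[0]
    return recv_exact(sock, reply_len).decode()

def offload(frames):
    """Ship each captured frame to the cloudlet; surface any guidance."""
    with socket.create_connection(CLOUDLET_ADDR) as sock:
        for jpeg in frames:
            guidance = send_frame(sock, jpeg)
            if guidance:
                print(guidance)  # e.g., spoken back to the wearer
```

The only network delay on the critical path is the short hop to the cloudlet, which is exactly the advantage over a round trip to a distant data center.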

BIZTECH: When it comes to AI, is there anything we should be worried about, economically or socially?

Satya: In terms of revenue and justifying the cost of these systems, it’s a lot easier to demonstrate the profitability of replacing an expert. The bottom line is a very different calculation when you’re trying to show how assistive AI improves people’s productivity and reduces human errors. What is the value of that? The biggest challenge we face is that the use cases in which AI is easiest to justify financially are almost all negative for people.

For example, take picking stocks in the stock market. Today, stockbrokers and some other people do that. Could AI be trained to do it as well? Probably, yes.

BIZTECH: So should people be worried about someday being replaced by robots?

Satya: First, remember that human beings are the product of close to 1 billion years of evolution. So, whatever our DNA contains, whatever strategies have been embedded into the fundamental makeup of human beings, is the product of an extraordinarily long process of development, testing, and real-world validation.

I think it’s very difficult to imagine a billion years of human evolution being compressed into five, 10 or even 50 years of AI evolution.

My belief is that AI will do extremely well in narrow domains: very specific kinds of pattern recognition or language skills, anyplace where a lot of training can be brought to bear on a relatively concentrated area. But as we start broadening the scope of what we mean by an AI system, things get very difficult very quickly. So in the broad category of general skills and knowledge, and the ability to synthesize new solutions from building blocks, I have a hard time seeing humans being taken over by AI in the time frame of a few decades.

Photography by Angelo Merendino