Aug 14 2023
Security

Black Hat USA 2023: Five Lessons in Artificial Intelligence

Cyberattackers are using AI to reconstruct and manipulate data points to prey on your weaknesses, but researchers say regulating the emerging technology can help.

“We’ve put a lot of trust in artificial intelligence, fast — but it’s time we start to exercise some skepticism,” said Ram Shankar Siva Kumar, a machine learning and security data analyst at Microsoft, in a Black Hat USA 2023 session on cybersecurity and artificial intelligence.

In his talk, Kumar — who splits his time between Microsoft and the Berkman Klein Center for Internet and Society at Harvard University — described the risks of putting too much faith in automated capabilities too soon.

“It’s a mistake to always trust artificial intelligence because with that, we are rationalizing that the outcome is always in our interest,” said Kumar. Instead, he recommended using AI as one tool of many rather than a replacement for real data. This requires “interrogating the validity of AI’s answers and cross-checking it,” said Kumar.

Here are the five AI lessons IT leaders should learn:

1. AI systems are susceptible to manipulated data.

Cyberattackers are using AI to reconstruct and manipulate data points to prey on users’ weaknesses, explained Kumar. This means that if IT leaders pull from AI-generated data, a hacker may already have seeded it with false information, knowing users will find it.

Alina Oprea, an associate professor of computer science at Northeastern University, described this tactic as “data poisoning” in a recent article in The Economist. It exploits AI systems that pull training data from the open web, where attackers can plant corrupted inputs, making those systems more susceptible to cyberattacks.

2. Artificial intelligence standards are written in a vague way.

According to Kumar’s research, software engineers find the language of AI standards to lack clarity and precision, especially regarding ethical implications.

“When the language of tech standards is so vague, there are more suitcase words that are projected onto these systems that may be problematic,” he said.

Kumar noted that although many companies have imposed regulatory standards on AI models, preventing them from answering unethical or potentially dangerous prompts, more training is needed.

There are also privacy and sourcing standards to consider. For example: Where is the information coming from? Who has the right to use it? Who has verified it?

“We assume the system to be accurate, morally good and logical but remember, generative AI answers can be gamed.”

Ram Shankar Siva Kumar, Berkman Klein Center for Internet and Society, Harvard University

3. Fast data collection has its tradeoffs.

Although AI makes it easy for anyone to develop a basic understanding of a topic or data set, there are drawbacks that stem from collecting data so quickly and grouping different classes of data together. “You might gain efficiency but lose nuance and depth of insights,” said Kumar.

“AI is very intelligent, but it isn’t a thorough researcher yet, nor can it balance multiple perspectives.”

4. Competing interests in AI make objective results hard to come by. 

Technology itself may be apolitical, but the powers controlling it aren’t. “If there is no one solely managing the validity of these AI answers, then it falls to different private companies to handle this tool and leverage it toward their personal interests,” Kumar said.

Right now, tech companies are the loudest voices in the room, Kumar said, but the next step is hearing what academics, data scientists and various professionals have to say.

Check out the slideshow to see photos from Black Hat USA 2023.

5. An increase in AI awareness must come from IT leaders.

For AI to become a trusted and balanced tool in an organization, IT leaders need to build awareness across the enterprise.

“This is a top-down culture shift that must happen where high-level leadership is strategizing about these risks and making sure that everyone on their team is aware,” said Kumar.

AI Is Great, But With It Comes Inherent Issues of Trust

If AI systems represent a fast track to the future, then “we really need to double-click on that word, trust,” said Kumar. Inherent in these five lessons is the issue of trust and perhaps our willingness to trust too easily.

“We assume the system to be accurate, morally good and logical, but remember, generative AI answers can be gamed,” Kumar cautioned.

To keep up with our coverage of Black Hat USA 2023, bookmark this page and follow us on X (formerly Twitter) at @BizTechMagazine or check out the official conference account, @BlackHatEvents.

Photography by Lily Lopate
