
May 08 2024

RSA 2024: How to Secure Artificial Intelligence Projects

Generative AI is this decade’s cloud computing: exciting and urgent, yet fraught with poorly understood peril.

In the previous decade, cloud computing arrived on the technology scene as a revelation. Met initially with skepticism by some, cloud’s benefits soon proved irresistible to most organizations; today, businesses typically use terms like “cloud first” or “cloud forward” to describe their philosophies for data management and storage.

But along the way, businesses made a lot of mistakes in the cloud, especially with security. Most significantly, they failed to grasp the differences between securing data in the cloud and protecting it on-premises, often leaving data vulnerable to configuration errors and other simple mistakes. Many were also slow to see the shortcomings of the traditional castle-and-moat approach to cyberdefense in a cloud-oriented world and only recently have begun to embrace the principles of zero trust.

At the RSA Conference in San Francisco this week, many of the world’s leading cybersecurity professionals suggested that the original sin of the early cloud era was that businesses plowed ahead with cloud transitions without thinking through the security implications.

Now, they worry that history may repeat itself with the industry’s newest revelation: generative artificial intelligence. “We have to make sure that what happened with cloud doesn’t happen with AI,” said Akiba Saeedi, vice president of product management at IBM Security.



Parallels Between Cloud and AI Security

AI “is not just a trend we’re following,” said John Yeoh, global vice president of research for the Cloud Security Alliance. “Our customers are using it. Our staff is using it. And your CEO is presenting it to you now, telling you, ‘We have to do it.’”

When it comes to securing cloud environments, Yeoh said, most professionals today ask questions about network access, data control and management, and configuration, among other things. AI adds new wrinkles to the same questions but doesn’t really change what security professionals need to monitor, he argued: “A lot of the same questions we were asking about cloud 10 or so years ago, we’re going to be asking that for AI.”


For example, authentication is critical in a cloud environment because users seek network access from anywhere. The addition of AI will mean the generation of artificial identities, Yeoh explained. “The machine identities are growing,” he said. “We know access control is important in a cloud environment. In a machine environment, it becomes even more important. For every human you have in your organization, you have 10 or 20 times the machine identities.”

Data control is another critical issue. “We’re going to take a large language model, and we’re going to customize it, train it ourselves and tailor it to specific data in our own environment,” Yeoh said. “And so, data control becomes a crucial aspect of that — what goes in and what goes out.”
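As a concrete illustration of controlling "what goes in and what goes out," here is a minimal sketch of a guarded prompt pipeline. The patterns, function names and the `send` callable are hypothetical stand-ins; a real deployment would rely on a dedicated data-classification or PII-detection service rather than regexes alone.

```python
import re

# Hypothetical redaction patterns for illustration only; production systems
# would use a vetted PII/classification service, not hand-rolled regexes.
REDACTIONS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Mask sensitive values before text crosses the trust boundary."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def guarded_prompt(user_text: str, send) -> str:
    """Redact what goes in to the model, and re-check what comes out."""
    response = send(redact(user_text))
    return redact(response)  # defense in depth: scrub the output too

# Usage with a stand-in for a real model call:
echo = lambda prompt: f"Model saw: {prompt}"
print(guarded_prompt("Contact jane.doe@example.com re: 123-45-6789", echo))
```

The same two-sided check applies whether the model is a hosted API or a locally fine-tuned LLM: inspect inbound prompts before training data or queries leave your environment, and inspect outbound completions before they reach users.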


Most AI Projects Are Not Being Secured

In a new report, “Securing Generative AI,” IBM and Amazon Web Services found that only 24 percent of current generative AI projects are being secured, even though 82 percent of organizations say that “secure and trustworthy AI is essential to the success of the business.”

“While a majority of executives are concerned about unpredictable risks impacting generative AI initiatives, they are not prioritizing security,” the report notes. The survey of more than 2,300 executives identifies a likely reason for the disconnect: Nearly 70 percent of executives say innovation takes precedence over security.


“It’s very similar to what we saw in the past, with cloud, where the drive for innovation is out in front of where the current security posture is,” Saeedi said. “From a maturity standpoint, there’s a lot of new projects and a lot of people just trying to figure out what’s going on right now.”

Part of the challenge with AI security is that it spans two distinct groups, Saeedi said: data scientists, who do the lion’s share of the work building the deep learning models foundational to AI but who often know little about security; and cybersecurity experts, who are only now learning about AI.

That’s where leadership needs to step in and bring the two sides together. Business leaders must understand that security has to be built into every AI project from the ground up; otherwise, they will repeat the mistakes of the past, when they rushed into the cloud with poor security controls.

“At the highest level, AI that’s not trustworthy is not sustainable,” Saeedi said. “And if you have AI that is not secure, and you have other data that’s being manipulated outside the boundaries of the business’s intent, then it’s not trustworthy.”



