A security panel at NVIDIA GTC 2024 with Ned Finkle (NVIDIA); Elham Tabassi (National Institute of Standards and Technology); Gerard de Graaf (EU Office in San Francisco); and Ted W. Lieu (U.S. House Representative from California).

March 22, 2024

NVIDIA GTC 2024: 3 Ways AI Is Changing the Security Landscape

At NVIDIA GTC 2024, experts discussed why governments are racing to regulate AI and how businesses should safeguard the technology.

Artificial intelligence is radically influencing the IT security landscape, offering new paradigms for protecting digital assets and critical infrastructure. But its power also presents new risks as the sophisticated technology becomes available to bad actors and nation-states looking to dismantle infrastructure, explained Tanneasha Gordon, principal with the cyber risk practice at Deloitte.

At NVIDIA GTC 2024, hosted in San Jose, Calif., experts discussed these exciting opportunities and doomsday scenarios. “Enterprises are being challenged with balancing the need to create data environments that encourage and enable innovation, with traditional concerns around the control of data, from legal, regulatory and even commercial sensitivity perspectives,” said Russell Fishman, NetApp’s senior director of product management for data solutions. But it’s a difficult balance to strike.

Here are three ways AI is reshaping security practices and how IT leaders and policymakers are thinking about it:


A Strategic Approach to Securing AI

“Our view is that security must be designed into a company’s AI strategy up front; it cannot be layered on top at a later stage,” said Fishman.

And that’s precisely what organizations are working toward, but the technology is accelerating too quickly. “Everyone is under tight timelines, they have budget pressures, they have all this huge potential in front of them, and there are going to be mistakes,” said Matt Kraning, CTO for Cortex at Palo Alto Networks.


There may not be a right way to secure AI just yet, but Gordon recommends following this three-pillar strategy:

  1. Defend — Implement AI in places that will help identify data anomalies and respond to threats.
  2. Protect — Add safeguards and ensure compliance with federal guidelines as you connect AI to IT systems.
  3. Transform — Identify places in your IT ecosystem where AI can “transform your cyber operations to drive operation efficiency and effectiveness” and improve the organization’s resilience.

Experts predict the process will be iterative, but currently, there are three major ways that AI is impacting cybersecurity. Here are the use cases, opportunities, and drawbacks of each:


1. Enhanced Threat Detection and Prediction

Companies are using AI algorithms to identify patterns and anomalies within large data sets. This allows organizations to detect emerging threats and vulnerabilities faster. For instance, predictive cyberthreat intelligence platforms use AI to analyze data from myriad sources, including past security incidents, dark web chatter and malware evolution trends, to forecast potential security threats.

Opportunity: This allows organizations to proactively bolster their defenses and prioritize security efforts around the most likely attack vectors, significantly reducing the risk of larger breaches.

Challenge: The complexity of these data sets means that continuous training of AI models is essential to maintain accuracy. There’s also the risk of false positives, which can divert resources away from real threats.
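The pattern-and-anomaly idea behind these platforms can be sketched very simply. The example below flags outliers in a stream of event counts using a z-score; the failed-login data, threshold and function name are all illustrative, and production platforms use far richer models than this.

```python
import statistics

def zscore_anomalies(counts, threshold=2.0):
    """Return indices of values whose z-score exceeds the threshold.

    A modest threshold suits small samples, where one large outlier
    inflates the standard deviation.
    """
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # all values identical: nothing stands out
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > threshold]

# Hypothetical hourly failed-login counts; hour 5 spikes sharply.
logins = [12, 9, 11, 10, 13, 220, 12, 8]
print(zscore_anomalies(logins))  # flags the spike at index 5
```

This also illustrates the false-positive trade-off noted above: lower the threshold and legitimate bursts of activity start getting flagged too.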


2. Automated Response and Incident Remediation

AI-driven security orchestration, automation and response (SOAR) systems automate the processes of detecting, investigating and responding to cyber incidents. By integrating with existing security tools and infrastructure, these systems can orchestrate complex workflows for incident response, from initial alert triage to containment and remediation, without human intervention.

Opportunity: Automation speeds up response times dramatically and frees up security teams to focus on strategic tasks rather than getting bogged down in repetitive operational duties. This also provides cost savings.  

Challenge: Automated actions — particularly chatbots handling customer service and security-related tasks — can go haywire. Kraning cited how Air Canada was recently found liable for an AI chatbot that misinformed a passenger about a bereavement refund policy that did not exist.
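The triage-to-containment flow described above can be sketched as a rule-driven playbook. Real SOAR platforms orchestrate actual security tools; the alert fields, severity tiers and action names below are purely illustrative, and the fallback to a human analyst reflects the caution the Air Canada example argues for.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    severity: str  # "low", "medium" or "high"

# Illustrative playbook: severity determines the automated response chain.
PLAYBOOK = {
    "low": ["log"],
    "medium": ["log", "notify_analyst"],
    "high": ["log", "notify_analyst", "isolate_host", "open_ticket"],
}

def run_playbook(alert: Alert) -> list:
    """Return the ordered actions a SOAR workflow would execute.

    Anything outside the known severity tiers is escalated to a human
    rather than handled automatically.
    """
    actions = PLAYBOOK.get(alert.severity, ["log", "escalate_to_human"])
    return [f"{action}:{alert.source_ip}" for action in actions]

print(run_playbook(Alert("10.0.0.7", "high")))
```

Keeping the escalation path explicit is one design choice for limiting how far an automated action can go wrong without human review.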


This kind of AI-generated misinformation can get even scarier in situations where AI infringes on citizens’ fundamental rights, explained Gerard de Graaf, the European Union’s senior envoy for digital to the U.S. and head of the EU Office in San Francisco.

Regulations and safety standards can help prevent these consequences and build public trust. Right now, “50 percent of people say, ‘I’m not comfortable, I don’t trust AI,’” de Graaf said, summarizing global surveys.


3. Identity and Access Management with Behavioral Biometrics

AI is pushing the boundaries of traditional IAM solutions by incorporating behavioral biometrics, which analyze patterns in user behavior — such as keystroke dynamics, mouse movements and even cognitive patterns — to continuously verify user identity, noted Rep. Ted Lieu of California.

Opportunity: This approach can significantly enhance security by offering a nonintrusive security layer that adapts to the unique habits of each user while dramatically reducing unauthorized access.

Challenge: Implementing behavioral biometrics requires balancing security gains against privacy concerns. The technology must also be able to distinguish fraudulent access attempts from legitimate but atypical user behavior.
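A toy version of the keystroke-dynamics idea: compare a user’s observed inter-keystroke timings against an enrolled baseline and accept only small drift. The timing values, tolerance and function names are hypothetical; real systems model many more behavioral signals than raw intervals.

```python
def timing_distance(profile, sample):
    """Mean absolute difference between enrolled and observed
    inter-keystroke intervals, in milliseconds."""
    if len(profile) != len(sample):
        raise ValueError("interval vectors must align")
    return sum(abs(p - s) for p, s in zip(profile, sample)) / len(profile)

def verify(profile, sample, tolerance_ms=40):
    """Continuous check: accept if the typing rhythm stays close
    to the enrolled baseline."""
    return timing_distance(profile, sample) <= tolerance_ms

enrolled = [120, 95, 140, 110]   # hypothetical enrolled intervals (ms)
observed = [125, 90, 150, 105]   # same user, small natural drift
print(verify(enrolled, observed))
```

The tolerance parameter is where the challenge above lives: set it too tight and a tired user typing oddly gets locked out; too loose and an impostor’s rhythm passes.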


For IT leaders, navigating these changes involves not only investing in AI-driven security solutions but also addressing these organizational, ethical and operational concerns. This means ensuring transparency in AI operations, following regulations and fostering a holistic security culture.

And because security is also about feeling safe, organizations need to tell stories about “the positive sides of AI,” said de Graaf, to counter existential worries about the technology, particularly the fear that it will disrupt the labor market. He added: “It’s AI for people, not people for AI.”

Keep this page bookmarked for articles from the event and follow us on X (formerly Twitter) at @BizTechMagazine and the official conference feed, @NVIDIAGTC. The official conference hashtag is #GTC24.

Photo courtesy of NVIDIA GTC 2024
