BIZTECH: We’ve entered the era of generative AI. What’s the impact on security?
PATEL: We know that most breaches are initiated via email. Today, it’s relatively easy to discern what’s a phishing attack. You don’t have to be super sophisticated; you just have to be aware, and then you start to get it. It’s going to get a lot harder in the future as attacks become much more bespoke and far more personalized. Instead of an email from a fake prince offering you $10 million, it’s going to say, “Hey Bob, nice to see you last night at the game. Here’s a link to some pictures you might want to download.”
Because of generative AI, it’s going to get harder and harder to tell the difference between legitimate activity and a malicious attack. Attackers will get more sophisticated, and that’s a bad thing.
On the other hand, generative AI will be used to help simplify the management of security systems. It will play a really big role in detecting breaches and in responding, remediating and recovering from them.
BIZTECH: How are you using generative AI in the solutions you’re deploying?
PATEL: I’ll give you an example. We have a Security Operations Center Assistant, available around the end of the year, that will be able to say, “This pattern of behavior that’s happening right now on your network seems like it could be a breach. We don’t know for a fact that it is a breach, but it could be.” You can then tell the system to take a snapshot of your database. If it is a breach, you can instantly revert to the state captured when the breach was detected. If it’s not a breach, you just move forward.
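As a minimal sketch, that snapshot-and-revert flow might look like the following, using an in-memory stand-in for the database; the names and structure here are illustrative assumptions, not the actual SOC Assistant interface:

```python
import copy
from datetime import datetime, timezone

# Toy in-memory "database"; a real deployment would snapshot actual storage.
database = {"customers": ["alice", "bob"]}
snapshots = {}  # alert_id -> (time taken, saved state)

def on_suspicious_pattern(alert_id):
    """A pattern looks like it could be a breach: snapshot now, decide later."""
    snapshots[alert_id] = (datetime.now(timezone.utc), copy.deepcopy(database))

def resolve_alert(alert_id, confirmed_breach):
    """Confirmed breach: revert to the moment of detection. False alarm: move on."""
    taken_at, saved_state = snapshots.pop(alert_id)
    if confirmed_breach:
        database.clear()
        database.update(saved_state)  # instantly revert to the snapshot
```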
Those are the kinds of things I think you’ll be able to use generative AI for: setting policy, creating automation, simplifying things for security analysts.
BIZTECH: How will companies use AI to make and enforce policies?
PATEL: We’re launching the Policy Assistant, where you can use natural language to say something like, “Hey, Bob’s a new employee. He’s joined the publishing department. Give Bob the rights of an editor.” It will then give you a set of parameters to pick from, allowing Bob to access these resources but not those resources. You pick, you modify, then you say, “Go ahead and verify,” and then it will implement that policy.
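The flow he describes (a natural-language request, a reviewable set of parameters, explicit verification, then enforcement) could be sketched like this; the role template and function names are invented for illustration and are not the Policy Assistant’s real API:

```python
# Hypothetical role template: what an "editor" may and may not access.
ROLE_TEMPLATES = {
    "editor": {
        "allow": ["cms.read", "cms.write", "cms.publish_draft"],
        "deny": ["cms.delete_site", "billing.read"],
    },
}

def propose_policy(user, department, role):
    """Turn 'give Bob the rights of an editor' into reviewable parameters."""
    template = ROLE_TEMPLATES[role]
    return {"user": user, "department": department, **template}

def apply_policy(proposal, verified):
    """Only implement the policy after a human says 'go ahead and verify'."""
    if not verified:
        raise ValueError("Policy must be reviewed before enforcement")
    print(f"Granting {proposal['user']}: {proposal['allow']}")
    print(f"Denying {proposal['user']}: {proposal['deny']}")

# Usage: pick and modify the proposed parameters, then confirm.
proposal = propose_policy("Bob", "publishing", "editor")
apply_policy(proposal, verified=True)
```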
Another question is, how do you protect your organization from intellectual property compromise in the era of generative AI? For example, a developer wants to upload a piece of code to a generative AI engine to have it check and debug the code, but the company may not want the developer to do that. So, we can detect whether something is a piece of code and do data loss prevention on the egress, saying, “This is something you cannot upload to ChatGPT, because that’s against company policy.”
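A simplified version of that egress check might look like the sketch below, with a crude keyword heuristic standing in for a real code classifier and an illustrative destination list:

```python
import re

GENAI_DOMAINS = {"chat.openai.com", "chatgpt.com"}  # illustrative list
CODE_HINTS = re.compile(r"\bdef |\bclass |\bimport |#include\b|function\s*\(")

def looks_like_code(payload):
    """Crude stand-in for a real 'is this source code?' classifier."""
    return bool(CODE_HINTS.search(payload))

def inspect_egress(destination, payload):
    """Return True to allow the upload, False if DLP blocks it."""
    if destination in GENAI_DOMAINS and looks_like_code(payload):
        print("Blocked: uploading source code to a generative AI "
              "service is against company policy.")
        return False
    return True

inspect_egress("chatgpt.com", "def secret_algorithm(x):\n    return x * 42")
```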
The other side is that the developer might go to a generative AI engine and say, “Write me a piece of code for XYZ.” Your company might have a policy against that too, because of intellectual property rights, attribution, etc. We can now tell you, “This piece of code that you checked in has a 95 percent chance of having been written by AI.”
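On the check-in side, the same idea reduces to scoring committed code with a detector model. In the sketch below, the scoring function is a placeholder; real detectors are statistical models, and the 95 percent figure is just the example from the interview:

```python
def ai_authorship_score(code):
    """Placeholder for a model estimating P(code was AI-generated)."""
    return 0.95  # stand-in value; a real detector would compute this

def check_in_hook(filename, code, threshold=0.9):
    """Flag check-ins that the detector attributes to AI."""
    score = ai_authorship_score(code)
    if score >= threshold:
        print(f"{filename}: {score:.0%} chance this code was written by AI; "
              "review required under company IP policy.")

check_in_hook("parser.py", "def parse(tokens): ...")
```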