Feb 13 2026
Security

Are Businesses Ready to Defend Against AI-Powered Cyberattacks?

Businesses are locked in an epic artificial intelligence arms race with cybercriminals. We discussed the state of play with IDC analyst Grace Trinidad.

The age of artificial intelligence is here, and the effects on cybersecurity will be profound. How profound? In its FutureScape, the tech analyst firm IDC offers a set of predictions on how organizations will use AI to defend themselves in the next year or two, and how threat actors will use it in their attacks. IDC’s Grace Trinidad walked us through the most important and striking of the predictions, including the expectation that breach response playbooks will eventually become dynamic.

BIZTECH: How much will security be automated thanks to AI, and how quickly?

Security is the prime spot for initial AI adoption and the most ready for it because of all the telemetry that already exists, which lends itself to good AI analysis. When you have very high-quality telemetry, that naturally cascades into good AI output. The degree to which automation is happening in security is really in those safe areas where low-tier analysts would often intervene or pick up an alert. And so, you're looking at known vulnerabilities and known remediation pathways: things that are core, easy to handle and repeatable.

Where I’m most excited about this push to automation — this push to freeing up analysts, this push to platforms better supporting security posture — is for small and midsized businesses. With this technology, you’re going to unlock a new security era where every organization is secure to a pretty robust degree, regardless of whether they have the right security analysts on staff. In this cybersecurity market, there’s not enough talent and there’s too much need. So, AI-powered automation is arriving at the right time.

BIZTECH: IDC predicts that soon, detection and response playbooks will be generated dynamically at the time an alert is created. How will that work?

When you’re talking about the dynamic playbooks, that is forward-looking. It’s happening now, but not yet in a personalized way. In the next iteration, I would say in the next three years, we’re going to see personalized playbooks based on telemetry from that organization’s existing environment captured on the fly, in real time.

Right now, playbooks aren't updated as frequently as they should be. It may be a once-a-year project, if that. Well-resourced organizations probably set aside dedicated time to review and update playbooks. But for the most part, they languish until they’re updated on an as-needed basis.

The hope for dynamically updated playbooks is that, first, there's real-time identification of a vulnerability or exploit. And that's pretty cool, because the exploits are always changing and advancing. We have new ones all the time. By collecting telemetry, AI helps define what a healthy environment should be. Any deviation from that healthy environment is then picked up. So, you're layering algorithmic statistical modeling on top of the security posture.
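
To make that idea concrete, here is a minimal sketch of the "deviation from a healthy baseline" approach Trinidad describes, assuming a single hypothetical telemetry metric (failed logins per hour) and a simple statistical threshold. Real platforms model many signals at once; none of the names or numbers below come from IDC or any specific vendor.

```python
# Minimal, hypothetical sketch: use historical telemetry to define a "healthy"
# baseline, then flag deviations that would feed an alert or playbook.
# Metric names and thresholds are illustrative assumptions, not a vendor's API.
from statistics import mean, stdev

# Hypothetical hourly counts of failed logins observed during a healthy period
baseline_failed_logins = [3, 5, 2, 4, 6, 3, 5, 4, 2, 3, 4, 5]

def is_anomalous(observed: float, history: list[float], z_threshold: float = 3.0) -> bool:
    """Flag a reading that sits more than z_threshold standard deviations
    away from the historical baseline (the 'deviation from healthy' idea)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > z_threshold

# A spike in failed logins this hour would be picked up for investigation
print(is_anomalous(42, baseline_failed_logins))  # True: far outside the baseline
```

The design choice here is the simplest possible statistical model; the dynamic playbooks described above would layer richer, environment-specific modeling and remediation steps on top of this kind of deviation signal.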

The dynamic playbook in its envisioned state doesn't exist at this moment in time. We do have remediation playbooks that come up as suggestions, but they're not tailored to the organization. So, right now, if you're using Google, if you're using CrowdStrike, if you're using Palo Alto, they'll present you with a vulnerability and then present you with suggested steps for remediation. The future, based on discussions we've had with IT companies and security companies, is that the playbook will be tailored to your specific environment, to exactly which parameters you need to address and to your business priorities.

BIZTECH: These AI systems know everything about your business once they’re embedded. They really are able to say, “Oh, this is what you should be doing.”

We’re not quite there yet. That’s the desired state: “The AI knows my whole enterprise, and we’ve mapped out all of our protocols to this existing AI platform. We’ve got guardrails. We’ve got policies in place that the AI is honoring.” All of that is still very nascent. I don’t know of any organization that I would call a standout example of AI integrated throughout the enterprise.

Security posture and telemetry will depend on how integrated AI is in your systems. What if you’ve left out shadow data repositories, or you don’t even know where they are, or you’ve still got shadow AI in your environment? Executing on this assumes that you’ve done due diligence on securing the enterprise first and securing AI second, and that you have robust identity management, too.

BIZTECH: Speaking of identity, you note that within a year or so, 80% of organizations will experience phishing attacks from criminals using synthetic identities. What’s that about?

Here’s an example: In early February 2024, a deepfake of an executive fooled a Hong Kong-based employee of a British company into forking over $25 million. The employee was suspicious at first but was invited to a multi-person video call where all of the participants, except the employee, were deepfake re-creations of the company’s CFO and other colleagues. There was no protocol in place to verify that the executive actually needed that transfer of funds from one location to another.

That incident heated up the entire discussion of synthetic identities and manipulation of staff, and it spurred the emergence of code words. Everyone needs a safe word now so that we can verify transactions and make sure they’re not initiated by a fraudulent actor posing as one of our executive staff.

There was another attempt in which an employee received a phone call, presumably from their boss, and the boss on the call repeatedly referred to “my wife.” That was a red flag: having dealt with this particular executive for a long time, the employee knew he always referred to his wife by her name. And so, this person had doubts.

BIZTECH: This deepfake situation really is terrifying. I’m just assuming you are, in fact, Grace Trinidad. How aware are companies of this issue?

The ones that are in discussion with me, yes, they are aware, and they want advice on how to limit their exposure. If you have old materials on YouTube with your CEO speaking in multiple frames, multiple angles, pull that down. Don’t give AI more material to render a more realistic image of your CEO. If you have one static shot, it’s harder to render a high-fidelity video. Don’t just leave stuff up on the internet.

In some ways, the 2024 Hong Kong event was a boon, because it put everyone on notice that this is a plausible threat. Before that point, it was kind of theoretical. Now, it’s often being executed via email at first, then followed up with voice. You don’t even necessarily need to jump on a deepfake Zoom call.

BIZTECH: In general, who benefits more from AI? Is it the companies and organizations trying to protect themselves, or is it the threat actors trying to run scams and create breaches?

What's happening is a zero-sum game. As the ways that we protect ourselves become more dynamic and more responsive and more agile, threat actors are also going to up their game. I think we’re going to stay neck and neck for as long as we can. There are creative people on both sides.

Photography by Sylvia Jarrus