Aug 12 2025

Black Hat 2025: Stay Secure as Threat Actors Advance With AI

IT leaders must know how to anticipate cyberattacks and defend their organizations, even as artificial intelligence makes cybercriminals smarter and faster.

Threat actors are amplifying their attacks with automation. They’re taking generative artificial intelligence tools and using them at every step, from identifying targets to entrapping them. AI can help cybercriminals pinpoint employees who have access to valuable data and can even help identify those employees’ vulnerabilities.

The technology also exposes weak points in networks, software and systems, and it does this all faster than threat actors could ever manage manually.

Security experts at this year’s Black Hat USA conference in Las Vegas dissected these trends in criminal behavior and offered a path forward for organizations that want to protect themselves from advanced threats.   

Understand Threat Actors’ Motives and Goals 

Ransomware is on the rise. In fact, the Zscaler cloud blocked 146% more ransomware attacks year over year against organizations across various industries, according to its “ThreatLabz 2025 Ransomware Report.” And amid these attacks, researchers noticed another startling trend.

“A growing number of ransomware operators are abandoning encryption altogether in favor of pure data extortion — an evolution mirrored by a 92.7% rise in data exfiltration volumes over the past year,” the report states. 

Threat actors are taking this next step to put more pressure on organizations to pay their ransoms. IT professionals must remember that these cybercriminals are organized like a business, and most cyberattacks are financially motivated, said Aamir Lakhani, senior manager of threat intelligence at Fortinet.

“The majority of attacks we see at FortiGuard Labs, about 70%, are financially motivated attacks,” he said. “Some of these threat actors we investigate have the equivalent of CEOs and CFOs.” 

The best way to combat AI is with AI, said Shannon Murphy, global security and risk strategist at Trend Micro. “The thing with AI is scale and pace, and you have to keep pace,” she said. But to do that, IT leaders must first know how to anticipate threat actors’ AI-driven attacks.

146%

The year-over-year increase in ransomware attacks blocked by Zscaler

Source: Zscaler, “ThreatLabz 2025 Ransomware Report,” July 2025

Cybercriminals Are Working Smarter, Not Harder

There are numerous ways that AI allows threat actors to attack organizations more efficiently. The first is by sharing restriction-free large language models and working exploit code on hacker forums, said Lakhani, who has personally tested some of these models. “I used an old zero-day attack and asked the AI, ‘Can you write me a code to exploit this vulnerability?’ Most commercial LLMs won’t do that, but this one did,” he said.

Threat actors are also better able to exploit vulnerabilities within certain industries. For example, energy and agricultural organizations currently face increased attacks because those industries are in the midst of major digital transformation initiatives.

Similarly, threat actors are choosing to target the supply chain rather than individual organizations. “The intent is to cause that downstream supply chain effect,” said Deepen Desai, Zscaler’s chief security officer. “If you attack the vendor, like an AI vendor, they’re then able to downstream the attack to all of the organizations that are relying on that AI vendor. The downstream impact is pretty big.”

Attackers are also going after file-sharing applications to maximize their ROI. “Once they exploit them, they have access to all of the data stored in these applications. So, they’re able to steal data from hundreds or thousands — depending on how many organizations use these applications,” said Brett Stone-Gross, senior director of threat intelligence at Zscaler. “They can steal massive amounts of data all at once, or maybe they chain a few different zero-day vulnerabilities together, and it’s very successful.”

RELATED: What are the top five vulnerabilities uncovered through penetration testing?

Tightening file-sharing permissions and adding protections to data in third-party applications help companies stay ahead of threat actors’ AI, but the key in any scenario is to be proactive.

“How you remediate that risk is very different from how we did security even five years ago,” Murphy said.
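Tightening file-sharing permissions comes down to enforcing least privilege. A minimal sketch of what that audit could look like is below; the share records, field names and policy rules here are illustrative assumptions, not the API of any particular file-sharing product.

```python
# Hypothetical least-privilege audit of file-share entries.
# The data model (scope, permission, needs_edit) is an assumption for illustration.

RISKY_SCOPES = {"anyone_with_link", "public"}

def audit_shares(shares):
    """Return entries whose sharing scope or permission exceeds least privilege."""
    findings = []
    for entry in shares:
        if entry["scope"] in RISKY_SCOPES:
            # Exposed outside the organization: highest-priority finding.
            findings.append((entry["path"], "exposed beyond the organization"))
        elif entry["permission"] == "edit" and not entry.get("needs_edit", False):
            # Write access granted without a documented business need.
            findings.append((entry["path"], "write access not justified"))
    return findings

shares = [
    {"path": "/finance/q3.xlsx", "scope": "public", "permission": "view"},
    {"path": "/hr/handbook.pdf", "scope": "org", "permission": "view"},
    {"path": "/eng/specs.docx", "scope": "org", "permission": "edit"},
]
print(audit_shares(shares))
```

Run on a schedule, a check like this turns the reactive question “who leaked this file?” into the proactive one “which shares should never have existed?”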

AI-Powered Social Engineering Drives More Targeted Attacks

Organizations also need to protect themselves and their employees against improved social engineering attacks. Today’s threats go beyond well-written generative AI emails.

Attackers are using platforms such as LinkedIn to target specific positions within organizations and find the email addresses tied to those positions.

This automated reconnaissance also allows cybercriminals to personalize their attacks, Murphy said. When she was the subject of a red team test, AI used her LinkedIn activity to generate social engineering attacks using the cities she had visited and connections she had made. “AI loves content,” she said. “There’s no more blaming the victim in this scenario, because it’s so perfect.” 

Combating these attacks, then, is up to the company’s IT experts. “It is our responsibility to keep our people safe,” Murphy said. “We can ask technology to do more of the heavy lifting.”

GET A STEP AHEAD: Cybersecurity leaders must keep an eye on the future.

Lakhani mentioned that his team worked to reverse-engineer attackers’ research. “We should do the same thing and find the vulnerabilities so we can fix that,” Lakhani said. “It works both ways; it’s always a cat and mouse game.”

These reconnaissance-driven social engineering attacks are also allowing threat actors to enhance deepfake technologies. Deepfake scams, especially audio deepfakes, “are very prolific now,” Murphy said.

To protect employees from these attacks, technology teams need deepfake detection tools. These solutions, which can be built directly into the endpoint, use signals to detect AI-generated audio or video scams.

“For example, sometimes if the audio is too perfect, that’s an indicator,” Murphy explained. “We can hear the HVAC system in here; there’s white noise. If that’s completely gone from the equation, on its own it doesn’t mean there are bad actors at play, but in combination with other signals, it might suggest bad actors.”
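The “too perfect” signal Murphy describes can be made concrete: real recordings carry a room-tone noise floor, while some synthetic audio goes to near-digital silence between words. The sketch below estimates a noise floor and flags suspiciously clean audio; the frame size and -80 dB threshold are assumptions for illustration, and as Murphy notes, this would only ever be one signal weighed alongside others.

```python
import math

def noise_floor_db(samples, frame=256):
    """Estimate the noise floor as the RMS of the quietest frame, in dB."""
    rms = []
    for i in range(0, len(samples) - frame + 1, frame):
        chunk = samples[i:i + frame]
        rms.append(math.sqrt(sum(s * s for s in chunk) / frame))
    quietest = min(rms)
    return 20 * math.log10(max(quietest, 1e-12))  # clamp to avoid log(0)

def suspiciously_clean(samples, threshold_db=-80.0):
    """One weak detection signal among many -- never a verdict on its own."""
    return noise_floor_db(samples) < threshold_db

# Simulated audio: a voiced tone followed by a pause.
voiced = [0.5 * math.sin(0.1 * n) for n in range(1024)]
room_tone = [0.0005 * math.sin(1.3 * n + 0.7) for n in range(256)]  # faint HVAC hum
silence = [0.0] * 256                                               # digitally perfect

real_audio = voiced + room_tone   # pause still carries room tone
synthetic = voiced + silence      # pause is unnaturally silent
```

Here `suspiciously_clean(synthetic)` fires while `suspiciously_clean(real_audio)` does not: the real pause still sits around -69 dB of room tone, while the synthetic pause bottoms out entirely.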

Digital Twins Improve Security Audits

Another emerging solution for organizations is the digital twin. Traditionally, this technology has been used to model smart cities or manufacturing operations, but IT teams are beginning to use AI to generate digital twins of their own environments for red-teaming exercises.

“If I do red team or penetration tests, I’m limited because the business isn’t going to let me attack those crown jewels or that mission-critical server,” Murphy said. “By replicating the entire digital environment, we can simulate a very real-world attack in a way that satisfies the business and satisfies the security leader.”

Check out this page for all of our articles from the event, and find event highlights and behind-the-scenes moments on the social platform X @BizTechMagazine and @BlackHatEvents.

peshkov/Getty Images