Jun 14 2024
Security

Understanding Customized Phishing Emails in the Age of Generative AI

Hyperpersonalized email scams are surging. Here’s how businesses can navigate them.

Generative artificial intelligence is a major force, and not always for good. While generative AI is helping enterprises become more productive, it’s also helping cybercriminals become more potent, especially when it comes to hyperpersonalized phishing emails.

Nefarious actors are leveraging generative AI to customize phishing emails, raising the volume and sophistication of their attacks. Eighty percent of security leaders say their organizations have fallen victim to phishing emails written by generative AI. As IT leaders prepare to navigate this new frontier of email scams, businesses need to know what they’re up against.


How Are Phishing Email Attacks Evolving?

“Phishing attacks deliberately play with human psychology and personal bias,” Fredrik Heiding, a Ph.D. research fellow at Harvard University, told BizTech in a previous article. “They work because they hijack shortcuts in your brain. But if you pause and reflect on the contents of an email, your rational brain will take over and stop you from clicking.” That pause matters more than ever: readers may need to spend more time scrutinizing emails than they used to.

Traditionally, phishing emails have been riddled with grammatical and punctuation errors. In fact, 61 percent of people spot scams such as phishing emails because of the poor spelling and grammar they contain. But as Okta reports, those telltale signs are fading because generative AI eliminates such errors.

Generative AI tools such as ChatGPT can churn out flawless text in multiple languages at rapid speeds, enabling widespread phishing schemes that are sophisticated and personalized. Generative AI also learns with each interaction, so its efficiency only increases over time.

“Generative AI tools are letting criminals craft well-written scam emails, with 82 percent of workers worried they will get fooled,” notes AI Business.

Stephanie Carruthers, chief people hacker for IBM’s X-Force Red, recently led a research project that showed phishing emails written by humans have a better click-through rate than phishing emails written by ChatGPT, but only by 3 percent. Still, it won’t be long before phishing emails crafted by generative AI models garner higher CTRs than those written by humans, especially as the models leverage personality analysis to generate emails tailored to targets’ backgrounds and traits.

Generative AI models are already more efficient than any human could hope to be. This is one of the reasons threat actors are leveraging the technology to their benefit.

82%

The percentage of employees who fear they cannot distinguish phishing from genuine email messages

Source: aibusiness.com, “Generative AI Opens New Front in Phishing Email Wars,” April 5, 2023

How Are Threat Actors Using Generative AI?

“We were able to trick a generative AI model to develop highly convincing phishing emails in just five minutes,” Carruthers notes in an IBM Security Intelligence blog post. “It generally takes my team about 16 hours to build a phishing email, and that’s without factoring in the infrastructure setup. So, attackers can potentially save nearly two days of work by using generative AI models.”

Between these time savings and the email personalization generative AI allows for, threat actors are leveraging ChatGPT, WormGPT and other AI-as-a-service products to create new phishing emails at a rapid pace. This lets them attack more widely, more frequently and with greater success. The technology can also send customized phishing emails to a specific group of people, a tactic particularly useful for spear phishing.

This is a big reason 98 percent of senior cybersecurity executives say they’re concerned about the cybersecurity risks posed by ChatGPT, Google Gemini (formerly Bard) and similar generative AI tools. But AI is merely a tool. Just as it can be used to improve phishing email attacks, it can be used to better defend against them.

FIND OUT: What is consent phishing and how can businesses prevent it?

How Can You Protect Against These New Attacks?

As phishing email attacks continue to evolve, security leaders must improve their defenses accordingly. According to a recent study, more than half of IT organizations rely on their cloud email providers and legacy tools for security, confident that these and other traditional solutions will detect and block AI-generated attacks. Those protections help, but the best defense against AI is AI.

Check Point lists three main benefits of using AI for email security: improved threat detection, enhanced threat intelligence and faster incident response.

AI can identify phishing content through a range of techniques, including behavioral analysis, natural language processing, attachment analysis and malicious URL detection, and it can feed those findings into threat intelligence and incident response workflows.
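To make those techniques a little more concrete, here is a minimal, hypothetical sketch (in Python, using only the standard library) of how a few such signals, urgent language, links and senders that don’t match the expected domain, and risky attachment types, could be rolled into a simple risk score. The indicator lists and the expected_domain parameter are illustrative assumptions rather than any vendor’s actual method; production email security leans on trained models, threat intelligence feeds and behavioral baselines instead of hand-written rules.

```python
import re
from email import message_from_string
from urllib.parse import urlparse

# Illustrative indicator lists only; real products use far richer signals.
URGENCY_TERMS = {"urgent", "immediately", "verify your account", "password expires"}
RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr"}


def score_email(raw_email: str, expected_domain: str = "example.com") -> int:
    """Return a crude phishing risk score for a raw email message."""
    msg = message_from_string(raw_email)
    payload = msg.get_payload()
    body = payload.lower() if isinstance(payload, str) else ""
    score = 0

    # Language signal: urgent or coercive phrasing in the body.
    score += sum(2 for term in URGENCY_TERMS if term in body)

    # URL signal: links that point somewhere other than the expected domain.
    for url in re.findall(r"https?://\S+", body):
        if expected_domain not in urlparse(url).netloc:
            score += 3

    # Sender signal: a From address outside the expected domain.
    if expected_domain not in msg.get("From", "").lower():
        score += 2

    # Attachment signal: risky file types named in the message.
    if any(ext in body for ext in RISKY_EXTENSIONS):
        score += 3

    return score


if __name__ == "__main__":
    sample = (
        "From: IT Support <helpdesk@examp1e-support.net>\n"
        "Subject: Action required\n\n"
        "Your password expires today. Verify your account immediately at "
        "https://examp1e-support.net/login or access will be suspended.\n"
    )
    print("Risk score:", score_email(sample))  # higher scores warrant review
```

In practice, a score like this would be just one feature among many feeding a machine learning classifier, which is how AI-based defenses can keep flagging malicious messages even when the text itself is grammatically flawless.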

UP NEXT: How to avoid becoming the target of a phishing email.

In addition to AI security defenses, businesses must implement security training to reduce the likelihood of human error. This means educating employees on what generative AI-based phishing attacks look like, from telltale stylistic patterns to typical grandiose promises, explains Glenice Tan, cybersecurity specialist at the Government Technology Agency, in a Wired article.

“There’s still a role for security training,” she says. “Be careful and remain skeptical.”
