That’s why updated, foundational user training should remain a key part of a business’s security strategy. Sure, the technology may have gotten more convincing, but emphasizing the motives and tactics of phishing (an unusual request or an attempt to create a sense of urgency) can help organizations ground employee education amid a changing threat landscape, Richberg says.
A Multilayered Approach to Deepfake Phishing Threats
Traditional email filters and text-based protections may not be enough anymore.
“AI-fueled voice and video impersonation can now handle interactive conversations with help desk staff, and the large language models supporting generative AI tools can often provide real-time answers to individual-specific questions — such as, ‘What was your high school’s mascot?’ — that defeat traditional search engines,” Richberg says.
As deepfake attempts become even more believable, a multilayered approach of technology, training and tailored organizational processes is essential to protecting users and business operations, he adds. Businesses can leverage security tools that look for indicators of nonhuman and artificial video content.
Sometimes, the process change can be as simple as slowing down and reviewing the request again on a computer.
“Potential victims are more likely to fall for fakes when they encounter them on mobile devices, so teaching users to wait to act on anything financial or sensitive in nature until they can do so from a full-sized screen gives them a broader perspective, a second chance to review the request and options such as hovering their cursor over a hyperlink that can reveal an unexpected internet destination,” Richberg says.
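That hover check catches a common phishing tell: a link whose visible text looks like a trusted URL while the underlying `href` points somewhere else. As a minimal sketch (not any specific vendor's filter), the same mismatch can be flagged programmatically using only Python's standard library; the threshold for "looks like a URL" here is an assumption for illustration:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collects (visible text, actual href) pairs from an HTML email body."""
    def __init__(self):
        super().__init__()
        self._href = None   # href of the anchor currently being parsed
        self._text = []     # visible text fragments inside that anchor
        self.links = []     # finished (display_text, href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

def suspicious_links(html):
    """Flag links whose visible text looks like a URL but whose href
    resolves to a different host -- the mismatch a hover would reveal."""
    auditor = LinkAuditor()
    auditor.feed(html)
    flagged = []
    for text, href in auditor.links:
        if text.startswith(("http://", "https://", "www.")):
            shown = urlparse(text if "://" in text else "https://" + text).netloc
            actual = urlparse(href).netloc
            if shown and actual and shown != actual:
                flagged.append((text, href))
    return flagged
```

A real email gateway does far more (redirect chains, lookalike domains, URL shorteners), but the host comparison above is the core of the "unexpected internet destination" check Richberg describes.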
Companies may also consider modifying processes to require the involvement of a second person, or confirmation over a second mode of communication, for important or high-value transactions, he adds.
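Expressed as policy logic, that dual-control rule is simple to encode. The sketch below is a hypothetical illustration, not a production approval system; the dollar threshold and channel names are assumptions:

```python
from dataclasses import dataclass, field

HIGH_VALUE_THRESHOLD = 10_000  # assumed policy threshold, in dollars

@dataclass
class Transaction:
    amount: float
    requested_by: str
    # Channels on which the request was independently confirmed,
    # e.g. {"email", "phone_callback"} -- hypothetical labels.
    confirmed_channels: set = field(default_factory=set)
    approvers: set = field(default_factory=set)

def may_execute(txn: Transaction) -> bool:
    """Dual-control sketch: a high-value transfer needs a second approver
    (someone other than the requester) AND confirmation over at least two
    separate communication channels before it can proceed."""
    if txn.amount < HIGH_VALUE_THRESHOLD:
        return True
    second_approver = bool(txn.approvers - {txn.requested_by})
    out_of_band = len(txn.confirmed_channels) >= 2
    return second_approver and out_of_band
```

The design point is that a deepfaked voice or video call compromises only one channel and one identity; requiring an independent approver and a second channel forces the attacker to fake at least two of each.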
Financial institutions, for example, may want to re-evaluate their biometric user authentication processes as AI-fueled deepfake audio improves enough to bypass some voice-recognition technology. “This will require organizations employing such technologies to monitor emerging threat capabilities and be willing to upgrade their security technology and practices to stay ahead of this threat,” Richberg says.
