Voice Fraud Is Easy to Commit
Voice fraud is growing because it’s relatively easy to commit, Balasubramaniyan said. A sample of a person’s voice just a few minutes long is enough to train a deep learning system to create a convincing synthetic version of that voice saying anything. And the deep learning technology is not hard to come by.
Balasubramaniyan conducted a live demonstration in which he used such a system to create what sounded like a clip of Donald Trump saying he’d just initiated a bombing campaign on North Korea. Several politicians have already been subjected to such fakes, perhaps most famously last year, when Speaker of the House Nancy Pelosi was made to sound drunk by scammers who simply manipulated a sample of her voice to make her speech sound slurred.
That was just a “cheap fake,” Balasubramaniyan said, because the scammers didn’t use deep learning. But think of a situation in which a politician is presented as having said something truly damaging, when they didn’t. “It seems to me pretty easy to imagine a scenario in which that would actually affect an election,” he said.
Businesses Have Reason to Worry About Voice Fraud
Voice fraud is worrisome for enterprises for several reasons. In addition to the possibility that they might be scammed by fraudsters impersonating their own executives, high-profile business leaders may also find themselves subject to public relations problems or even blackmail by scam artists who create fake audio or video of them saying something embarrassing.
Mark Zuckerberg, for example, was the subject of such a “deepfake” in 2019, when a fake video was posted to Instagram, which Zuckerberg’s Facebook owns. In the video, Zuckerberg is presented as saying: “Imagine this for a second: One man with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”
He never said that, and the video is not especially convincing. But Balasubramaniyan said the technology is improving rapidly, and soon fraudsters will be making much more persuasive deepfakes.
Another concern for businesses: the implications of voice fraud for voiceprint authentication systems, a technology with which many organizations are experimenting. With voiceprint authentication, end users supply a sample of their voices, which is then stored and compared against live callers’ voices to verify identity.
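The enroll-then-compare flow described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s implementation: real systems derive voice embeddings from a neural speaker-encoder model, whereas here the embeddings are just plain numeric vectors, and the function names and threshold are assumptions.

```python
import math

def cosine_similarity(a, b):
    """Standard cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_caller(enrolled_embedding, live_embedding, threshold=0.8):
    """Accept the caller only if their live voiceprint is close enough
    to the embedding stored at enrollment. The 0.8 threshold is an
    illustrative choice, not a recommended production value."""
    return cosine_similarity(enrolled_embedding, live_embedding) >= threshold

# Example: a genuine caller's embedding sits near the enrolled one;
# an impostor's (or a poor synthetic clone's) sits farther away.
enrolled = [0.9, 0.1, 0.4]
same_speaker = [0.88, 0.12, 0.41]
impostor = [0.1, 0.9, 0.2]

print(verify_caller(enrolled, same_speaker))  # True
print(verify_caller(enrolled, impostor))      # False
```

The sketch also makes the deepfake risk concrete: a sufficiently good synthetic voice produces an embedding close to the enrolled one, which is why the comparison alone cannot be the only safeguard.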
Asked whether voiceprint authentication is now a “dead end,” Balasubramaniyan said the technology still has value.
“Any authentication system on its own is susceptible to fraud,” he said, noting that generations of criminals have stolen passwords. “What you need to make sure of is that you have enough safeguards and that you’re using all of them. If you’re replacing passwords with voiceprint, you’re going to run into a world of hurt. But if you’re using several things together, you’ll be much safer.”
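The layered approach Balasubramaniyan describes, in which voiceprint is one factor among several rather than a password replacement, might look like this in outline. The factor names and the two-of-three policy are illustrative assumptions, not his stated design.

```python
def authenticate(voiceprint_ok, otp_ok, device_ok, required=2):
    """Grant access only when at least `required` independent factors
    succeed, so a spoofed voice alone cannot get through.
    Factors here (voiceprint, one-time code, known device) are
    hypothetical examples of "several things together"."""
    passed = sum([voiceprint_ok, otp_ok, device_ok])
    return passed >= required

print(authenticate(True, True, False))   # True: voice match plus one-time code
print(authenticate(True, False, False))  # False: a cloned voice by itself fails
```

The design point is simply that compromising any single factor, including the voiceprint, is not sufficient to authenticate.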