Will AI Technology Usher In a Wave of Security Threats?
Businesses and IT leaders concerned about cybersecurity have a lot of potential threats to deal with, from spear phishing attacks to ransomware, and may soon have another item on their worry list: artificial intelligence.
Although AI has the potential to boost productivity and handle rote office tasks, freeing employees to work on more complex assignments, it carries a looming downside in the security realm. In the near future, hackers may be able to use AI tools to find new vulnerabilities and then create exploits and attacks in a fraction of the time it would take a human.
The Security Threats AI Technology May Expose
In August, the Defense Advanced Research Projects Agency, the Defense Department’s research arm, sponsored the Cyber Grand Challenge hacking competition in Las Vegas. The contest pitted seven autonomous machines against each other to find and exploit bugs in each other’s systems. The teams played a hacking game known as Capture the Flag, in which, as TechCrunch noted, they were “assigned servers which must perform certain tasks while constantly being fed new code filled with bugs, security holes, and inefficiencies.”
"Mayhem," a bot created by startup ForAllSecure, which grew out of research at Carnegie Mellon, won the contest by being conservative about how it patched its own servers, because patching can sometimes slow services down and require they be taken offline, as Wired reported. The upshot is that the bots were able to find some bugs much more quickly than humans could have, but they also crippled their own systems in protecting them and could not grasp all of the bugs that a human might.
The danger such technology poses is significant. As IDG News Service reported: “For instance, cybercriminals might use those capabilities to scan software for previously unknown vulnerabilities and then exploit them for ill. However, unlike a human, an AI can do this with machine efficiency. Hacks that were time-consuming to develop might become cheap commodities in this nightmare scenario.”
In such a world, AI could be used to create and control cyberweapons. “The thing people don’t get is that cybercrime is becoming automated and it is scaling exponentially,” said Marc Goodman, a law enforcement agency adviser and the author of Future Crimes, in a New York Times interview last fall.
According to Goodman, AI’s advances can be seen in hacking tools like the widely used malicious program known as Blackshades. Created by Alex Yucel, a Swedish national who was convicted in 2015 in the United States for selling the malware, Blackshades was sold widely in the black market, and functioned as a “criminal franchise in a box,” Goodman said. The tool allowed users without technical skills to deploy ransomware attacks or perform video or audio eavesdropping.
“I don’t want to give any ideas to anyone,” Tomer Weingarten, CEO of security firm SentinelOne, told IDG. However, he said that AI-driven technologies that look for internet vulnerabilities may be coming.
Additionally, tools like Blackshades, which are sometimes referred to as “rent-a-hacker” services, may someday use AI to “design entire attack strategies, launch them, and calculate the associated fee,” IDG notes, leaving the human buyer with a ready-made cyberweapon and far less work to do.
How AI Can Be Used to Fight Cyber Crime
Despite the perils, cybersecurity firms and researchers are using AI to fight back.
As IDG notes, to block malware, the firm Cylance is using a subset of AI known as machine learning, which uses algorithms to detect patterns in data, predict outcomes and, potentially, operate autonomously. That effort has involved building models from malware samples to determine whether activity on a computer is normal or not.
“Ultimately, you end up with a statistical probability that this file is good or bad,” Jon Miller, chief research officer at Cylance, told IDG. He added that the machine learning approach detects malware more than 99 percent of the time, and that the company is continually adding new malware samples to its training data.
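To make that idea concrete, here is a minimal sketch of how such a classifier turns labeled samples into a probability that a new file is malicious. It assumes scikit-learn, and the file features and data are invented for illustration; it shows the general technique, not Cylance’s actual system.

```python
# Hypothetical sketch: train a classifier on labeled samples, then report a
# probability that a new file is malicious. Features and data are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic feature vectors standing in for attributes extracted from files
# (e.g., size in KB, byte entropy, count of suspicious API imports).
benign = rng.normal(loc=[200, 4.0, 2], scale=[80, 0.5, 1], size=(500, 3))
malicious = rng.normal(loc=[150, 7.0, 9], scale=[60, 0.4, 2], size=(500, 3))

X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = good, 1 = bad

model = GradientBoostingClassifier().fit(X, y)

# Scoring an unseen file yields a probability rather than a hard verdict,
# mirroring the "statistical probability that this file is good or bad."
new_file = np.array([[140, 7.2, 8]])
print(f"Probability malicious: {model.predict_proba(new_file)[0, 1]:.2f}")
```

In practice, the value of this approach comes from the breadth and freshness of the training samples, which is why vendors keep feeding new malware into their models.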
Writing in The Independent, John Clark, a computer scientist and the recently appointed chair of computer and information security at the University of Sheffield in the UK, notes that AI will be used to rapidly characterize malware that changes its form as it attacks.
“AI will also help us to track down who is responsible for attacks, identifying what further information is needed to draw conclusions and then asking for it, with automated investigative algorithms following their AI-enhanced noses, making best use of limited resources,” he wrote.
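As a rough illustration of why malware that changes its form defeats traditional signature matching, and why the statistical characterization Clark describes helps, consider the toy Python sketch below (not from the article): a one-byte change produces an entirely different hash, while a coarse byte-histogram feature barely moves.

```python
# Toy illustration: exact signatures break on a one-byte change, while a
# coarse statistical feature (a byte histogram) stays almost identical --
# the kind of signal a learned model can characterize.
import hashlib
import numpy as np

original = bytes(range(256)) * 64          # stand-in for a malware sample
mutated = bytearray(original)
mutated[100] ^= 0xFF                       # a one-byte "polymorphic" change
mutated = bytes(mutated)

print(hashlib.sha256(original).hexdigest()[:16])
print(hashlib.sha256(mutated).hexdigest()[:16])   # completely different hash

def byte_histogram(data: bytes) -> np.ndarray:
    hist = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return hist / hist.sum()

a, b = byte_histogram(original), byte_histogram(mutated)
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"Histogram cosine similarity: {cosine:.4f}")  # stays close to 1.0
```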
Last April, researchers at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory announced that they had developed an AI system to detect malware. The researchers claimed that the system, called AI2, could “detect 85 percent of attacks, which is roughly three times better than previous benchmarks, while also reducing the number of false positives” by a factor of five.
The researchers tested the system using 3.6 billion pieces of data known as “log lines,” which were generated by millions of users over a period of three months.
“To predict attacks, AI2 combs through data and detects suspicious activity by clustering the data into meaningful patterns using unsupervised machine-learning,” MIT noted. “It then presents this activity to human analysts who confirm which events are actual attacks, and incorporates that feedback into its models for the next set of data.”
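That loop can be sketched in a few lines of Python, with scikit-learn models and synthetic “log line” features standing in for AI2’s actual components. The point is the pattern the researchers describe: unsupervised flagging, analyst labeling, then supervised retraining on that feedback, not the real implementation.

```python
# Hypothetical analyst-in-the-loop sketch: an unsupervised model surfaces
# outliers, a human labels them, and a supervised model learns from that
# feedback for the next batch. This is not MIT's AI2 code.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(1)

# Synthetic "log line" features (e.g., requests per minute, distinct IPs).
events = rng.normal(loc=[50, 5], scale=[10, 2], size=(10_000, 2))
events[:20] += [400, 60]  # a handful of genuinely anomalous events

# Step 1: an unsupervised pass scores every event and surfaces the oddest ones.
detector = IsolationForest(contamination=0.01, random_state=1).fit(events)
scores = detector.score_samples(events)
suspicious_idx = np.argsort(scores)[:100]  # 100 most anomalous events

# Step 2: a human analyst reviews the shortlist and confirms real attacks
# (simulated here by labeling the injected anomalies as attacks).
analyst_labels = np.isin(suspicious_idx, np.arange(20)).astype(int)

# Step 3: the confirmed labels train a supervised model for the next batch,
# so the analyst reviews fewer, better-chosen events over time.
classifier = RandomForestClassifier(random_state=1)
classifier.fit(events[suspicious_idx], analyst_labels)
next_batch = rng.normal(loc=[50, 5], scale=[10, 2], size=(5, 2))
print(classifier.predict_proba(next_batch)[:, 1])
```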
And, as Wired notes, that balance between AI technology and human cybersecurity intuition is critical:
“Relying entirely upon machine learning to spot abnormalities inevitably will reveal code oddities that aren’t actually intrusions. But humans can’t hope to keep up with the volume of work required to maximize security. Think of AI2 as the best of both worlds — its name, according to the research paper, invokes the intersection of analyst intuition and an artificially intelligent system.”