Security is not working.
While security as a percentage of IT spend continues to grow at a robust rate, the cost of security breaches is growing even faster.
Organizations are spending close to $100 billion on a dizzying array of security products. In fact, it is not uncommon for CISO organizations to have 30 to 40 security products in their environment. However, if you ask chief information security officers how they feel about their security risk, they will express concerns over being highly exposed and vulnerable.
Artificial intelligence (AI) and machine learning (ML) can offer IT security professionals a way to enforce good cybersecurity practices and shrink the attack surface instead of constantly chasing after malicious activity.
Why Isn’t Cybersecurity Working as It Should?
There are many reasons security measures are falling behind, like the ever-increasing sophistication of adversaries and traditional perimeters virtually disappearing due to the rise of cloud and mobile technologies. But one of the biggest reasons we are not succeeding is that we always seem to be one step behind the bad guys.
Most security products are focused on understanding malware or attacks. This is an unbounded problem and, as a result, we are always playing catch-up with malicious actors. The number of malware variants and fileless attacks runs into the billions, with hundreds of millions added each year. On top of that, the bulk of these products focus on infiltration prevention. By homing in almost exclusively on preventing infiltration, we concede the asymmetry advantage to the attackers: they only have to get it right once, while we must get it right every time.
We must figure out a way to bound the problem. Focusing solely (or primarily) on chasing the bad is not going to help us succeed.
How Cybersecurity Threats Can Be Contained
The principle of least privilege is one of the oldest information security principles, with the original formulation by Jerry Saltzer stating: “Every program and every privileged user of the system should operate using the least amount of privilege necessary to complete the job.”
If we apply this principle to our IT environments, confining every application to performing only what it must to complete its job, we dramatically reduce the attack surface and, consequently, bound the problem.
While this doesn’t eliminate the need to monitor for threats, it simplifies the problem. You are no longer looking for a needle in a haystack, but looking for a needle in a few pieces of hay.
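To make the idea concrete, here is a minimal sketch of default-deny, least-privilege enforcement: each application gets an explicit allowlist of operations, and anything outside that list is refused. The application names and operation strings are hypothetical examples, not any particular product's policy format.

```python
# Minimal least-privilege sketch: every application is confined to an
# explicit allowlist of operations; anything not listed is denied.
# App names and operation strings below are illustrative assumptions.
ALLOWED_OPERATIONS = {
    "web-server": {"read:/var/www", "listen:443", "write:/var/log/web"},
    "backup-agent": {"read:/var/www", "write:/mnt/backup"},
}

def is_permitted(app: str, operation: str) -> bool:
    """Default-deny: an operation is allowed only if explicitly listed."""
    return operation in ALLOWED_OPERATIONS.get(app, set())

# The web server may serve its files but not touch system files:
print(is_permitted("web-server", "read:/var/www"))      # True
print(is_permitted("web-server", "write:/etc/passwd"))  # False
```

The security win comes from the default-deny posture: instead of enumerating the unbounded set of bad behaviors, you enumerate the small, bounded set of good ones.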
So, the right solution architecture would include two components:
A foundational piece that shrinks the attack surface by enforcing least privilege (also known as cyberhygiene)
A complementary piece that controls residual risk by monitoring for threats
The Limits of Least Privilege in Cybersecurity
Customers have tried implementing least-privilege environments in the past through whitelisting. While whitelisting solutions can be effective, they have been a nightmare to operationalize.
The constant changes during the normal course of operating an IT environment at scale are very hard to keep up with: patching, upgrades, network reconfigurations, new integrations, and administrative activities like backup and management. So, in this case, instead of playing catch-up with attackers, we were chasing our own tails.
In fact, most whitelisting solutions had limited scope, focusing largely on file integrity rather than on the behavioral integrity of programs. If we want to extend least privilege to cover behavior, we arguably face an even more complex operational problem than traditional whitelisting. What is the answer?
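The operational pain of file-integrity whitelisting is easy to see in a sketch. Assuming a precomputed map of approved file hashes, every routine patch changes a binary's hash and silently breaks the whitelist, which then has to be rebuilt by hand. Paths and file contents below are illustrative.

```python
# Minimal file-integrity whitelisting sketch: execution is allowed only
# if a file's hash matches its approved entry. A routine patch changes
# the binary, so the whitelist breaks and must be rebuilt constantly.
import hashlib

approved_hashes = {}  # path -> SHA-256 hex digest of the approved binary

def approve(path: str, content: bytes) -> None:
    approved_hashes[path] = hashlib.sha256(content).hexdigest()

def verify(path: str, content: bytes) -> bool:
    """Allow a file to run only if it matches its approved hash."""
    return approved_hashes.get(path) == hashlib.sha256(content).hexdigest()

approve("/usr/bin/app", b"app v1.0 binary")
print(verify("/usr/bin/app", b"app v1.0 binary"))  # True
# An ordinary patch produces a new binary and invalidates the entry:
print(verify("/usr/bin/app", b"app v1.1 binary"))  # False
```

Multiply this by thousands of hosts and daily change windows, and the "chasing our own tails" problem above becomes clear.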
Can Machine Learning and AI in Cybersecurity Help Shrink the Attack Surface?
Some argue that AI can solve the problem of “chasing bad” and dramatically increase our security. If this were true, one might argue that we do not need the foundational piece described above. There is little doubt that with the resurgence of deep learning owing to multiple factors, we have seen phenomenal improvements in heretofore hard problems in AI. This includes object detection in images and videos, speech recognition, natural language processing, self-driving cars, search, recommendation engines, games like chess and Go, healthcare and much more.
Some of these problem domains are adversarial but have well-defined rules like chess and Go. There are others like self-driving cars and speech processing that have few rules that can be used to describe them. However, these problems often do not have adversaries involved and frequently have large amounts of data — a prerequisite for deep learning algorithms. Chasing bad guys in cybersecurity is uniquely difficult due to three factors:
It has sophisticated adversaries.
They are guaranteed to not follow any rules.
There is scarcity of labeled data on malware or attacks.
On the other hand, we have established that ensuring good is always going to be more effective than chasing bad. This approach gets even better with the rise of modern AI/ML.
AI/ML techniques are ideal for achieving cyberhygiene and shrinking the attack surface at scale, which requires an automated understanding of the intended state of an application. Two distinct advantages make this problem well suited to AI/ML:
Rules exist for the behavior of good software (there are a lot of them, but AI/ML can take advantage of them, update them and improve security as a result).
There is plenty of labeled data for goodware.
The primary challenge has been the constant change at scale. The nature of change, though, is predictable and follows patterns. This is the kind of problem that AI/ML excels in.
Using AI and ML to achieve cyberhygiene and enforce least-privilege environments at scale is the breakthrough idea that will help us secure modern IT environments against an ever-evolving threat landscape.