Cyberattacks have grown more sophisticated and more damaging, and while data loss is always a fear for IT security managers, the bigger concern is downtime. In a survey by the analytics company New Relic, 62% of respondents said outages cost their companies $1 million or more per hour.
That makes the ability to observe and analyze networks in real time critical, yet it is becoming harder to achieve: Only 1 in 4 organizations in the New Relic survey reported achieving “full-stack observability.”
Multiple Infrastructures Complicate Data Observability
One reason is that companies’ data is spread across multiple infrastructures.
In its State of Observability 2024 report, Dynatrace surveyed hundreds of companies across multiple regions and found the average multicloud organization spanned 12 different environments, including large-scale cloud providers, local Infrastructure as a Service providers and private servers. Of the technology leaders surveyed, 84% said that complexity made it more difficult to protect their infrastructures from attacks.
The result in this kind of environment is an overwhelming amount of data to sift through. One area Dynatrace cited as emblematic of that avalanche is log analytics.