BIZTECH: What is data gravity, and why is it such a hot topic now, more than 15 years after the term was first coined?
FISHMAN: Data doesn’t remain static. As it grows in volume, value and interconnectedness, it generates “gravity,” pulling applications, analytics and AI toward where the data lives. The larger and more interconnected the data becomes, the harder and more expensive it is to move. That’s data gravity.
While the concept of data gravity isn’t new, it’s especially relevant again because data is more distributed than ever, across on-premises, multiple clouds, Software as a Service platforms, edge locations and sovereign environments. That sprawl often creates silos, inconsistent governance and duplicated copies, which slows down innovation and increases risk and cost.
The increasing demand for agentic AI raises the stakes. To deliver timely, accurate outcomes, AI systems need secure, governed access to up-to-date data with predictable performance. In many cases, it’s more effective to bring compute to the data, whether on-premises or in the cloud, than to move large data sets around, particularly when privacy, data residency and regulatory requirements apply.
As AI becomes an operational necessity, financial institutions are re-examining their data estates and the infrastructure that supports them. Feeding AI with high-quality, well-governed data at scale changes both the amount of data being retained and the performance, security and resiliency requirements of the platforms that store and serve it — all of which increase the effects of data gravity.
BIZTECH: Why should financial institutions be concerned about data gravity?
FISHMAN: Financial institutions experience the effects of data gravity more intensely than most because of their unique position: they operate at the intersection of massive, rapidly generated data volumes, strict regulations and always-on customer expectations. The cost and effort required to move that data often make it impractical to do anything but use it “in place.” When critical data is fragmented across platforms and locations, it drives up cost, increases operational complexity and slows modernization, especially for new initiatives like AI.
At the same time, financial institutions have had to build security and governance into their data operations due to the sensitivity of the data in their environments. Trying to maintain those controls over multiple, fragmented environments is cumbersome and increases the risk of a serious error. Instead, by accounting for data gravity and centralizing security and governance, they can extend those same controls to new applications as their data is used in place. Institutions that can make data securely accessible — with consistent governance, protection and performance across on-prem and cloud — are better positioned to use AI for fraud detection, risk modeling, customer personalization and operational efficiency. Building a unified, hybrid data foundation that supports innovation without compromising security or compliance can help financial institutions accelerate the adoption of AI.
BIZTECH: How can financial institutions address this issue?
FISHMAN: Financial institutions can address data gravity by reducing friction between where data is created, where it must be governed and where it needs to be consumed. That starts with eliminating silos and standardizing data services such as storage, replication, protection, governance, security posture and lifecycle management so teams can access and manage data consistently across environments.
It is also important for financial institutions to invest in data visibility and governance: knowing what data you have, where it lives, who can access it and which policies apply. With that foundation, enterprises can securely bring analytics and AI to the right data, rather than creating more copies and complexity, while maintaining resilience and compliance.
