How Energy and Utility Companies Can Make the Most of Big Data
The world is awash in data. Driven by digitization, the deluge grows rapidly every day. The total amount of data in the world is expected to skyrocket to 44 zettabytes by 2020, ten times the 4.4 zettabytes that existed in 2013, according to IDC. To put that in perspective, one zettabyte equals 1 trillion gigabytes, or the contents of 20 trillion four-drawer file cabinets.
Collecting, managing and gleaning immediate insight from all of this data has become a make-or-break challenge for businesses in every industry, including energy and utility (E&U) companies.
Industrial sensors are now everywhere, from airborne lasers to surface data sensors used during drilling to pipeline monitors, and they are all flooding E&U company databases with geographical information. In theory, this data can be used for better spatial modeling and analyses across all segments of the market, including exploration, production, transportation and distribution.
As they try to gain actionable insight from the spatial data they produce and collect, E&U companies are taking part in an important shift in the industry.
This is the same shift that’s occurring in financial services, retail, healthcare and other data-intensive industries. It’s a move away from expensive, nonscalable, proprietary information systems and toward open-source, cloud-based software frameworks better suited for harnessing enormous amounts of data for deeper understanding of patterns.
The Implications of Big Data for the Energy and Utilities Industry
Big Data is fueling innovation in how many organizations manage and realize value from their data sources. Implications for E&U companies are especially potent, given the direct relationship between these companies’ “datability,” so to speak, and their business success. Here are two examples of that potential by segment:
- Energy exploration is a highly spatial enterprise: data from satellite images, surface geology mapping and subsurface remote sensing determines the economic viability of pursuing operations at a given site. E&U firms need to amalgamate all of this information in order to glean insight from it quickly and successfully.
- In the production and transportation segments of the E&U market, a wealth of data from supervisory control and data acquisition (SCADA) sensors tells many things: whether a well is working optimally or depleting too quickly, the condition of a pipeline, and so on. The volume is immense, but the information is valuable and can inform actions that lead to greater operational efficiencies.
Understandably, E&U companies treat this data as their crown jewels. But as data volumes increase, they are challenged to store and analyze all of this information and to put it to work in mapping and spatial analysis.
Open-Source Data Processing Tools Can Save Time and Money
Traditional geographic information system (GIS) solutions were built on technology from a time when data requirements were simpler.
They’re often hamstrung by the shortcomings of proprietary technology. Traditional GIS platforms cannot scale, are costly to expand to support the vast influx of information, and carry rigid, confusing software licensing models that discourage users from tapping into current, powerful open-source platforms.
For example, take Hadoop, a popular open-source software framework for storage and large-scale processing of unstructured and semi-structured data on clusters of hundreds or thousands of commodity servers.
Hadoop collects high-volume data, whether structured or unstructured, from various sources and distributes it across a cluster's many nodes, each of which processes a subset of the data in parallel.
The system then uses that same parallelism to perform fast computations against the data on each node and reduces the findings into more consumable data sets. It does this very quickly and efficiently, without the time- and labor-intensive steps associated with the traditional relational database model.
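To make that pattern concrete, here is a minimal sketch of the map and reduce steps described above, written as two Hadoop Streaming scripts in Python. The record layout (one CSV line per SCADA reading, with hypothetical well_id, timestamp and flow_rate fields) is an assumption for illustration, not any particular vendor's format.

```python
#!/usr/bin/env python3
# mapper.py: Hadoop runs one copy of this script per input split, in
# parallel across the cluster's nodes. It reads raw records from stdin
# and emits key<TAB>value pairs that Hadoop then groups by key.
# Assumed (hypothetical) record layout: well_id,timestamp,flow_rate
import sys

for line in sys.stdin:
    fields = line.strip().split(",")
    if len(fields) != 3:
        continue  # skip malformed records rather than failing the job
    well_id, _timestamp, flow_rate = fields
    try:
        float(flow_rate)  # validate the reading before emitting it
    except ValueError:
        continue
    print(f"{well_id}\t{flow_rate}")
```

```python
#!/usr/bin/env python3
# reducer.py: Hadoop delivers the mappers' output sorted by key, so all
# of a well's readings arrive together. This script reduces them to one
# consumable record per well: its average flow rate.
import sys

current_well, total, count = None, 0.0, 0

def emit(well, total, count):
    if well is not None and count > 0:
        print(f"{well}\t{total / count:.2f}")

for line in sys.stdin:
    well_id, value = line.rstrip("\n").split("\t")
    if well_id != current_well:
        emit(current_well, total, count)  # flush the previous well
        current_well, total, count = well_id, 0.0, 0
    total += float(value)
    count += 1

emit(current_well, total, count)  # flush the final well
```

Submitted through Hadoop's streaming JAR, these two scripts inherit the framework's parallelism for free: Hadoop splits the input across nodes, runs the mapper on each split simultaneously, and routes each well's readings to a reducer, with no relational schema or load step required. The same pipeline can even be smoke-tested locally with `cat readings.csv | ./mapper.py | sort | ./reducer.py` before it ever touches a cluster.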
That’s extremely attractive to E&U companies that want to keep and make use of all their data but are outgrowing their conventional solutions and databases — and are under pressure to grow capacity within tight budgets.
Unlocking the Business Intelligence of Geospatial Data
GIS solutions that leverage the cloud and open-source technologies like Hadoop allow E&U companies to process and analyze large amounts of data, and do so quickly and cost-effectively.
These solutions can unlock the business intelligence of location-based data by building predictive models and running “what if” scenarios against all of the data, not just a subset. Companies can run frequent modeling iterations and quickly derive insights that were never within reach before.
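As a small illustration of the kind of location-based preprocessing such modeling runs depend on, the Python sketch below snaps raw sensor readings to coarse grid cells and aggregates them. The coordinates and pressure values are hypothetical, and the grid function stands in for the geohashing or spatial indexing a real GIS stack would supply.

```python
# A minimal sketch of spatial binning: snap each reading to a coarse
# grid cell so a model iterates over aggregates instead of raw points.
# Records are hypothetical (latitude, longitude, pressure) samples.
from collections import defaultdict

def grid_cell(lat, lon, cell_deg=0.1):
    """Snap a coordinate to the lower corner of its grid cell."""
    return (round(lat // cell_deg * cell_deg, 4),
            round(lon // cell_deg * cell_deg, 4))

readings = [
    (57.15, 2.09, 101.3),  # two readings in the same 0.1-degree cell...
    (57.18, 2.06, 99.8),
    (58.97, 5.73, 87.5),   # ...and one in a different cell
]

cells = defaultdict(list)
for lat, lon, pressure in readings:
    cells[grid_cell(lat, lon)].append(pressure)

# Each cell's average becomes one input to a "what if" model run.
for cell, values in sorted(cells.items()):
    print(cell, round(sum(values) / len(values), 2))
```

Aggregating first is what makes frequent iteration practical: each modeling pass runs against thousands of cells rather than billions of raw readings, while the full-resolution data stays available in the cluster for deeper analysis.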
Globally, E&U companies are starting to take advantage of these easier-to-scale and more elastic infrastructures:
- An exploration-centric oil and gas company in the U.K. opted not to invest in an expensive, proprietary database management system and instead standardized its geospatial data management infrastructure on open-source–based solutions.
- An independent oil and gas company in the U.S. kept its traditional GIS solution — the company had invested resources and skills in the technology over many years — and supplemented it with a project to federate spatial and nonspatial data. As a result, the company could discover and share information online, serving technical GIS users and nontechnical constituents.
- A pipeline company is using an open-source technology stack to integrate and visualize data within a corporate “call before you dig” application.
These types of capabilities, enabled by a powerful, scalable, cost-effective infrastructure, will become increasingly important as data volumes keep exploding in the sensor-laden age of the Internet of Things.
As energy and utility companies search for sharper intelligence from location-based information, they are recognizing both the deficiencies of traditional, proprietary technologies and the transformative power of the next generation of scalable solutions.