Aug 05 2014

Fog Computing Keeps Data Right Where the Internet of Things Needs It

The growing pool of connected smart devices will be enabled by fog computing, not just the cloud.

When the leading influencers within an industry have difficulty agreeing on the right label to slap on something, you know it’s a real phenomenon and not just another product push. Such is the case with the Internet of Things (IoT). Cisco Systems calls it the Internet of Everything. Research firms Gartner and IDC use the terms Nexus of Forces and the Third Platform, respectively. And GE calls IoT the Industrial Internet.

All of these terms refer to the same development: the growing digital connectivity of everything, from inanimate personal objects (such as a fitness band or a car) to business systems (warehouse operations or mining equipment) to municipal resources (street lighting or water treatment operations).

There are some big numbers attached to this trend. Cisco CEO John Chambers has optimistically predicted a $19 trillion profit opportunity for IoT, and Cisco projects there will be 50 billion smart objects connected to the Internet by 2020. Numbers like those give companies plenty of motivation to put their label on this coming IT tsunami.

But what is the value of more streams of data being collected in a data-saturated world? What’s to be gained from this information overload? Where will all this data be stored? How will it be transferred and processed? Welcome to fog computing.

The Fog (of Data) Around Us

Fog computing is a close cousin to cloud computing, a technology that takes advantage of the distributed nature of today’s virtualized IT resources. But instead of being “out there” somewhere like the cloud, the fog is all around us, among the numerous smart objects we interact with every day.

“We see this as more highly distributed out at the edge, closer to the ground, where the data is really occurring,” says Todd Baker, head of the IOx Framework at Cisco. “But it’s not a replacement for cloud.”

Fog computing, like many IT developments, grew out of the need to address two growing concerns: acting on incoming data in real time and working within the limits of available bandwidth. Explains Baker: “We live in a real world where bandwidth is neither infinite nor free. There’s a lot of data being generated. We talk about 50 billion sensors by 2020. If you look today at all the sensors that are out there, they’re generating 2 exabytes of data. It’s too much data to send to the cloud. There’s not enough bandwidth, and it costs too much money.”
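
Taken at face value, Baker’s own figures show the scale of the problem. The back-of-envelope arithmetic below is a rough sketch that pairs his two numbers purely for illustration; it is not a Cisco projection.

```python
# Rough arithmetic on the figures Baker cites. Pairing the 2020 device
# count with the 2-exabyte volume is an illustrative assumption.
devices = 50_000_000_000       # smart objects projected by 2020
sensor_data = 2 * 10**18       # 2 exabytes of sensor data (Baker's figure)

per_device_mb = sensor_data / devices / 1e6
print(f"{per_device_mb:.0f} MB per device")  # -> 40 MB per device
```

Even a few tens of megabytes per device, multiplied across tens of billions of devices, would swamp constrained last-mile links; filtering at the edge avoids most of that transfer entirely.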

Enabling Meaningful Action at the Edge

Fog computing works in conjunction with cloud computing, optimizing the use of cloud resources. Today, most enterprise data is pushed up to the cloud, stored and analyzed, after which a decision is made and action is taken. That round trip isn’t efficient. Fog computing allows computing, decision-making and action-taking to happen on IoT devices themselves, and pushes only relevant data to the cloud.
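
A minimal sketch of that division of labor might look like the following. The threshold, stub functions and sample readings are hypothetical; the point is the pattern of acting on everything locally while forwarding only what matters.

```python
# Hypothetical edge-node loop: act on every reading at the edge,
# forward only the noteworthy readings to the cloud.
NORMAL_RANGE = (10.0, 90.0)  # assumed bounds for ordinary readings

def act_locally(reading: float) -> None:
    """Real-time response at the edge (e.g., drive an actuator)."""
    print(f"acting on {reading}")

def forward_to_cloud(reading: float) -> None:
    """Stub: ship noteworthy data upstream for storage and analytics."""
    print(f"uploading {reading}")

def process(readings):
    low, high = NORMAL_RANGE
    for r in readings:
        act_locally(r)                # decision made at the edge
        if not low <= r <= high:      # only outliers go upstream
            forward_to_cloud(r)

process([42.0, 55.5, 97.2])           # only 97.2 reaches the cloud
```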

“Fog distributes enough intelligence out at the edge so we can manage this torrent of data,” explains Baker. “So we can change it from raw data into real information that has value that then gets forwarded up to the cloud. We can then put it into data warehouses; we can do predictive analysis.”

This improvement to the data-path hierarchy is enabled by the increased compute power that manufacturers such as Cisco are building into their edge routers and switches. To that end, Cisco has created IOx, a platform to support fog computing.

“IOx is about taking communications to our edge routing platform and combining it with open source – an open platform for you to be able to put your own applications out at the edge. We look at it as a bundle of technologies. You integrate your applications directly on top, directly into it or work alongside it,” says Baker.
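
In spirit, an edge application of this kind is just ordinary code running on the router. The sketch below is generic rather than IOx-specific; the port, the one-reading-per-datagram payload format and the summary interval are all assumptions made for illustration.

```python
import socket

# Generic edge-app sketch (not IOx-specific): listen for sensor
# datagrams locally and relay periodic summaries instead of raw data.
SENSOR_PORT = 9999  # assumed port

def run_edge_app() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", SENSOR_PORT))
    count, total = 0, 0.0
    while True:
        data, _addr = sock.recvfrom(1024)
        total += float(data)      # assume each datagram is one reading
        count += 1
        if count == 1000:         # summarize locally; don't relay raw data
            print(f"average of last {count} readings: {total / count:.2f}")
            count, total = 0, 0.0

if __name__ == "__main__":
    run_edge_app()
```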

Traffic in the Right Direction

How does all of this work in the real world? Consider this example: A traffic light system in Chicago is equipped with smart sensors. It is Tuesday morning, the day of the big parade after the Chicago Cubs’ first World Series championship in more than 100 years. A surge of traffic into the city is expected as revelers come to celebrate their team’s win. As the traffic builds, data is collected from individual traffic lights.

The open-source application developed by the city to adjust light patterns and timing is running on each edge device. The app automatically makes adjustments to light patterns in real time, at the edge, working around traffic impediments as they arise and diminish. Traffic delays are kept to a minimum, and fans spend less time in their cars and have more time to enjoy their big day.
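
The core of such an app can be surprisingly small. The sketch below extends an intersection’s green phase as the inbound queue grows; the per-vehicle scaling and the timing bounds are invented for illustration, not drawn from any real deployment.

```python
# Hypothetical real-time signal-timing rule for one intersection.
MIN_GREEN, MAX_GREEN = 15, 90  # seconds; assumed safety bounds

def green_duration(queued_vehicles: int, base: int = 30) -> int:
    """Lengthen the green phase as the inbound queue grows."""
    extra = 2 * queued_vehicles  # assume ~2 s of green per waiting vehicle
    return max(MIN_GREEN, min(MAX_GREEN, base + extra))

print(green_duration(queued_vehicles=3))   # ordinary morning -> 36
print(green_duration(queued_vehicles=40))  # parade-day surge -> 90 (capped)
```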

And what about the data pushed up to the cloud? In the traffic light example, there is little value in streaming ordinary, everyday sensor data to the cloud for storage and analysis; the city’s engineers already have a good handle on normal traffic patterns. The relevant data is sensor information that diverges from the norm, such as the readings from parade day. That data would be sent up to the cloud for analysis, supporting predictive models and allowing the city to tune its traffic application’s response to future traffic anomalies.
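
One simple way to express “diverges from the norm” in code is a z-score test against the known baseline. In the sketch below, the baseline counts and the threshold are stand-ins, not real traffic data.

```python
import statistics

# Hypothetical divergence test: upload a reading only when it is a
# clear outlier versus the city's known baseline.
baseline = [820, 790, 805, 815, 798, 810, 802]  # assumed hourly counts
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def worth_uploading(observed: float, threshold: float = 3.0) -> bool:
    """True when a reading deviates sharply from the baseline."""
    return abs(observed - mean) / stdev > threshold

print(worth_uploading(804))   # False: ordinary traffic stays at the edge
print(worth_uploading(2450))  # True: parade-day surge goes to the cloud
```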
