Fog computing rolls over critical IoT data as cloud steps aside
When a cloud can’t move fast enough
The success of the industrial IoT will rely heavily both on gathering unthinkably large amounts of data and on analyzing and responding to that data. Devices like sensors, cameras, controllers and switches make gathering and acting upon data possible, while the cloud allows for storage and analysis. But because these actions can be critical, time-sensitive data handling is an essential requirement in many use cases. And when it comes to the industrial IoT, that means reacting in a matter of milliseconds.
That puts huge demand on every element of an IoT system, particularly the cloud, which is continually being weighed down by the loads of information being sent up to it.
Some even think that the cloud, one of the essential pieces to the creation of IoT, is too slow.
Cisco agrees. It articulated that opinion in its report, Fog Computing and the Internet of Things: Extend the Cloud to Where the Things Are: “Today’s cloud models are not designed for the volume, variety and velocity of data that the IoT generates.”
Cisco says that the cloud just can’t keep up with the billions of previously unconnected devices that are generating more than two exabytes (an exabyte is 10^18 bytes, or 1 billion gigabytes) of data each day.
So Cisco spearheaded a solution to the cloud’s shortcomings with what it coined fog computing (also called edge computing, though that term is not nearly as clever, and fogging, which is downright awkward).
What needs to be accounted for?
According to Cisco, fog computing must do three things to alleviate the shortcomings of the cloud:
- Analyze the most time-sensitive data at the network edge, close to where it is generated, instead of sending vast amounts of IoT data to the cloud.
- Act on IoT data in milliseconds, based on policy.
- Send selected data to the cloud for historical analysis and longer-term storage.
Cisco has also outlined the main requirements for dealing with the “volume, variety and velocity” of IoT data: minimize latency, conserve network bandwidth, address security concerns, operate reliably, collect and secure data across a wide geographic area with varied environmental conditions, and move data to the best place for processing.
Defining fog computing
We will look at two leading definitions of fog computing: one from Cisco, and the other from the OpenFog Consortium, a group whose purpose is to “create an open reference architecture for fog computing, build operational models and testbeds, define and advance technology, educate the market and promote business development through a thriving OpenFog ecosystem.”
First, Cisco’s definition:
The fog extends the cloud to be closer to the things that produce and act on IoT data. These devices, called fog nodes, can be deployed anywhere with a network connection: on a factory floor, on top of a power pole, alongside a railway track, in a vehicle or on an oil rig. Any device with computing, storage, and network connectivity can be a fog node. Analyzing IoT data close to where it is collected minimizes latency. It offloads gigabytes of network traffic from the core network and it keeps sensitive data inside the network.
The OpenFog Consortium’s definition is a bit more involved:
Fog computing is a system-level horizontal architecture that distributes resources and services of computing, storage, control and networking anywhere along the continuum from cloud to “things.” It is a:
- Horizontal architecture: Support multiple industry verticals and application domains, delivering intelligence and services to users and businesses.
- Cloud-to-Thing continuum of services: Enable services and applications to be distributed closer to Things, and anywhere along the continuum between Cloud and Things.
- System-level: Extend from the Things, over the network edges, through the Cloud, and across multiple protocol layers – not just radio systems, not just a specific protocol layer, not just at one part of an end-to-end system, but a system spanning between the Things and the Cloud.
“It’s an emerging architecture that takes compute, storage, control and networking, and distributes those services closer to the end user, closer to the device, closer to where the data is actually being generated,” said Lynne Canavan, executive director at OpenFog Consortium. “It brings it closer to the ‘cloud to things continuum.’”
How exactly does it work?
Cisco gives a clear and easy-to-understand guide for where all of the data goes when using a combination of cloud and fog computing. It’s important to note that the presence of fog doesn’t mean there are no clouds:
- The most time-sensitive data is analyzed on the fog node closest to the things generating it. In Cisco’s smart-grid example, the fog nodes (industrial controllers, switches, routers, embedded servers and video surveillance cameras) closest to the grid sensors can look for signs of problems and then prevent them by sending control commands to actuators.
- Data that can wait seconds or minutes for action is passed along to an aggregation node for analysis and action.
- Data that is less time sensitive is sent to the cloud for historical analysis, big data analytics, and long-term storage.
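The three tiers above amount to a routing policy keyed on how long data can wait before action. As a minimal sketch, the latency thresholds, field names and tier labels below are illustrative assumptions, not figures from Cisco's guide:

```python
from dataclasses import dataclass

# Hypothetical deadlines for illustration; real policies are
# application-specific.
FOG_DEADLINE_S = 0.001       # sub-millisecond data stays on the nearest fog node
AGGREGATION_DEADLINE_S = 60  # seconds-to-minutes data goes to an aggregation node

@dataclass
class Reading:
    sensor_id: str
    value: float
    max_delay_s: float  # how long this reading can wait before action

def route(reading: Reading) -> str:
    """Pick a processing tier based on how time-sensitive the data is."""
    if reading.max_delay_s <= FOG_DEADLINE_S:
        return "fog-node"          # analyze at the edge, act in milliseconds
    if reading.max_delay_s <= AGGREGATION_DEADLINE_S:
        return "aggregation-node"  # seconds or minutes: aggregate, then act
    return "cloud"                 # historical analysis and long-term storage

print(route(Reading("grid-sensor-7", 0.92, 0.0005)))  # fog-node
print(route(Reading("meter-12", 3.1, 30.0)))          # aggregation-node
print(route(Reading("meter-12", 3.1, 86400.0)))       # cloud
```

The point of the sketch is that the decision happens per reading, at the edge, before any data crosses the network.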
“It’s the perfect image of a cloud that is really closer to the ground. We call it the ‘cloud to thing continuum’ because fog works with the cloud,” Canavan said. “We overlap with cloud, we work very nicely with the cloud. Microsoft is one of the founders of the OpenFog Consortium, and they are obviously a huge cloud provider. It requires a nice synchronization of cloud to fog to make the whole thing work.”
Benefits of fog computing
We have established that fog computing is required when large amounts of data need to be handled in an agile manner, reducing latency to provide immediate direction to systems using critical applications.
Extending the cloud closer to the things that generate and act on data benefits the business in the following ways, according to Cisco:
- Greater business agility: With the right tools, developers can quickly develop fog applications and deploy them where needed.
- Better security: Protect your fog nodes using the same policy, controls, and procedures you use in other parts of your IT environment. Use the same physical security and cybersecurity solutions.
- Deeper insights, with privacy control: Analyze sensitive data locally instead of sending it to the cloud for analysis. Your IT team can monitor and control the devices that collect, analyze, and store data.
- Lower operating expense: Conserve network bandwidth by processing selected data locally instead of sending it to the cloud for analysis.
“Fog is not going to be used every time, all the time going forward, but it is really necessary for certain scenarios, and a lot of it is IoT – the killer application for fog computing,” Canavan said. “It is not just IoT, it is also artificial intelligence, advanced robotics, etc. But IoT is pushing fog to the forefront of the conversation because there has been a lot of discussion about those billions of devices generating so much information, it is impossible to transmit all that information from the network to the cloud and have it happen in the response time that is necessary for things to run the way that they need to run.”
Fog vs. Cloud
Fog computing isn’t designed to entirely replace the cloud; it was designed to work hand-in-hand with it. Emphasizing the differences between these two solutions is important in determining where each fits within IoT. Here is Cisco’s take on the specific jobs fog and cloud computing have in the internet of things:
Fog nodes:
- Receive feeds from IoT devices using any protocol, in real time
- Run IoT-enabled applications for real-time control and analytics, with millisecond response time
- Provide transient storage, often 1–2 hours
- Send periodic data summaries to the cloud
The cloud platform:
- Receives and aggregates data summaries from many fog nodes
- Performs analysis on the IoT data and data from other sources to gain business insight
- Can send new application rules to the fog nodes based on these insights
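Two of the fog-node jobs above, transient storage and periodic summaries, fit together naturally: raw readings are held locally for a short window and only a compact digest is forwarded upstream. A minimal sketch, where the two-hour window follows Cisco’s “often 1–2 hours” figure and the class and field names are illustrative assumptions:

```python
import statistics
from collections import deque
from time import time

RETENTION_S = 2 * 60 * 60  # transient storage window: 2 hours

class TransientStore:
    """Short-lived local buffer on a fog node."""

    def __init__(self):
        self.buffer = deque()  # (timestamp, value) pairs, oldest first

    def add(self, value, now=None):
        now = time() if now is None else now
        self.buffer.append((now, value))
        # Evict anything older than the retention window.
        while self.buffer and now - self.buffer[0][0] > RETENTION_S:
            self.buffer.popleft()

    def summary(self):
        """Compact digest to send to the cloud instead of raw readings."""
        values = [v for _, v in self.buffer]
        return {"count": len(values),
                "mean": statistics.fmean(values),
                "max": max(values)}

store = TransientStore()
for t, v in [(0, 10.0), (3600, 12.0), (7300, 14.0)]:
    store.add(v, now=t)
# The reading at t=0 fell outside the 2-hour window and was evicted.
print(store.summary())  # {'count': 2, 'mean': 13.0, 'max': 14.0}
```

The cloud then receives only the summary dictionaries, which is where the bandwidth savings Cisco describes come from.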
Virtual reality, drones, automated vehicles, oil rigs – you name it. If IoT deployments are possible in an environment, there is a good chance fog computing can be used. Use case criteria, according to Cisco, include:
- Where data is collected at the extreme edge: vehicles, ships, factory floors, roadways, railways, etc.
- Thousands or millions of things across a large geographic area are generating data.
- It is necessary to analyze and act on the data in less than a second.
There are countless IoT use cases that require extremely low latency in order to react to a critical situation, and local computing speeds up processing times. For instance, if temperature sensors on a critical machine send data suggesting imminent failure, or if oil pipeline readings suggest a potential leak, fog computing can react and respond appropriately. The key is that it reacts more quickly than the cloud would.
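The temperature-sensor example above can be sketched as follows; the failure threshold, function names and “shutdown” command are hypothetical, and the point is only the shape of a local, millisecond-scale reaction with no cloud round trip:

```python
# Hypothetical threshold for illustration; a real system would use
# equipment-specific limits and likely trend analysis, not one reading.
FAILURE_TEMP_C = 95.0

def on_reading(temp_c: float, send_actuator_command) -> bool:
    """React locally on the fog node: command the actuator the moment
    a reading crosses the failure threshold."""
    if temp_c >= FAILURE_TEMP_C:
        send_actuator_command("shutdown")  # local, millisecond-scale
        return True
    return False

commands = []
on_reading(97.3, commands.append)
print(commands)  # ['shutdown']
```

Everything in that loop stays on the fog node; only a record of the event would later be sent upstream for historical analysis.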
“In a connected car, you’re going to get tens of megabits per second of data,” Canavan said. “So if all that data always went up to the cloud, what happens if that network goes down? There is a need to have the processing come down to a more local level so that there can be a much faster response time for when that car is coming, what are the operating conditions, what are the traffic conditions and what it is the car in front of you is doing. Autonomous cars need to have processing located at or near the device, and in this case the device is the car.”
With the OpenFog Consortium currently developing additional use cases and establishing an open reference architecture for fog computing, it is clear this technology is still very much in its infancy. But a local computing infrastructure that can significantly decrease latency and keep costs low for IoT systems would be welcomed with open arms.
“We don’t think of it as a ‘It’s nice to have, let’s create this because it is fun to do these things.’ In certain scenarios, especially in advanced, digital scenarios, there is so much critical data being transmitted,” Canavan said. “To have things all the time, always, and only going to the cloud and back down to the device – that oil rig that is in the middle of the ocean during a storm when systems go down – that doesn’t work. That is when people get in trouble. It [fog] is the necessary architecture to handle these [IoT] scenarios in order for them to be operationally efficient. We expect it to be transformational.”
The OpenFog Consortium currently has 45 members from 11 different countries. Founding members include industry-leading companies ARM, Microsoft, Cisco, Intel and Dell, along with Princeton University. Contributors include GE, AT&T, Schneider Electric and Sakura Internet.