Internet of Things Shifts Computing From the Cloud to the Fog

Mark Twain is rumored to have once said, "The coldest winter I ever spent was a summer in San Francisco."  And if you've ever been to San Francisco at any time of year, you'll know that one of the distinguishing characteristics of the city is how cool, damp, and foggy it can be.  I've been up to the Marin Headlands a few times to snap a picture of the city behind the Golden Gate Bridge, and rarely is it not foggy.  In fact, the fog can sometimes be so thick that it's hard to tell where the fog ends and the clouds begin.

I bring this up because this year’s Cisco Live user conference was held in the typically foggy San Francisco, where the company was pushing the concept of ‘fog computing’ as a way of scaling the Internet of Things (IoT) — fog, San Francisco, Cisco… it all fits neatly.

One of the highlights of any Cisco event is the demonstrations, and during this year's demos the concept of fog computing was introduced.  While Cisco has been pushing this term for some time, I imagine it was the first time most of the more than 25,000 attendees had encountered the concept.  In reality, Cisco isn't the only company pushing fog computing; it's just the only mainstream IT vendor doing so today, which makes sense, given that the company sees it as an IoT enabler.

For those not familiar with fog computing, it's a model in which IT resources (compute, storage, applications, and networking) are distributed closer to where the data is generated.  While cloud computing is all the rage today, it has some scale issues with IoT.  Data has to be collected locally, transported back to the cloud, processed, and analyzed before any action can be taken.  If the data set is large and/or the geographic distance to the cloud is too great, this latency can impair the performance of an IoT-based application.

Fog computing extends cloud services to the edge of a network and can provide local compute services to users or applications.  The close proximity of the fog to users, sensors, or applications reduces latency and makes QoS requirements easier to meet, improving processing times and user experience.  The Internet of Things is built on massive numbers — billions, in fact — of distributed sensors spread across the network.  One could actually think of the Internet of Things as a massive "fog" of IT resources.  Wouldn't it make sense, then, that the computing model used to process that information would follow fog principles as well?
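To make the latency argument above concrete, here is a minimal back-of-the-envelope sketch comparing the two paths.  All of the figures (payload size, bandwidth, network latency, processing time) are illustrative assumptions of my own, not measurements from any real deployment; the point is only that transfer time and round-trip distance dominate when data must travel to a faraway cloud.

```python
# Toy latency model: ship a sensor batch somewhere, process it, return a result.
# Every number below is an assumed, illustrative figure.

def round_trip_ms(payload_mb, bandwidth_mbps, network_latency_ms, processing_ms):
    """Total time = upload transfer + two one-way network hops + processing."""
    transfer_ms = (payload_mb * 8 / bandwidth_mbps) * 1000  # MB -> Mb, then ms
    return 2 * network_latency_ms + transfer_ms + processing_ms

# Same 50 MB sensor batch sent to a distant cloud region vs. a nearby fog node.
# The fog node is assumed slower at processing but far closer on the network.
cloud_ms = round_trip_ms(payload_mb=50, bandwidth_mbps=100,
                         network_latency_ms=60, processing_ms=200)
fog_ms = round_trip_ms(payload_mb=50, bandwidth_mbps=1000,
                       network_latency_ms=2, processing_ms=250)

print(f"cloud: {cloud_ms:.0f} ms, fog: {fog_ms:.0f} ms")
```

Under these assumptions the fog path wins by a wide margin even with slower local processing, which is exactly the trade fog computing makes: spend a little more on edge compute to avoid paying repeatedly for distance and bandwidth.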

Fog computing isn't required for all applications, but for big data analytics that demand real-time, predictable latency, fog is ideal.  This includes industrial automation, transportation, oil and gas, and any other large network of sensors and actuators that requires a high degree of automation.

Today, the "edge" of the fog is something like a router, a WAN optimization appliance, or another network edge device with the horsepower and storage to collect and analyze data.  However, as the price of processing and memory continues to fall, I would expect fog computing to extend down to many of the wearable devices we have today.  Don't be surprised if things look foggy through your Google Glass soon.  Fog computing is coming — be ready.