As fog computing moves toward broad deployment, I offer some tips to those designing, connecting to and using fog-based systems.
Fog envelops the Internet of Things from end nodes to edge networks. Fog computing is a “horizontal, system level architecture that distributes computing, storage, control and networking functions closer to the users along the Cloud-to-Thing continuum,” as defined by the OpenFog Consortium, the group leading the development of an open, interoperable architecture for it.
Fog computing is rapidly gaining momentum as the architecture that bridges the current gap in IoT, 5G and embedded AI systems. As the fog rolls out from its conception to a deployment phase, there is plenty of room for miscues, overlap and U-turns. So, here are five tips to help you find your way through it all.
#1 Recognize where fog techniques are needed.
Fog is about SCALE: Security, Cognition, Agility, Latency, Efficiency. When designing or building a system or app, watch for warning signs that a traditional cloud-based implementation will perform sub-optimally.
If your project has a round-trip cloud latency of more than a few milliseconds, that’s a warning sign you may need to move to the fog. Another sign is excessive consumption of bandwidth from the sensor or device to the cloud. Privacy and security can be other flags, especially with low-power things. Geographic coverage and reliability under tough operating conditions are other considerations.
A fog-based architecture addresses these potential pitfalls. Fog is best known for slashing latency times, but it also helps reduce network bandwidth requirements by moving computation to or near an IoT end node. It also provides additional security robustness for data transfers, and it can improve cognition, reliability and geographic focus through local processing.
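The warning signs above can be turned into a simple screening check. This is a minimal sketch, not a prescribed method; the threshold values and function names are illustrative assumptions to be tuned per application.

```python
# Hypothetical budgets; tune these for your own application.
MAX_CLOUD_RTT_MS = 5.0    # round-trip latency budget to the cloud
MAX_UPLINK_KBPS = 500.0   # bandwidth budget from sensor/device to cloud

def needs_fog(measured_rtt_ms: float, uplink_kbps: float) -> bool:
    """Flag a workload as a fog candidate when its cloud round-trip
    latency or device-to-cloud bandwidth exceeds the budget."""
    return measured_rtt_ms > MAX_CLOUD_RTT_MS or uplink_kbps > MAX_UPLINK_KBPS

# Example: a 42 ms cloud round trip blows a few-millisecond budget.
print(needs_fog(measured_rtt_ms=42.0, uplink_kbps=120.0))  # True
```

Privacy, reliability, and geographic constraints are harder to reduce to a single number, but the same idea applies: decide up front which requirements the cloud path cannot meet.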
#2 Span software across fog nodes North-South and East-West.
Applications can be partitioned to run on multiple fog nodes in a network. This partitioning can be North-South, where different parts of an application run hierarchically, up to and including the cloud at times. It can also be East-West, for load balancing or fault tolerance between peer-level fog nodes. Partitioning can be adjusted as needed on millisecond time scales.
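A placement decision of this kind can be sketched as a small dispatcher. All names and the capacity model here are hypothetical, purely to illustrate the East-West-then-North-South fallback order.

```python
# Illustrative fog-node dispatcher: run locally if possible, offload
# East-West to the least-loaded peer, otherwise escalate North-South.
def place_task(task_load: float, local_load: float,
               peer_loads: dict[str, float], capacity: float = 1.0) -> str:
    """Return where a task should run: "local", a peer node name
    (East-West), or "cloud" (North-South)."""
    if local_load + task_load <= capacity:
        return "local"
    # East-West: pick the least-loaded peer that can absorb the task.
    candidates = [(load, name) for name, load in peer_loads.items()
                  if load + task_load <= capacity]
    if candidates:
        return min(candidates)[1]
    # North-South: fall back up the hierarchy to the cloud.
    return "cloud"

peers = {"fog-east-1": 0.9, "fog-east-2": 0.4}
print(place_task(0.3, local_load=0.85, peer_loads=peers))  # fog-east-2
```

Because the decision is a cheap, per-task computation, re-partitioning on millisecond time scales amounts to re-running it as loads change.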
#3 Understand the pillars of the fog.
OpenFog has identified eight pillars: Security, Scalability, Openness, Autonomy, RAS (Reliability, Availability, Serviceability), Agility, Hierarchy and Programmability. Each of these can be studied in depth in the OpenFog Reference Architecture.
#4 Make fog software modular, linked by standard APIs.
Software is the key to the performance, versatility and trustworthiness of fog implementations. Make it manageable and interoperable by carefully partitioning it into functional blocks. The interfaces between these blocks should be based on well-tested, standard APIs and messaging frameworks. Open source projects can be a good starting point for fog software development once you've identified the right properties for your applications.
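The modularity pays off when blocks couple only through a message contract. As a sketch, here is a toy in-process publish/subscribe bus standing in for a standard messaging framework (MQTT-style topics, for instance); the class, topic names, and payloads are all illustrative assumptions.

```python
# Toy pub/sub bus: functional blocks interact only via topics, so any
# block can be swapped out without touching the others.
from collections import defaultdict
from typing import Callable

class MessageBus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(message)

# Two blocks coupled only by the "sensors/temperature" topic contract.
bus = MessageBus()
alerts = []
bus.subscribe("sensors/temperature",
              lambda m: alerts.append(m) if m["celsius"] > 80 else None)
bus.publish("sensors/temperature", {"sensor_id": "t1", "celsius": 95})
print(alerts)  # [{'sensor_id': 't1', 'celsius': 95}]
```

In a real deployment the bus would be a standard broker or RPC layer rather than an in-process object, but the design point is the same: the topic and payload schema are the interface.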
#5 Make each installation very easy.
Global IoT applications will require the installation of millions of fog nodes over the next several years. Ensure the fog node hardware drops right in with simple mechanical and electrical connections for most scenarios.
Pay attention to fog node aesthetics, power requirements, cable management, and so on to minimize environmental impact. Fog node provisioning and commissioning tests should be fully automated in nearly all cases. Use pilot programs to optimize the design of fog equipment for installation and maintenance.
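Automated commissioning can be structured as an unattended checklist that either passes the node into service or flags it for a human. This is a hedged sketch; every check function here is a hypothetical stub standing in for a real hardware or network probe.

```python
# Illustrative unattended commissioning run for a newly installed node.
def check_power() -> bool:
    return True  # stub: would read voltage/current sensors

def check_network() -> bool:
    return True  # stub: would verify uplink and management reachability

def check_secure_boot() -> bool:
    return True  # stub: would validate firmware signatures

COMMISSIONING_CHECKS = [check_power, check_network, check_secure_boot]

def commission_node() -> dict:
    """Run every check without operator input; report pass/fail per check."""
    results = {check.__name__: check() for check in COMMISSIONING_CHECKS}
    results["commissioned"] = all(v for k, v in results.items()
                                  if k != "commissioned")
    return results

print(commission_node()["commissioned"])  # True
```

Running the same scripted checklist on every node is what makes installing millions of them tractable: the only manual steps left are the mechanical and electrical connections.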
This article is part of a presentation on “50 Design Tips in 50 Minutes” at the Fog World Congress on October 31 in Santa Clara.
--Charles C. Byers is the co-chair of the OpenFog Consortium’s technical committee and a principal engineer and platform architect in the corporate strategic innovation group at Cisco Systems.