The base station and the hardware inside can make or break the effectiveness of an IoT.
An Internet of Things (IoT) will require engineers to consider every part of the data path carefully, from data acquisition to the transport of data from remote locations to the datacenter, and on to ferrying data within the datacenter itself. In this, the second of three pieces on the networking challenges posed by an IoT, the need for more powerful, feature-laden wireless network base stations is presented.
Wireless network base stations are the sole gateway for sensors deployed across a wide geographical area and provide the first hop for data going back to the datacenter. Network standards such as Long Term Evolution (LTE), coupled with the sheer volume of sensors that will associate with a single base station, will drive the need for high performance base stations that are able to deliver the quality of service (QoS) required by an IoT.
AMD's R-Series embedded Accelerated Processing Units are the perfect foundation to serve the rapidly growing compute needs within network base stations. They provide industry-leading, energy-efficient 64-bit x86 processing combined with extensive networking connectivity, allowing equipment manufacturers to pack more intelligence into a base station and provide a higher quality of service to the end user.
LTE is a particularly important wireless network standard for an IoT. In the coming years, it will see widespread deployment in networks aimed at consumers, bringing high-bandwidth connectivity to smartphones and tablets. Though LTE networks are set to be the primary way consumer devices take part in an IoT, other types of wireless networks, including the ubiquitous and low-cost 802.11x WiFi standard, can also be expected to carry this traffic. Such networks will be common among massively deployed, low-cost sensors, though the base station requirements, and the challenges that come with them, remain much the same whether the network is based on LTE, WiMAX, WiFi, or another technology.
Due to the significant financial investment needed to deploy LTE, WiMAX, and other network infrastructures, these are usually rolled out in areas where operators see potential for high revenue, not in the remote locations where sensors, such as those that monitor the status of pumps for crop irrigation, will need to be. Despite this, users of such devices will still demand that the network deliver high performance and carry the valuable, diverse data their analytics depend on.
Engineering a high-performance LTE network to meet the challenges described requires the combination of energy-efficient compute and storage with diverse network connectivity that includes wired, wireless, and out-of-band options, including those that make use of the POTS network.
LTE networks must deliver increased bandwidth and meet tighter latency requirements, all while decreasing utilization of the backhaul link and delivering "hot" content to users faster. Add the growing public demand for data security and privacy, and LTE network base stations have considerably higher hardware requirements than those of previous generations.
Processors in such network base stations not only have to process and forward more packets than ever before, but they also have to do so while meeting stricter QoS levels. Base stations will have greater intelligence, such as sophisticated algorithms that decide what content to cache. Caching content itself requires high-performance storage; engineers need to consider whether traditional platter-based storage such as hard drives will perform well enough, or whether flash-based storage devices such as solid-state drives are required.
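As an illustration of what such a caching algorithm might look like, the sketch below implements a byte-budgeted least-recently-used (LRU) cache, one common eviction policy for "hot" content. The class name, keys, and capacity figures are hypothetical, not drawn from any particular base station's software.

```python
from collections import OrderedDict

class ContentCache:
    """A minimal LRU content cache with a capacity budget in bytes.
    A sketch of one plausible eviction policy, not a production design."""

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self._items = OrderedDict()  # key -> cached content (bytes)

    def get(self, key):
        if key not in self._items:
            return None               # miss: caller fetches over the backhaul
        self._items.move_to_end(key)  # mark as most recently used
        return self._items[key]

    def put(self, key, content: bytes):
        if key in self._items:
            self.used -= len(self._items.pop(key))
        self._items[key] = content
        self.used += len(content)
        # Evict least-recently-used entries until we fit the budget.
        while self.used > self.capacity:
            _, evicted = self._items.popitem(last=False)
            self.used -= len(evicted)

cache = ContentCache(capacity_bytes=1024)
cache.put("/video/clip1", b"x" * 600)
cache.put("/video/clip2", b"y" * 600)  # pushes usage past 1024, evicting clip1
assert cache.get("/video/clip1") is None
assert cache.get("/video/clip2") is not None
```

Serving a hit from this cache trades local storage bandwidth for backhaul utilization, which is why the choice between platter-based and flash-based storage matters: the cache is only useful if a hit is served faster than a round trip to the datacenter.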
Base station processing requirements will be pushed further by the need for securing and obfuscating personal data. Some of this can be handled by the user's device, but some data will necessarily retain personal information. Consider a temperature sensor that recognizes a user and sets the thermostat according to the user's preference. To preserve the user's privacy, the sensor will need to encrypt the data before sending it over the network. This will require processors to have silicon dedicated to encryption, such as a dedicated security block or an encryption co-processor, to increase performance and decrease power use.
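Encryption itself needs a real cipher implementation, in dedicated silicon or a crypto library, and is beyond a short sketch. But one part of the obfuscation work, replacing the user's identity with a keyed pseudonym before a reading leaves the device, can be sketched with the standard library alone. The key, identifiers, and record layout below are hypothetical placeholders.

```python
import hmac, hashlib, json

def pseudonymize(user_id: str, secret_key: bytes) -> str:
    """Replace a user identifier with a keyed pseudonym (HMAC-SHA-256).
    Without the secret key the pseudonym cannot be linked back to the
    user, yet the same user always maps to the same pseudonym, so the
    thermostat can still look up that user's preferences."""
    return hmac.new(secret_key, user_id.encode(), hashlib.sha256).hexdigest()

# Hypothetical reading from the thermostat scenario above; real keys
# would come from device provisioning, not a literal in source code.
key = b"device-provisioned-secret"
reading = {
    "user": pseudonymize("alice@example.com", key),
    "temperature_c": 21.5,
}
payload = json.dumps(reading).encode()
```

Note the division of labor: pseudonymization hides who the reading is about, but the payload itself still needs to be encrypted in transit, which is the work the dedicated security block or co-processor is there to accelerate.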
Massively deployed, low-cost sensors may not have the on-device computational capability to meet real-time data encryption requirements. If that's the case, an alternative is to have sensors send data to a server on their local network, meaning that the sensors themselves are not connected to a wide area network but instead use a base station as a network gateway.
The base station then has to handle all encryption duties in real time to preserve QoS, which imposes significant computation requirements. When one also considers that these processors will have to be rugged and use little power, one can start to appreciate that designing an efficient data transport mechanism from the field to the datacenter is not a trivial challenge.
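To put a rough number on that computation requirement, here is a back-of-envelope estimate of the plaintext one base station might have to encrypt in real time. All figures are hypothetical illustrations, not measured values.

```python
# Back-of-envelope sizing of a base station's encryption workload.
# Every figure here is a hypothetical illustration.
sensors = 10_000        # low-cost sensors behind one base station
readings_per_sec = 1    # each sensor reports once per second
payload_bytes = 256     # one sensor reading, headers included

plaintext_rate = sensors * readings_per_sec * payload_bytes  # bytes/second
print(f"{plaintext_rate / 1e6:.2f} MB/s to encrypt")         # prints "2.56 MB/s to encrypt"
```

The rate looks modest on its own, but each byte must also be received, classified for QoS, encrypted, and forwarded within the latency budget, and all of it inside the power and thermal envelope of a rugged, field-deployed enclosure.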
Engineering the base station to meet the demands of an IoT requires one to look at the system in a holistic fashion and figure out where the bottlenecks may be. As bandwidth increases, processing power becomes increasingly important if packets are to be processed and forwarded to meet QoS requirements. The base station and the hardware inside need careful thought, because they can make or break the effectiveness of an IoT.
— Lawrence Latif is a technical communications manager at AMD. He has published peer-reviewed research and has more than a decade of experience in enterprise IT, networking, system administration, software infrastructure, and data analytics. He holds a BSc in computer science and management from King's College London, an MSc in systems engineering management, and a PhD in electronic and electrical engineering from University College London.