The final draft of the spec for Network Functions Virtualization is nearly complete, but several challenges lie ahead.
With a final draft of the specifications for Network Functions Virtualization due by mid-2014, it's a good time to look back at how we got here and at some of the challenges that remain.
The introductory NFV white paper, published in October 2012, envisioned a rich ecosystem offering integration services as well as maintenance and third-party support. Since then, much work has gone into drafting specifications for building NFV systems.
Work was well underway to detail the specifications for the underlying infrastructure, the software architecture, and the management and orchestration domains when, in April 2013, the group decided to focus first on higher-level documents, known as end-to-end specifications. This shifted the specification-creation process to a top-down approach, ensuring consistency and synergy among the underlying domain specifications.
As a result, several key documents were approved, including the architecture framework, which specifies how the various domains work together to form the overall system, and the business requirements, which cover performance, management, security, resiliency, energy efficiency, and interoperability. A high-level use-case document was also created for fixed and mobile, access, edge, and core networks.
Today, NFV defines a common execution environment for compute, storage, and networking systems that is dynamically reconfigurable to support one or more use cases simultaneously. For example, one service provider can run an application inside an infrastructure operated as a service by a different provider. Thanks to the specification, the first service provider can rest assured it will be able to meet regulatory as well as performance requirements such as latency and reliability while it serves a global customer base.
NFV also defines a way to create an abstract model to enable deploying multiple applications or services. The model can identify types of services in a chain, the relations among them, the topology by which they are interconnected, and the overall management of such a chain.
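The ETSI documents describe this abstract model in prose rather than code, but the idea can be illustrated with a minimal sketch. The class and field names below (`VNF`, `ServiceChain`, `policy`) are hypothetical, not terms defined by the specifications; the point is simply that a chain captures the service types, the links among them, the resulting topology, and chain-wide management attributes.

```python
from dataclasses import dataclass, field

@dataclass
class VNF:
    """One virtualized network function in the chain (illustrative type)."""
    name: str
    kind: str  # e.g. "firewall", "load-balancer"

@dataclass
class ServiceChain:
    """Abstract model of a chain: its VNFs, the relations (links) among
    them, and chain-level management settings."""
    vnfs: list = field(default_factory=list)
    links: list = field(default_factory=list)   # (src_name, dst_name) pairs
    policy: dict = field(default_factory=dict)  # chain-wide management, e.g. latency bound

    def add(self, vnf: VNF) -> None:
        self.vnfs.append(vnf)

    def connect(self, src: str, dst: str) -> None:
        names = {v.name for v in self.vnfs}
        if src not in names or dst not in names:
            raise ValueError("unknown VNF in link")
        self.links.append((src, dst))

    def topology(self) -> dict:
        """Adjacency view of how the chain is interconnected."""
        adj = {v.name: [] for v in self.vnfs}
        for src, dst in self.links:
            adj[src].append(dst)
        return adj

# Example: a two-element chain with a chain-wide latency policy.
chain = ServiceChain(policy={"max_latency_ms": 10})
chain.add(VNF("fw1", "firewall"))
chain.add(VNF("lb1", "load-balancer"))
chain.connect("fw1", "lb1")
```

A real orchestrator would consume such a model to place each function and program the network between them; the sketch only captures the descriptive role the article assigns to the model.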
To further encourage an open global NFV ecosystem that integrates components from different technology providers, the NFV effort includes a proof-of-concept framework. It encourages efforts involving at least two vendors and at least one service provider.
A few challenges remain. NFV will require network operators to adopt new business models, both internally and externally, opening the door to working with software providers, many of which are relatively small companies. Operators will need to learn to rely on small, innovative startups to provide critical applications for some of their services.
Meanwhile, service providers have come to realize there will be two flavors of off-the-shelf server: standard high-volume servers and accelerated ones. The latter are needed for workloads demanding high performance and low latency, such as next-generation firewalls, especially in service-chaining configurations.
Finally, integrating software-defined networking into the NFV framework will be a challenge. Many believe the SDN controller needs to be an integral part of the virtual infrastructure manager. In that case, the OpenFlow protocol will need to be part of the interface to the network switches inside the NFV infrastructure.
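The "integral part" view can be sketched in a few lines. Everything here is illustrative: the class names, the `install_flow` call, and the rule format are stand-ins, not real controller APIs; a real controller would emit OpenFlow FLOW_MOD messages to the switches rather than append tuples to a list.

```python
class SdnController:
    """Hypothetical stand-in for an SDN controller that speaks OpenFlow
    southbound to switches in the NFV infrastructure."""
    def __init__(self):
        self.flow_rules = []

    def install_flow(self, switch: str, match: dict, action: str) -> None:
        # A real controller would send an OpenFlow FLOW_MOD here.
        self.flow_rules.append((switch, match, action))

class VirtualInfrastructureManager:
    """Sketch of a virtual infrastructure manager that embeds the SDN
    controller, per the view described in the article."""
    def __init__(self, controller: SdnController):
        self.controller = controller

    def connect_vnfs(self, switch: str, port_a: int, port_b: int) -> None:
        # Wiring two VNF ports together becomes a pair of flow rules.
        self.controller.install_flow(switch, {"in_port": port_a}, f"output:{port_b}")
        self.controller.install_flow(switch, {"in_port": port_b}, f"output:{port_a}")

# The VIM programs the infrastructure network through its embedded controller.
vim = VirtualInfrastructureManager(SdnController())
vim.connect_vnfs("br0", 1, 2)
```

The design question the article raises is exactly this coupling: whether the VIM owns the controller, as above, or talks to a standalone controller over a northbound interface.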
— Nabil Damouny is Vice Chair of the Market Education Committee at the Open Networking Foundation, which stewards SDN, and editor of the compute-storage domain for ETSI NFV. He is also Senior Director of Strategic Marketing at Netronome.