
Innovating Our Way Out of NFV Disillusionment

By Dan Joe Barry, Vice President of Marketing  |  January 17, 2017

A variety of organizations, including the likes of Verizon and Vodafone, voiced their frustration at this year’s TM Forum Live! about the seeming lack of progress with NFV. There is an air of disillusionment that NFV hasn’t taken the world by storm as quickly as many had hoped. But is this sentiment justified? Haven’t we achieved a lot already? Aren’t we making progress?

Cost Savings, Performance, or Flexibility – But Not All Three

The first round of NFV solutions to be tested has not delivered the performance, flexibility, and cost efficiency that carriers expected. This has raised doubts in some minds about whether to pursue NFV at all. But do carriers really have a choice?

No, says CIMI Corporation’s Tom Nolle. Based on input from major carrier clients, Nolle found that the cost per bit delivered in current carrier networks is set to exceed the revenue per bit generated within the next year. There is an urgent need for an alternative, and NFV was seen as the answer. So, what’s gone wrong?

When NFV first emerged four years ago, carriers were staking their claims in the new NFV space, often retrofitting existing technologies into the new NFV paradigm. Taking an open approach, the industry made tremendous progress on proofs of concept, with a commendable focus on experimentation and pragmatic solutions that worked rather than on traditional specification and standardization. But, in the rush to show progress, we lost the holistic view of what we were trying to achieve – namely, to deliver on NFV’s promise of high-performance, flexible, cost-efficient carrier networks. All three qualities are important, but achieving them at the same time has proven to be a challenge.

Problems with NFV

NFV has proven to be its own worst enemy in terms of its infrastructure. Solutions such as the Intel Open Network Platform were designed to support the NFV vision of separating hardware from software through virtualization, thereby enabling any virtual function to be deployed anywhere in the network. Using commodity servers, a common hardware platform could support any workload. Conceptually, this is the perfect solution. Yet the performance is not good enough: it cannot provide full throughput, and it consumes too many CPU cores handling data, meaning more CPU resources are spent moving data than actually processing it. That, in turn, means high operational cost at the data center level, which undermines the goal of cost-efficient networks.
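To see the scale of the problem, consider a rough back-of-envelope calculation in Python. Every figure below is an illustrative assumption rather than a measured result, but the shape of the outcome holds across realistic values: at high line rates with small packets, a software switching path eats cores fast.

    # Back-of-envelope sketch (figures are illustrative assumptions, not
    # measurements): how many CPU cores does a software switch burn just
    # moving small packets at 40 Gbps?

    LINK_GBPS = 40            # assumed line rate
    WIRE_BYTES = 84           # 64-byte minimum frame + 20 bytes preamble/gap
    CYCLES_PER_PKT = 700      # assumed per-packet cost of the vSwitch path
    CORE_GHZ = 2.5            # assumed clock rate of one core

    pps = LINK_GBPS * 1e9 / (WIRE_BYTES * 8)         # packets per second
    cores = pps * CYCLES_PER_PKT / (CORE_GHZ * 1e9)  # cores consumed moving data

    print(f"~{pps / 1e6:.0f} Mpps at line rate -> ~{cores:.0f} cores on data movement")

Under these assumptions, roughly 17 cores go to packet movement alone – cores that do no revenue-bearing processing at all.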

Open vSwitch, or OVS, was identified as the source of the problem. The workaround was to bypass the hypervisor and OVS and bind virtual functions directly to the network interface card using technologies like PCIe Direct Attach and Single Root I/O Virtualization (SR-IOV). These solutions ensured higher performance, but at what cost?
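For context, here is a minimal sketch of how SR-IOV virtual functions are typically carved out on a Linux host through the kernel’s standard sysfs interface. The interface name and VF count are hypothetical placeholders, and the script must run as root on SR-IOV-capable hardware.

    # Minimal sketch: enabling SR-IOV virtual functions on Linux via sysfs.
    # "eth0" and the VF count are hypothetical placeholders.

    from pathlib import Path

    IFACE = "eth0"    # hypothetical physical NIC
    NUM_VFS = 4       # virtual functions to expose

    vf_file = Path(f"/sys/class/net/{IFACE}/device/sriov_numvfs")

    vf_file.write_text("0")            # kernel requires a reset before changing the count
    vf_file.write_text(str(NUM_VFS))   # carve out the VFs

    # Each VF can now be handed directly to a VM, bypassing the hypervisor's
    # virtual switch -- fast, but the VM is now pinned to this physical NIC.
    print(f"{NUM_VFS} VFs enabled on {IFACE}")

The final comment is the crux: once a virtual function is bound to a specific physical NIC, it is no longer free to move.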

Flexibility is the cost. By bypassing the hypervisor and tying virtual functions directly to physical NIC hardware, the virtual functions cannot be freely deployed and migrated as needed. We are basically replacing proprietary appliances with NFV appliances. This compromises one of the basic requirements of NFV: flexibility to deploy and migrate virtual functions when and where needed.

It also compromises another promise of NFV: cost efficiency. One of the main reasons for using virtualization in any data center is to improve the use of server resources by running as many applications on as few servers as possible, saving on space, power, and cooling. Power and cooling alone typically account for up to 40 percent of total data center operational costs. What we are left with is a choice between flexibility with the Intel Open Network Platform approach and performance with SR-IOV, with neither providing the cost efficiency that carriers need to be profitable. Is it any wonder that carriers are feeling less than ecstatic about NFV?

Reconsidering Design

Lester Thomas, Vodafone’s chief systems architect, has noted that the virtual network functions he has seen are not “built from the ground up for the cloud” to give operators the flexibility they desire. The solution, then, is to design with NFV in mind from the beginning. While retrofitted technologies can provide a good basis for proofs of concept, they are not finished products. We have, however, learned a lot from these efforts – enough to design solutions that can meet NFV requirements.

Is it really possible to provide performance, flexibility, and cost efficiency at the same time? The answer is yes. A wave of new solutions designed specifically for NFV is emerging that can deliver on all three requirements simultaneously. One example is a new generation of NICs designed specifically for NFV. These NFV NICs enable OVS to deliver full-throughput data to virtual machines at 40 Gbps using less than one CPU core.

This is a seven-times improvement in performance over the Intel Open Network Platform based on standard NICs, with a corresponding eight-times reduction in CPU core usage. But, because the solution neither bypasses the hypervisor using SR-IOV nor offloads the entire OVS to hardware, the virtualization abstraction layer remains intact and full virtual function mobility is maintained.
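To make those ratios concrete, here is a quick sanity check. The baseline figures are assumptions chosen only so the arithmetic matches the stated seven-times and eight-times factors; they are not published benchmark results.

    # Illustrative arithmetic for the comparison above. Baseline figures are
    # assumptions, not published benchmarks.

    baseline_gbps, baseline_cores = 5.7, 8    # assumed standard-NIC OVS setup
    nfv_gbps, nfv_cores = 40, 1               # the claim: 40 Gbps on one core

    throughput_gain = nfv_gbps / baseline_gbps    # ~7x faster
    core_reduction = baseline_cores / nfv_cores   # 8x fewer cores
    efficiency_gain = (baseline_cores / baseline_gbps) / (nfv_cores / nfv_gbps)

    print(f"throughput: {throughput_gain:.0f}x, cores: {core_reduction:.0f}x fewer, "
          f"cores per Gbps: ~{efficiency_gain:.0f}x better")

Compounding the two factors, cores-per-gigabit efficiency improves by well over an order of magnitude under these assumptions.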

The savings ensure that server CPU cores are used for processing, not data delivery, allowing higher virtual function densities per server. In addition, because the NFV NIC solution makes it possible to freely deploy and migrate virtual machines, data center operators can optimize server usage and even turn off idle servers, yielding millions of dollars in savings.
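The consolidation argument can be sketched with a toy placement policy. The first-fit-decreasing bin packing below is a hypothetical stand-in for a real orchestrator’s scheduler, and all workload figures are invented; the point is simply that free migration lets idle hosts be drained and powered off.

    # Toy sketch of the consolidation argument. First-fit decreasing stands
    # in for a real orchestrator's placement policy; all figures are invented.

    def consolidate(vm_loads, host_capacity):
        """Pack VM core demands onto as few hosts as a greedy pass allows."""
        hosts = []                                    # each host: list of VM loads
        for load in sorted(vm_loads, reverse=True):   # place biggest VMs first
            for host in hosts:
                if sum(host) + load <= host_capacity:
                    host.append(load)                 # fits on an existing host
                    break
            else:
                hosts.append([load])                  # otherwise power on another host
        return hosts

    vms = [4, 2, 2, 1, 8, 3, 1, 2, 6, 3]   # hypothetical per-VM core demands
    spread_hosts = 10                       # before consolidation: one VM per host
    packed = consolidate(vms, host_capacity=16)

    print(f"{len(packed)} hosts active after consolidation; "
          f"{spread_hosts - len(packed)} can be powered off")

With migration available, the toy workload collapses from 10 hosts to two; without it – as with SR-IOV-pinned functions – every host stays on.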

Thus, by redesigning the NIC specifically for NFV, it is possible to address the overall requirements of NFV for high performance, flexibility, and cost efficiency at the same time. The same approach can be used with success in other areas if all the specific requirements of NFV are taken into consideration in the design of solutions specifically for NFV.

NFV is not a technological evolution, but a business revolution. We should go back to the original intentions of NFV as outlined in the early white papers and use these as a guide for NFV solution requirements. Carriers need an NFV infrastructure that enables them to do business in a totally different way, and virtualization, with all the benefits it entails, such as virtual function mobility, is critical to success. With that in mind, implementing intelligence in software is the best approach, as it is more scalable and enables automation and agility; only those workloads that truly must be accelerated in hardware should be. When hardware acceleration is used, it should have as little impact on the virtual functions and orchestration as possible, so it does not undermine the overall requirements of NFV.

Working Together

With more than 200 companies already dedicated to NFV and access to a vibrant open-source community, the issues with NFV are not technical; quite the opposite, in fact. We are in danger of creating sub-optimal solutions in an effort to show quick progress. What is needed now are solutions designed from the ground up specifically to address the overall requirements of NFV. Developing and delivering these solutions will require a collaborative effort between carriers and vendors, not only to provide solutions that work, but also to address the bottom-line NFV requirements for success.


Daniel Joseph Barry is VP of Marketing at Napatech and has over 20 years of experience in the IT and telecom industries. Prior to joining Napatech in 2009, Dan Joe was Marketing Director at TPACK, a leading supplier of transport chip solutions to the telecom sector.

Edited by Stefania Viscusi