The computer industry has offered the capability to concurrently run multiple operating systems on one physical CPU complex in a virtual machine environment for decades. So what is different about network functions virtualization (NFV)?
For starters, it provides a standards-based model for virtualizing the vast array of network element (NE) functions that exist in IT and service provider networks today. Both private and public network operators are behind this movement and see it as a way to simplify the management of their networks. What they face today is the challenge of configuring and provisioning many different devices: routers, firewalls, security gateways, DPI platforms, policy platforms, and packet gateways, to name a few. All of these NEs need to operate in concert to keep data networks running smoothly.
So it is easy to understand why network operators want NFV. But will it be easy for the vendors that supply these NEs to fit into this model? Vendor products that operate in the network application domain, such as application servers and policy platforms, will be able to adapt quickly, since these "network applications" run in the user memory address space of off-the-shelf Unix operating systems. These products can be virtualized without significant re-engineering.
Things get more difficult for products that are closely tied to the data plane. Routers, security gateways, and DPI engines run at wire speed, so virtualizing their functionality presents challenges: latency must be kept to a minimum to keep data flowing at the designated, higher bandwidths. Vendors of these products have traditionally relied on specialized hardware and software to perform network functions at wire speed. NFV adoption among this class of NE vendors will therefore vary depending on whether that dependency on specialized hardware can be engineered out of their products, and how long doing so will take.
There is no doubt that NFV matters, and for DPI/policy platforms and the gathering of network intelligence it matters a great deal. Network intelligence goes beyond traditional DPI statistics by deriving, in real time, a subscriber-centric profile of network usage: the subscriber's device, location, application in use, content category accessed, bandwidth speed, and data volumes consumed. Because virtualized functions can be deployed in any part of the network, service providers can gather network intelligence specific to that part of the network, giving them a much higher-resolution view of traffic as it transits their access and core infrastructure. All of this can be fed in real time to SDN components through data collector APIs, and this closed-loop model leads to better actions on the part of those SDN components. From a qualitative analytics viewpoint, the network intelligence gathered in distributed, virtualized NEs gives service providers more granular data on which to base business decisions.
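To make the closed-loop idea concrete, here is a minimal sketch of the kind of subscriber-centric record a virtualized DPI function might derive and hand to an SDN data collector. All names here (the record fields, `NetworkIntelligenceRecord`, `to_collector_payload`) are illustrative assumptions, not any vendor's actual API:

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical subscriber-centric profile, mirroring the dimensions named
# above: device, location, application, content category, speed, volume.
@dataclass
class NetworkIntelligenceRecord:
    subscriber_id: str
    device_type: str        # e.g. "smartphone", "set-top-box"
    location: str           # e.g. the access/core segment where the vNF runs
    application: str        # classified in real time by DPI
    content_category: str   # e.g. "sports", "social-media"
    bandwidth_mbps: float   # measured access speed
    volume_mb: float        # data consumed in this reporting interval
    timestamp: float = 0.0

def to_collector_payload(record: NetworkIntelligenceRecord) -> str:
    """Serialize a record for a (hypothetical) SDN data-collector API."""
    data = asdict(record)
    if not data["timestamp"]:
        data["timestamp"] = time.time()
    return json.dumps(data)

# In a real deployment this payload would be pushed to the collector
# endpoint; the SDN controller could then act on it (re-route or
# re-prioritize the subscriber's traffic), closing the loop.
rec = NetworkIntelligenceRecord(
    subscriber_id="sub-1001", device_type="smartphone",
    location="metro-edge-7", application="video-streaming",
    content_category="sports", bandwidth_mbps=42.5, volume_mb=310.0,
    timestamp=1700000000.0)
payload = to_collector_payload(rec)
```

Because the record is produced wherever the virtualized function happens to be placed, the same schema yields segment-specific intelligence from the access edge or the core alike.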
The core economic benefit of this approach to gathering network intelligence is that vendor product licensing can be much more flexible, and therefore more cost effective to deploy. Because legacy NEs ran on specific hardware, vendors had to tie product licenses to specific devices. With NFV-based products, licenses can be activated as needed and, even more importantly, need not be tied to a physical device's MAC address. This mitigates the capacity-planning inaccuracies that result from estimating DPI resources for each platform ahead of time, only to leave those resources unused because they were not positioned in the right part of the service provider's network.
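The licensing shift described above can be sketched as a floating pool: any virtual instance may claim a license while capacity remains, and releasing one frees it for another part of the network. The class and identifiers below are hypothetical illustrations, not a real licensing API:

```python
# Hypothetical sketch of NFV-style floating licensing, in contrast to a
# license bound to one physical device's MAC address.
class FloatingLicensePool:
    """Licenses activated on demand, not tied to specific hardware."""

    def __init__(self, total_instances: int):
        self.total = total_instances
        self.active = set()

    def activate(self, vnf_instance_id: str) -> bool:
        # Any virtual DPI/policy instance may claim a license
        # while pool capacity remains.
        if len(self.active) < self.total:
            self.active.add(vnf_instance_id)
            return True
        return False

    def release(self, vnf_instance_id: str) -> None:
        # Returning a license frees capacity for a different
        # part of the operator's network.
        self.active.discard(vnf_instance_id)

pool = FloatingLicensePool(total_instances=2)
ok1 = pool.activate("vDPI-east-1")   # granted
ok2 = pool.activate("vDPI-west-1")   # granted
ok3 = pool.activate("vDPI-core-1")   # pool exhausted; denied
pool.release("vDPI-east-1")          # capacity repositioned
ok4 = pool.activate("vDPI-core-1")   # now granted
```

The point of the sketch is that capacity follows demand: an operator buys a pool sized to aggregate need rather than pre-committing licenses to hardware that may sit idle.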
Not only does this translate directly into significant network cost savings, it also accelerates time to market for new policy-based services. Intelligent policy enforcement capabilities offered as part of a DPI/policy platform can be readily provisioned in the markets where they are needed, whether to conduct trials or to deploy region-specific services. The market for broadband network services is constantly changing, and more flexible deployment of DPI and policy resources can be a huge benefit to service providers. It is not a question of whether virtualized DPI/policy functions will benefit service providers; it is a question of how quickly the vendor community can meet the demand.
Ken Osowski is director of solutions marketing at Procera Networks (www.proceranetworks.com).
Edited by Stefania Viscusi