This article originally appeared in the March 2013 issue of INTERNET TELEPHONY.
Software-defined networking has become the technology darling du jour. With its popularity comes the inevitable over-hyped use of the term to describe everything from network virtualization protocols to holistic dynamic data center fabric-style overlays. Most likely to be adopted in data center networks are network virtualization protocols such as VXLAN and NVGRE.
Designed to provide more control over and isolation of virtualized networks, these protocols bring with them challenges that will need to be addressed before they can be widely deployed. One of the more disruptive changes they bring directly impacts the configuration of the entire network.
One of the reasons VXLAN and NVGRE have gained popularity is that they're directly supported by the hypervisors most common in the data center. Research firm Gartner sees Citrix and Microsoft closing on market leader VMware, with figures showing Microsoft pulling into second place.
Both Microsoft and VMware have submitted their proposed network virtualization protocols to the IETF, and both protocols are considered low in implementation complexity. Compared with other SDN-related offerings, which require not only support for new protocols but also architectures built around their fabric-based designs, it seems likely NVGRE and VXLAN will see broader adoption before competing solutions.
But less complexity does not mean there are no changes or challenges ahead. Both VXLAN and NVGRE will have an impact on the already ballooning east-west traffic patterns putting stress on data center networks today. Both proposals increase packet sizes: VXLAN by 50 bytes and NVGRE by 42 bytes. That can push a packet beyond the standard 1500-byte Ethernet MTU, causing fragmentation and degrading performance. While standardizing on jumbo frames, with their 9000-byte MTU, would certainly resolve the problem, operators are then left with the task of ensuring all intermediate devices in the data path are capable of supporting jumbo frames.
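The arithmetic behind those overhead figures can be sketched in a few lines. This is an illustrative back-of-the-envelope check, assuming IPv4 outer headers (the basis of the commonly cited 50- and 42-byte figures); the function name is ours, not part of either specification.

```python
# Per-layer header sizes in bytes, assuming IPv4 outer headers.
OUTER_ETH, OUTER_IPV4, UDP_HDR, VXLAN_HDR, GRE_HDR = 14, 20, 8, 8, 8

vxlan_overhead = OUTER_ETH + OUTER_IPV4 + UDP_HDR + VXLAN_HDR  # 50 bytes
nvgre_overhead = OUTER_ETH + OUTER_IPV4 + GRE_HDR              # 42 bytes

STANDARD_MTU = 1500  # bytes of IP payload a standard Ethernet link carries

def encapsulated_size(inner_ip_packet: int, overhead: int) -> int:
    """Size of the outer IP packet after encapsulating a tenant packet."""
    # The tenant's full Ethernet frame (inner IP packet plus a 14-byte
    # inner Ethernet header) becomes the payload of the outer packet.
    inner_frame = inner_ip_packet + 14
    # The link MTU does not count the outer Ethernet header, so subtract it.
    return inner_frame + overhead - OUTER_ETH

# A full-size 1500-byte tenant packet no longer fits once encapsulated:
print(encapsulated_size(1500, vxlan_overhead))  # 1550 > 1500
print(encapsulated_size(1500, nvgre_overhead))  # 1542 > 1500
```

In other words, unless the underlay MTU is raised by at least the overhead of the chosen protocol, every full-size tenant packet triggers fragmentation.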
Surprisingly, many are not. A basic rule of Ethernet says that the smallest MTU used by a node in the network path determines the maximum MTU for all traffic flowing along that path. So unless all network nodes are capable of supporting an increased MTU for protocols like VXLAN and NVGRE, networks will experience increased traffic and utilization due to fragmentation. Both can lead to performance problems.
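The smallest-MTU rule described above can be illustrated with a minimal sketch. The hop names and MTU values here are hypothetical, chosen only to show how one legacy device caps an otherwise jumbo-capable path.

```python
import math

# Hypothetical hops along a data center path; one legacy device
# still runs the standard 1500-byte MTU.
path_mtus = {
    "vswitch": 9000,
    "top-of-rack": 9000,
    "aggregation": 1500,  # the weakest link caps the whole path
    "core": 9000,
}

# The usable end-to-end MTU is the minimum across all hops.
path_mtu = min(path_mtus.values())
print(path_mtu)  # 1500

# An IPv4 jumbo packet crossing that hop is fragmented: each fragment
# carries (path_mtu - 20) bytes of payload after its 20-byte IPv4 header.
fragments = math.ceil((9000 - 20) / (path_mtu - 20))
print(fragments)  # 7 fragments instead of 1 packet
```

One 1500-byte bottleneck turns a single jumbo frame into seven packets, which is precisely the increased traffic and utilization the article warns about.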
And that’s not taking into consideration the nature of network virtualization protocols, which often rely on flooding within broadcast domains and can inadvertently swamp a network with traffic, i.e., broadcast storms.
Driving 10GbE+ Adoption
New traffic patterns introduced with virtualization, along with interest in adopting network virtualization protocols that put additional pressure on the core network, are necessarily going to drive 10GbE – or even 100GbE – adoption in the network. A survey by Emulex in late 2012 indicated a significant mandate to vault past 10GbE straight to 100GbE by 2016.
In either case, it’s not just routers and switches that must be considered, but the entire infrastructure. Devices like application delivery controllers, caches, IDS and IPS, and other network-focused elements will need to support faster, fatter networking or risk becoming speed bumps in the data center network. These choke points will need to be eliminated to ensure that virtualization and network virtualization protocols do not impede performance or availability.
Proponents of SDN and related technologies often hand-wave away the impact on the network, if they mention it at all. While there are definite benefits and advantages to both SDN and network virtualization protocols over traditional networking, it’s important to understand how broad an impact adopting such technology may have on the entire data center network.
Lori MacVittie is senior technical marketing manager at F5 Networks (www.f5.com).
Edited by Brooke Neuman