Network Infrastructure

How to Build a More Perfect Network

By TMCnet Special Guest
Asim Rasheed, Technical Marketing Manager, Ixia
June 04, 2012

This article originally appeared in the June 2012 issue of INTERNET TELEPHONY

For a data center operator serving a bank, stock exchange, or other transaction-intensive environment, even second-long delays can translate into millions of dollars lost. The problem is exacerbated as business-critical applications move to the cloud. Network operators are deploying higher-speed Ethernet to interconnect their data centers and manage growth in users and traffic. To ensure a high quality of experience on these wide area networks, operators need comprehensive, precise testing to determine the impact of latency and jitter on network performance.

Network imperfections, also known as impairments, can cause packet delay, jitter, packet loss, and other problems that degrade and disrupt vital services. These impairments are unavoidable weaknesses that network operators and their customers need to mitigate. Impairments that play havoc with control and data packets alike must be discovered and rectified, especially in quality-sensitive service infrastructure.
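To make these terms concrete, the sketch below models the three headline impairments, delay, jitter, and loss, in a few lines of Python. The function name, packet spacing, and delay figures are illustrative choices, not values from any standard or product; note how jitter larger than the packet spacing produces the out-of-order arrivals discussed later.

```python
import random

def impair(packets, send_interval_ms=20.0, base_delay_ms=40.0,
           jitter_ms=25.0, loss_rate=0.01, seed=None):
    """Toy model of a lossy, jittery link.

    Each packet departs send_interval_ms after the previous one and
    experiences a random delay; some fraction never arrives at all.
    All names and numbers here are illustrative, not standard values.
    """
    rng = random.Random(seed)
    arrivals = []
    for seq, payload in enumerate(packets):
        if rng.random() < loss_rate:
            continue  # packet loss: the packet simply never arrives
        delay = base_delay_ms + rng.uniform(-jitter_ms, jitter_ms)
        arrivals.append((seq * send_interval_ms + delay, seq, payload))
    arrivals.sort()  # when jitter exceeds the send interval, packets reorder
    return arrivals

if __name__ == "__main__":
    for t, seq, _ in impair([b"voice-frame"] * 8, seed=7):
        print(f"seq {seq} arrives at t={t:.1f} ms")
```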

Knowing When It Feels Right

Since impairments are always present to some degree within devices, systems, and networks, the relevant question is: how much impairment is too much? High-bandwidth applications such as peer-to-peer data, file transfer protocol, and broadcast video each have their own performance requirements in terms of bandwidth, latency, and jitter.
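As a rough illustration of such per-application budgets, the sketch below checks measured delay, jitter, and loss against application-specific limits. The numbers are placeholders chosen for the example (the VoIP delay row echoes the commonly cited ITU-T G.114 guideline of roughly 150 ms one-way delay for voice); real limits come from the relevant standards and service-level agreements.

```python
# Illustrative per-application impairment budgets; placeholder values,
# not standardized limits.
BUDGETS = {
    "voip":      {"delay_ms": 150,  "jitter_ms": 30,  "loss_pct": 1.0},
    "iptv":      {"delay_ms": 200,  "jitter_ms": 50,  "loss_pct": 0.1},
    "file_xfer": {"delay_ms": 1000, "jitter_ms": 500, "loss_pct": 5.0},
}

def within_budget(app, delay_ms, jitter_ms, loss_pct):
    """Return the list of metrics that exceed the application's budget."""
    b = BUDGETS[app]
    violations = []
    if delay_ms > b["delay_ms"]:
        violations.append(f"delay {delay_ms} ms > {b['delay_ms']} ms")
    if jitter_ms > b["jitter_ms"]:
        violations.append(f"jitter {jitter_ms} ms > {b['jitter_ms']} ms")
    if loss_pct > b["loss_pct"]:
        violations.append(f"loss {loss_pct}% > {b['loss_pct']}%")
    return violations

print(within_budget("voip", delay_ms=180, jitter_ms=25, loss_pct=0.5))
# -> ['delay 180 ms > 150 ms']
```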

Network operators must ensure that the QoE of their services feels right. VoIP calls must sound as good as land-line service; IPTV must be free of pixelated, blurred, or frozen frames; and high-speed Internet services must feel responsive. For Internet telephony services, voice call quality is especially sensitive to delay and jitter. In addition, voice applications must re-sequence out-of-order packets and continue providing service in the face of delayed or lost packets.
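The re-sequencing described above is typically handled by a jitter buffer, which holds packets briefly and releases them in sequence-number order, treating gaps as losses. A minimal sketch with illustrative names and a fixed buffer depth; production implementations, such as RTP jitter buffers, adapt their depth to measured jitter:

```python
import heapq

class JitterBuffer:
    """Minimal jitter buffer: accumulates packets, then releases them in
    sequence-number order, treating gaps as lost packets."""

    def __init__(self, depth=3):
        self.depth = depth   # packets to accumulate before playout
        self.heap = []       # min-heap ordered by sequence number
        self.next_seq = 0    # next sequence number expected by the decoder

    def push(self, seq, payload):
        heapq.heappush(self.heap, (seq, payload))

    def pop_ready(self):
        """Release in-order packets once the buffer is deep enough."""
        out = []
        while len(self.heap) >= self.depth:
            seq, payload = heapq.heappop(self.heap)
            if seq < self.next_seq:
                continue                  # duplicate or too late: discard
            if seq > self.next_seq:
                self.next_seq = seq       # gap: packets presumed lost
            out.append((seq, payload))
            self.next_seq += 1
        return out

if __name__ == "__main__":
    jb = JitterBuffer(depth=3)
    for seq in [0, 2, 1, 4, 5, 6, 3]:     # out-of-order arrival pattern
        jb.push(seq, b"frame")
        for s, _ in jb.pop_ready():
            print("play", s)              # 3 arrives too late and is dropped
```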

To ensure interoperability between devices and minimize service degradation across long distances, it is important to set limits on the maximum level of the most relevant impairments present at an output interface and on the minimum level that must be tolerated at an input. Adherence to these limits ensures interworking between different vendors' equipment and networks, and provides a basis for isolating problems.

Impairments are Fickle

As we’ve all experienced, working networks do not behave deterministically. Because impairments arise from cumulative random events, they affect the packets traversing a network differently from one moment to the next. Most of us have experienced slower Internet upload and download speeds in the afternoon, when the kids in the neighborhood get home from school. The length of the path dynamically built for each packet as it crosses a cross-country network can also affect application performance. And consider the media frenzy that erupts whenever there is a widespread service disruption.

Since impairments are fickle, coming and going with traffic trends, network operators need test solutions that help predict when and where they are most likely to occur. At the forefront of the battle against latency and packet loss are test tools that emulate WAN cloud impairments, simulating various network and cloud conditions to validate whether end-to-end performance is affected by delay within the WAN cloud.
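Purpose-built test systems generate such impairments in hardware at line rate. For a rough, low-speed software approximation in the lab, the Linux kernel's netem qdisc can inject delay, jitter, and loss. A sketch, assuming a Linux host with root privileges and a hypothetical egress interface named eth0:

```python
import subprocess

# Rough software approximation of WAN-cloud impairment using Linux netem.
# netem is best-effort and suitable only for low-speed lab experiments;
# hardware test systems apply impairments at full line rate.
IFACE = "eth0"  # hypothetical interface toward the device under test

def apply_wan_profile(delay_ms=80, jitter_ms=20, loss_pct=0.5):
    """Attach a netem qdisc emulating a lossy long-haul path."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", IFACE, "root", "netem",
         "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
         "loss", f"{loss_pct}%"],
        check=True,
    )

def clear_wan_profile():
    """Remove the impairment so the link behaves normally again."""
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=True)
```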

Testing Impairments

Pre-deployment impairment testing offers a controllable, simulated network environment that lets you evaluate how networks and services behave under various impaired traffic conditions. This saves the time, effort, and money that would otherwise be spent tying up expensive network resources to replicate real-life conditions. With an impairment testing solution, you can emulate a real-life network with a few clicks of the mouse, running network protocols and sending data traffic over an emulated network whose impairments act on those packets.
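A minimal example of the measurement side of such a test: the probe below sends timestamped UDP packets across the emulated network to a peer that echoes them back, then reports loss, average round-trip time, and jitter. The target address, port, and probe counts are placeholders.

```python
import socket
import statistics
import struct
import time

TARGET = ("192.0.2.10", 9000)   # hypothetical echo peer across the emulated WAN
COUNT = 100
TIMEOUT_S = 0.5

def probe():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT_S)
    rtts = []
    for seq in range(COUNT):
        # Pack a sequence number and send timestamp into each probe.
        sock.sendto(struct.pack("!Id", seq, time.monotonic()), TARGET)
        try:
            data, _ = sock.recvfrom(64)
            _, sent = struct.unpack("!Id", data[:12])
            rtts.append((time.monotonic() - sent) * 1000.0)
        except socket.timeout:
            pass                 # no echo within timeout: count as loss
    if not rtts:
        print("all probes lost")
        return
    loss = 100.0 * (COUNT - len(rtts)) / COUNT
    jitter = statistics.stdev(rtts) if len(rtts) > 1 else 0.0
    print(f"loss={loss:.1f}%  rtt avg={statistics.mean(rtts):.1f} ms  "
          f"jitter (stdev)={jitter:.1f} ms")

if __name__ == "__main__":
    probe()
```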

Functions to look for in impairment test systems:

  • high density 1GE, 10GE, and 40GE support so the system can grow with your needs;
  • realistic, high-scale WAN emulation for precision testing of expansive networks;
  • hardware-based impairment generation for full line rate impairments, superior network performance, and cost-effective testing;
  • integration with traffic generation, protocol emulation, and analysis – all from a single user interface – for faster time to test, lower costs, and ease of use;
  • deep delay emulation for long network distances and the corresponding latency (600ms delay with a 10GE port pair, 6s with a 1GE port pair, and 500ms delay with a 40GE port pair) at line rate;
  • flexible, definable classifiers that let you impair traffic differently by class, such as QoS priority or application-specific impairment profiles (see the sketch after this list); and
  • the ability to emulate large network clouds using a combination of router/host emulation and impairment.
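On the latency point, a quick sanity check shows why line-rate delay emulation is a hardware problem: holding 600 ms of traffic at 10GE line rate requires about 10 Gbit/s × 0.6 s = 6 Gbit, roughly 750 MB of buffering per port pair. And as a sketch of the classifier idea, the snippet below maps standard DSCP code points to distinct impairment profiles; the profile values are placeholders, not recommendations.

```python
# Illustrative classifier-to-profile mapping: each traffic class gets its
# own impairment profile, e.g. gentle treatment for voice and harsher
# conditions for best-effort. DSCP 46 (EF) and 34 (AF41) are standard
# code points; the delay/jitter/loss numbers are placeholders.
PROFILES = {
    46: {"delay_ms": 20,  "jitter_ms": 2,  "loss_pct": 0.0},   # EF (voice)
    34: {"delay_ms": 40,  "jitter_ms": 10, "loss_pct": 0.1},   # AF41 (video)
    0:  {"delay_ms": 120, "jitter_ms": 40, "loss_pct": 1.0},   # best effort
}

def classify(dscp):
    """Pick the impairment profile for a packet by its DSCP code point."""
    return PROFILES.get(dscp, PROFILES[0])

print(classify(46))   # voice gets the gentle profile
print(classify(8))    # unknown class falls back to best effort
```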

Companies use the results of their impairment testing to determine where to focus their performance-improvement efforts. One view holds that the network should stay simple, with applications left to manage traffic flows as efficiently as possible. Others argue that networks may need to be better provisioned or tuned to handle certain high-bandwidth, time-sensitive traffic. I expect the answer lies somewhere between the two, with a fine-tuned network and impairment-aware applications working seamlessly together.

Edited by Brooke Neuman