June 2007
Volume 10 / Number 6
Feature Articles

Death, Taxes and VoIP Testing

By Richard “Zippy” Grigonis, The Zippy Files
 

Testing is the most important activity that ensures the acceptable quality of voice over IP conversations. Testing has evolved from load testing and feature 'proof of concept' tests imported from the circuit-switched world, to simulating more subtle real-world network scenarios involving SIP registrations and malicious server attacks. "Testing" relating to VoIP can also involve scrutinizing infrastructure devices at the vendor level and even testing the testing hardware and software itself.

Over at Empirix (http://www.empirix.com), which made a name for itself in computer telephony and PSTN-related testing long before the rise of VoIP, Vice President of Marketing and Management Duane Sword says, "We're happily selling lots of load generators and feature testers, and we're now getting a lot more inquiries about how to take real test or operational scenarios that are happening in a live network and troubleshoot and diagnose them faster back in the lab for regression testing. That's one area that's driving a lot of business for us."

"People just want 'more' — more registrations per second," says Sword, "or more simulated malicious attacks. Instead of standard call generation or voice quality testing, people are interested in the more stateful nature of scenarios on the feature side or on the load, registration and threat side of things."




"People are examining operational network problems on the service or network diagnostic side," says Sword, "and taking those traces of what happened, if you will, and pasting them into a call generator or a network emulator to recreate those scenarios back in the lab. That's something that we stumbled upon, given that we span both the test and monitoring sides. It plays up a differentiator that we have. The only other company that could possibly do this in terms of their portfolio is Tektronix. Most of our other competitors focus on testing in the lab or else they're monitoring companies that have probes and they'll do CDR dialogs or they'll just look at signaling or a little bit of media. What we've done is to take real live operational scenarios and recreate them and troubleshoot them a lot quicker by harnessing the lab tools and monitoring tools together. People are not finding sophisticated performance problems; instead it's very much the 'teething' problems of ramping up, new subscribers and new applications. Thankfully, there are a lot of problems, so we're selling a lot of test gear."

Tektronix (http://www.tek.com), that great rival of Empirix, also looks at all the dimensions of communications equipment and networks with the technological equivalent of a fine-toothed comb.

Keith Cobler, Marketing Manager at Tektronix, says, "As communications equipment moves through its life cycle all the way to full deployment, it's subjected by the manufacturer to functional and load tests, then it's brought into the carrier's labs, where they do interoperability testing. After that, there's an initial pilot phase of the deployment, which is followed by a full networkwide deployment. For Tektronix, we work in all stages of this product flow, working with the OEMs, through the carriers and their labs and all the way through full deployment. The importance of doing this is maintaining consistency in the way we do our tests and approach our monitoring of the networks, because you want to have consistency in the protocols and the technologies all the way through this process. That one concept is fundamental to our strategy."

"A triple play network is built up of a pyramid of different items," says Cobler. "At the bottom you're working the network elements. Again, you're doing the functional and load tests, then working with subsystems and the elements that make them up. Eventually we get to the networks where we start to look at different techniques for monitoring the networks, and eventually all the way up to the services and applications. So, you really need to build upon this foundation to ensure network quality of service [QoS] or quality of experience [QoE] to the end user. So, that's another way of looking at it."

"Network operators and equipment manufacturers need to adopt a wellgrounded test-and-monitoring strategy throughout this process," says Cobler. "We have our point solutions which do the functional and load tests. They both have a breadth and depth capability. The second part of that is the network monitoring solutions. This is where we start to talk now more about the complimentary nature of active and passive testing. In the case of passive testing, you have builtin monitoring, end-to-end across your network, where you're monitoring realtime traffic. Active testing, on the other hand, is a much more flexible solution that is perhaps a bit more cost-efficient and easier to scale for enterprise networks and other portions of the network. The one common theme in all of this is that the end users want to have the same good level of QoE, whether or not they're accessing their data or their applications over a fixed network, a mobile, an enterprise network, what you. They want to have the same QoE no matter how they access the network."

Scott Sumner, Tektronix' Senior Manager of Active Test Products, says, "We see quite an interesting synergy between active and passive testing. There are various angles to this. The main differentiation between the two, in our case, is that an active test is really designed to replicate the end user's perception of the experience itself. That means that we're listening to both the analog part of the conversation from our active test calls as well as looking at the packet statistics describing the underlying delivery mechanism in some networks. The active solution listens to the calls, which allows you to measure things that you can't normally measure with a passive test system. So you can measure things such as echo, noise, voice path delay, distortion, clipping events, and you can validate DTMF tones — things that you wouldn't be able to detect just by looking at how many packets got through the network and determining what their structure is. We also correlate these metrics with the packet statistics of the packets as they arrive, so that we can formulate MOS scores based on both analog and IP measurements. There are a lot of interesting use-cases where the active test system can identify the problem and the passive test system can isolate it in the core — let's say a core router or a gateway that's causing the problem. There are also other instances where a passive test system sees a problem and you can isolate how many customers and which access networks are being affected by using the active test system to effectively localize the problem. So there's quite a complementary relationship between active and passive testing."
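Sumner's point about formulating MOS scores from measurements can be illustrated with the ITU-T G.107 E-model, which maps one-way delay and packet loss to an R-factor and then to an estimated MOS. The sketch below uses a commonly cited simplified form of that mapping with default coefficients for G.711; it illustrates the general technique, not Tektronix's scoring method.

```python
# Simplified E-model (ITU-T G.107) sketch: estimate a MOS from one-way
# delay and packet loss. Coefficients are common published defaults for
# G.711, not any vendor's proprietary scoring.

def r_factor(one_way_delay_ms: float, packet_loss_pct: float) -> float:
    """R-factor = base value minus delay and loss impairments (simplified)."""
    r = 93.2                                            # default base R
    d = one_way_delay_ms
    id_impair = 0.024 * d + (0.11 * (d - 177.3) if d > 177.3 else 0.0)
    ie_eff = 95.0 * packet_loss_pct / (packet_loss_pct + 4.3)  # G.711, random loss
    return r - id_impair - ie_eff

def mos_from_r(r: float) -> float:
    """Map an R-factor to an estimated MOS (G.107 conversion formula)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

if __name__ == "__main__":
    r = r_factor(one_way_delay_ms=80.0, packet_loss_pct=1.0)
    print(f"R = {r:.1f}, estimated MOS = {mos_from_r(r):.2f}")
```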

"If you look at a lot of network operators, they have both passive and active test systems in place and they're using both for exactly those two reasons," says Sumner. "But Tektronix is the only company that offers both of these to the extent that we can cover everything from the network core straight into the customer premise."

 

Testing Voice over WiFi

Azimuth Systems (http://www.azimuthsystems.com) is actively involved with the development of new 802.11 standards enabling voice over WiFi (VoWiFi), and company representatives often publish and present on the topics of wireless VoIP and cellular-to-WiFi convergence.

Recently Azimuth announced a VoWiFi test suite of over 20 benchmark tests enabling service providers, handset providers and semiconductor vendors to streamline the testing of VoWiFi phones and converged wireless devices. This VoWiFi Handset Test Suite adds power consumption testing to the automated scripts, and allows vendors to analyze voice quality, roaming performance and battery life of wireless VoIP handsets under various motion and traffic conditions.

Azimuth's Vice President of Marketing, Jeff Abramowitz, says, "We focus on wireless IP test equipment aimed at engineering applications. Historically, that has meant the WiFi industry, but increasingly we're finding ourselves addressing WiMAX and cellular through fixed mobile convergence [FMC] applications. We've built a portfolio of products and we sell to more than 120 different vendors in the WiFi and WiMAX space. We're the official test engineering supplier to the WiFi Alliance. If you look at our customer list you can see that they represent about 90 percent of the WiFi market. So we're pretty well entrenched in the WiFi space."

"One area we see growing the fastest is the testing of voice over wireless, particularly Voice over WiFi," says Abramowitz. "This is occurring both on the handset side and the infrastructure side, although frankly most of what we've been seeing lately concerns figuring out how to do handset testing. The WiFi industry has a history of mostly using retail routers, and folks using laptop computers to get data connectivity in their homes, which is a larger part of the market than, for example, people accessing the Internet wirelessly in the enterprise. In both cases, these are data applications, and you don't necessarily need high performance to service them. As people move to voice, however, whether it's in the enterprise or in the home, there is a high level of performance requirements. We call that 'carrier-grade WiFi'. We're starting to see carriers or operators drive the performance requirements for the WiFi industry."

Azimuth Systems has been devoting increasing attention to testing what travels over voice over WiFi networks and to making devices better serve that environment.

Graham Celine, Azimuth's Senior Director of Marketing, says, "We're working with a number of the vendors out there, starting with the chipset manufacturers, who ask us, 'How do we make our chipsets more efficient to work in handsets?' and then the handset makers and service providers."

"Having gone out and done some testing," says Celine, "they all come back and want to discuss the same three key topics that concern them: First, is straightforward voice quality. That's related to WiFi coverage. How far can you move from the access point? How much background noise or RF interference before your call becomes unacceptable in quality? Second, is handoff. The concept of handoff is itself three-fold. If you're in an enterprise network, you're going from access point to access point. If you're in a public network you could be going from hotspot to hotspot. And if you're in public network where there is no additional hotspot, then you're handing off from WiFi back to the cellular network. In all of those cases, when you're switching there's a potential to lose calls."

"The key third concern is battery life," says Celine. "With dual-mode phones, you're putting a WiFi radio into a handset that already has, say, a GSM radio. Doing that adds to battery drain. The service providers need to ensure that they can provide a level of service acceptable to their customers. They can't add functionality and tell customers that they have to charge their phone every two hours. That would be unacceptable."

 

Testing the Testers

Netronome Systems (http://www.netronome.com) provides the Open Appliance platform, a Linux and IA/x86-based solution that helps next-gen products come to market faster and without expensive redesign. Jarrod Siket, Vice President of Marketing for Netronome Systems, says, "We don't sell test and measurement equipment to enterprises or service providers. What we sell is our appliances and hardware and software components to test and measurement companies so that the products that they build meet the requirements for next-gen networks focused on VoIP and IPTV applications." Siket continues: "Network operators want to continue to increase the performance, have bigger pipes, more bandwidth, and more packets per second, yet at the same time, both in the enterprise and the service provider space, they're being asked to take a much closer look at all of the traffic in the network. In the test-and-measurement space that means looking at individual voice packet flows, measuring a specific user's VoIP quality during a trouble ticket. It might mean some type of on-demand or scheduled testing to look at batch samples of VoIP or IPTV flows for the purpose of measuring quality."

"For a test-and-measurement device the last thing you want is for the device itself to be the source of either injecting or incorrectly measuring delay, loss and jitter, when in fact its purpose for being there in the first place is to measure the application quality to see if those things exist," says Siket.

Explains Siket, "We've found that, with the vast majority of these test-and-measurement companies, their intellectual property and value resides in their VoIP or IP testing stacks and, one layer up, their OSS [Operations Support System] that manages the many probes peppered around the network. They've also found that the actual appliance or probe sitting in the network is no longer a service-specific box that was optimized on POTS [Plain Old Telephone Service] or DSL testing. It's more of a universal or general appliance that just connects to an Ethernet network, and is a place to house their IP applications and test modules. When you think about it in that regard, the three primary components of a next-gen testing device are its OSS that controls all the boxes, the many application test modules that reside on the probe itself, and then the third piece is the actual hardware probe. That's what we at Netronome Systems offer. We call ourselves an 'open appliance', a platform that any test-and-measurement company can add to the network and create a reliable platform for not only VoIP, IPTV and other application testing, but also the active and passive monitoring of the network itself."

 

From Analog to IMS

"VoIP is becoming more critical," says Bahaa Moukadam, Vice President of Marketing at Spirent Communications (news - alert) (http://www.spirent.com), a global provider of integrated performance analysis and service assurance systems enabling the development and deployment of next-gen networking technology such as Internet telephony, broadband services, 3G wireless, global navigation satellite systems, and network security equipment. "You don't hear as much about VoIP as you used to, which means that the technology is maturing and many carriers are deploying more and more VoIP. Another trend involves migrating from VoIP to an IMS [IP Multimedia Subsystem] and fixedmobile convergence [FMC] environment and architecture, not necessarily at the carrier level yet, but certainly there's a lot of activity with some key equipment vendors we work with. That creates another layer of challenges, or amplifies existing ones. Some of them are technical challenges, while others deal with organization structure and how carriers will approach the network as a whole network over time rather than as separate wireless and wireline networks. We look at it from an end user behavior point of view, which involves eliminating the 'graying out' some boundaries between the wireless and wired infrastructure."

"Another interesting trend seems almost counterintuitive," says Moukadam. "Much of the move to fiber involves consolidating the network and going to higher speeds in order to offer triple play services. Ironically, this move to drive fiber also drives the need for analog POTS testing. As people start bringing fiber to the home, they still have two-to-four POTS lines coming into the house. Much of this testing does involve driving regular POTS calls through the lines and back into the infrastructure, so carriers must ensure this works properly. With IP you can assimilate many endpoints out of one port. But with POTS, there's a one-to-one ratio. So that's driving a lot of POTS test ports, and that part of our business is growing rapidly instead of declining."

"Carriers are starting to think more about creating high-end IMS-related labs," says Moukadam. "Some of them have actually done it, but other big carriers are still a few months away from starting to form IMS-oriented labs. With IMS changing so fast, some of the big equipment vendors in this space are saying, 'We're not so sure that our continuing to build and house test tools to stay up with the technology curve is really the right way to do things.' We're making a lot of progress with them in terms of shifting some of that investment in in-house tools to partnering with external, third-party test vendors such as ourselves, under the right circumstances with the right solution, which we believe we at Spirent have. We've made a lot of progress on that front over the last eight months or so. Things are becoming a lot more open."

 

Quality Assurance

Brix Networks' (www.brixnetworks.com) integrated hardware and software products — "the Brix System" — are strategic service assurance solutions that proactively monitor IP service and application quality. The Brix System is used by network operators to guarantee the successful launch and ongoing operation of their portfolio of IP services, including VoIP, IPTV, and VPNs.

Kaynam Hedayat, CTO and Vice President of Engineering at Brix Networks, says, "We are very busy not only with VoIP but also IPTV and mobile systems. Looking at VoIP for a moment, as the industry matures more and more, we see a lot of demand for testing and monitoring all the way to the handset. Originally in the case of VoIP, everyone concentrated on getting the core of the system up and running, and then the service itself. They tended to ignore the customer experience, for two reasons: First, there were no testing tools available at the time. Second, scaling the technology is a difficult task."

"We at Brix are pushing a couple of standards within the IETF and we're working with several vendors and providers to make this challenge achievable," says Hedayat. "We recently announced a relationship with SunRocket and Linksys, based on one of those standards. We are working with CPE and handset vendors to turn these devices into intelligent 'cooperators' or 'reflectors' of VoIP calls with media loopback capabilities, to test the quality and also the availability of the service, all the way to the customer's home or handset. This applies both to residential and enterprise applications. This sounds like a relatively simple capability, which it is, since it's based on standards, so a typical handset or CPE vendor can implement our concept in their device within a couple of weeks — but the capabilities it offers to providers is immense. It enables them to effectively sweep the whole customer and user base and have full visibility into the quality and availability of the service all the way to the user."

"Then there's analysis and reports," says Hedayat. "We're working very hard on next-gen business intelligence tools. We find that our customers originally used this data to turn off their network and start to operate it in a repair mentality. The tools were used for troubleshooting and effectively fixing problems very rapidly. I always tell my customers should be capable to tell a customer calling about a problem that they know what it is and that they've already fixed it. That's our focus on what we want our tools to be able to do."

Hedayat concludes: "For the past two years, we've discovered that customers want to use the data produced by our tools for things other than testing; mainly concerning executive reporting — they want to show the executives that the service is working correctly — and we're seeing a lot of demand for service marketing. They want to use this data to market the service against the competition, which requires a whole set of business intelligence expertise in the system. That's a capability that we're introducing in our products very soon."

Ensuring the quality and reliability of large-scale VoIP deployments is a huge issue today over at Covergence (http://www.covergence.com) too.

Founder, CTO and VP of Engineering Ken Kuenzel says: "It's a huge issue for our customers. There are the regular latency, delay and jitter aspects of the real-time protocol with which we have to deal, but those aren't really the primary issues. What's really important is the whole host of barriers that exist in the deployment of these networks that are causing service providers not to be able to consistently maintain the quality and reliability of their services. Problems range all over the map from variations in protocol implementations, to interoperability issues, to misconfigured devices that can go into registration storms or otherwise behave inappropriately. There's buggy hardware and software out there, and many other issues."

"We at Covergence have built a session border controller specifically for the access edge," says Kuenzel. "We're in a world where hundreds of thousands and millions of devices are communicating from the access edge of the network. There are devices from a variety of different vendors with a variety of use-case models. They could be running VoIP, instant messaging, presence, some variant of all these things. Or they could be running software from BroadSoft, or accessing software from Microsoft."

Adds Rod Hodgman, Covergence's Vice President of Marketing, "So, we're in a position where our product really has to allow our customers to quickly diagnose, test, debug and continuously monitor and repair network outages, and to disable and remediate devices that are behaving inappropriately. We put a lot of management and quality and reliability capability into our product, so that our customers can easily and quickly diagnose a variety of problems in the network."

"When we talk to our service provider partners and our live enterprise partners," says Hodgman, "it's less about spot-checking and testing. They're really concerned about ongoing reliability, so we've put a lot of effort in our tools to build out that reliability as they deploy their networks. So it's not about testing it for the first time, it's about continual monitoring and being able to tell when something is going wrong and isolating it. That's why we've built a lot of trace capability in there. You can go back and look at historical calls. You can see what the quality was of a certain VoIP call or trunk group. The tools out there today are shunted off to the side. You've got to know when to check for problems. Ours is very much the network probe approach that's becoming popular at Cisco, Juniper and enterprises where devices that are deployed in the network have to have some ability to monitor and control the overall quality of that network."

 

Testing Becomes Management

We've said it before — the lines between testing, monitoring and management are becoming blurred.

As Richard Whitehead, CTO of Clarus Systems (http://www.clarussystems.com), says, "Clarus made its name specifically in testing, but we now see our role considerably broadened into the management sphere, since testing has become part of the management process anyway. IT management has had 20 or 30 years to figure this out, but the IP telephony and unified communications communities are going to have to 'get with it' very quickly, considering how fast things are changing, and they'll probably have a steep learning curve."

"Clarus has an enterprise product in the marketplace that has been consumed by various types of users," says Whitehead. "The first were system integrators. These are the folks actually charged with deploying IP telephony and communications solutions. Our value proposition there is really simple - use software to automate the testing and acceptance of an IP telephony deployment, rather than have people do it manually. If it's done automatically, then you're not wasting manpower; therefore the ROI is really easy to identify. Not only is it cheaper to do but you can do it faster, more efficiently and more objectively. We've dealt with integrators who are the top five of the Cisco IP telephony specialists within any region. These are people who are actually involved the deployment and acceptance testing of IP telephony. They're using our services and software to streamline that process."

"Essentially, we have a fairly distributed application," says Whitehead. "It comprises two fundamental components: The first component extracts configuration information from a very high level to a very deep level from the PBX. So we extract data and look at the dial plan and the very detailed configuration of the PBX. Using this process we've identified a lot of issues up front. We generate what we call 'the fat finger' report, a quick 'second opinion' of configuration changes - just by scanning down a list you can often spot things that aren't right. For example if a device pool is supposed to be 30 phones and you see two groups, one of 29 and one with a single phone, you know something's gone wrong."

"Once we extract the configuration information we use that detailed understanding of the configuration to expedite the process of testing," says Whitehead. "There are obviously some basic tests that you always want to do. You want to check that every phone is plugged in and attached to the network. We can check things remotely using a logical configuration of a physical branch office, for example, observing phones as they're added to a distant system."

 

Testing and More Testing

Whether you own an IP network, use one or make network equipment, it's clear that it's impossible to ignore VoIP testing in one form or another.

Richard Grigonis is Executive Editor of TMC's IP Communications Group.

 

 



