TMCnet
May 2007
Volume 10 / Number 5
Feature Articles
Richard "Zippy" Grigonis

Delivering Reliable Quality of Service

By Richard “Zippy” Grigonis, Feature Articles
 

Quality of Service (QoS) has been the bugbear of IP Communications since its very beginnings. Fear over call quality (or the lack thereof) slowed the adoption of IP by both providers and customers. Various techniques were proffered to maintain voice and video quality: overprovisioning of bandwidth, dedicated bandwidth (private IP networks), prioritization of real-time voice and video packet traffic, router protocols to signal the location of network congestion points, and so forth. Today, however, many providers and experts claim that even the public Internet offers more-than-acceptable QoS characteristics for voice, video and multimedia.
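
To make one of those techniques concrete: prioritization of real-time traffic usually means marking voice and video packets with a DiffServ code point so that routers along the path can queue them ahead of bulk data. The sketch below is a minimal, hypothetical Python illustration (the address and port are placeholders), not a production media stack:

```python
import socket

# A minimal sketch of packet prioritization: mark outgoing RTP/UDP
# traffic with the DSCP "Expedited Forwarding" code point (46) so
# that DiffServ-aware routers can queue it ahead of bulk traffic.
# The destination address and port below are placeholders.

DSCP_EF = 46  # Expedited Forwarding, the usual class for voice

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The IP TOS byte carries the DSCP value in its upper six bits.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# Send one RTP-sized datagram with the marking applied.
sock.sendto(b"\x80" + b"\x00" * 171, ("198.51.100.7", 5004))
```

Marking only helps, of course, on networks whose routers are configured to honor DiffServ, which is why it is typically paired with the private-network and overprovisioning approaches mentioned above.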

Ajay Joseph, Vice President of Network Architecture and Engineering for iBasis (http://www.ibasis.com), says, “We are a VoIP company with a presence in more than 120 countries. We have a wholesale business where we sell to carriers and a retail business where we sell prepaid disposable and non-disposable calling cards. Carriers connect to us with both IP and TDM methodologies. On the retail side, we use the wholesale network to terminate the voice calls. All of this is transported via IP across a core network. Interestingly, we use the Internet quite successfully to transport voice.”

“Some of the challenges that iBasis initially encountered years ago involved the public Internet,” says Joseph. “The quality of service was affected by unpredictable congestion points that would appear. We would have to take carriers in and out of service, depending on the quality of the network. Maintaining the system was a very manual, time-consuming effort, and it just didn’t scale.”

“So we put several things in place,” says Joseph. “First, since we used the Internet, as a policy we tried to minimize the amount of public and private peering that we use across the network. That’s because, if you look at the Internet, there are many ISPs that connect to each other. These ISPs are connected through peering points, which allow for bilateral connections between themselves, or they can go through what are called public peering points. If congestion occurs at these peering points, it gets taken care of by the ISP, which means it’s out of our control. One of the decisions we made was to design a virtual IP network so as to avoid both private and public peering points. So, between all of our POPs [Points of Presence] we have a virtual backbone. We don’t own the whole physical network, but we connect to ISPs as customers and we then dynamically define routes using BGP [Border Gateway Protocol], the IP routing mechanism, so as to keep the iBasis traffic away from the peering points. All of the peering that does happen is through the iBasis cloud itself, not through these public and private peering points.”
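
Joseph’s peering-avoidance policy amounts to a route-selection preference. The sketch below is purely illustrative (iBasis implements this with BGP policy, not application code), and the path data and names are hypothetical:

```python
# A hypothetical sketch (not iBasis's actual routing code) of the
# policy Joseph describes: among candidate paths between two POPs,
# prefer any that stay on the virtual backbone and avoid public or
# private peering points, falling back only if none exist.

from dataclasses import dataclass

@dataclass
class Path:
    hops: list             # ordered list of network identifiers
    crosses_peering: bool  # does the path traverse an ISP peering point?
    latency_ms: float

def select_path(candidates):
    """Prefer peering-free paths; break ties on latency."""
    on_net = [p for p in candidates if not p.crosses_peering]
    pool = on_net if on_net else candidates  # fall back if unavoidable
    return min(pool, key=lambda p: p.latency_ms)

paths = [
    Path(["NY-POP", "public-IX", "KL-POP"], crosses_peering=True,  latency_ms=210.0),
    Path(["NY-POP", "LA-POP", "KL-POP"],    crosses_peering=False, latency_ms=245.0),
]
print(select_path(paths).hops)  # -> ['NY-POP', 'LA-POP', 'KL-POP']
```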

“Now, if any kind of capacity upgrade must take place,” says Joseph, “we at iBasis manage it, not the ISP. We’ve been running this for at least seven years now, and the quality we get using this technique is very good, very clean and clear. The packet loss is close to zero. And packet latency is low too.”

Joseph adds, “We’ve also observed two interesting phenomena: First, the general quality on the Internet has improved tremendously over time. Second, the price of bandwidth has dropped in terms of dollars per megabit-per-second. What all this means is that our original decision to use the Internet was a good one. It reduces tremendously our cost of building out the network.”

“So we’ve got millions of calls entering the network at any given point,” says Joseph, “and all of those calls have to reach their respective destinations. By the time a call reaches the far end across the IP cloud, it could have gone through congestion points anywhere in the world. Fortunately, in my shop we have a software development group that develops the software handling the routing of calls in the network. Given that we don’t actually own the whole pipe, we probe every endpoint in our network, which rides atop the Internet, and we probe the quality of the connections on a fairly frequent basis. The results of these probes are fed in real time to the routing system. If the quality of the IP pipe towards the far end is very bad, then the call will not be routed there - it automatically gets taken out of the routing system. So, from an operations point of view, we don’t have to do the manual activity of removing a call from routing because the network quality is bad. That works pretty effectively.”
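
A rough sketch of that probe-driven loop, using hypothetical names and threshold values (the article doesn’t specify iBasis’s actual metrics), might look like this:

```python
# A minimal sketch, not iBasis's actual system, of probe-driven
# routing: endpoints are probed frequently, probe results feed the
# routing table in real time, and endpoints whose measured quality
# drops below a threshold are withdrawn automatically rather than
# by manual intervention.

MAX_LOSS_PCT = 1.0     # assumed thresholds, for illustration only
MAX_LATENCY_MS = 150.0

routing_table = {}     # destination -> endpoint currently in service

def on_probe_result(destination, endpoint, loss_pct, latency_ms):
    healthy = loss_pct <= MAX_LOSS_PCT and latency_ms <= MAX_LATENCY_MS
    if healthy:
        routing_table[destination] = endpoint
    elif routing_table.get(destination) == endpoint:
        # Quality is bad: take the endpoint out of routing automatically.
        del routing_table[destination]

on_probe_result("kuala-lumpur", "kl-gw-1", loss_pct=0.2, latency_ms=95.0)
on_probe_result("kuala-lumpur", "kl-gw-1", loss_pct=4.8, latency_ms=300.0)
print(routing_table)  # {} -- the degraded endpoint was withdrawn
```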

“Additionally, we’ve made other modifications,” says Joseph. “Let me give you an example. Let’s say a call originates in New York and needs to terminate in Malaysia, on the other side of the world, but the quality of the Internet between New York and Malaysia happens to be bad at that time. However, the Internet’s quality from New York to Los Angeles is found to be good, and it’s good from LA to Malaysia. So in real time we’re also looking at the different paths across the Internet cloud between all of the different endpoints, and we figure out which is the best path through which to ultimately terminate the call. If the path from New York to Malaysia is bad, but New York to LA is good and LA to Malaysia is good, then we’ll just force the call to travel via LA instead of directly to Malaysia. That maximizes the probability of completing the call, as opposed to saying ‘Oh, the quality is bad, we won’t take any calls at this location for a while’.”
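
The via-LA example can be expressed as a small path-selection rule: take the direct path if it is good enough, otherwise look for an intermediate POP whose two legs are both good. A hedged sketch with illustrative quality scores (higher is better):

```python
# A hypothetical sketch of the relay decision Joseph walks through:
# if the direct Internet path from origin to destination is bad but
# two good legs exist through an intermediate POP, route the call
# via that POP instead of rejecting it.

quality = {  # measured quality of each leg between POPs (illustrative)
    ("NY", "KL"): 0.40,  # direct path currently degraded
    ("NY", "LA"): 0.95,
    ("LA", "KL"): 0.90,
}

GOOD_ENOUGH = 0.80  # assumed acceptance threshold

def best_route(src, dst, pops):
    direct = quality.get((src, dst), 0.0)
    if direct >= GOOD_ENOUGH:
        return [src, dst]
    # Consider one-hop relays; a composed path is only as good as its weaker leg.
    relays = [
        (min(quality.get((src, via), 0.0), quality.get((via, dst), 0.0)), via)
        for via in pops
        if via not in (src, dst)
    ]
    if not relays:
        return None
    score, via = max(relays)
    return [src, via, dst] if score >= GOOD_ENOUGH else None  # None: reject the call

print(best_route("NY", "KL", ["NY", "LA", "KL"]))  # -> ['NY', 'LA', 'KL']
```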

“We also use session border controllers inside the network for transcoding and quality purposes,” says Joseph, “and we’ve been running the system very successfully for about seven years now. Millions of calls traverse our network at any given point. We buy routes from our providers and they give us rates and coverage - such as to Malaysia. We have systems that look at the quality of the calls that terminate across providers, and we have threshold values that are formulated based on the provider, the coverage, and so forth. Based on a particular threshold, if a particular provider does not ‘behave’ well for a particular route, the provider gets shut out and goes back into testing. We have a testing system and a ‘scrubbing system’ that looks at the quality of the calls that terminate through a provider, and if something’s wrong, different routes are used instead of the provider in question.”
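
A minimal sketch of that per-provider quality gate, using assumed window sizes and thresholds rather than anything iBasis has disclosed:

```python
# A hypothetical sketch of the provider gate Joseph describes:
# call quality is tracked per provider per route, and a provider
# whose rolling average falls below its threshold is shut out of
# live routing and sent back into testing.

from collections import defaultdict, deque

WINDOW = 100        # assumed: judge the last 100 calls per provider/route
MIN_QUALITY = 0.70  # assumed per-route threshold (quality scaled 0-1)

history = defaultdict(lambda: deque(maxlen=WINDOW))
in_testing = set()  # (provider, route) pairs shut out of live routing

def record_call(provider, route, quality_score):
    key = (provider, route)
    history[key].append(quality_score)
    avg = sum(history[key]) / len(history[key])
    if avg < MIN_QUALITY:
        in_testing.add(key)       # shut the provider out of this route
    else:
        in_testing.discard(key)   # it may return once quality recovers

record_call("carrier-a", "malaysia", 0.95)
record_call("carrier-a", "malaysia", 0.30)
record_call("carrier-a", "malaysia", 0.35)
print(in_testing)  # {('carrier-a', 'malaysia')} -- back into testing
```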

“Thus, our quality verification methodology is quite extensive; it goes through the IP cloud up through the session layer and all the way up to the application layer,” concludes Joseph.

At the heart of maintaining QoS are the related activities of testing and monitoring.

One major company in the field, Psytechnics (http://www.psytechnics.com), has helped promote the more encompassing term Quality of Experience (QoE) to describe what its voice, video and multimedia solutions do. Psytechnics recently published a report (March 2007) wherein they applied their testing expertise to evaluating the voice quality of a pre-release of the Microsoft Office Communications Server 2007 and Microsoft Office Communicator 2007 desktop VoIP solution. Their results revealed that Microsoft software-generated calls deliver voice quality superior to that of a single-purpose IP phone. As Psytechnics reports, “These findings show that the quality of Microsoft’s offering is high enough to allow companies to integrate voice communications with PCs, which could eliminate the need to purchase expensive IP phones.”

Psytechnics is also now heavily into the testing and QoE of the burgeoning IPTV industry. Psytechnics’ Vice President of Product Marketing, Benjamin Ellis, says, “Two things have been keeping us really busy at Psytechnics. One concerns enterprises adopting VoIP, running into problems and then having us help them out. The other concerns IPTV, an area now becoming populated with many providers, all of which have gotten over the excitement of finally getting it working at all, and are now interested in getting IPTV to work well and efficiently.”

“If you had asked me ten months ago, I would have said that the ways you provide a good quality of experience for voice and video were somewhat similar,” says Ellis. “But recently they’ve diverged. The real difference between voice and video actually has to do with where the content comes from. That sounds dumb, but for VoIP, the users are providing the content. Much of what we do there is looking at the ‘total speech quality’, which involves things such as looking at a waveform. With IPTV the content is less of an issue, but it is still an issue, and we do some decent things to maintain quality. When the provider receives the video files, they do some assessment on them to check that they are of sufficient quality. Once you’ve taken care of the content quality, as it were, you don’t have much to worry about, since once the video file is on the server it doesn’t get corrupted. When it gets transmitted across the network as packets, what we do is what we call ‘ingestion’, to check that it still is of broadcast quality.”

“We’re also into ensuring content delivery to set-top boxes, and making sure people actually have a picture,” says Ellis. “About a year ago, people were very focused on network tools, checking that the QoS was configured correctly to deliver the IPTV, and those sorts of things. Then there was a huge fracas over what I call ‘signaling metrics’ - how quickly a customer can change channels, and how long it takes a channel to come up on the screen once you’ve changed channels - those sorts of things. Some reasonably-sized providers in Europe have realized that the areas they were most concerned about aren’t actually in the problem space. Once you’ve deployed IPTV, all of those factors remain fairly constant.”

“When you deploy IPTV, however, you get interactions in the network that you didn’t expect, between different subscribers for example, or network elements not doing what they were meant to do,” explains Ellis. “It comes down to how well you’re actually delivering the stream. That’s great for us at Psytechnics, since that’s exactly what we specialize in measuring: looking at the stream on a packet-by-packet basis and working out how well each packet gets delivered, and, if it didn’t get delivered well, whether the impairment is of a sort that the subscriber will actually notice, or something that would simply be compensated for by the set-top box. These are the kinds of things that we examine and help resolve.”
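
One way to picture that packet-by-packet judgment: scan the stream’s sequence numbers for gaps and classify each loss event as concealable by the set-top box or likely visible to the subscriber. The gap threshold below is an assumption for illustration, not Psytechnics’ actual model:

```python
# A hypothetical sketch of the packet-level judgment Ellis describes:
# walk an RTP-like stream's sequence numbers, find loss events, and
# classify each as either recoverable by the set-top box (a short gap
# its error concealment can hide) or a likely visible impairment.

CONCEALABLE_GAP = 2  # assume the STB can conceal up to 2 lost packets

def classify_losses(seq_numbers):
    visible, concealed = 0, 0
    for prev, cur in zip(seq_numbers, seq_numbers[1:]):
        gap = cur - prev - 1          # packets missing between the two
        if gap <= 0:
            continue
        if gap <= CONCEALABLE_GAP:
            concealed += 1            # subscriber likely won't notice
        else:
            visible += 1              # likely a visible glitch
    return visible, concealed

# One short drop (concealable) and one burst of five (visible):
print(classify_losses([1, 2, 3, 5, 6, 12, 13]))  # -> (1, 1)
```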

So it appears that while QoS will always be a cause for concern, both the modern Internet and QoS techniques should allay most of the perennial fears held by consumers, businesses and other organizations.

Richard Grigonis is the Executive Editor of TMC’s IP Communications Group.

 



