July/August 2009 | Volume 1/Number 4
QoS, QoE and Bandwidth Management
By Richard "Zippy" Grigonis
Quality of Service (QoS) is merely about metrics: packet delay, loss and jitter. Certainly delay and loss are critical because of the recent boom in the use of conferencing applications: videoconferencing, internal/external webinars/webcasts, etc. But QoS is just the first step. Quality of Experience (QoE) is more about what happens in the customer's psyche. Good metrics mean nothing if a subscriber to a service is having a problem that is getting him or her angry enough to switch service providers, thus adding to "churn". Even if QoE factors are fine, transaction times may be slow, interfaces awkward, or other subtle, subjective problems may exist, which takes us into the realm of Customer Experience Management (CEM).
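The three QoS metrics named above can be computed directly from packet send and receive timestamps. The sketch below is illustrative only: the packet records are invented, and the jitter smoothing follows the RFC 3550 interarrival-jitter recipe rather than any product mentioned in this article.

```python
def qos_metrics(sent, received):
    """Compute average delay, loss fraction and smoothed jitter.
    sent: {seq: send_time_ms}, received: {seq: recv_time_ms}."""
    delays = [received[s] - sent[s] for s in sent if s in received]
    loss = 1.0 - len(delays) / len(sent)
    avg_delay = sum(delays) / len(delays)
    # Interarrival jitter, smoothed as in RFC 3550: J += (|D| - J) / 16
    jitter = 0.0
    for prev, cur in zip(delays, delays[1:]):
        jitter += (abs(cur - prev) - jitter) / 16
    return avg_delay, loss, jitter

# Four packets sent 20 ms apart; packet 4 never arrives
sent = {1: 0.0, 2: 20.0, 3: 40.0, 4: 60.0}
received = {1: 50.0, 2: 72.0, 3: 95.0}
delay, loss, jitter = qos_metrics(sent, received)
print(delay, loss, jitter)   # avg delay ~52.3 ms, 25% loss
```

Note how loss and jitter are visible only across many packets, which is why, as the vendors below argue, per-call and per-flow measurement matters more than snapshot metrics.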
QoE for Everybody
One company that practically invented and popularized the term QoE is Psytechnics, a leader in IP voice and video performance management and call quality assessment. Psytechnics offers products that measure and troubleshoot service performance based on a user's call experience and, in real time, create a more efficient operations and support environment for both enterprises and service providers. Psytechnics' Experience Manager solution for voice and video performance management and troubleshooting is capable of real-time, objective measurement of users' QoE as well as network QoS for every call. Experience Manager can detect and diagnose the factors that typically impact call quality, including acoustic noise, echo, delay, distortion and video blocking, blurring and freezing. It enables rapid and efficient diagnosis, using the correct responder, resource or service provider.
Psytechnics' Joe Frost, Vice President of Marketing, says "QoS and QoE are completely different. We've been sending out the message about QoE for five or more years now. It started when we began working on our next-generation voice performance management tools. Among our customers – primarily managed service providers – there's a generic belief, fostered initially by the vendors, that if you get the network architecture right and you've implemented QoS, you won't have any problems. Now that's fine for non-real-time applications, but it clearly doesn't apply to real-time applications, according to our customers. It's a completely different environment. Within North America we're now seeing a lot more videoconferencing and telepresence activity, and related issues arise with it. When you encounter problems with voice and video they generate an instant emotional reaction among users. If you and I can't hear each other, or if I can hear you but you can't hear me, or one of us has to keep repeating ourselves, it very quickly becomes an emotional situation and one of us will simply pick up a mobile phone and make a call. Many service providers thus realize that in the case of real-time communications applications, you really have to take into account the emotional aspects of the user experience. People just don't tolerate packet delay, echo or strange sounds when making a call."
"In the QoS versus QoE debate, there's a lot more focus now on QoE from the perspective of the actual user experience. Many vendors are talking about QoE, but it covers a wide range of applications. They're talking about application response times and application usability. When we talk about QoE, what we mean is, ‘Are you and I able to have a good interaction or communications experience? Do we have to repeat ourselves? Can I hear you clearly? Can you see me clearly?' We're extremely focused on the real-time communications experience when we refer to QoE."
Finding Those Bandwidth Gluttons
Comptel Corporation is an international telecom software company specializing in the Operations Support Systems (OSS) market for network operators. They sell software licenses as well as services and maintenance related to their products.
Comptel's Olivier Suard, Director of Marketing, says "We focus on service providers but we don't do a great deal when it comes to QoS per se. However, we've done quite a bit of work recently that relates to policy management, which is about what a user can do and when. One of the main areas in which we've done some work has to do with bandwidth management — more specifically, with mobile bandwidth management. A lot of our experience originates out of Asia, where they're forging ahead with mobile broadband and lots of exciting services. They're encountering a lot of the problems that will follow in Europe and America."
"We did a project for a leading operator in Hong Kong that's partly owned by Vodafone," says Suard. "Hong Kong is not a very large territory, but we're talking about millions of subscribers. They brought out a mobile broadband offering with 7.2 Mbps download and 2 Mbps upload speeds. One of their objectives was to offer a true Internet experience equivalent to fixed broadband, something that's appearing all over the place. As a result, they went for what's essentially flat-fee pricing, but it's a bit ‘tiered' with gold and silver classes of users. The issue with such pricing is that, when it's an all-you-can-eat situation, some people use far more bandwidth than their fair share. The operator wanted to avoid that kind of bandwidth hogging, since it could affect everyone's QoS. So we did a bandwidth management solution for them based on our mediation product, which collects usage data in real time. In this case we also deployed our provisioning products. Basically, we monitor usage on a continual basis at both the cell and user levels — not in a ‘big brother' scenario but simply to look at levels of usage. When cell congestion occurs, the usage is compared with the kinds of users there are and what types of services they are using, to see if they are QoS-critical services. Then the QoS for a particular user in that particular cell is adjusted downwards, freeing up bandwidth for the other users and boosting their QoS. The bandwidth and QoS situation is monitored constantly in such a way that if the overly active user finishes whatever they're doing, or if the congestion disappears, then we can reinstate their QoS."
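The policy loop Suard describes — monitor usage per cell and per user, throttle the heaviest non-critical user when a cell congests, reinstate QoS when congestion clears — can be sketched as a simple control function. The threshold, the cell/user record layout, and the single-user throttling rule are all illustrative assumptions, not Comptel's actual mediation logic.

```python
CONGESTION_THRESHOLD = 0.9   # illustrative: fraction of cell capacity in use

def rebalance(cell):
    """One pass of the policy loop.
    cell: {'capacity': float, 'users': [{'name', 'usage',
           'qos_critical', 'throttled'}, ...]}.
    Returns the names of currently throttled users."""
    load = sum(u["usage"] for u in cell["users"]) / cell["capacity"]
    if load > CONGESTION_THRESHOLD:
        # Cell congested: throttle the heaviest user whose services
        # are not QoS-critical and who isn't already throttled
        candidates = [u for u in cell["users"]
                      if not u["qos_critical"] and not u["throttled"]]
        if candidates:
            max(candidates, key=lambda u: u["usage"])["throttled"] = True
    else:
        # Congestion gone: reinstate everyone's original QoS
        for u in cell["users"]:
            u["throttled"] = False
    return [u["name"] for u in cell["users"] if u["throttled"]]

cell = {"capacity": 10.0, "users": [
    {"name": "alice", "usage": 7.0, "qos_critical": False, "throttled": False},
    {"name": "bob",   "usage": 2.5, "qos_critical": True,  "throttled": False},
]}
print(rebalance(cell))   # the heavy, non-critical user gets throttled
```

Running the same function again after the heavy user's usage drops models the reinstatement step Suard mentions.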
Battle of the Flows – Voice, Video, Data
Founded in 2000, Shenick meets the challenges of network business-class service quality and the issues associated with the introduction of new revenue-generating applications, serving next-gen broadband network equipment vendors and IP-oriented network service providers.
Robert Winters, CMO of Shenick, says, "QoE is becoming more of an issue on an individual application basis. Now more than ever, there are more applications behind a residential gateway, and there's a growing requirement that each of these be viewed individually to see that each one delivers the service the customer is expecting. Service providers need to know whether they can support 2 or 3 TVs behind one home or only service for one TV. With multiple applications behind the typical residential gateway, whether it's cable or DSL, wireless or WiMAX, the critical factor is the quality measurement. When you add to the mix the detrimental effect that P2P traffic is having on everyone's bandwidth and, in particular, what effect that's having on video or voice or other web transactions in a typical home environment, then providers need to be able to accurately determine the quality of each application flow. That's where Shenick can help — we are a provider of IP communications test and performance monitoring systems that can drill down to monitor each and every application flow in a test or live environment right up to 10 Gbps levels."
Shenick also addresses next-generation converged network and application performance issues for IPTV, VoD, Triple Play (VoIP, video, data), IMS, Security Attack Mitigation, Deep Packet Inspection (DPI), Traffic Shaping, Peer to Peer (P2P), Application Server, Metro Ethernet and IPv4/IPv6 hybrid network deployments. Their two core products are diversifEye and servicEye. diversifEye is a converged network IP test and monitoring system that offers a per-flow, per-application view of each and every traffic flow in the network, helping service providers and NEMs generate and analyze large volumes of concurrent, stateful, real-time traffic flows for applications such as IPTV, VoIP, Video on Demand and Peer-to-Peer. As for Shenick's servicEye, it provides IPTV monitoring and service assurance from the video head end right through to the end viewer, delivering a proactive approach to quality assurance through regular, active quality checks and round-the-clock monitoring of each IPTV channel. It enables service providers to pinpoint where problems occur in the network or the encoder. Moreover, service providers can proactively and rapidly isolate quality issues, saving on repair costs and increasing productivity through efficient resource allocation. They can also manage content provider quality issues and establish reliable mechanisms to guarantee content service level agreements.
Another approach to examining various types of traffic can be found over at Allot Communications, whose mastery of network traffic management is based on integrating their expertise in subscriber and traffic control, Internet access, and WAN optimization. Their plug-and-play products include the NetEnforcer family of Deep Packet Inspection (DPI)-based devices, which offer best-of-class traffic shaping technology for QoS/SLA enforcement, real-time IP monitoring and IP accounting; and NetXplorer, a centralized management system for network business intelligence offering global visibility and control, extensive reporting and analysis, and a high level of network security.
Allot's Director of Product Management, Cam Cullen, says, "The biggest problem for service providers and even enterprises is that it's increasingly difficult to figure out ‘what traffic is what' on their networks. You want to prioritize real-time traffic, but that's difficult. Take something as simple as YouTube: you instruct the system to look for YouTube.com from a classification perspective, but you may have video that's embedded in web browsers, Windows Media, Skype, Yahoo or what have you. There are so many different forms of audio and video communication that unless you have something like DPI or the technology that we have to understand what applications are really running on the network, actually prioritizing real-time applications is nearly impossible. One of Allot's biggest efforts is to ensure that we keep up-to-date with the latest applications, what they look like and how they behave on the network. In the case of Skype and Vonage, two very important apps on the network today, they're actually encrypted so you can't just look at the packet stream and say, ‘Oh, this is Skype'. You really need to know its signature and how the application behaves."
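Cullen's point is that encrypted flows defeat payload inspection, so classifiers fall back on behavioral signatures: packet sizes, rates, burstiness. The toy classifier below illustrates the idea only; the thresholds and class names are invented assumptions and bear no relation to Allot's actual DPI signatures.

```python
def classify_flow(pkt_sizes, duration_s):
    """Guess a traffic class from packet-size statistics of one flow,
    without ever looking at (possibly encrypted) payload."""
    n = len(pkt_sizes)
    mean = sum(pkt_sizes) / n
    rate = n / duration_s                 # packets per second
    if mean < 300 and 30 <= rate <= 100:
        return "voip"    # small packets at a steady, codec-like rate
    if mean > 900 and rate > 50:
        return "video"   # large packets at a high sustained rate
    return "data"        # everything else: web browsing, bulk transfer

# A 60-second flow of 160-byte packets at 50 pps looks like voice
print(classify_flow([160] * 3000, 60.0))
```

Real behavioral classifiers use many more features (inter-packet timing distributions, flow setup patterns, endpoint reputation), which is why keeping signatures current is, as Cullen says, a continuous effort.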
"Then there's the challenge of wanting real-time and non-real-time in different ‘buckets'," says Cullen. "But even in the case of non-real-time apps, some of these are more important than others. Outside of MPLS EXP bit marking, on the differentiated services or ‘diffserv' side, ‘expedited forwarding' and multiple levels of ‘assured forwarding' have been defined, so there are a number of different classes of service that can be applied to traffic. The question becomes, ‘How much is a service provider using them?' You don't see many deployments working with more than three or four levels of prioritization of service, simply because it's too complicated to do that on networks from a configuration and user perspective. The biggest trend among service providers is to try and delegate some control back to users so they can figure out what they want prioritized during times of net congestion or reduced bandwidth. But in that case, everyone has a differing opinion of what's important. So being able to deliver QoS or QoE on a per-user basis is a big challenge. You need something that can do both subscriber identification as well as application identification."
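The diffserv classes Cullen mentions are applied by setting the DSCP field in the IP header, which any application can do through the standard sockets API. A minimal standard-library sketch: mark a UDP socket Expedited Forwarding (EF, DSCP 46) or an Assured Forwarding class such as AF41 (DSCP 34). DSCP occupies the top six bits of the old TOS byte, hence the two-bit shift.

```python
import socket

# Common diffserv code points (see RFC 2474 / RFC 2597 / RFC 3246)
DSCP = {"EF": 46, "AF41": 34, "AF31": 26, "BE": 0}

def marked_udp_socket(traffic_class="BE"):
    """Return a UDP socket whose outgoing packets carry the given
    diffserv marking (best-effort by default)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tos = DSCP[traffic_class] << 2        # DSCP sits in bits 2-7 of TOS
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    return sock

voice = marked_udp_socket("EF")           # real-time voice bucket
print(voice.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))   # 184 on Linux
voice.close()
```

Of course, a marking is only a request: whether routers honor it depends on the provider's configured per-hop behaviors, which is exactly the "how much is a service provider using them" question Cullen raises.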
Keeping It Simple
The Fanfare Group, another major player in the system and device testing arena, provides solutions to help service providers and equipment manufacturers accelerate testing and improve product quality.
David Gehringer, Vice President of Marketing, says, "QoS is on the lips of the carriers far more than the device manufacturers, although the carriers are trying to shove at least part of that responsibility — at the device level — back to the manufacturers. Some carriers have started to levy fines or penalties against manufacturers for every bug that's found in the field by their customers. This relates to QoS, even though most of us think of QoS in terms of dropped frames, jitter or delay. But carriers also consider QoS more in terms of QoE — the perception of the customer."
Fanfare's own testing tool, iTest, is an integrated test authoring and execution solution built for testers, developers, and automation teams. There are three versions of iTest: iTest Personal (for manual testers, developers, or those who test infrequently), iTest Team (a preferred solution for feature testers and engineers with scripting-language experience who must create pass/fail criteria for tests), and iTest Enterprise (which provides expert testers and automation teams with all the functionality of iTest Team, plus powerful abstraction for regression testing and test portability).
In May 2009, Fanfare released iTest 3.4, said to include the industry's first virtualized test environment, called the Virtual Testbed. Interestingly, this allows testing teams to formulate device test scenarios even before the device itself is available for testing. That also goes for applications, which can now be coded before the target equipment is ready for testing. Tests can be scheduled for automatic execution. Version 3.4 also improves on Layer 2-3 and Layer 4-7 testing, and it supports Ixia's IxNetwork and Spirent Avalanche network simulation and test technology.
Testing Before Deployment
Empirix has since 1992 helped telecom equipment makers, enterprise contact centers and even service providers test and monitor communications-based products, services and networks. Their Hammer-based service quality assurance solutions are used by all top 10 Network Equipment Manufacturers [NEMs], 9 of the top 10 service providers and most of the Fortune 100 companies.
Bob Hockman, Empirix' Director of Product Management for the Network Assurance Solutions Group, says, "Our Service Assurance Solution group deals with a distributed system that monitors SIP signaling, SS7, and voice-type applications. I'm part of the Network Assurance Group, where we're more into testing than monitoring, developing products used in the pre-deployment of network elements and mocked-up networks before they go operational. Our Contact Center group is into testing, but it's testing that specifically involves agents, voice quality, IVR and so forth, for contact centers."
"We've also dealt with voice quality, heavy signaling on voice, all of the IMS stuff, and what have you," says Hockman. "Our newest product is the Hammer Edge, which is our first testing product that goes above and beyond voice. It not only tests voice and signaling, but it's also going to be testing data and video. The Hammer Edge targets the network edge, an area that's becoming more intelligent and deals more and more with security issues to protect the network core. There are many edge devices, such as firewalls, network border switches, session border controllers, deep-packet inspection devices and application-level gateways. These different kinds of edge devices take on more and more functionality, taking offloads from the core and providing security for the core. Today we're always hearing about how security is no longer an option. It's required in these devices, especially devices that connect and deal with mobile wireless data, such as connections to-and-from femtocells. Security — specifically, in this case, IPsec — is critical."
"Hammer Edge has differentiators from the traditional way of testing devices or pre-deployed networks," says Hockman. "It used to be done with just load generation and packet blasting. Hammer Edge is quite different: it emulates realistic behavior of users of the network who employ web browsers, do big video file downloads, voice calls, or what have you. All these users have different behaviors when they use the network and each person's experience and expectations are different. Hammer Edge allows the test engineer to emulate the realistic behavior of these kinds of users in the form of smart, state-aware type traffic that truly simulates the dynamic interactions of all these different types of data. The Edge also keeps metrics and statistics of everything happening from layer 2 on up to 7. We have ‘indicators' that can immediately show you if there's a problem."
From QoS to GoS (Guarantee of Service)
U4EA Technologies is known for its Multi-Service Business Gateways (MSBGs) and Home Office Gateways, used by service providers and resellers to provide integrated, single-device unified communications solutions to SMB, SME, and small office customers. One of the more attractive aspects of U4EA's all-in-one customer premises devices is its patented QoS (GoS™) technology, which ensures the secure, reliable and cost-effective delivery of converged VoIP, data and video services. U4EA's SMB solution includes a wireless LAN controller for mobility applications.
U4EA's Vice President of Marketing, Jim Greenway, says, "Our Chief Scientist, Peter Thompson, is one of the key architects of our packet queuing and QoS mechanisms. Our main thesis is that unified communications and converged communications over IP networks will proliferate. Most new services use the IP network fabric in some way. There's an interesting discussion going on right now about whether the Internet will support all of this traffic in the future. But the fact is that voice, video and data will increasingly travel over packet-switched networks. Over the past 20 years, many techniques have appeared to solve QoS and QoE. You've had standards such as RSVP, diffserv and MPLS. They address different aspects of the network, so to speak. MPLS concerns itself more with the network core/backbone. But we can see that there's a bottleneck forming at the network edge, which will degrade QoS and QoE, and that's where we've focused our efforts."
"We at U4EA went back to first principles," says Greenway. "Our staff in Britain analyzed the QoS problem mathematically in early 2000 and used those insights to come up with a new queuing/scheduling system designed from the outset for multiple real-time services. It provides independent control over loss and delay, the two big enemies of QoS in packet-based networks. We trademarked that technology as Guarantee of Service, or GoS. Our design principles stipulated that it had to be easy to use, with a low level of configuration – indeed, we do that automatically. There can be multiple queues, and we make it very easy for people to assign different traffic types to those queues. We also made sure that the system was predictable. If you have multiple real-time or near-real-time queues, you have to be able to predict and calculate how that traffic will react from a loss-and-delay perspective. It also has to be efficient. We find that if you do a really good job with QoS and QoE, especially with multiple real-time streams, you can reach or at least come close to that magic number of 100 percent bandwidth utilization. We feel we do that better than anybody, especially when you have multiple real-time streams. As I said, most QoS mechanisms, such as diffserv and Weighted Fair Queuing [WFQ], are designed with one priority queue, and everything else battles for the remainder of the bandwidth. It's sort of like the expressway here in California. You have one High Occupancy Vehicle [HOV] lane, and cars are cruising down that lane, while everybody else is battling for space in the remaining 3 or 4 lanes. In the case of packets, we actually create the equivalent of multiple HOV lanes if that's required at the network edge. That allows you to utilize the network bandwidth much more efficiently."
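Greenway's "multiple HOV lanes" idea can be sketched as a toy scheduler: several real-time queues are each guaranteed service ahead of a single best-effort queue, instead of one priority queue for all real-time traffic. The lane names and the round-robin discipline are illustrative assumptions, not U4EA's patented GoS algorithm.

```python
from collections import deque

class MultiLaneScheduler:
    """Several real-time 'HOV lanes' served round-robin, all ahead of
    a single best-effort queue."""

    def __init__(self, realtime_lanes):
        self.lanes = {name: deque() for name in realtime_lanes}
        self.best_effort = deque()
        self._rr = 0                      # round-robin pointer

    def enqueue(self, lane, packet):
        # Unknown traffic types fall into the best-effort queue
        if lane in self.lanes:
            self.lanes[lane].append(packet)
        else:
            self.best_effort.append(packet)

    def dequeue(self):
        names = list(self.lanes)
        for i in range(len(names)):       # real-time lanes first,
            lane = names[(self._rr + i) % len(names)]   # round-robin
            if self.lanes[lane]:
                self._rr = (self._rr + i + 1) % len(names)
                return self.lanes[lane].popleft()
        return self.best_effort.popleft() if self.best_effort else None

sched = MultiLaneScheduler(["voice", "video"])
sched.enqueue("web", "p1")                # best-effort traffic
sched.enqueue("voice", "v1")
sched.enqueue("video", "m1")
print([sched.dequeue() for _ in range(3)])   # real-time lanes drain first
```

A production scheduler would additionally bound each lane's delay and loss, which is the independent-control property Greenway emphasizes; this sketch shows only the multi-lane prioritization.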
"With our GoS we can achieve quality communications, but not at the expense of adding bandwidth," says Greenway. "Applications that have very sensitive QoS requirements can be mapped without having to overprovision or reserve bandwidth. That's a big difference for us. Other mechanisms require you to overprovision. Some consultants will walk into a small business and say, ‘If you want to add video phones and other equipment, we should install another T1 to make sure there's enough bandwidth.' In our case, we always identify how much bandwidth each real-time application requires. We try to meet our goal of utilizing 90 to 99 percent of that bandwidth instead of just adding bandwidth for bandwidth's sake."
"With our technology, the WAN link from the premises to the network is utilized to the greatest extent possible," says Greenway. "We definitely achieve at least 90 percent utilization with controlled packet loss and delay, and the other 9 or 10 percent can be best-effort service. Another of our differentiators is that we can configure up to 90 percent of a link. Take a T1 link, 1.544 Mbps. We can configure about 1.3 Mbps of that to deal with multiple real-time queues and traffic types."
"The fact that we designed our QoS from the outset for multiple real-time queues dovetails nicely with today's network, where there's all kinds of traffic: more video traffic, cloud computing, and anything that's managed in the network from a VoIP perspective, any real-time services such as presence status data that are sensitive to delay and loss. We can deal with this at the edge of the network. Our devices sit at the edge of the premises and they all incorporate our GoS. Our target continues to be SMBs and branch offices of enterprises. Our devices are not mammoth in nature, but they do scale well. We can accommodate locations with up to 500 employees. We believe that many of these smaller businesses will subscribe to hosted unified communications services, because there's no way they can possibly afford the requisite Microsoft OCS servers on their premises or deal with the complexity of putting all of these servers and applications together. So we think that there's a looming business opportunity for hosted UC apps aimed at small businesses, or it could be an enterprise hosting these services at a very large location and servicing the enterprise's branches. In either scenario, the companies will need edge devices that help them efficiently deal with the quality, the bandwidth management and even other functions that we integrate into our devices, such as security, routing and switching."
From QoE to CEM (Customer Experience Management)
Empirix' great rival, Tektronix, has for 60+ years offered test, measurement and monitoring instrumentation to solve design challenges, improve productivity and dramatically reduce time to market. Their Tektronix Communications division continues to sell advanced test and monitoring solutions to communications providers and manufacturers worldwide. The company's solutions encompass fixed, mobile and converged network monitoring, mobile network troubleshooting and optimization, and functional load and interoperability testing.
Rich McBee, the President of Tektronix Communications, says, "We don't do any policy management or traffic shaping. Basically we're a passive probe kind of company, with physical probes in the network. We're optimized today for classic QoS, passive monitoring with real-time data, service assurance – which is how the application is working – and customer assurance, which involves how the services to individuals are working. With the Arantech acquisition we just completed, we bridge the gap between OSS and BSS, which we call Customer Experience Management [CEM]. So we have large distributed probes in the network, we do real-time correlations so we can do real-time call trace, and we look at network assurance. In the upper layer we examine how services are working – is a provider able to deliver text and music downloads? Then with our customer solutions piece we can determine how solutions are actually being transacted – are downloads being completed, and so forth. CEM involves yet another layer, examining the individual transaction and what's experienced there. For example, concerning downloads of music, our classic solution would output either a Yes or a No. But CEM asks what the ‘experience gap' is, which is, ‘How long did it take to make the transaction happen?' So CEM provides a whole different view of each and every transaction that consumers/subscribers experience on the application that they're using. You can see how very important that is. After spending lots of money to bring a new application to market, a service provider may see that his network is working, the service appears okay, transactions are occurring – all green lights. But the provider doesn't know if great customer dissatisfaction and churn are happening because events in the network are just taking too long to transpire, or too many keystrokes were required, or customers gave up and exited the process."
"CEM is becoming a hot growth area in the marketplace, since providers are concerned about their end-users and churn," says McBee. "You don't want to lose customers, because they're very expensive to acquire. The blind spot for providers has always been what the customer experience is. One of the things we've brought to the marketplace which we feel is important is the concept of Network Intelligent Solutions. What that really means is the ability to identify a problem, whether it relates to network assurance, service assurance, customer assurance, or customer experience management, and then do something about it. Because we have products that look at the network, services, customers and customer experience – that's all ‘Northbound' information, real-time data captured from real-time probes, feeding all sorts of applications, some of which are ours and some third-party – we can drill right out to the end-user and come back and say, ‘This is why the problem occurred and here's where you fix it.' That's what real Network Intelligent Solutions are."
More for Less
Fujitsu's Market Development Director, Ralph Santitoro, says, "We focus on three areas: mobile backhaul, residential broadband backhaul, and business Ethernet and IP services. I would say that mobile backhaul, or wireless networks in general, is one of the hottest topics in the industry now, because it has the most challenges that require immediate-term action. As the new 3G services have rolled out, you have a hockey-stick curve of bandwidth consumption, and the mobile operators such as Verizon, AT&T, and so forth, are charging a flat rate for that bandwidth. So people are using more and more bandwidth. 3G download bandwidth is on the order of hundreds of kilobits per second. It's like a slow DSL, but from a mobile perspective it's considerable bandwidth, because you must multiply that figure by literally hundreds of thousands of subscribers using the services. The problem with 3G is that bandwidth is growing but the revenue is fixed, so operators must find lower-cost ways of managing that bandwidth in order to maintain their margins. The problem is compounded when you go to 4G services, which deliver megabits per second. And yet flat-rate data plans are pretty much set in concrete now."
"On the QoS front, the challenge is that mobile services were designed to run over TDM networks such as T1 and SONET in the U.S., which provide deterministic or ‘precision' QoS," says Santitoro. "Those services require that kind of solid, TDM-based QoS to work. As operators attempt to solve their bandwidth challenges by sending services over less expensive packet-switched networks, they discover that those networks normally don't offer the kind of stringent QoS found in the TDM world."
"As for QoE, if you look at it in terms of mobile backhaul," says Santitoro, "you'll remember the old days when you didn't drop any calls and the voice quality was pretty good. But now with things such as service coverage, outages and the rich multimedia capabilities of devices like the iPhone and the Google Android, the QoE challenges will be compounded as you move to higher bandwidth with 3G and 4G services. After all, you'll be able to watch TV shows streamed to your mobile device. You can do that in a somewhat adequate way with 3G networks, but with 4G it'll actually be comparable to what you get on your broadband connection at home. Thus, with mobile backhaul or wireless networks, there are many challenges; solutions are available, but there are many technology choices, and that makes it difficult and complicated for mobile operators to weed through all of this."
"Fujitsu also focuses on residential triple-play backhaul," says Santitoro. "The bandwidth challenges there are much more severe than in the mobile network, driven by the needs of multiple IP video streams, such as movies or TV shows on demand. However, IPTV bandwidth is much more easily managed because the bandwidth is determined by the total number of channels supported, so it's not really an issue. Still, there's a lot of bandwidth and it's driving the deployment of fiber to the home and curb."
"Then there's Internet access," says Santitoro. "I recently read that, even in this economic downturn, people will not give up their broadband Internet connection. Internet access continues to increase, particularly with the introduction of DOCSIS 3.0 in the cable world, which can get you up to about 100 Mbps."
"Basically, there will be a lot of QoS and QoE challenges centered on video," says Santitoro. "That's because people are not very tolerant of poor video quality. When alternative providers start supplying video, such as Hulu and Netflix, that puts a lot more pressure on the Internet piece of the triple-play backhaul."
Santitoro concludes, "The third major application area we focus on here at Fujitsu is retail and wholesale business services, in particular Ethernet and IP services. The bandwidth challenges here are a little different. It's not so much that massive amounts of bandwidth are required; it's more that enterprises want lower costs per bit. They need more bandwidth, but they're not willing to switch to a higher-bandwidth service unless it costs less than their current service, such as a T1 private line or frame relay service. The service providers realize this, and they want to grow their revenue, so they have to find ways to manage the Ethernet and IP bandwidth more efficiently to deliver those higher-bandwidth services at a lower cost per bit. Another piece driving this consists of hosted applications such as Web 2.0 apps that you run right from your web browser."
The Policy Angle
Integrated BroadBand Services, LLC (iBBS) provides Operational Support Software (OSS) and back-office services deployed by cable and broadband operators worldwide. They're known for their Customer Care and Support Service products and their Broadband Explorer software platform, which enables operators to rapidly launch new revenue-generating services, provide high-quality customer care, and ensure high levels of network availability while minimizing capital and operational expenses.
Dave Keil, CEO of iBBS, says, "Our company provides a set of proprietary software, delivered via an ASP model, that handles provisioning and diagnostics, and we complement that with a set of robust services. We target mid-sized cable companies and outsource a significant amount of their technology and call center capabilities around broadband and VoIP. We talk to operators weekly, and through our account management, product management and marketing teams we believe we've developed a set of best practices as a result of those conversations."
"Bandwidth management had always been an important issue with our customers, but it started to rapidly escalate 5 or 6 months ago to become the critical issue they face," says Keil. "In fact, it has continued to accelerate month-by-month, and that's associated with operators being asked to serve their customers with greater and greater bandwidth. For example, there have been some shifts in the market as to how video is delivered over the network to the end user. These changes are making bandwidth management the most acute issue faced by cable operators today. They tell us that they need a protocol-agnostic platform that, first, enables them to really develop acceptable usage policies and, second, gives them the capability to enforce those policies by measuring usage and providing different and diverse sets of packages by market. They need the ability to manage this process with good discipline, and they need to be in a position to notify customers as these packages change. They must be able to bill for these different packages and handle this on a market-by-market basis. With that in mind, our development team has put an incredible amount of work into mapping out and delivering a two-phased approach to solve this bandwidth management problem for our customers."
Keil continues, "We're handling it via two releases. The first one, announced April 1, 2009, as part of our Broadband Explorer 4.6 release, focuses on the development of acceptable usage policies. The second phase, which will be released in 3Q 2009, focuses more on the enforcement of those policies. As we roll out these two releases, they will give our mid-sized customers the tools they need to manage the situation effectively. Both releases aim to reduce bandwidth costs, give users a better experience and, in some cases, charge more to the heaviest users who are consuming a disproportionate amount of bandwidth today relative to other residential users."
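The workflow Keil outlines (define usage packages, measure consumption, then notify and bill accordingly) can be sketched in a few lines. The tier names, caps and overage price below are invented for illustration and are not iBBS's actual packages:

```python
# Hypothetical usage-based packages; names, caps and prices are illustrative,
# not iBBS's actual plans.
TIERS = [
    # (name, monthly cap in GB, monthly price in USD)
    ("Lite",     20, 19.99),
    ("Standard", 60, 29.99),
    ("Power",   150, 49.99),
]
OVERAGE_PER_GB = 1.00  # hypothetical rate charged above the subscribed cap

def monthly_bill(tier_name, used_gb):
    """Return (bill, over_cap): the month's charge and whether to notify
    the subscriber that they exceeded their package's cap."""
    for name, cap_gb, price in TIERS:
        if name == tier_name:
            overage_gb = max(0.0, used_gb - cap_gb)
            return price + overage_gb * OVERAGE_PER_GB, overage_gb > 0
    raise ValueError(f"unknown tier: {tier_name}")

bill, notify = monthly_bill("Standard", 75)  # 15 GB over: 29.99 + 15 * 1.00
```

In a real deployment the measurement side would be fed by per-subscriber usage counters collected from the network, and the package table would vary market-by-market, as Keil describes.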
Putting IPTV Under the Microscope
Curtis Howe, President, CEO and co-founder of Mariner Partners, says, "Our market focus is increasingly the U.S., Central and Eastern Europe. Our ‘bloodline' is based in early IPTV deployments of the late 1990s with a Bell Operating Company in Eastern Canada. We did what was perhaps the world's first commercial IPTV deployment in 1999. Half of our team was involved in a startup IPTV middleware company providing some of the software for that system and the rest of us were in the operating company. I grew up in IPTV on the operators' side, and got a whole new appreciation for the difference between QoE and QoS during that launch. In July 2003 we launched Mariner, which focuses on creating technology to fill in some of the ‘gaps' that we experienced as an operator in terms of QoE management for IP-based services, specifically IPTV. We provide both services and technologies into the IPTV vendor community and service providers."
"Our flagship product is called xVu, a service assurance platform," says Howe. "The goal of xVu is to assist the service provider or vendor in objectively assessing the QoE that's being delivered to the consumer by measuring what they're receiving in terms of video and audio delivery, EPG performance, web access, set-top box performance, middleware transaction performance, and so forth. We operate up and down the entire protocol stack of IPTV, and we put particular emphasis on the behavior of the last mile, home networks, the set-top box and the middleware. xVu is a suite of tools, and the key benefits we provide are in improving the quality of the video through better system maintenance, troubleshooting and monitoring. It's a benefit in terms of cost reductions, improved customer satisfaction and a more reliable, higher-quality service."
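As a rough illustration of how raw delivery measurements of the kind Howe lists can be folded into a single experience score, the toy function below subtracts weighted impairments from a perfect score. The weights and the 1-to-5 scale are invented for illustration; this is not xVu's actual scoring model:

```python
def video_qoe_score(loss_pct, freeze_events, blur_events):
    """Toy 1-to-5 QoE score: start from a perfect picture and subtract
    weighted impairments. Weights are illustrative, not any vendor's
    calibrated model."""
    score = 5.0
    score -= 0.5 * loss_pct        # packet loss causes visible blocking
    score -= 0.4 * freeze_events   # freezes are the most jarring to viewers
    score -= 0.2 * blur_events     # blurring is annoying but less severe
    return max(1.0, score)         # floor at 1.0 (unwatchable)

print(video_qoe_score(1.0, 2, 1))  # mildly impaired stream
```

Production systems calibrate such models against panels of human viewers so the score tracks real perception rather than arbitrary weights.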
What's Good for Voice can be Good for Data
Wireline and wireless service providers in over 50 countries use Veraz Networks' application, control, and bandwidth optimization products to extend their applications suite, rapidly add exciting revenue-generating customized multimedia services, and otherwise evolve to what Veraz calls the Multimedia Generation Network (MGN). The Veraz MGN separates the control, media, and application layers while unifying management of the network, thereby increasing service provider operating efficiency. The Veraz MGN portfolio includes the ControlSwitch, Network-adaptive Border Controller, I-Gate 4000 Media Gateways, the VerazView Management System, and a set of customizable applications, including the verazVirtu softclient.
Gus Elmer, of Veraz Networks Corporate Marketing, says, "Veraz has been in the bandwidth optimization business for voice for a long time. We have some of the leading voice compression and voice bandwidth optimization technology that's out there. We're continuing to invest in that space. Our new session bandwidth optimizer product for mobile VoIP networks is a perfect example of this. We continue to see great demand for bandwidth optimization and high-quality voice in mobile networks. We also see a lot of interest in taking the technology and expertise we have concerning how to optimize voice as a service and applying it to data services. That's generally the direction in which we're heading. In the mobile space, for example, the amount of bandwidth consumed by data applications is growing quite rapidly, which can affect QoS. There's interest in how to provide high-quality session management and bandwidth optimization for that in the same way that we do for voice."
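The voice bandwidth savings Elmer alludes to are easy to quantify with a back-of-the-envelope calculation. With standard 20 ms packetization, every RTP packet carries 40 bytes of IPv4/UDP/RTP headers regardless of codec, so compressing the payload (say, from G.711 at 64 kbps to G.729 at 8 kbps) cuts per-call IP bandwidth from 80 kbps to 24 kbps in each direction:

```python
def voip_bandwidth_kbps(codec_rate_kbps, packet_ms, header_bytes=40):
    """Per-direction IP bandwidth for one voice call.

    header_bytes: IPv4 (20) + UDP (8) + RTP (12) = 40 bytes per packet.
    Link-layer overhead is ignored.
    """
    packets_per_sec = 1000 / packet_ms
    payload_bytes = codec_rate_kbps * packet_ms / 8  # kbit/s * ms = bits; /8 = bytes
    return (payload_bytes + header_bytes) * packets_per_sec * 8 / 1000

print(voip_bandwidth_kbps(64, 20))  # G.711: 80.0 kbps on the wire
print(voip_bandwidth_kbps(8, 20))   # G.729: 24.0 kbps on the wire
```

Note that at low codec rates the fixed headers dominate: of G.729's 24 kbps, two-thirds is header overhead, which is why header compression schemes such as cRTP (RFC 2508) are often applied on top of payload compression.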
The Policy Control Angle
Volubill provides real-time monitoring, control and charging software to communication providers around the world, in the process enabling competitive differentiation and rapid time-to-revenue for data, content, VoIP and messaging services. Although the word "bill" appears in their name – and indeed one of the things they do is to ensure zero revenue leakage through real-time credit/balance management for all customers across all services – Volubill's ultimate goal is to become the leading global supplier of both charging and control solutions for pre-, post- and now-pay billing across all fixed, mobile and Fixed-Mobile Convergence (FMC) environments, irrespective of the underlying network technology, including WiMAX, IMS, IP, CDMA, GSM and 3G.
John Aalbers, CEO of Volubill, says, "Our two core businesses are, first, the charging business and, second, policy management and control. We find that charging and policy management actually go hand-in-hand. The policy management side is still a relatively new space in the industry, and people are using the same terms but applying different definitions to them. When we talk about policy control, there are really two parts to it. There's a network piece where you implement QoS and you can make the policies ‘come to life' and enforce them in the network. The other piece is policy management, which we view as more of a business function where you've got knowledge about the subscriber: what their usage patterns are like, what kind of services they're trying to access, and what kind of quality they need to consider it a good experience for that particular service. We bring these two major areas together – policy management converges with policy enforcement to make the whole end-to-end process work. We refer to that in general terms as ‘policy control'."
"One type of player in this space comes at it from the network angle and focuses on bandwidth management, traffic shaping and policy enforcement," says Aalbers. "They're experts on IP networks and they understand how to manipulate them. But typically what they don't have in their technology portfolio is an understanding at the subscriber level, which is more of a business function. Many companies will say they are ‘subscriber-sensitive', but really all that means is that they can sort of identify which subscribers are clogging the network at a certain instant in time, purely based on the subscriber's IP address. What they don't have is the context of that subscriber. In other words, what kind of subscriber is it? Is it a valuable subscriber, a business guy who's doing $150 worth of roaming a month, plus running five other premium services? Or is it a student who's paying $9.99 a month and hacking the network with a bunch of peer-to-peer movie downloads? It's that kind of subscriber information that's important."
"The only way you can guarantee high QoE is with an end-to-end approach," says Aalbers. "You must understand who the customer is, the service they're requesting, the kind of package for which they've signed up, and bring all of that information together in real-time along with information on what's going on in the network. That's where we position ourselves. We can handle both sides of that equation."
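Aalbers' end-to-end argument, combining who the subscriber is with what the network is experiencing, can be sketched as a simple decision function. The tier names, spend threshold, load threshold and actions below are hypothetical illustrations, not Volubill's product logic:

```python
# Hypothetical end-to-end policy decision: the subscriber context is the
# "business function" side, the cell load is the network-enforcement side.
def policy_decision(tier, monthly_spend_usd, cell_load_pct):
    """Decide how to treat a heavy user's session under congestion."""
    if cell_load_pct < 80:
        return "allow"          # no congestion: leave everyone alone
    if tier == "premium" or monthly_spend_usd >= 100:
        return "allow"          # protect high-value subscribers
    if tier == "student":
        return "throttle"       # deprioritize low-ARPU heavy usage
    return "deprioritize"       # everyone else gets lower scheduling priority

print(policy_decision("premium", 150, 95))   # high-value roamer stays untouched
print(policy_decision("student", 9.99, 95))  # P2P-heavy student gets throttled
```

The point of the sketch is the function signature: a decision purely on IP address would see only the traffic volume, while a policy-control system also sees the tier and spend, exactly the subscriber context Aalbers says the network-only players lack.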
Looking and Sounding Good
Ultimately, of course, users just want good audio, video and Internet access. They don't care about the underlying technology. If users back in the 1990s could have seen what would be involved in providing the best in QoS, QoE and bandwidth management for IP communications, they might have thought twice about adopting it. Fortunately, the technology has risen to the occasion, and customers will continue to enjoy both existing and new services, provided that infrastructure investment keeps up with the demands placed on the network by (hopefully) satisfied users.
Richard Grigonis is Executive Editor of TMC's IP Communications Group.
Companies Mentioned in this article:
Fujitsu Network Communications
Integrated BroadBand Services (iBBS)
Shenick Network Systems
The Fanfare Group