Customer Inter@ction Solutions
February 2007 - Volume 25 / Number 9

When “Getting Human” Isn’t Enough
Using Testing And Monitoring To Ensure The Quality Of Contact Center Applications

By Michelle Goodall Faulkner
Empirix Inc.

For most organizations, self-service applications — including IVR, CTI and CRM technologies — have become critical elements of the contact center. And it’s no surprise: the business benefits of using self-service applications instead of live agents for the majority of customer contacts have been well documented.

For consumers, too, the concept of self-service can have significant appeal. Often, transactions can be completed more efficiently, more conveniently and more privately when a live agent isn’t involved. In fact, when they work correctly, IVR systems and other call center applications can increase customer satisfaction and improve agent productivity.

The problem for businesses and the consumers they serve is the difference between concept and reality when self-service applications don’t work as users expect. This disconnect frequently leads customers to “zero out” to a live agent: at first in frustration, and eventually by reflex once the customer has run into self-service inefficiency one too many times.

Today, consumers frustrated by issues like long wait times, repeated requests for the same information, being routed to the wrong person or department, and dropped calls have found a real advocate in Paul English and his “Get Human” consumer movement (www.gethuman.com), which offers tools and techniques that help customers bypass IVR systems and other contact center applications to reach a live human agent.

Form Versus Function
Clearly, organizations can realize the benefits and efficiencies of self-service applications only if their customers use them, and the best way to ensure that contact center systems are used is to make sure they work as they’re supposed to. Customers must get what they need, when they need it.

A strong self-service application begins with an excellent design, but even the best-designed application should not be deployed until it has been thoroughly tested. A well-designed, thoroughly tested self-service application will deliver significant benefits to an organization and its customers.

Testing, Testing, Testing
Regardless of the type of product a company sells or the kind of service it provides, the quality of a customer’s experience begins with their first interaction with the contact center. As demonstrated by the success of the Get Human movement, the impact of that interaction extends well beyond the contact itself. Poorly designed self-service systems clearly top most consumers’ “pet peeve” lists.

Often, companies that have invested millions of dollars installing advanced self-service systems assume, in a misguided effort to “save” time and money, that an application will run smoothly after deployment. Yet these same organizations would never consider launching a new product or service without first putting it through months of thorough testing. Why should the launch of a new customer-facing contact center application be treated any differently?

Simply put, many contact center performance issues can be avoided, well before customers are affected, with thorough testing and ongoing monitoring both pre- and post-deployment.

Proactive Testing, Tuning And Monitoring From The Customer’s Perspective
All organizations should perform three types of testing before rolling out critical self-service applications such as IVRs, CTI solutions and CRM: usability testing, automated functional testing and load testing. By taking a comprehensive approach to testing, organizations can verify a new or upgraded application’s performance under real-world conditions before it is deployed — pinpointing problems before the customer does.

The first is usability testing. Performed by real users, usability testing can help companies ensure that the design of their self-service application is logical and easy to navigate — making the application so convenient and efficient to use that customers will opt for it willingly.

Usability testing can help keep applications simple, with a limited number of menus, for example, and can help ensure that the technologies used in the application make sense. It isn’t logical, for instance, to use speech recognition technology when account numbers are long; callers should be able to enter their account numbers using touch-tone instead.

Finally, usability testing can ensure that self-service applications have a logical flow; a caller should never be prompted to enter an account number and then be asked if they would like to open an account, for example.

The second type of testing is automated functional testing. Automated functional testing evaluates the entire customer experience for a single user — from the network carrier through the PBX/ACD into the IVR and to the agent desktop — by driving simulated calls that emulate real caller and agent behavior. These simulated calls dial into the system under test, enter or speak account and/or PIN numbers, listen to ensure that the right prompt responses are being played, and measure system and network response times throughout each test.

With functional testing, companies can quickly identify and isolate problems, such as when response times exceed pre-established thresholds, as well as ensure that customers won’t experience dropped calls, be given the wrong prompt or be delayed by slow database lookups.
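The pattern behind such a test script is simple: place a call, send input, verify the prompt that comes back, and time each response against a threshold. The Python sketch below is a minimal, self-contained illustration of that pattern only; FakeIVR, run_functional_test and the threshold value are invented stand-ins, since a real harness would drive actual calls over telephony interfaces rather than an in-process object.

```python
import time

# Hypothetical in-process stand-in for the IVR under test. A real
# functional-test harness places actual calls over SIP or the PSTN;
# this toy model only illustrates the check-and-time pattern.
class FakeIVR:
    def answer(self):
        return "Welcome. Please enter your account number."

    def enter_digits(self, digits):
        time.sleep(0.05)  # simulated database-lookup latency
        if digits == "12345":
            return "Your balance is $42.10."
        return "Account not found."

THRESHOLD_SECONDS = 2.0  # pre-established response-time threshold

def run_functional_test(ivr):
    """Drive one simulated call: verify each prompt, time each response."""
    results = []
    prompt = ivr.answer()
    assert "account number" in prompt.lower(), "wrong greeting prompt"

    start = time.monotonic()
    reply = ivr.enter_digits("12345")
    elapsed = time.monotonic() - start
    assert "balance" in reply.lower(), "wrong prompt after account entry"
    results.append(("balance lookup", elapsed, elapsed <= THRESHOLD_SECONDS))
    return results

for step, elapsed, ok in run_functional_test(FakeIVR()):
    print(f"{step}: {elapsed:.3f}s {'PASS' if ok else 'FAIL'}")
```

The same structure scales to full call flows: each step of the dialogue gets its own prompt check and its own timing measurement against a threshold.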

Like automated functional testing, automated load testing drives calls that emulate real caller and agent behavior into the IVR or other self-service application, but automated load testing simulates hundreds or thousands of calls into a system simultaneously to make sure the application works as designed for many callers. This type of testing provides companies with advance warning of the types and level of call traffic that would overload or bring down a system, including performance bottlenecks and call-handling errors.
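The idea can be sketched in miniature: fire many concurrent simulated calls at the system and count how many complete successfully once capacity is exceeded. In the Python sketch below, LoadedIVR is an invented stand-in whose calls fail beyond a fixed capacity; a real load test would place hundreds of genuine concurrent calls against the production-candidate system, not an in-process object.

```python
import threading
import time

# Toy stand-in for a system under load: calls beyond a fixed capacity
# fail, representing drops and errors. (Hypothetical; a real load test
# drives real concurrent calls at the actual system under test.)
class LoadedIVR:
    def __init__(self, capacity=20):
        self.capacity = capacity
        self.active = 0
        self.lock = threading.Lock()

    def handle_call(self):
        with self.lock:
            self.active += 1
            overloaded = self.active > self.capacity
        try:
            time.sleep(0.05)  # per-call handling time
            return not overloaded
        finally:
            with self.lock:
                self.active -= 1

def load_test(ivr, n_calls=200):
    """Place n_calls simultaneous simulated calls; count successes and drops."""
    outcomes = []
    def one_call():
        outcomes.append(ivr.handle_call())  # list.append is thread-safe in CPython
    threads = [threading.Thread(target=one_call) for _ in range(n_calls)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    succeeded = sum(outcomes)
    return succeeded, n_calls - succeeded

ok, dropped = load_test(LoadedIVR(capacity=20), n_calls=200)
print(f"{ok} calls succeeded, {dropped} dropped under load")
```

Ramping n_calls upward run by run is what reveals the traffic level at which the system starts dropping calls, before real customers find it.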

Testing And The Get Human Standard
The Get Human Web site (www.gethuman.com) describes 10 key standards that exemplify Paul English’s vision of how customer service phone systems and support should work. Many of these standards can be addressed when organizations ensure that their self-service applications are well designed and easy for their customers to use.

There are a number of standards, however, that can be directly influenced by thorough pre- and post-deployment testing and monitoring.

For example, standard three says, “Callers should never be asked to repeat any information (name, full account number, description of issue, etc.) provided to a human or an automated system during a call.” This common and frustrating problem typically occurs when the software that transfers information entered by the caller from the automated system to an agent’s desktop doesn’t work properly. This transfer of information is a key functional element of the IVR, and the problem can be eliminated with pre-deployment functional testing and ongoing monitoring of the application to ensure the system works as designed.
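A functional test for this transfer amounts to one check: every field the caller supplied in the IVR must arrive intact on the agent’s screen. The sketch below illustrates that check in Python; transfer_to_agent is an invented stand-in for a real CTI transfer, not any particular product’s API.

```python
# Hypothetical sketch of the screen-pop check behind standard three:
# data the caller enters in the IVR must reach the agent's desktop
# intact, so the agent never re-asks for it.
def transfer_to_agent(call_data):
    """Simulated CTI transfer: attach IVR-collected data to the call."""
    return {"screen_pop": dict(call_data)}

caller_entered = {"account": "12345", "issue": "billing"}
agent_view = transfer_to_agent(caller_entered)["screen_pop"]

# Functional check: every field the caller supplied reaches the agent.
missing = [k for k in caller_entered if agent_view.get(k) != caller_entered[k]]
assert not missing, f"agent would re-ask for: {missing}"
print("screen pop complete; no repeated questions needed")
```

Run as part of pre-deployment functional testing and repeated by ongoing monitoring, a check like this catches the broken transfer before a customer is ever asked to repeat an account number.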

Standard five recommends that “speech applications should provide touch-tone (DTMF) fall-back.” In reality, if speech applications are well designed and thoroughly tested, they will work effectively, making a fall-back unnecessary. With functional and load testing, companies can evaluate the performance of speech applications, including application availability, speech recognition rates and transaction length. They can also test voice quality, even as caller and user load grows.

Standard eight says, “Do not disconnect for user errors, including when there are no perceived key presses (as the caller might be on a rotary phone); instead, queue for a human operator and/or offer the choice for call-back.” Dropped calls are another frustrating problem that can be limited, or eliminated, by testing. Calls are most commonly dropped when call traffic overloads and then brings down the system, an issue that can be prevented through pre-deployment load testing.

The Answer: A Comprehensive Approach
It is clear that to ensure the quality of the customer experience and to achieve high levels of customer satisfaction, companies must always strive to meet the requirements of their customers. This necessitates a comprehensive approach to the design and deployment of customer-facing applications in the contact center. In some cases, meeting customer needs may mean making it more convenient for callers to “get human” when they need to. In all cases, however, “getting human” just isn’t enough. Meeting customer needs should always mean ensuring the quality and performance of the automated systems and self-service applications on which customers rely, and having confidence that those applications and systems will work as an organization and its customers expect. For this, thorough testing and ongoing monitoring are an absolute must. CIS

Michelle Faulkner has been at Empirix (www.empirix.com) for four years and has been the director of marketing communications since 2006. She is a 14-year veteran of marketing and corporate communications for high-tech companies, on both the client and agency sides. Empirix is a provider of testing and monitoring solutions to ensure customers realize the promise of their technology investments. The company offers expertise that spans the evolution of advanced technologies across multiple markets — from testing in R&D labs through monitoring the end-user experience.
