June 2008 | Volume 3 / Number 3
On the Testing Edge
The Product Development Life Cycle
By Andy Huckridge
Role of Standards Bodies
To say it all starts here would be an understatement. A good test equipment vendor will be avidly involved in the telecom standards process (ITU, ETSI, ANSI, IETF, etc.) to provide the most up-to-date tools to Network Equipment Manufacturers (NEMs). By the same token, a NEM could get left behind if it were not represented there. Although the standards process is often long, seemingly complicated and old-fashioned, the finished product, a standard, is well worth it.
Role of Industry Fora/Advocacy Groups
Within the IMS/NGN/Service-Oriented Architecture (SOA) space there are many such groups, with new groups appearing all the time: the SIP Forum, the Multiservice Forum (MSF), the IMS/NGN Forum and IP Sphere, to name but a few. These groups normally advocate awareness or a specific implementation to suit the needs of their members. We often see the first signs of public testing efforts from within these groups. The IMS/NGN Forum holds plug tests every few months, and the SIP Forum holds periodic bake-offs with the MSF at the biennial Global MSF Interoperability event. Recent industry efforts have introduced certification programs for well-established technology areas as well as for areas where multi-vendor interoperability has not yet been achieved. In addition, these groups provide basic interoperability testing and permanent test beds.
Testing Life Cycle and Considerations: What Test Methodologies to Use and Where
Design & Development Test. You have a team of developers, and they are scattered around the globe. How exactly do you test their code? This is the realm of dev-test: “not breaking the tree” and “checking working modules in” are common terms here. But more importantly, it is about having a colleague walk through your code to test it. Unfortunately, bugs still make it to the quality assurance (QA) stage, since all too often the same person is writing the code as well as testing it. Never a good idea!
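The “not breaking the tree” discipline above can be sketched as a check-in gate: a small smoke suite that must pass before a module is committed. The module and test names below are hypothetical, a minimal illustration rather than any particular vendor's process.

```python
import unittest

# Hypothetical module under development: a tiny call-routing helper.
def route_call(dialed_digits):
    """Return a trunk group for the dialed number (toy logic)."""
    if not dialed_digits or not dialed_digits.isdigit():
        raise ValueError("invalid dial string")
    return "international" if dialed_digits.startswith("011") else "domestic"

class SmokeTests(unittest.TestCase):
    """Gate: all of these must pass before the module is checked in."""
    def test_domestic(self):
        self.assertEqual(route_call("5551234"), "domestic")

    def test_international(self):
        self.assertEqual(route_call("01144123"), "international")

    def test_rejects_garbage(self):
        with self.assertRaises(ValueError):
            route_call("")

def tree_is_green():
    """Run the smoke suite; True means the check-in would not break the tree."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(SmokeTests)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()

if __name__ == "__main__":
    print("check-in allowed" if tree_is_green() else "check-in blocked")
```

Crucially, the colleague doing the code walk-through, not the original author, should own tests like these; that is exactly the independence the paragraph above argues for.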
Test methodologies employed here would facilitate the prototyping of code and protocol implementations. Code integrity testing tools are also common, specifically white-box and gray-box testing. These are often referred to by alternative names such as security and vulnerability testing, or protocol fuzzing. Interoperability testing is important at this stage in an open system, but in closed, single-vendor systems it can often be overlooked. Load or stress testing tools are seldom employed at this stage of a product's life cycle.
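To make the protocol-fuzzing idea concrete, here is a minimal sketch: mutate a well-formed SIP INVITE and check that the parser under test rejects malformed input gracefully instead of crashing. The message template, the `naive_parser` stand-in for a device's parser, and the mutation operators are all illustrative assumptions, not any specific tool's behavior.

```python
import random

# Simplified, well-formed SIP INVITE used as the fuzzing seed.
BASE_INVITE = (
    "INVITE sip:bob@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP client.example.com:5060\r\n"
    "Max-Forwards: 70\r\n"
    "Content-Length: 0\r\n\r\n"
)

class ParseError(Exception):
    """Graceful rejection; a robust parser raises only this on bad input."""

def naive_parser(raw):
    """Toy stand-in for a DUT's parser, with deliberate robustness bugs."""
    line = raw.decode("ascii").split("\r\n")[0]  # crashes on non-ASCII bytes
    method, uri, version = line.split(" ")       # crashes on truncated lines
    if version != "SIP/2.0":
        raise ParseError("unsupported version")
    return method, uri

def mutate(message, rng):
    """Apply one random mutation: truncate, repeat a chunk, or corrupt a byte."""
    data = bytearray(message, "ascii")
    op = rng.choice(["truncate", "repeat", "flip"])
    if op == "truncate":
        return bytes(data[: rng.randrange(1, len(data))])
    if op == "repeat":
        i = rng.randrange(len(data))
        return bytes(data[:i] + data[i : i + 10] * 50 + data[i:])
    i = rng.randrange(len(data))
    data[i] ^= 0xFF                              # force a non-ASCII byte
    return bytes(data)

def fuzz(parser, iterations=100, seed=1):
    """Feed mutated messages to a parser; any non-graceful crash is a finding."""
    rng = random.Random(seed)
    findings = []
    for n in range(iterations):
        try:
            parser(mutate(BASE_INVITE, rng))
        except ParseError:
            pass                                  # graceful rejection: fine
        except Exception as exc:
            findings.append((n, type(exc).__name__))  # robustness bug
    return findings
```

Running `fuzz(naive_parser)` surfaces the deliberate bugs as findings; a robust implementation would yield an empty list, which is the pass criterion a fuzzing campaign looks for.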
Software Quality Assurance (SQA) / Product Verification (PV) Testing. Software QA or product verification is the department normally responsible for in-depth product testing. Common methodologies include load or stress testing as well as conformance testing where an external standard is referenced. Robustness and interoperability methodologies are also common. Even with all these different test methodologies employed, very often a company can be its own worst enemy by using internal test tools. For example, the developer of the code in the device under test (DUT) will often write the accompanying internal test tool, thus nullifying any independent observation, verification and validation.
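As a rough illustration of the load or stress testing mentioned above, the sketch below fires a batch of concurrent call attempts against a stand-in for the DUT and reports completion rate and throughput. The `attempt_call` function and its toy failure model are assumptions for the example; a real harness would drive actual signaling toward the device.

```python
import concurrent.futures
import time

def attempt_call(session_id):
    """Hypothetical stand-in for placing one call against the DUT."""
    time.sleep(0.001)              # simulated signaling round trip
    return session_id % 50 != 0    # toy model: 1 in 50 attempts fails

def load_test(n_calls=500, concurrency=25):
    """Fire n_calls call attempts through a bounded worker pool and
    report success rate and call rate, as a basic stress harness would."""
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(attempt_call, range(n_calls)))
    elapsed = time.perf_counter() - start
    return {
        "success_rate": sum(results) / n_calls,
        "calls_per_sec": n_calls / elapsed,
    }
```

Note that a harness like this only has value as an independent check if it is not written by the same developer whose code it exercises, for exactly the reason given above.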
A good rule of thumb here is for a company to spend at least 1 percent of its product’s market share on SQA/PV test equipment. In my experience the most successful companies have had the most diligent testing departments.
Manufacturing Test. Sometimes called ‘Mfg test’ or ‘Go/No-Go’ testing, this methodology is used only to prove the manufacturing process, not to verify the product design. The test is done to verify that the product has been built to its set specifications. For hardware, automated test equipment (ATE) or functional testing is the norm, very often performed at a subcontractor’s facility. For software, this is normally functional testing alone, often with hardware and software integration included; voice quality testing that mimics the end-user experience is one example.
Acceptance Testing. This methodology generally involves running a suite of tests on a completed or installed system, which may also encompass third-party subcomponents. Each individual test, known as a test case, exercises a particular operating condition or feature of the system’s environment and results in an Accepted/Not-accepted outcome. There is generally no degree of success or failure.
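The strictly binary nature of acceptance verdicts can be sketched as follows. The case names and the keys of the `installed` system record are invented for the example; the point is that every case maps to Accepted or Not-accepted with no partial credit.

```python
ACCEPTED, NOT_ACCEPTED = "Accepted", "Not-accepted"

def case_dial_tone(system):
    """Case: every provisioned line returns dial tone."""
    return system["lines_with_dial_tone"] == system["lines_provisioned"]

def case_emergency_routing(system):
    """Case: emergency calls reach the configured route."""
    return system["emergency_route"] == "psap-trunk-1"

def run_acceptance(system, cases):
    """Run every case; each yields a binary verdict, never a score."""
    return {
        case.__name__: ACCEPTED if case(system) else NOT_ACCEPTED
        for case in cases
    }

# Hypothetical snapshot of an installed system.
installed = {
    "lines_provisioned": 48,
    "lines_with_dial_tone": 48,
    "emergency_route": "psap-trunk-1",
}

if __name__ == "__main__":
    print(run_acceptance(installed, [case_dial_tone, case_emergency_routing]))
```

In practice the full report, with every case Accepted, is what the operator signs off on before the system moves to field test.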
Field Test and Service Assurance. This is often the most time-consuming part of a system install, sometimes also called ‘Turn-up’ testing. The methodology covers system-level testing against different components from the same vendor, or interoperability testing when connecting to components from different vendors. Having a call or service successfully invoked is normally the sign of a successful field test. Capacity tests often follow.
After a successful field test, the next methodology along the Product Development Life Cycle comes down to a management/monitoring or ‘Service assurance’ function to verify that the system stays in compliance with all industry standards and with both vendor and operator requirements, as well as, of course, making sure the end customer is happy. Good testing!
Andy Huckridge is an independent consultant and an expert in NGN technologies. Reach him at firstname.lastname@example.org