This article originally appeared in the April 2012 issue of INTERNET TELEPHONY magazine.
Information is the lifeblood of every business’s operations. It flows inbound from customers, outbound to the cloud, between branch and international offices, through the data center, and to the CEO’s smartphone. But there’s always that one shadowy guy trying to hack his way in.
When creating any security-enabled network device, development teams must fully investigate the security of the device itself to ensure it cannot be compromised. A gate provides no security to a house if the gap between the bars is wide enough to drive a truck through. Many highly effective exploits have breached the very software and hardware designed to protect against them. And once attackers get past the guard, they no longer need to worry about being stealthy: compromising the box usually means compromising the code running on it.
Application delivery controllers (ADCs) are positioned at strategic points of control to manage an organization’s critical information flow. Organizations therefore require a secure, robust application delivery platform, one whose development cycle builds in checks and counter-checks at every stage. A secure network environment starts with a secure application delivery controller.
Security from the Inside Out
An ADC needs to be designed so that the hardware and software work together to provide the highest level of security. While there are many factors in a truly secure system, two of the most important are design and coding. Sound security starts early in the product development process.
Before writing a single line of code, product development should go through a process called threat modeling. Engineers evaluate each new feature to determine what vulnerabilities it might introduce to the system. One rule of thumb: a vulnerability that takes one hour to fix in the design phase will take 10 hours to fix in the coding phase and 1,000 hours to fix after the product has shipped, so it’s critical to catch vulnerabilities during design.
Secure Code from the Start
Eventually, design ends and coding begins. Many companies that develop software have invested heavily in training internal development staff on writing secure code. But when it comes to software and network exploits, even the smallest mistakes can have huge ramifications. During coding, developers should conduct regular code reviews with the security team.
One of the most common mistakes found in code reviews is the use of unsafe string functions, which can easily lead to a buffer overflow: a condition in which a program or process tries to store more data in a fixed-size storage area than it was designed to hold. These mistakes can cause huge problems, but they are relatively easy to catch.
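As a sketch of the kind of mistake a code review catches (the function names and the 16-byte hostname field here are hypothetical, not taken from any real product), compare an unbounded strcpy() with a bounded snprintf():

```c
#include <stdio.h>
#include <string.h>

/* UNSAFE (for illustration only): strcpy() copies until it finds a NUL,
 * so an attacker-controlled name longer than the destination buffer
 * overflows it -- the classic buffer overflow a review should flag. */
void set_hostname_unsafe(char *dst, const char *name) {
    strcpy(dst, name);              /* no bounds check */
}

/* SAFE: snprintf() always bounds the write and NUL-terminates.
 * Returns 0 on success, -1 if the input had to be truncated. */
int set_hostname_safe(char dst[16], const char *name) {
    int n = snprintf(dst, 16, "%s", name);
    return (n >= 0 && n < 16) ? 0 : -1;
}
```

The safe version also reports truncation instead of hiding it, so the caller can reject an over-long input rather than silently process a mangled one.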
Next, security testing of the completed code begins. First is penetration testing, in which an organization's security staff act as attackers and try to compromise the system. Then fuzz testing begins. The concept is simple: When developers design a program that accepts an input, like a network packet with a pre-defined structure, they assume the input will be correctly assembled – but what if it isn’t? The packet length might be too long or too short, or a field could carry the wrong data. Fuzz testing systematically varies the input and observes the results. Some malformed inputs might be handled well, others might cause the system to crash, and still others could expose a serious vulnerability. Penetration testing and fuzz testing help make a device as resilient as possible against denial-of-service (DoS) and code-based attacks.
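A minimal illustration of the fuzzing idea (the one-byte length-prefixed packet format and both function names are invented for this sketch): start from a well-formed packet, randomly corrupt and truncate it, and check that the parser never trusts a length field larger than the data it actually received:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical parser: the first byte declares the payload length.
 * Returns the payload length, or -1 if the declared length does not
 * fit inside the bytes actually received. */
int parse_packet(const uint8_t *pkt, size_t received) {
    if (received < 1) return -1;
    size_t declared = pkt[0];
    if (declared > received - 1) return -1;   /* length lies about the buffer */
    return (int)declared;
}

/* Minimal mutation fuzzer: flip a random byte and pick a random truncation
 * point each iteration, then assert the parser's safety invariant.
 * Returns 0 if the invariant held for every mutated input, -1 otherwise. */
int fuzz_parser(unsigned iterations, unsigned seed) {
    uint8_t base[64] = {8};                   /* well-formed: 8-byte payload */
    srand(seed);
    for (unsigned i = 0; i < iterations; i++) {
        uint8_t pkt[64];
        memcpy(pkt, base, sizeof pkt);
        pkt[rand() % sizeof pkt] ^= (uint8_t)(rand() & 0xFF);  /* corrupt */
        size_t len = (size_t)(rand() % (sizeof pkt + 1));      /* truncate */
        int got = parse_packet(pkt, len);
        if (got >= 0 && (size_t)got > (len ? len - 1 : 0))
            return -1;                        /* parser trusted bad input */
    }
    return 0;
}
```

Production fuzzers are far more sophisticated, but the core loop is the same: mutate, feed, and watch for crashes or broken invariants.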
Development organizations should also implement a sophisticated third-party scanning application, which analyzes source code for critical flaws. At compile time, the code scanning application looks for flaws such as security bugs and defects, build breaker bugs, crashing bugs such as memory leaks and corruption, and unpredictable application behavior introduced by new code. Source code scanning can also find non-fatal flaws such as data integrity issues and performance bottlenecks.
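As a hedged sketch of the kind of defect a static scanner flags (the request-duplicating helper and its names are invented for illustration), consider a memory leak on an error path, one of the crashing-under-load bugs mentioned above:

```c
#include <stdlib.h>
#include <string.h>

/* LEAKY (for illustration): if the request has no terminating newline,
 * the function returns NULL without freeing `copy`. Every malformed
 * request then leaks heap memory -- exactly what a scanner flags. */
char *dup_request_leaky(const char *req) {
    char *copy = malloc(strlen(req) + 1);
    if (copy == NULL) return NULL;
    strcpy(copy, req);
    if (strchr(copy, '\n') == NULL) return NULL;   /* leak: copy never freed */
    return copy;
}

/* FIXED: every exit path either returns the buffer or frees it. */
char *dup_request_fixed(const char *req) {
    char *copy = malloc(strlen(req) + 1);
    if (copy == NULL) return NULL;
    strcpy(copy, req);
    if (strchr(copy, '\n') == NULL) { free(copy); return NULL; }
    return copy;
}
```

A leak like this never fails a functional test, which is why compile-time scanning catches a class of bugs that ordinary QA misses.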
In addition to performing exhaustive internal testing, development organizations should hire outside firms to conduct black box testing, in which a third party does application and platform testing “in the dark.”
This means the firm has no knowledge of the product beyond what an outside attacker would have access to (in contrast to source code scanning). Black box testing and analysis can be inserted anywhere in the software development lifecycle, all the way through release. Third parties review the product with fresh eyes, which can uncover subtle vulnerabilities and add another layer of protection. Once the software passes this final test, some organizations deploy it in their own infrastructure to ensure it’s truly ready for release. Security testing is time-consuming and a huge undertaking, but it’s a critical part of meeting stringent standards and shows a commitment to customers.

Peter Silva is technical marketing manager of security at F5 Networks (www.f5.com).
Edited by Jennifer Russell