Contranyms are a curious feature of the English language. These are words that can have two contradictory meanings.
Examples include seed (seeding a lawn versus seeding a tomato), strike (hitting a nail versus missing a baseball), and sanction (officially approving versus penalizing). Lately I’ve noticed an emerging contranym in technology: software-defined networks.
The coinage of software-defined networks should have raised questions for anyone with a sense of history. After all, the very first routers in the Internet had their behavior defined by software. And in central office switches, features like call waiting and toll-free numbers were defined by software.
But software-defined networking as a new term began trending around 2009, emerging from Stanford and other institutions under the banner of OpenFlow. Its key tenets were to separate the control plane (computing routes through the network) from the data plane (forwarding packets from one port to another), and to publish a general programming interface between these two planes. Really, this open interface is the main novelty of OpenFlow. Routers and switches have always implemented these two distinct planes, but the interface between them was available just to the vendor’s engineers. Now in theory, one company (or open source community) could provide the control software, while another company could provide the data forwarding hardware (sometimes called a white box switch, presumably because the vendors are ready to dispense with their logos along with their profit margins).
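That separation can be sketched in a few lines of Python. This is only an illustration of the idea, not the OpenFlow protocol itself; the class and method names (DataPlane, ControlPlane, install_rule, and so on) are invented for the example.

```python
# A toy sketch (not the actual OpenFlow wire protocol) of the split
# described above: control software computes match->action rules, while
# a "white box" data plane merely applies them to packets.

class DataPlane:
    """Dumb forwarding element: applies whatever rules it is given."""
    def __init__(self):
        self.flow_table = {}  # match (dst IP) -> action (output port)

    def install_rule(self, dst_ip, out_port):
        # The open interface: the control plane pushes rules down here.
        self.flow_table[dst_ip] = out_port

    def forward(self, dst_ip):
        # Look up the packet's destination; None means "no matching rule"
        # (drop it, or punt to the controller in a real OpenFlow switch).
        return self.flow_table.get(dst_ip)

class ControlPlane:
    """Separate software that computes routes and programs the switch."""
    def __init__(self, switch):
        self.switch = switch

    def push_routes(self, routes):
        for dst_ip, port in routes.items():
            self.switch.install_rule(dst_ip, port)

switch = DataPlane()
controller = ControlPlane(switch)          # could come from another vendor
controller.push_routes({"10.0.0.1": 1, "10.0.0.2": 2})
assert switch.forward("10.0.0.1") == 1     # forwarded out port 1
assert switch.forward("10.9.9.9") is None  # no rule installed
```

Because the interface between the two classes is published, either half could in principle be swapped out independently, which is exactly the commercial promise (and committee risk) described above.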
What problems might be solved by opening this interface between control and data? For academics interested in networking, it could allow them to research and implement new control software, without the expense and effort of building custom hardware. For the broader industry, perhaps the separation could promote competition, encouraging faster feature development and lowering component costs. Then again, inserting a committee into the middle of router architecture could actually slow down innovation and increase integration costs. Time will tell.
Meanwhile, another technique emerged, also lumped under the term software-defined networking. Rather than encouraging networking software to poke around in the switching hardware, this new approach instead endeavors to make networking functions independent of the underlying hardware. Because this approach resembles and parallels the way virtual machines are independent of server hardware, it’s sometimes called virtual networking. And because this virtual network conceptually rides above the physical network, it’s sometimes called an overlay software-defined network.
This approach solves a practical problem when virtual machines comprising a particular application are physically spread across a large data center, yet want to share a private network with little or no accessibility to the rest of the world. To set up such a private network, administrators can’t run dedicated cables to these machines, or even manually configure a virtual LAN. But now with an overlay SDN, the required virtual network can be organized among the relevant hypervisors as quickly and automatically as starting the virtual machines.
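One common way such overlays are built in practice is VXLAN-style encapsulation (RFC 7348), in which each hypervisor wraps a tenant's frames in a header carrying a 24-bit virtual network identifier. The sketch below illustrates only that idea; the field layout is abbreviated, and a real packet would also carry outer Ethernet/IP/UDP headers addressed to the remote hypervisor's physical NIC.

```python
# Simplified VXLAN-flavored encapsulation: the overlay rides above the
# physical network by wrapping tenant frames in an extra header.
import struct

def encapsulate(inner_frame, vni):
    # Prepend an 8-byte header: 1 flag byte, 3 reserved bytes, then the
    # 24-bit VNI plus 1 reserved byte (hence the shift by 8).
    flags = 0x08  # "VNI present" flag
    header = struct.pack("!B3xI", flags, vni << 8)
    return header + inner_frame

def decapsulate(outer):
    # Recover the virtual network ID and the original tenant frame.
    flags, vni_field = struct.unpack("!B3xI", outer[:8])
    return vni_field >> 8, outer[8:]

frame = b"tenant traffic"
wire = encapsulate(frame, 5001)        # VNI 5001: one tenant's overlay
assert decapsulate(wire) == (5001, frame)
```

Because the VNI travels with every packet, two tenants can reuse the same private IP addresses without ever seeing each other's traffic.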
So overlay SDNs are convenient, but their power really shines in light of recent events, notably the Target breach. Through that breach, criminals eventually stole millions of customer credit card numbers, but the initial penetration arose when a visiting technician infected computers controlling the air conditioning (the cyber-heist equivalent of burglars crawling through air ducts to access a vault).
In the technology world, much discussion ensued about whether Target’s cybersecurity systems had detected the ongoing hack, and whether the resulting alarms had been ignored. But such discussions skip a more fundamental question: Why could the air conditioning computers talk to the credit card computers, anyway?
The traditional data center design assumed that if a computer was behind the firewall, it was “one of ours” and thus safe. By default, every computer could talk to every other computer. But a modern enterprise runs thousands of diverse applications, many of which have some connection to the outside world (because that’s where customers and suppliers are). What’s the probability that at least one of those applications has been compromised? Near certainty. So the new default should be that nobody can talk to anybody unless it’s specifically allowed. Where can examples of such a design be found?
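The “default deny” stance amounts to a simple rule: connectivity exists only where policy explicitly allows it. A minimal sketch, with hypothetical application names:

```python
# Default deny: the allowlist is the only source of connectivity.
# Application names here are invented for illustration.
ALLOWED = {
    ("web-frontend", "payment-db"),  # the frontend may reach the payment DB
    ("payment-db", "audit-log"),     # the payment DB may write audit records
}

def may_talk(src, dst):
    # Nobody talks to anybody unless it's specifically allowed.
    return (src, dst) in ALLOWED

assert may_talk("web-frontend", "payment-db")
# The Target scenario: the HVAC system has no rule granting it access,
# so it simply cannot reach the credit card systems, alarms or no alarms.
assert not may_talk("hvac-controller", "payment-db")
```

The point is not the lookup itself but the default: absence of a rule means absence of connectivity, the inverse of the traditional firewall-perimeter assumption.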
To find an answer, look to the clouds. The public clouds, that is. As a public utility, they must assume that their tenants want absolutely nothing to do with each other. For instance, Amazon Web Services creates a virtual private cloud for each tenant, each with its own isolated private IP addresses, and only limited external access as allowed by the tenant. How are these virtual private clouds created? AWS hasn’t revealed the details of its homegrown software, but clearly they’re a form of overlay SDN.
As enterprise data centers evolve into private clouds, each application can be given its own private overlay SDN (VMware calls this micro-segmentation in its environments; my company, PLUMgrid, calls these virtual domains in OpenStack environments). In fact, as Moore’s Law continues to advance, every packet traversing the data center can be encapsulated and encrypted, entirely opaque to the outside physical network.
And thus the contranym: SDN has come to mean two things that aren’t just different; they’re in many ways opposite. The switch-based SDN approach envisions software busily rearranging underlying hardware in all sorts of interesting ways. (The image of Lily Tomlin at her operator’s switchboard comes to mind.) In contrast, the overlay SDN approach envisions applications communicating over physical networks without revealing any traces at all.
Larry Lang is president and CEO of PLUMgrid Inc. (www.plumgrid.com).
Edited by Kyle Piscioniere