Content Routing - The Evolution and Virtualization of the Network

By Frank Yue, Technical Marketing Manager  |  September 03, 2013

Technologies to move traffic back and forth on the Internet have evolved rapidly. How networks direct and steer traffic continues to change as technologies and solutions converge to manage content instead of packets.

Devices have been routing IP packets since the mid-1970s. In the 1980s, local networks used shared 10-megabit Ethernet hub technology, with FDDI (100 megabits per second) in the core. Devices determined the destination of a packet from its header, which sat at the beginning of the frame and identified which device sent the packet and which device was supposed to receive it. At the local level (layer 2), Ethernet identifies devices by their MAC addresses.
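
As a rough illustration, a layer 2 device needs only the first 14 bytes of an Ethernet frame to find both addresses. This is a minimal Python sketch, not production parsing code; the sample frame contents are invented:

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Pull the layer 2 addressing out of the first 14 bytes of a frame."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    as_mac = lambda raw: ":".join(f"{b:02x}" for b in raw)
    return as_mac(dst), as_mac(src), hex(ethertype)

# Example: a broadcast frame carrying IPv4 (EtherType 0x0800).
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload..."
print(parse_ethernet_header(frame))
# ('ff:ff:ff:ff:ff:ff', '00:11:22:33:44:55', '0x800')
```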

Shared Ethernet using 10Base2 (coaxial cable, also known as thinnet) and 10BaseT (twisted pair, which became the de facto standard for copper Ethernet cabling) was common in local area networks. Any traffic sent or received on such a network was seen and shared by every device on it, so a 10Mb network could support only 10Mb of traffic per second in total, no matter which devices were sending or receiving. The more devices on the network, the more congestion there was.

In 1989-1990, a new technology was introduced: devices that learned the MAC addresses on their Ethernet ports and sent traffic out only the ports associated with those addresses. This was the birth of switched Ethernet. No longer would a PC or device on the network receive all the local traffic, even traffic not destined for it, which greatly reduced congestion. To support this, the Ethernet hubs (now switches) had to be able to learn layer 2 information and leverage it when making forwarding decisions.
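
The learn-and-forward behavior itself is simple. Here is a toy Python model of a learning switch; the class name, port count, and MAC strings are purely illustrative:

```python
from typing import Dict

class LearningSwitch:
    """Toy model of the MAC-learning behavior that switched Ethernet introduced."""

    def __init__(self, num_ports: int):
        self.mac_table: Dict[str, int] = {}  # MAC address -> port number
        self.num_ports = num_ports

    def handle_frame(self, src_mac: str, dst_mac: str, in_port: int):
        # Learn: remember which port the sender lives on.
        self.mac_table[src_mac] = in_port
        # Forward: send only to the learned port, or flood if unknown.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]

switch = LearningSwitch(num_ports=4)
switch.handle_frame("aa:aa", "bb:bb", in_port=0)         # bb:bb unknown -> flood ports 1-3
print(switch.handle_frame("bb:bb", "aa:aa", in_port=2))  # aa:aa learned -> [0]
```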

Routers became more powerful and hardware-based as well, making their forwarding decisions on layer 3 information (the IP address). Hardware-based routers upped the ante in the mid-1990s with the release of flow-based technology. Many Internet engineers know these flow technologies as statistics-collection protocols for understanding what kind of traffic is on the network. When they were created, though, it was to improve router performance; the statistics were an afterthought.

These flow technologies used a forwarding information base (FIB) that pushed local routing information from central management to each interface card in the router. The typical FIB stored source IP, destination IP, source port, destination port, and IP protocol, what we now call a 5-tuple. Internet engineers discovered that most traffic consists of TCP and UDP flows with large numbers of packets per flow, so one central lookup for the first packet yields a FIB match for every subsequent packet in that flow. The FIB introduced the ability to store layer 3, and ultimately layer 4, forwarding information at the interface, a capability that would be leveraged later.
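
The idea can be sketched in a few lines of Python. The names and the stand-in routing lookup here are assumptions for illustration, not any vendor's data structures:

```python
from typing import Dict, NamedTuple

class FiveTuple(NamedTuple):
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str  # "tcp" or "udp"

flow_cache: Dict[FiveTuple, int] = {}  # 5-tuple -> egress interface

def full_routing_lookup(dst_ip: str) -> int:
    # Stand-in for the expensive, centralized longest-prefix-match lookup.
    return hash(dst_ip) % 8

def forward(key: FiveTuple) -> int:
    """First packet of a flow pays the full lookup; the rest hit the cache."""
    egress = flow_cache.get(key)
    if egress is None:
        egress = full_routing_lookup(key.dst_ip)  # slow path, once per flow
        flow_cache[key] = egress                  # every later packet matches here
    return egress

pkt = FiveTuple("198.51.100.7", "203.0.113.9", 49200, 443, "tcp")
forward(pkt)  # slow path, populates the cache
forward(pkt)  # fast path, pure dictionary hit
```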

In the mid-to-late 1990s, several companies developed customized hardware solutions that did what we typically refer to as server load balancing (SLB): managing traffic to multiple servers on a per-application basis. And when we say application, we really mean TCP or UDP port number. To deliver the performance this layer 4 load balancing required, these companies designed FPGAs and memory models to store layer 4 information at each Ethernet port. They typically stored a 5-tuple in memory and could direct traffic to different ports and servers based on any characteristic within that 5-tuple. These devices were the first true layer 4 routers.
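
As an illustration, here is a hedged Python sketch of layer 4 load balancing. The virtual IPs, server pools, and hashing scheme are assumptions for the example, not any vendor's actual implementation:

```python
import hashlib

# Hypothetical virtual services: one VIP:port mapped to a pool of real servers.
pools = {
    ("10.0.0.1", 80):  ["192.168.1.10", "192.168.1.11", "192.168.1.12"],
    ("10.0.0.1", 443): ["192.168.2.10", "192.168.2.11"],
}

def pick_server(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    """Hash the 5-tuple so every packet of a connection lands on the same server."""
    pool = pools[(dst_ip, dst_port)]
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{proto}".encode()
    idx = int(hashlib.sha256(key).hexdigest(), 16) % len(pool)
    return pool[idx]

print(pick_server("203.0.113.5", 49152, "10.0.0.1", 80))
```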

Move forward a few years, and layer 4 was no longer enough. Multiple applications use the same layer 4 port, and servers host content based on the type of information, so a major step forward was necessary. Up to this point, all routing and forwarding decisions were based on content within the packet header, information laid out in specific formats defined by Internet standards. It is easy to create FPGAs and ASICs that parse a packet header, since the information is in a relatively fixed format.

When we break into the packet payload, identifying the key information becomes complex. Many different protocols need to be understood, such as FTP, SMTP, LDAP, RADIUS, DNS and more. Then there is HTTP, which is a world unto itself: there are many protocols within the protocol. I have claimed in the past that HTTP has become a higher-layer transport protocol. Application protocols such as XML, RTSP, SOAP, TR-069, and many others use HTTP as a convenient transport. It has become essential to look into this content to determine the application and route the data to the appropriate resource.
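
To make that concrete, here is an illustrative Python sketch of content-based routing over HTTP, where the request line and headers, not the port, decide which pool handles the request. The pool names and policy rules are invented for the example:

```python
def route_http_request(raw_request: bytes) -> str:
    """Pick a server pool from the HTTP request line and headers."""
    head = raw_request.split(b"\r\n\r\n", 1)[0].decode("latin-1")
    request_line, *header_lines = head.split("\r\n")
    method, path, _version = request_line.split(" ", 2)
    headers = dict(h.split(": ", 1) for h in header_lines if ": " in h)

    if headers.get("Content-Type", "").startswith("text/xml"):
        return "soap-pool"          # SOAP/XML traffic riding on HTTP
    if path.startswith("/video/"):
        return "streaming-pool"
    if method == "POST":
        return "app-pool"
    return "static-pool"

req = b"GET /video/clip.mp4 HTTP/1.1\r\nHost: example.com\r\n\r\n"
print(route_http_request(req))  # streaming-pool
```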

Hardware performance has improved so much since the 1990s that full layer 7 content routing is now possible at high speeds, with layer 7 inspection and decision making beyond 1 gigabit and even 10 gigabits per second.

Here is the really cool part of this discussion. Once a device is doing layer 7 content inspection and routing, it becomes possible to do much more than vanilla SLB. We can inspect content for potential buffer overflows before they hit the application server. Content can be cached and delivered to offload work from the servers. We can look for malicious content and identify its source. We can optimize the content being delivered through compression or content consolidation. Policy controls can be implemented based on client/server details and specific content. We can even insert additional functions into the application path for security and authentication.
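
For example, the buffer-overflow screening can be as simple as bounding field lengths before a request ever reaches the server. This Python sketch is purely illustrative; the limits and function names are assumptions, though real proxies expose equivalent knobs as configuration:

```python
# Hypothetical limits, chosen for illustration.
MAX_URI_LEN = 2048
MAX_HEADER_VALUE_LEN = 8192

def inspect_request(method: str, uri: str, headers: dict) -> bool:
    """Drop requests whose fields are long enough to threaten a fixed-size buffer."""
    if len(uri) > MAX_URI_LEN:
        return False
    if any(len(v) > MAX_HEADER_VALUE_LEN for v in headers.values()):
        return False
    return True

assert inspect_request("GET", "/index.html", {"Host": "example.com"})
assert not inspect_request("GET", "/x?" + "A" * 4096, {})  # oversized URI rejected
```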

Many of the new technologies we hear and talk about are based on layer 7 content inspection and routing. The IDS/IPS is a content router that looks for specific security signatures and acts upon what it sees (usually logging and dropping the malicious information). A UTM/NGFW looks at content from a client perspective and permits/blocks traffic associated with that client, usually from a security and policy perspective. A WAN acceleration device caches duplicate content and optimizes content and traffic for high latency and low speed circuits. The server load balancer is your original traditional layer 7 content router.

The communications service providers have acknowledged that they need to leverage the content within the data traversing their networks. There is value in managing that content, both as a value-added service and as an enabler for virtualizing the evolved packet core through technologies and strategies such as software-defined networking (SDN) and network functions virtualization (NFV). All of these solutions leverage the same base technology to deliver different functions. Ultimately, these niche solutions are going to disappear, and the base technology will encompass all of these functions in a platform sharing a common architecture, design, management and functionality.

Frank Yue is technical marketing manager at F5 Networks (www.f5.com).

Edited by Alisen Downey