Feature Article
May 2001

 

Rethinking Packet Switching

BY FAIZEL LAKHANI

As network traffic doubles between two and eight times per year, service providers have raced to meet skyrocketing capacity demands. In the race to handle this burgeoning traffic, however, they have been unable to ensure the economic viability of their networks. Today's networks demand scale and capacity, but the means of earning a return on the huge investments in modern network infrastructure have received little consideration.

Optical communications, in its very essence, is about efficient use of the spectrum to transmit the highest possible bit rates at the lowest cost. However, today's network spectrum, a mix of optical and packet technologies, is not packed efficiently. As existing QoS models and multiplexing technologies such as ATM and MPLS prove unable, by themselves, to deliver the investment returns carriers are seeking, focus must shift to the packet layer. Intelligent packet switching combined with optical networking technologies holds promise for a new, dynamic, high-bandwidth communications infrastructure, one equipped to take service providers into the future. Optimal solutions may use circuits to create capacity and packet technologies to make the most efficient use of that capacity.

Before considering the benefits of an intelligent packet layer from a service provider's perspective, let us explore the state of networks today.

QoS: A Brief History
Models for providing QoS have been developed over the past decade but have found only limited success. Early approaches included qualitative QoS differentiation at the IP level and an integrated services model (for flows) with underlying RSVP control. Simultaneously, ATM evolved as a new unifying multiplexing technology. It became the de facto standard for carrying IP traffic, providing some of the QoS necessary for traffic engineering and predictable network connectivity. Shortly afterward, MPLS, with RSVP extended to perform connection setup and resource reservation, was proposed to provide similar QoS performance. While this approach was seen as reasonable for a small number of paths, questions persist as to whether MPLS can scale effectively.

Over the last three years, in concert with the efforts to build QoS around MPLS, the industry has moved toward a differentiated services model for IP flows. DiffServ, whose per-hop behaviors deal with aggregate flows, can be seen as part of this evolution. It was introduced to address the intrinsic limitations of the integrated services model, offering low signaling overhead.

The DiffServ architecture is centered on the concept of per-hop forwarding behaviors, complemented by packet classification and traffic conditioning functions. Its significant merits include the avoidance of signaling overhead and the preservation of the connectionless nature of IP networks. Unfortunately, while forwarding decisions are made at the packet level, resource allocation and the associated policing actions are made only at the aggregate level. As such, no strict QoS guarantees can be made at the flow level, and DiffServ falls short of providing the level of QoS required in modern networks.
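To make the classification-and-marking step concrete, here is a minimal sketch, an illustration rather than anything from this article, that maps traffic to standard DSCP codepoints and stamps them on an outbound socket via the IP header's ToS byte. The port-to-class mapping is a hypothetical example of an edge classifier rule set.

```python
import socket

# DSCP codepoints for a few standard per-hop behaviors (RFC 2474/2597/3246).
DSCP_EF   = 46  # Expedited Forwarding: low-loss, low-latency traffic
DSCP_AF41 = 34  # Assured Forwarding class 4, low drop precedence
DSCP_BE   = 0   # Best effort (default)

def classify(dst_port: int) -> int:
    """Toy edge classifier: map traffic to a DSCP by destination port.
    Real classifiers use multi-field rules (addresses, ports, protocol)."""
    if dst_port == 5060:       # e.g., VoIP signaling (hypothetical rule)
        return DSCP_EF
    if dst_port in (80, 443):  # e.g., web traffic (hypothetical rule)
        return DSCP_AF41
    return DSCP_BE

# Mark an outbound socket: the DSCP occupies the upper six bits of the
# ToS byte, so it is shifted left by two.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, classify(5060) << 2)
```

Note that everything after this marking is a per-hop decision made on the aggregate class, which is precisely why no flow-level guarantee follows from it.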

In examining the evolution of QoS models, two paradigms have consistently been in opposition: the connection-oriented approach with out-of-band signaling, such as IntServ, and the connectionless approach, such as DiffServ. Today, it is apparent that neither has solved the fundamental problem of reliable, scalable, guaranteed QoS delivery.

Multiplexing Technologies: Pros And Cons
The last decade saw the emergence of two major statistical multiplexing technologies: ATM and MPLS. ATM grew rapidly on the strength of its promise as a universal unifying technology and has evolved into a core technology in today's networks. MPLS's market acceptance is following the same trajectory as ATM's.

MPLS evolved from the search for better performance in routing lookups and the desire to replace IP-over-ATM overlay models with more scalable, integrated peer models. At first, MPLS was synonymous with using IP control to manage ATM switches; it is now associated chiefly with its traffic engineering capabilities.

From a practical perspective, MPLS is no more than a unifying multiplexing technology for a variety of services. While MPLS solves some of the challenges of traffic engineering, it fails to fully address the challenges of reliability, resource utilization, scalability, and QoS. A shift is required that combines the benefits of out-of-band signaling with the performance advantages of routed networks. The telecommunications world has always been associated with connection-oriented paradigms, synonymous with reliability, QoS guarantees, and overall rigidity. IP networks and the Internet, by contrast, have been associated with connectionless paradigms, synonymous with, and dependent upon, inherent flexibility.

Certainly MPLS, as a connection-oriented technology, solves some of the problems in the Internet today. But it is probably not sufficient. For the Internet to scale, flexibility must be restored to it and its underlying protocols.

Intelligent Packet Switching: The Engine Behind The Network
In the near term, both circuits and packets must be managed in networks. This necessitates reconfiguring network pipes (wavelengths) to reflect shifts in aggregate traffic flows. Optical switching provides this function by moving lambdas of light. The second requirement is intelligent packing of the pipes provided by the optical networking layer.

Intelligent packet switches will enable better utilization of network capacity and better allocation of resources among traffic flows. These twins of efficiency and intelligence will enable carriers to more fully utilize their networks and offer true service level agreements, key elements in a viable profit model.

Present-day packet switches use only a fraction of available bandwidth. Symptomatic of this poor utilization is the presence of "hot spots," or localized network congestion. Today's routers are unable to react intelligently, simply dropping packets when the network becomes crowded. Thus, while some network resources suffer congestion, others may sit underutilized. Present-day routers are also unable to adequately distinguish between traffic flows when allocating network resources: a highly time-sensitive packet may be held in a buffer, or worse, dropped, while less time-sensitive packets receive ample network resources. Aggregate treatment of traffic cannot fully resolve this problem, as two flows requiring the same transmission characteristics may still be treated unequally.
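As a toy illustration of the flow-aware treatment argued for here (my sketch, not a description of any vendor's scheduler), the following earliest-deadline-first queue transmits the packet with the tightest delay budget first, rather than letting a time-sensitive packet sit behind bulk traffic in a strict FIFO line:

```python
import heapq
import itertools
import time

class DeadlineQueue:
    """Toy earliest-deadline-first scheduler: packets carrying a tighter
    delay budget are transmitted first, instead of FIFO order."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker for equal deadlines

    def enqueue(self, packet, delay_budget_ms: float):
        deadline = time.monotonic() + delay_budget_ms / 1000.0
        heapq.heappush(self._heap, (deadline, next(self._seq), packet))

    def dequeue(self):
        deadline, _, packet = heapq.heappop(self._heap)
        if time.monotonic() > deadline:
            return None  # budget already blown; drop rather than deliver late
        return packet

q = DeadlineQueue()
q.enqueue(b"bulk transfer segment", delay_budget_ms=500.0)
q.enqueue(b"voice sample", delay_budget_ms=20.0)
assert q.dequeue() == b"voice sample"  # the tighter budget goes first
```

The design choice to drop an already-late packet, rather than forward it, reflects the point above: delivering a time-sensitive packet after its deadline wastes capacity that other flows could have used.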

Over-provisioning of bandwidth has been viewed as a solution to these problems stemming from the presence of dumb routers in the packet layer, yet it is not truly viable from an economic perspective. Hence, the next generation of intelligent packet switches needs to dynamically track and respond to the state of network traffic. Intelligent utilization of network resources can only be accomplished at the packet level: circuit switching can provide the pipes, but packing those pipes effectively demands intelligent packet switching.

Today the circuit and packet layers are complementary: the former provides bulk bandwidth pipes, together with the ability to reconfigure them, while the latter provides the packet-level switching required for intelligent handling of services for the end user. As technology limitations are overcome, network architectures will evolve toward convergence, not only of optical and electrical technologies, but also of the packet and circuit functions themselves.

Reliable Service Levels
Current protocols and technologies are not sufficient to support service levels in the context of a global IP network. Until such technology exists, the market will be forced into selling commodity IP bandwidth, and operational costs will continue to consume a sizable portion of carrier revenues.

The industry requires the ability to provide flows and paths in a connectionless world. This would allow for network-wide service levels with hard guarantees, since explicit paths would be established in the network.

The challenges presented by connection technologies center on complex, overly rich signaling systems. The processing burden of signaling has stalled the ability to scale: ATM switches have peaked at processing hundreds to thousands of connection requests, never approaching the goal of millions of flows per ten-gigabit interface that IP networks demand. Solutions such as LANE and MPOA illustrate the failure of complex circuit signaling mechanisms, which are weighed down by centralized, heavyweight processing.

What is required is a simpler, lighter-weight signaling mechanism that does not begin by trying to guarantee against failure, but rather starts with the assumption of success. This assumption is critical, since congestion or failure is an infrequent occurrence. Such signaling must be distributed, not centralized, to guarantee scale. Capacity is not just the capacity to process packets or create flows; management and control (routing) must also scale linearly with packet processing.
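A minimal sketch of what such optimistic, in-band setup might look like follows; it is my illustration under stated assumptions, not Caspian's design. The first packet of a flow carries its resource request, each hop admits the flow unless its link is actually congested, and no round-trip handshake precedes data. The capacity figure and rates are hypothetical, and in practice the per-flow state would be soft, expiring unless refreshed by later packets (omitted here for brevity).

```python
CAPACITY = 100.0  # per-link capacity in Mb/s (hypothetical units)

class Hop:
    def __init__(self):
        self.reserved = 0.0
        self.flows = {}  # flow_id -> reserved rate

    def forward(self, flow_id, rate):
        if flow_id not in self.flows:
            # Assume success: admit unless the link is actually congested.
            if self.reserved + rate > CAPACITY:
                return False  # the rare case: signal failure back
            self.flows[flow_id] = rate
            self.reserved += rate
        return True

def send(path, flow_id, rate):
    """The first packet sets up state hop by hop as it travels."""
    return all(hop.forward(flow_id, rate) for hop in path)

path = [Hop(), Hop(), Hop()]
assert send(path, flow_id=1, rate=40.0)      # admitted at every hop
assert send(path, flow_id=2, rate=40.0)
assert not send(path, flow_id=3, rate=40.0)  # congestion: rejected
```

Because admission is decided independently at each hop, the mechanism is fully distributed, and the common case costs nothing beyond forwarding the packet itself.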

Distributing intelligence that scales with Moore's Law on the interfaces, and distributing processing that scales with the addition of processing elements, enables intelligent packet switching to scale alongside optical networks.

Routing protocols are effective at providing reachability, but they carry well-known challenges, including reconvergence of route tables after a failure, efficient loading of resources, and robustness of the protocol itself. These shortfalls require a new approach to computing available paths in the network, one that does not depend on the next-hop route. Rather, routing algorithms need to provide paths through the network and then rely on intelligence at the packet layer to direct and redirect packets in real time. This does not mean that IS-IS or BGP need to be eliminated; instead, the information from these protocols needs to be acted on in a different manner than it is today.
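One way to read "provide paths rather than next hops" is to precompute a primary path and a link-disjoint alternate, so the packet layer can redirect traffic instantly on congestion or failure instead of waiting for the routing protocol to reconverge. The sketch below is my illustration of that idea, using a plain Dijkstra computation over a hypothetical topology:

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest path by link cost; graph: node -> {neighbor: cost}."""
    dist, prev, seen = {src: 0.0}, {}, set()
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, cost in graph.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def primary_and_alternate(graph, src, dst):
    """Precompute a primary path, then an alternate avoiding its links."""
    primary = dijkstra(graph, src, dst)
    if primary is None:
        return None, None
    pruned = {u: dict(nbrs) for u, nbrs in graph.items()}
    for u, v in zip(primary, primary[1:]):
        pruned[u].pop(v, None)      # remove the link in both directions
        pruned.get(v, {}).pop(u, None)
    return primary, dijkstra(pruned, src, dst)

topology = {  # hypothetical four-node network with link costs
    "A": {"B": 1, "C": 2},
    "B": {"A": 1, "D": 1},
    "C": {"A": 2, "D": 2},
    "D": {"B": 1, "C": 2},
}
print(primary_and_alternate(topology, "A", "D"))
# -> (['A', 'B', 'D'], ['A', 'C', 'D'])
```

The link-state database that IS-IS already distributes is sufficient input for this kind of computation, which is the sense in which existing protocols need not be eliminated, only used differently.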

Providing The End-To-End Guarantee
The only way to provide end-to-end guarantees is to create an end-to-end flow, and there is little disagreement in the industry that the current technologies capable of creating such flows are limited to ATM, MPLS, Frame Relay, and IntServ. Even estimating a delay bound for a DiffServ network has proven difficult except when the network is very lightly loaded.
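The light-load caveat can be illustrated with the classic M/M/1 queueing formula, a standard textbook result used here purely as an illustration, with hypothetical rates: the mean delay through a single queue is 1/(mu - lambda), which explodes as utilization approaches one, leaving no useful bound to promise.

```python
# Mean delay in an M/M/1 queue: 1 / (mu - lam), where mu is the service
# rate and lam the arrival rate. As rho = lam / mu -> 1, delay diverges.

def mm1_mean_delay(service_rate_pps: float, arrival_rate_pps: float) -> float:
    if arrival_rate_pps >= service_rate_pps:
        return float("inf")  # queue is unstable: no finite delay exists
    return 1.0 / (service_rate_pps - arrival_rate_pps)

mu = 1_000_000.0  # packets/second for one interface (hypothetical figure)
for rho in (0.1, 0.5, 0.9, 0.99):
    delay_us = mm1_mean_delay(mu, rho * mu) * 1e6
    print(f"rho={rho:.2f}: mean delay = {delay_us:.1f} us")
# rho=0.10 -> 1.1 us; rho=0.50 -> 2.0 us; rho=0.90 -> 10.0 us; rho=0.99 -> 100.0 us
```

A hundred-fold swing in delay between ten percent and ninety-nine percent utilization is exactly why aggregate-level bounds hold only when the network is kept lightly loaded, which is to say, over-provisioned.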

Network operators are looking for mechanisms to help them move beyond the commodity business of IP, where the delay and uptime metrics of SLAs are the only possible differentiators. They need to migrate to a mechanism that provides graduated, evolvable levels of performance. This is achievable today with ATM and Frame Relay networks, but not at the rates of today's IP networks, which are orders of magnitude greater per interface than the best ATM or Frame Relay equipment can handle.

When service providers have equipment that allows them to offer guaranteed service levels and command premium prices for premium services, they will have a better chance of generating returns on their IP network investments. Network equipment providers therefore need to refocus on how to generate revenues for their service provider clients. Capacity and scale are table stakes for any vendor entering a network, but value beyond raw capacity is what will foster differentiation and profits.

Faizel Lakhani is vice president of network solutions at Caspian Networks, a company currently developing an optical packet core switch for service provider networks.
