GOING WITH THE FLOW
MPLS And Quality Of Service In Next-Generation Networks
BY JEFF LAWRENCE
There is a quiet transformation occurring within the telecommunications network
infrastructure. At the moment, equipment manufacturers are focusing on it, but the
transformation is not really visible to users yet. Soon, however, it will be apparent to
all of the end users, businesses, and organizations that use the network.
This transformation is occurring as the network moves towards a packet- and cell-based
infrastructure. Whether the core backbone of the next-generation network will be IP based
or ATM based is still undecided, but there is one common thread that has gained a great
deal of momentum over the past year or so. Multi-Protocol Label Switching (MPLS) will play
an important role in the routing, switching, and forwarding of packets through the next-generation network.
Packets and cells introduce uncertainty into what, until recently, had been a fairly
deterministic network. Delay, jitter, and potential information loss now become serious
issues that must be addressed to ensure that the appropriate quality of service is
available for a wide range of network users. This situation will become increasingly
troublesome as the number of users and the volume of traffic continue to increase by a few
hundred percent per year.
Understanding Traffic Patterns
Recent studies on usage and traffic patterns within the Internet have observed that these
patterns exhibit the characteristics of fractals. This means that traffic looks statistically
the same whether it is observed at a macroscopic level or a microscopic level (only the scale
of the axes changes). Variations in traffic patterns arise because of varying user
patterns and varying traffic itself (since traffic can be in the form of short e-mail
messages, continuous video streams, long file transfers as part of Web browsing, or
voice). Understanding these patterns and building an infrastructure that can provide the
high bandwidth, low delay, low jitter, and scalability in the face of these patterns will
be critical to ensuring the success of existing and new services that will be offered over
the next-generation network infrastructure.
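The self-similarity claim can be illustrated with a toy simulation (not from the article): a heavy-tailed "on/off" source stays bursty even when its traffic is summed over large time windows, while memoryless Poisson-style traffic smooths out under aggregation. All function names and parameters below are invented for illustration.

```python
import random
import statistics

def poisson_counts(n, lam, rng):
    # memoryless traffic: arrivals per slot approximated by many Bernoulli trials
    return [sum(rng.random() < lam / 100 for _ in range(100)) for _ in range(n)]

def pareto_onoff_counts(n, rng, alpha=1.2):
    # heavy-tailed ON/OFF periods (Pareto-distributed) produce burstiness
    # at every time scale -- the fractal-like behavior the studies observed
    counts, i = [0] * n, 0
    while i < n:
        burst = min(int(rng.paretovariate(alpha)), n - i)  # ON period length
        for j in range(burst):
            counts[i + j] = 1
        i += burst + int(rng.paretovariate(alpha))         # OFF period length
    return counts

def aggregate(xs, m):
    # sum the series in non-overlapping blocks of m slots (zoom out by m)
    return [sum(xs[i:i + m]) for i in range(0, len(xs) - m + 1, m)]

def cov(xs):
    # coefficient of variation: burstiness as stdev relative to the mean
    mu = statistics.mean(xs)
    return statistics.pstdev(xs) / mu if mu else 0.0
```

Aggregating the Poisson series over 100 slots collapses its coefficient of variation by roughly a factor of ten, while the heavy-tailed series remains markedly bursty at the same scale.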
Nevertheless, the evolution of network models to support high throughput, low
delay, low jitter, and scalability is a work in progress. The simplest way to ensure a
high Quality of Service (QoS) is to engineer the network so that it has sufficient
capacity in the form of processors, buffers, and high-speed links to process, store, and
move packets through the network. If the capacity is sufficiently large, then, by
implication, information will move quickly and with minimal delay and jitter through the network.
The downside of this network model may be that it is inefficient and potentially
expensive to build. Some recently deployed networks are initially relying on their excess
capacity to provide fairly high levels of QoS, but as their traffic increases, they will
have to depend on additional mechanisms to maintain the QoS. These mechanisms fall into
the realm of traffic engineering.
The key concept in traffic engineering is that the traffic and information flowing between
applications can be differentiated (for example, voice, video, e-mail, and Web browsing)
and moved through the network with different levels of service. Traffic engineering uses
either reservation-based or reservationless mechanisms.
- Reservation-based mechanisms: These mechanisms assume that there is a certain
amount of capacity available for each type of service. Capacity is reserved on an
as-needed basis from processors, buffers, and links.
- Reservationless mechanisms: These mechanisms do not reserve capacity; instead,
they assign different priorities to traffic and information flows. As capacity is used up,
higher priority traffic and flows are maintained, and the service provided to the lower
priority flows is degraded.
In both cases, if the network cannot provide the required level of service, then it may
use mechanisms to prevent additional traffic from entering the network so that the already
existing traffic and flows are not impacted.
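As a rough sketch of the reservationless case, consider a priority queue with a finite buffer: when capacity runs out, the lowest-priority packet is the one degraded (here, simply dropped). The class and its API are hypothetical, not any real router's implementation.

```python
import heapq

class PriorityScheduler:
    """Reservationless QoS sketch: higher-priority packets are served first,
    and when the buffer is full the lowest-priority packet is dropped
    (a crude stand-in for blocking new traffic to protect existing flows)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []        # (priority, seq, packet); 0 = highest priority
        self.seq = 0          # tie-breaker preserving arrival order
        self.dropped = []

    def enqueue(self, priority, packet):
        if len(self.heap) >= self.capacity:
            worst = max(self.heap)           # lowest-priority entry in buffer
            if worst[0] <= priority:
                self.dropped.append(packet)  # newcomer is no better: reject it
                return
            self.heap.remove(worst)          # evict to make room
            heapq.heapify(self.heap)
            self.dropped.append(worst[2])
        heapq.heappush(self.heap, (priority, self.seq, packet))
        self.seq += 1

    def dequeue(self):
        # serve the highest-priority packet first
        return heapq.heappop(self.heap)[2] if self.heap else None
```

With a two-packet buffer, enqueuing voice (priority 0) after e-mail (2) and video (1) evicts the e-mail packet, so voice and video are the flows that survive.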
Supporting QoS At The Routing Level
The nuts and bolts of supporting the different QoS mechanisms lie at the level of
packet routing, switching, and forwarding. There are two
fundamentally different ways to route packets between a source and a destination:
hop-by-hop routing and source routing. Each approach has
advantages and disadvantages.
- Hop-by-hop routing: The routers perform significant processing as packets are
received, examined, and forwarded on to the next hop towards the destination.
Unfortunately, as the traffic increases, these routers can become bottlenecks, and it
becomes a challenge to scale to large networks.
- Source routing: This alternative to hop-by-hop routing presents other
challenges. It introduces additional complexity and setup delay because it requires that a
connection be established between the source and destination.
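The contrast between the two approaches can be sketched in a few lines: hop-by-hop forwarding consults a table at every router, while source routing carries the precomputed path in the packet itself. The topology and table contents below are invented for illustration.

```python
# hypothetical forwarding tables: router -> {destination: next hop}
TABLES = {
    "A": {"E": "B"},
    "B": {"E": "C"},
    "C": {"E": "E"},
}

def hop_by_hop(tables, src, dst):
    """Each router makes an independent forwarding decision, so every
    hop pays the processing cost of its own table lookup."""
    path, node = [src], src
    while node != dst:
        node = tables[node][dst]     # lookup repeated at every router
        path.append(node)
    return path

def source_route(packet):
    """The source pre-computes the whole path and stamps it in the
    packet header; transit routers just pop the next hop. The price
    is the setup work of establishing the path up front."""
    path = [packet["route"][0]]
    header = packet["route"][1:]
    while header:
        path.append(header.pop(0))   # no per-hop table lookup in transit
    return path
```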
There are several possible solutions that address the bandwidth, latency, and scalability
issues mentioned earlier. These solutions include: layer 3 switching, layer 2 switching,
cut-through layer 2 switching, and cut-through layer 3 switching. The differences among
these approaches result from 1) whether the routing and forwarding occur at the data link
level or the packet level and 2) whether a flow is identified locally or end-to-end.
The various approaches address the challenges from different angles, and until recently
there has been no clearly discernible winner. But lately, momentum has been building
rapidly among network operators and equipment manufacturers to use MPLS as the mechanism
of choice to manage traffic flows in IP-, ATM-, and frame relay-based networks. MPLS
combines various aspects of the different approaches to successfully provide efficient and
scalable routing of packets and cells through what will be the next-generation network infrastructure.
MULTI-PROTOCOL LABEL SWITCHING
What is MPLS? It's a protocol, specified by the Internet Engineering Task Force (IETF), that
provides for the efficient designation, routing, forwarding, and switching of traffic
flows through the network. MPLS can manage traffic flows of various granularities, such as
flows between different pieces of hardware or even flows between different applications.
MPLS is independent of the layer 2 and layer 3 protocols. It provides a means to map IP
addresses to simple, fixed-length labels used by different packet-forwarding and
packet-switching technologies. MPLS can interface into existing routing protocols such as
RSVP and OSPF, and it can support the IP, ATM, and frame relay layer 2 protocols.
The layer 3 protocol forwards the first few packets of a flow. As the flow is
identified and classified (based on various QoS requirements), a series of layer 2
high-speed switching paths are set up between the routers along the path between the
source and destination of the flow. The layer 2 switching paths are established by
assigning labels to each link connecting the routers.
Associating these labels within each router and binding these labels to each other
across the entire path of the flow is performed by a simple signaling protocol. The label
assignment can be topology driven (for example, between source and destination devices),
flow driven (say, via RSVP), or control driven (policy based, perhaps). High-speed
switching is possible because the fixed-length labels (also known as tags) are inserted at
the very beginning of the packet or cell and can be used by hardware to quickly switch
packets between links.
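A minimal sketch of this label-swapping idea follows; the router names, the label values 17 and 42, and the table layout are all hypothetical. The ingress router classifies the flow and pushes a label, the core router swaps labels with a single table lookup (no IP processing), and the egress router pops the label.

```python
# a hypothetical three-router label-switched path: ingress -> core -> egress
# each LSR maps an incoming label to (outgoing label, next router)
LSR_TABLES = {
    "ingress": {None: (17, "core")},   # push label 17 on the classified flow
    "core":    {17: (42, "egress")},   # swap 17 -> 42, no layer 3 lookup
    "egress":  {42: (None, None)},     # pop the label, resume IP forwarding
}

def forward(packet, router):
    """One label operation at one label-switching router: push, swap, or pop."""
    out_label, next_hop = LSR_TABLES[router][packet["label"]]
    packet["label"] = out_label
    return next_hop

def traverse(packet):
    """Carry a packet along the label-switched path, recording each hop."""
    hops, router = ["ingress"], "ingress"
    while True:
        router = forward(packet, router)
        if router is None:
            return hops
        hops.append(router)
```

Because the fixed-length label sits at the front of the packet, the core lookup is a single exact-match table access, which is what makes a hardware implementation fast.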
MPLS allows what is known as "ships in the night" operation. That is, MPLS
can be introduced into a network without impacting the existing operation of other
routing, switching, and forwarding protocols within the network. This will allow for the
gradual deployment of MPLS without having to replace the network infrastructure all at once.
Activity is underway to monitor and control MPLS networks by integrating some of the
MPLS signaling with SS7 signaling so that network operators can manage the flows
associated with their voice and data traffic more effectively, use their core and access
resources more efficiently, and provide integrated management of their different networks.
Future applications and services must be aligned with developments in the underlying
network infrastructure to take full advantage of its capabilities. In the future, MPLS and
other associated protocols will define, to some extent, the network's capabilities.
Understanding these capabilities will ensure that, as services are developed and deployed,
they will behave as expected and not present any unwelcome surprises to users and service
providers. In the future, applications running on the network will be able to identify the
types of flows that are generated and to take maximum advantage of the network's
ability to manage its resources efficiently and cost-effectively.
Jeff Lawrence is president and CEO of Trillium Digital Systems, a leading provider
of communications software solutions for computer and communications equipment
manufacturers. Trillium develops, licenses, and supports standards-based communications
software solutions for SS7, ATM, ISDN, frame relay, V5, IP, and X.25/X.75 technologies.
For more information, visit the company's Web site at www.trillium.com.