Building Service-Aware
Access Networks
BY MARK VEIL
Extreme competitive business pressures and an insatiable consumer
appetite for anytime, anywhere access to content and services are driving
phenomenal advancements and innovation in networking and computing
technology. This is particularly true in the Internet protocol (IP)
application space. Today we work, live, and play on the public network --
e-business, streaming video, and voice-over-packet applications are
driving the convergence of multimedia applications to a common, ubiquitous
IP-based platform.
Delivering converged, IP-based communications requires a new breed of
access network, one that is not only engineered to deliver carrier-class
service, but also optimized for today's packet-based services. This
network must economically deliver these new services with the same, or
better, quality than the existing infrastructure. More importantly, it must
be "service aware" -- capable of applying differentiated
treatment and quality of service to traffic based upon the specific
requirements of the applications being delivered. The ultimate objective
is to manage, monitor, and control network traffic at the level of the
service. These requirements necessitate the implementation of robust
traffic management and traffic engineering services.
THE IP QoS CHALLENGE
Traditional IP networks operate on a connectionless,
"best-effort" basis, with all packets subject to equal treatment
as they are individually routed through the network on a hop-by-hop basis
to their destination. Packets from the same flow may traverse the network
over different paths, arrive at their destination out of sequence, and
have to be reordered. Additionally, some of these packets will be lost in
transit and have to be retransmitted, and contention for resources and
network processing and encoding overhead will slow each packet's journey.
These factors not only produce a cumulative delay, but they also
introduce an element of unpredictability that manifests itself as delay
variation. Moreover, IP's best-effort "fairness" translates to a
relative "unfairness" for traffic that is more sensitive to
network impairments. In times of heavy and/or prolonged network
congestion, such impairments would likely produce some irritation for the
Napster enthusiast experiencing longer-than-usual download times. However,
for the business engaged in mission-critical B2B transactions, or relying
upon packet-based voice services, the impact and repercussions are much
more severe.
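One consequence of hop-by-hop, best-effort delivery described above is that a receiver must buffer and reorder packets that arrive out of sequence. The following is a minimal sketch of that reordering step; the packet representation and sequence numbering are illustrative, not taken from any particular protocol.

```python
# Hypothetical sketch: receiver-side reordering of packets from a
# single flow that traversed different paths and arrived out of order.

def reorder(packets):
    """Release packets in sequence order, buffering out-of-order arrivals."""
    buffer = {}          # seq -> payload, held until the gap is filled
    next_seq = 0
    released = []
    for seq, payload in packets:
        buffer[seq] = payload
        # Drain every contiguous packet now available.
        while next_seq in buffer:
            released.append(buffer.pop(next_seq))
            next_seq += 1
    return released

# Packets arriving out of sequence over different paths:
arrivals = [(0, "a"), (2, "c"), (1, "b"), (3, "d")]
print(reorder(arrivals))  # -> ['a', 'b', 'c', 'd']
```

The time each packet spends waiting in such a buffer is one source of the delay variation (jitter) that best-effort delivery introduces.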
DELIVERING IP-BASED QOS
Quality of service (QoS) can be defined as the collective measure of
service levels delivered to the customer premises, characterized by
intrinsic behavioral properties and performance requirements.
Delivering QoS, and meeting customer-contracted Service Level Agreement (SLA)
obligations, requires the ability to manage and control the relevant
service performance attributes -- attributes such as latency, jitter,
average and peak packet rate, and packet loss ratios. By doing so, we can
ensure that availability and performance are delivered within acceptable or
contracted service bounds, and that premium or priority services are given
preferential treatment within the network.
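The SLA attributes named above can be computed directly from per-packet timestamps. Below is a hedged sketch of how latency, delay variation, and packet loss ratio might be derived; the sample timestamps and the simple max-minus-min jitter measure are illustrative assumptions, not a standardized measurement method.

```python
# Illustrative sketch: deriving SLA performance attributes (latency,
# jitter, packet loss ratio) from send/receive timestamps.

def sla_metrics(sent, received):
    """sent: {seq: t_sent}; received: {seq: t_recv} for packets that arrived."""
    delays = [received[s] - sent[s] for s in sorted(received)]
    latency = sum(delays) / len(delays)        # mean one-way delay
    jitter = max(delays) - min(delays)         # simple delay variation
    loss_ratio = 1 - len(received) / len(sent) # fraction of packets lost
    return latency, jitter, loss_ratio

sent = {0: 0.0, 1: 0.02, 2: 0.04, 3: 0.06}     # send times, seconds
received = {0: 0.050, 1: 0.075, 2: 0.095}      # packet 3 was lost
latency, jitter, loss = sla_metrics(sent, received)
print(round(latency, 3), round(jitter, 3), round(loss, 2))  # -> 0.053 0.005 0.25
```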
In the following sections we will examine traffic management and
traffic engineering concepts and emerging solutions for delivering QoS in
IP-based converged networks.
TRAFFIC MANAGEMENT
Traffic management is concerned with satisfying QoS performance
objectives, for both new and existing traffic flows, and protecting
against conditions that result in congestion and degradation of network
performance. For traffic management to achieve its objectives, network
elements must provide facilities for packet marking, traffic
classification, admission control, and traffic shaping/conditioning. We'll
consider each of these functions:
- Packet Marking -- Packets are annotated for a specific QoS
treatment, such as queuing priority or drop precedence.
- Packet Classification -- Classifiers map packets with the same
QoS requirements to specific outbound queues. Typically,
these traffic classifications are based upon the contents of the
packet header, such as the L2 and L3 source/destination address. In
practice, however, classifications may be derived from (and applied
to) a virtually unlimited range, combination, and granularity of
packet attributes -- including physical ingress port/interface,
application protocol type, and IPv4 Type of Service (ToS) or IPv6
Traffic Class markings.
- Admission Control -- This function ensures that the requested
traffic profile and QoS levels can be met with respect to current
network state, resource availability, or other policy-based
considerations prior to admitting the traffic flow.
- Traffic Shaping And Conditioning -- A variety of mechanisms
are used to monitor and maintain compliance with traffic profiles (or
traffic contracts). Metering services will monitor and measure traffic
against its profile, and pass packets along to the appropriate
policing mechanisms -- the queuing and dropping services.
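The metering and policing step in the list above is classically implemented with a token bucket, which checks traffic against a profile of sustained rate and burst size. The sketch below is a minimal, single-threaded illustration; the rate and burst parameters are invented for the example, and a real policer might remark out-of-profile packets rather than drop them.

```python
# Minimal token-bucket meter/policer: traffic is measured against a
# profile (rate, burst size), and out-of-profile packets are handed
# to a policing action (here, drop).

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0     # token fill rate, bytes per second
        self.burst = burst_bytes       # bucket depth (maximum burst)
        self.tokens = burst_bytes      # bucket starts full
        self.last = 0.0

    def conforms(self, size_bytes, now):
        """True if a packet of size_bytes is within profile at time now."""
        # Refill tokens for the elapsed interval, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return True                # in-profile: forward
        return False                   # out-of-profile: drop or remark

meter = TokenBucket(rate_bps=8000, burst_bytes=1500)  # 1 KB/s, 1500 B burst
arrivals = [(0.0, 1000), (0.1, 1000), (2.0, 1000)]    # (time, size in bytes)
print([meter.conforms(size, t) for t, size in arrivals])  # -> [True, False, True]
```

The second packet is rejected because only 100 bytes of tokens accrued in the 0.1 s since the first; by t = 2.0 the bucket has refilled and traffic conforms again.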
For delivering converged services within the resource-constrained
access network, it is generally accepted that traffic management services
should implement a fine granularity of control. The IETF's Integrated
Services model (IntServ) is well suited to this role, supporting explicit
service guarantees for priority services, providing admission control
based upon the Resource ReSerVation Protocol (RSVP, another IETF
initiative), and delivering flow-level control (at the level of the service).
Within the core of the network -- where the issue is not resource
availability, but traffic volume -- it is both acceptable and desirable to
employ a less granular approach to traffic management. In this
environment, attempting to maintain and process hundreds of thousands of
individual flows, flow states, and resource availability is unrealistic.
Another IETF initiative, Differentiated Services (DiffServ), provides many
of the traffic management benefits of IntServ without the signaling and
state management overhead; by aggregating large numbers of flows into
a few simple Behavior Aggregates (BAs), it provides an attractive solution
for the core network.
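The aggregation step can be sketched in a few lines: at the core edge, arbitrarily many flows collapse into a handful of Behavior Aggregates keyed only by the DSCP marking, so the core never tracks per-flow state. The EF and AF41 code points below are the standard DiffServ values; the classification rules and flow records are invented for illustration.

```python
# Illustrative sketch of DiffServ aggregation: many flows, few BAs.
EF, AF41, BEST_EFFORT = 46, 34, 0      # standard DSCP code points

def mark(flow):
    """Classify a flow to a Behavior Aggregate by application type."""
    if flow["app"] == "voice":
        return EF                      # expedited forwarding: low delay/jitter
    if flow["app"] in ("video", "b2b"):
        return AF41                    # assured forwarding
    return BEST_EFFORT                 # default per-hop behavior

flows = [{"app": "voice"}, {"app": "b2b"}, {"app": "web"}, {"app": "voice"}]
aggregates = {}
for f in flows:
    dscp = mark(f)
    aggregates[dscp] = aggregates.get(dscp, 0) + 1
print(aggregates)   # flow counts per BA -> {46: 2, 34: 1, 0: 1}
```

However many flows arrive, the core forwards on at most a few DSCP values, which is what makes the approach scale.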
TRAFFIC ENGINEERING
Once traffic is appropriately classified and groomed, traffic engineering
services must be applied to efficiently aggregate and map service flows onto
the existing network topology, controlling network behavior and optimizing
network utilization and traffic performance.
MPLS represents the best alternative for enabling traffic engineering
and QoS in heterogeneous public networks. Although originally intended
as a means to enhance routing performance, continued improvements in that
area have shifted the application focus of MPLS to its inherent
capabilities for delivering efficient and scalable traffic engineering and
QoS in IP-based networks. MPLS operates at Layer two-and-a-half (L2.5)
and is agnostic to the protocols above and below it. Its architecture is
based upon the multi-layer switch concept, which cleanly separates the
forwarding and control functions -- both of which are defined by MPLS.
The power of MPLS stems from its ability to associate any
type of user traffic with a particular Forwarding Equivalence Class (FEC).
Each FEC represents an aggregation of traffic that will be treated in the
same manner as it traverses the network. These FECs are then mapped to
Label Switched Paths (LSPs) that have been engineered to support specific
traffic QoS requirements -- guaranteed bandwidth or low latency, for
instance. LSPs behave in a similar fashion to the more familiar ATM
virtual circuits and Frame Relay Data Link Connection Identifiers (DLCIs),
but do so with much greater efficiency.
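The FEC-to-LSP binding described above can be sketched as a two-stage lookup at the ingress router: packets are bucketed into FECs by matching rules, and each FEC is bound to an LSP engineered for its QoS needs. The matching rules, addresses, and LSP names below are assumptions made for illustration only.

```python
# Hedged sketch: ingress classification of packets into FECs, and the
# binding of each FEC to an engineered LSP. Rules are hypothetical.

FEC_RULES = [
    ("voip", lambda pkt: pkt["dport"] == 5060),           # voice signaling
    ("premium_data", lambda pkt: pkt["dst"].startswith("10.1.")),
    ("default", lambda pkt: True),                        # catch-all
]

LSP_FOR_FEC = {
    "voip": "LSP-low-latency",          # engineered for low delay
    "premium_data": "LSP-guaranteed-bw",
    "default": "LSP-best-effort",
}

def bind(pkt):
    """Return the (FEC, LSP) pair for a packet; first matching rule wins."""
    for fec, match in FEC_RULES:
        if match(pkt):
            return fec, LSP_FOR_FEC[fec]

print(bind({"dst": "10.1.2.3", "dport": 80}))    # -> ('premium_data', 'LSP-guaranteed-bw')
print(bind({"dst": "192.0.2.9", "dport": 5060})) # -> ('voip', 'LSP-low-latency')
```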
Upon ingress to a particular MPLS domain, each packet is assigned a
label that represents its FEC/LSP binding as well as a shorthand
reference to the contents of the IP packet header. In sharp contrast to
today's "longest-match" routing paradigm, MPLS-equipped routers
are able to perform ultra-fast forwarding of IP packets via "exact
match" label swapping. Moreover, MPLS overcomes the inherent
limitations of traditional destination-based routing by supporting both
explicit and constraint-based approaches for establishing LSPs. This
capability allows network administrators to bypass potential points of
congestion, direct traffic away from the default path selected by
today's Interior Gateway Protocol (IGP)-based networks, and deliver precise
control over network traffic and behavior.
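The "exact match" forwarding described above amounts to a single table lookup at each label-switching router: an incoming label maps directly to an outgoing label and interface, with no prefix search. The sketch below traces a packet along a hypothetical three-hop LSP; the label values and interface names are invented for the example.

```python
# Minimal sketch of exact-match label swapping along one LSP.
# Each LSR holds a table: incoming label -> (outgoing label, interface).

LSR_TABLES = [
    {17: (22, "if-east")},    # LSR A: swap 17 -> 22
    {22: (35, "if-north")},   # LSR B: swap 22 -> 35
    {35: (None, "egress")},   # LSR C: pop the label, deliver the IP packet
]

def forward(label):
    """Trace a labeled packet hop by hop via exact-match swaps."""
    path = []
    for table in LSR_TABLES:
        label, iface = table[label]   # O(1) exact-match lookup
        path.append(iface)
    return path

print(forward(17))   # -> ['if-east', 'if-north', 'egress']
```

Each hop does a constant-time dictionary lookup, in contrast to the longest-prefix search a conventional router performs on the destination address at every hop.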
PUTTING IT ALL TOGETHER ... IP QOS FOR SERVICE-AWARE NETWORKS
For the resource-constrained local access loop, MPLS is best matched with
the IETF's Integrated Services (IntServ) architecture to achieve
service-aware networking. This MPLS-IntServ combination provides explicit
QoS controls at the LSP-level for delivering enhanced IP-based services.
MPLS/IntServ also provides connection-oriented behavior to connectionless
access networks.
Since each LSP is apportioned and allocated access bandwidth in
accordance with its traffic contract parameters, the given application
receives its appropriate level of QoS. For instance, individual LSPs can
be created to support the unique requirements of services such as voice
over IP (VoIP), VoMPLS, or premium data applications such as e-commerce or
VPNs. In the core network, where explicit end-to-end resource reservations
are not practical and additional packet processing overhead is not
advantageous, pairing MPLS with DiffServ provides the required levels of
QoS management.
CONCLUSION
Combining IntServ, DiffServ, and MPLS offers a practical
"service-aware" QoS architecture that enables service providers
to create and deliver a wide range of IP-based voice, unified
communications, and data services over a single access network. This
approach enables all services -- packet-based voice, tiered data
offerings, and secure VPNs -- to dynamically share the same link and
provides network operators the tools to optimize resource utilization on
their access networks. Finally, an IntServ/DiffServ/MPLS architecture
reduces the complexities and costs associated with operating multiple
access networks and provides investment protection by integrating with
existing legacy infrastructures, while allowing carriers to follow a
smooth migration path to converged, IP-dominated networks.
Mark Veil is product manager at Integral Access. The author may be
reached at [email protected]