The Growth of Content Delivery Optimization: What You Need to Know

 
May 13, 2016


In sports, the only statistic that really matters is who scores the most points by the end of the game. On the Internet, the only statistic that matters is speed.

There is an evolution happening in content delivery optimization. Just as network performance management (NPM) and application performance management (APM) revolutionized WAN and LAN traffic, quality of experience (QoE) monitoring is changing the way we deliver Internet applications. Static options served well in controlled network environments, but the unpredictable performance of the Internet requires a more intelligent approach to global traffic management.


You want to be fast no matter where in the world your clients are. Ensuring a speedy, consistent user experience across a best-effort network becomes increasingly challenging as traffic increases globally. Several global traffic management approaches attempt to tackle this problem, but only one of them succeeds.

Why Round-Robin Doesn’t Fly

The easiest type of load balancing is round-robin. This method distributes traffic evenly by directing each user, in turn, to the next origin or content delivery network (CDN) in the rotation. Each content source ends up handling an equal amount of traffic.

As traffic needs grow and become more geographically diverse, round-robin’s faults become painfully clear. With no awareness of a user’s location or network conditions, round-robin is just as likely to route a user to a high-latency origin as a low-latency one.
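To make the weakness concrete, here is a minimal sketch of a round-robin router in Python. The origin names are hypothetical; the point is that the user's IP is accepted but never consulted, so a user can land on a high-latency origin just as easily as a low-latency one.

```python
from itertools import cycle

# Hypothetical origin pool; the names are illustrative only.
origins = ["cdn-us-east", "cdn-eu-west", "cdn-ap-south"]
pool = cycle(origins)

def route_round_robin(user_ip: str) -> str:
    """Return the next origin in the rotation.

    The user's IP -- and therefore their location and network
    conditions -- is ignored entirely, which is exactly the
    weakness described above.
    """
    return next(pool)

# Three users anywhere in the world get three different origins,
# regardless of which one would actually be fastest for them.
assignments = [route_round_robin(ip) for ip in ["1.1.1.1", "2.2.2.2", "3.3.3.3"]]
```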

Geographic Load Balancing: Routing with the Wrong Data

IP address geolocation gives us the ability to narrow down the physical location of a user by region. Geographic load balancing routes users to the content source physically closest to them. The idea seems sound at first: to minimize latency, traffic is directed toward the closest geographic content source.
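In code, geographic load balancing amounts to a static lookup from the user's geolocated region to the nearest origin. The regions and origin names below are hypothetical; the sketch shows how fixed the decision is.

```python
# Hypothetical static region-to-origin map, the kind of lookup a
# geolocation-based balancer relies on.
NEAREST_ORIGIN = {
    "north-america": "cdn-us-east",
    "europe": "cdn-eu-west",
    "asia-pacific": "cdn-ap-south",
}

def route_geographic(user_region: str) -> str:
    """Pick the physically closest origin for the user's region.

    The choice never changes, even when the 'closest' origin is
    congested or the route to it has degraded.
    """
    return NEAREST_ORIGIN.get(user_region, "cdn-us-east")
```

Because the map is static, the balancer keeps sending European users to cdn-eu-west even during an outage or congestion event there.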

Latency Changes Over Time. Geography Doesn’t.

Because Internet infrastructure is constantly changing, your end-user performance is a moving target. Traffic across a content delivery route can lose quality at a moment's notice. Congestion comes and goes unpredictably within regions, causing latency road bumps that averages conceal. Geographic load balancing can't detect this, much less mitigate it.

Geographic load balancing doesn't solve the problem. The only way to deliver consistently fast content is to measure the actual user experience.

Performance-Based Load Balancing: Routing with Real Data

Geographic load balancing routes traffic based on assumptions that aren't backed by any performance data. Effective global traffic management reduces the guesswork by measuring network performance and routing users to the content sources that perform best for them at that specific time. This is the goal of performance-based load balancing. Synthetic monitoring and real user measurements (RUM) are the two methods of performance-based load balancing.
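A performance-based router can be sketched in a few lines: given recent latency samples for each origin as seen from the user's network, pick the origin with the lowest average. Origin names and latency figures are illustrative; averaging is just one simple aggregation choice.

```python
from statistics import mean

def route_by_performance(latency_samples: dict[str, list[float]]) -> str:
    """Pick the origin with the lowest mean measured latency (ms),
    based on recent measurements from this user's network."""
    return min(latency_samples, key=lambda origin: mean(latency_samples[origin]))

# Recent samples for one user's network (illustrative numbers, in ms).
# The geographically nearest origin is intermittently congested.
samples = {
    "cdn-us-east": [48.0, 52.0, 47.0],
    "cdn-eu-west": [110.0, 95.0, 120.0],
    "cdn-ap-south": [30.0, 260.0, 32.0],  # nearby, but one congested sample
}
```

Note that the mean penalizes cdn-ap-south's congestion spike, so the router picks cdn-us-east even though cdn-ap-south is sometimes fastest. That is the behavior a geographic balancer cannot reproduce, because it never sees the measurements.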

Synthetic Monitoring: No Substitute for the Real Thing

Synthetic monitoring measures performance between public-facing ISP servers and popular content providers. It fails to take into account the last mile, the large number of factors that affect performance between the ISP and the end user.

RUM: Real User Measurements Reveal the User Experience

Synthetic metrics are not complete metrics, and load balancing based on them will not be ideal. The true key to effective and consistent performance-based global load balancing is real end-user data. Without knowing the actual latency from the content source to the user, it’s simply guesswork.

Discovering the true latency requires measurements between actual users and content providers: true RUM. Any performance-based global load balancing needs an aggregate of RUM data. How big does this aggregate need to be to be effective? Huge.

Approximately thirty thousand networks, or sixty percent of ASNs, have multiple upstream providers. This means that with an equal distribution of measurements, one million RUM data points per day works out to only about thirty-three measurements per ASN per day, or barely more than one per hour. With traffic spread unevenly across ASNs, a RUM system needs to collect billions of measurements per day to provide accurate, up-to-date performance information. RUM solutions without this much traffic will fail to protect you from poor performance across networks with less traffic.
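The back-of-envelope arithmetic above is easy to check; all of the inputs come from the figures in the paragraph.

```python
# Inputs from the article: ~30,000 multi-homed ASNs, 1M RUM points/day.
multi_homed_asns = 30_000
rum_points_per_day = 1_000_000

# With measurements spread evenly across ASNs:
per_asn_per_day = rum_points_per_day / multi_homed_asns   # ~33.3 per ASN per day
per_asn_per_hour = per_asn_per_day / 24                   # ~1.4 per ASN per hour
```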

Summing it Up: Speed Is All That Matters

Natural selection on the Internet favors the services that give their users speedy, responsive content. To improve the only statistic that matters, load-balancing decisions need to be informed by relevant data.
