Latency Matters: Cloud Computing, Rich Media, Financial Industry Drive Changes in Data Center, Colo Space

Data Center Evolution

By Paula Bernier, Executive Editor, IP Communications Magazines  |  August 17, 2010

Rich media has flooded public networks. Real-time, delay-sensitive traffic like voice and video is going IP. Cloud computing is proliferating. And on May 6, Wall Street experienced the flash crash, which sent the Dow down nearly 1,000 points within minutes.

All of this is putting new emphasis on the fact that latency matters.

Every second – or even millisecond – can make a significant difference, whether you’re talking about the quality of delay-sensitive traffic, the end user experience with cloud-based services, or the ability to trade fairly (or to quickly halt trading when needed).

Controlling latency was among the key drivers of Equinix’s recent purchase of Switch and Data, says Jarrett Appleby, chief marketing officer at Equinix, which sells data center space, power, cooling and the ability to interconnect with others.

Equinix on May 3 announced the closing of its $683.4-million deal to acquire Switch and Data. The transaction, which was announced Oct. 21, strengthens Equinix’s position in the global data center services market by extending its presence to 16 new metropolitan areas across North America and by expanding the company’s regional data center footprint from six to 22 metropolitan areas. The company now operates more than six million gross square feet of global data center space with more than 575 network service providers.

Appleby calls the data center “the new network hub of this century.”

“The data center is becoming the new interconnection hub for the convergence of network and services,” he says. “It’s kind of like the wave we rode 7 or 8 years ago with the Internet peering community. The networks needed to interconnect with the content guys, and what that drove is that rapid growth rate and the need for making it easier to exchange data for Internet, which evolved to video and content distribution worldwide. The big folks we were working with were the Googles and the Microsofts and the Yahoos of the world.”

The same thing is happening now with new WAN solutions, he says, noting the expansion of Ethernet beyond LANs to also include carrier-class Ethernet.

“You need to get these WAN solutions and new interconnection hubs really close to customers,” Appleby says. “So we were asked to go to places like Seattle, Denver, Miami, Toronto and Atlanta – those were the big five. And along for the ride came an even deeper penetration into Philadelphia and Pittsburgh and Boston and places like that. So it moved our U.S./North America coverage from roughly 30-millisecond latency for 95 percent of the population and for enterprise clients to within less than 10 milliseconds for 94 percent of the U.S. population.”

That 20-millisecond difference is important, he notes, because it can have a significant impact on how an application performs and, thus, on the end user quality of experience. And while that applies to a variety of applications, the big driver of the push to lower latency by bringing content closer to the edge is cloud services, says Appleby of Equinix, which hosts more than 130 cloud and SaaS companies (including Amazon) within its data center and colocation sites.
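As a back-of-the-envelope illustration of why a 20-millisecond improvement compounds, consider an application that makes several sequential network round trips per user action. (The round-trip counts and server time below are invented for illustration, not Equinix figures.)

```python
def response_time_ms(round_trips, rtt_ms, server_ms=50):
    """User-perceived delay for an app that makes several
    sequential network round trips (illustrative model only)."""
    return round_trips * rtt_ms + server_ms

# A chatty application making 20 sequential round trips:
print(response_time_ms(20, 30))  # 650 ms at a 30 ms round-trip time
print(response_time_ms(20, 10))  # 250 ms at a 10 ms round-trip time
```

Because the round-trip time is multiplied by every sequential request, shaving 20 milliseconds off the network path can cut perceived response time by far more than 20 milliseconds.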

Rose Klimovich, vice president of product management and product development at Telx, says that cloud computing in the early days was best effort, but it has now evolved to be more enterprise-grade. So if you’re a cloud provider or a CDN company serving financial services companies in New York, for example, it’s good to get as close to them as possible if you’re enabling real-time access to video or transactional applications, she says. Other applications, however, such as e-mail backup services, don’t have the same requirements.

“It is good to be centralized amongst our customers because that means the latency for all of our customers is not so bad,” says Daniel Marques, CTO of Ballista Securities, a Telx customer that runs an alternative trading system. “And most of our customers are clustered in the New York area or the Chicago area.”

John “JT” Tomljanovic, director of IT solutions global product management for Verizon Business, says a nanosecond can make a difference between price points on a big exchange in the financial industry.

“So in the financial industry, I think proximity hosting is key,” he says.

Dan Tuchler, vice president of product management with eight-year-old BLADE Network Technologies, a Nortel spin-out that sells blade servers and server rack switching elements, adds that high-frequency Wall Street traders want the lowest latency possible and absolute fairness in the network “because if one trader is getting a slightly slower [response] than other traders, that’s a problem.

“There are no absolutes,” adds Tuchler, “but you can provide switching equipment that has the same latency on every port.”

Tomljanovic adds that Verizon Business has an advantage because it offers not only data center services and cloud-based solutions, but also owns networks and other facilities to support CDN and wide area connections.

“So when a customer comes to us we’re going to tout that our network is an advantage,” he says, adding that Verizon Business makes sure its data centers are near customers for which latency control is important.

Verizon Business’s Tomljanovic goes on to say that he expects to see further consolidation in the data center and colocation space as more services move to the cloud.

“Within two to 10 years from now everything customers buy is going to be purchased as a service,” he says. “That’s my prediction.

“So I don’t think people are going to be buying data centers, they’re not going to be buying servers, they’re going to be coming to companies like Verizon” to deliver it all, he adds.

It would seem that Cincinnati Bell has a similar view of the market’s interest in buying data center-related product bundles: in May, the telco and ABRY Partners announced Cincinnati Bell’s planned $525 million acquisition of data center operator CyrusOne.

(Meanwhile, ADVA Optical Networking, IBM and Level 3 have joined forces to provide customers with secure wavelength services to deliver high-bandwidth access between their sites and IBM cloud data centers.)

CyrusOne sells colocation and data center services to Fortune 500 companies. The largest privately held data center operator in Texas, CyrusOne owns seven data centers in Austin, Dallas and Houston, with a total of 163,000 square feet of data center capacity. Once the merger closes, Cincinnati Bell will have 609,000 square feet of data center capacity in 17 facilities.

"Data center services are a key strategic focus for Cincinnati Bell, allowing the company to provide next-generation computing and communications services for our customers," says Jack Cassidy, president and CEO of Cincinnati Bell. "The success of this strategy is evidenced by our ability to organically build the Technology Solutions segment of our business into a $300 million run rate revenue operation."

Two important trends driving growth in data center services, he adds, are the rapid adoption of Internet-related technologies by enterprise customers to run their most important business functions, and accelerating demand for outsourced solutions that let them better focus on their core business.

Peter Melerud, co-founder and vice president of product development at KEMP Technologies, which makes server load balancing products that can be used in virtualized architectures, says that while it used to be only large companies that were moving content to the edge, today businesses of all sizes are doing so, and gear from companies like KEMP can enable that. KEMP now offers two load balancer/application delivery controllers for less than $2,000 each. If a company has multiple sites, a geographic load balancer can be used to decide which data center is the best candidate to address a specific request, Melerud adds.
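The decision Melerud describes can be sketched simply: given latency estimates from each data center to each client region, a geographic load balancer steers the request to the nearest site. (The site names and millisecond figures below are invented examples, not KEMP product behavior.)

```python
# Hypothetical latency estimates (ms) from each data center
# to each client region; real systems would measure these.
SITE_LATENCY_MS = {
    "new-york": {"us-east": 5,  "us-central": 25, "us-west": 70},
    "chicago":  {"us-east": 22, "us-central": 8,  "us-west": 50},
    "san-jose": {"us-east": 72, "us-central": 48, "us-west": 6},
}

def best_site(client_region):
    """Pick the data center with the lowest latency estimate
    for the client's region."""
    return min(SITE_LATENCY_MS, key=lambda s: SITE_LATENCY_MS[s][client_region])

print(best_site("us-west"))  # san-jose
```

Production geographic load balancers typically also weigh site health and load before answering, but proximity is the core of the decision.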




Edited by Stefania Viscusi