Virtual Server Farms: Distributed Applications in the Cloud

Virtualization Reality


By TMCnet Special Guest
Alan Murphy, Technical Marketing Manager of Management and Virtualization Solutions, F5 Networks
  |  February 01, 2011

This article originally appeared in the Feb. 2011 issue of INTERNET TELEPHONY Magazine.


Server virtualization first gained a foothold in the data center by offering a production-ready solution to replace physical servers one-to-one, typically referred to as P2V – physical to virtual. The benefits of P2V were immediate: fewer physical servers to manage, fewer physical servers to cycle and upgrade on a fixed schedule, savings on energy, and so on.

Soon thereafter, enterprise-class virtual platforms allowed administrators to push beyond one-to-one system replacement and consolidate at an exponential level: two-to-one, four-to-one, eight-to-one – as much as the virtual platform and available physical resources would allow. With advances like live migration, it was no longer necessary to keep so many individual servers running for a particular application, and new applications and services could be added to the virtual platform. Virtual machine density – how many virtual machines can run on one physical host server – became a common term in the enterprise IT lexicon, and now we’re seeing average density levels between 10:1 and 25:1. These numbers are fundamentally changing data center server architecture and allowing new services to be added as physical resources are virtualized.
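The consolidation math behind those density ratios is straightforward. As a minimal sketch (the server counts are hypothetical, chosen only to illustrate the ratios cited above):

```python
import math

def hosts_needed(vm_count: int, density: int) -> int:
    """Physical hosts required to run vm_count virtual machines
    at a given VM-per-host density ratio."""
    return math.ceil(vm_count / density)

# A hypothetical 200 former physical servers, virtualized
# at the average densities mentioned above:
print(hosts_needed(200, 10))  # 10:1 density -> 20 hosts
print(hosts_needed(200, 25))  # 25:1 density -> 8 hosts
```

The same formula works in reverse for capacity planning: fix the number of hosts you can power and cool, and density tells you how many workloads they can carry.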

One thing that hasn’t changed within the data center, however, is the concept of a server farm. A server farm is a cluster of like-tasked servers that either offer redundant services for distributed load or form part of a complete application delivery chain: web tier, app tier, and data tier. For large-scale production deployments, applications are typically deployed across large server farms for redundancy and fault tolerance. In fact, thinking about applications in the paradigm of a farm of servers – be they physical or virtual – is still very much how we design and build data centers today. The fundamentals of that distributed server farm model don’t change as those application clusters are virtualized, and even as they move into the cloud.

Virtualization agility is based on the model of a workload: a virtualized resource that performs a specific task. When a web server, for example, is moved from a physical server to a virtual machine, that virtual machine is said to be offering a web server workload; the role of that virtual machine is to run the web server workload and typically not much else. As I’ve discussed here before, virtualization enables a very discrete system of workload isolation. There’s no need to make a virtual machine run both web and e-mail server workloads because we can deploy a unique virtual machine for each discrete web and e-mail workload. We’re able to take advantage of density and resource virtualization by separating and isolating workloads, giving us more granular control over how and where we deploy those workloads across the virtual infrastructure.
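That one-workload-per-VM discipline can be expressed as a simple inventory model. A minimal sketch, with hypothetical VM names, showing the web and e-mail workloads from the example above each isolated on its own virtual machine:

```python
from dataclasses import dataclass

@dataclass
class VirtualMachine:
    name: str
    workload: str  # exactly one discrete workload per VM

# Instead of one server running web and e-mail together,
# each workload gets its own isolated virtual machine:
vms = [
    VirtualMachine("vm-web-01", "web"),
    VirtualMachine("vm-mail-01", "mail"),
]

def vms_for(workload: str) -> list[str]:
    """All VMs carrying a given workload - the granular handle
    that lets us place or move one workload independently."""
    return [vm.name for vm in vms if vm.workload == workload]
```

Because each VM answers for only one workload, `vms_for("web")` is all an orchestration layer needs to act on the web tier without touching mail.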

If we apply this same isolation model to the idea of the server farm, we can begin segmenting individual roles and responsibilities within the farm into discrete workloads. Using the three-tier system – web server, application server, and data server – we can split the physical server farm into individual application-focused workloads. In other words, we can break up the physical web servers and application servers into virtual web server workloads and application server workloads. On the surface we haven’t changed anything; we’re still keeping server farms clustered together, just in a denser virtualized environment that allows us to take advantage of virtualization benefits across the entire server farm. The ability to isolate discrete workloads within the virtual infrastructure becomes critical when moving those workloads to the cloud.

Before an IT department moves an entire application to an off-premises cloud provider, it needs to decide which pieces of that application are going to move. Historically, that was an easy task: everything must go. If an IT department was moving from an on-site data center to a hosted environment, it moved the entire server farm, lock, stock, and barrel. The off-premises cloud model, however – enabled by and through virtualization – gives IT the flexibility to isolate which workloads are going to move and which workloads are going to stay. This allows IT to move certain application services geographically without breaking up the virtual server farm. One part of the server farm may reside on-premises while another part is running off-premises. The application server farm is intact even though the individual workloads are distributed between multiple locations.
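That placement decision reduces to partitioning the farm's workloads by location. A minimal sketch, assuming a hypothetical placement map in which the web and app tiers move to the cloud while the data tier stays on-premises:

```python
def split_by_location(placements: dict) -> tuple[set, set]:
    """Partition a {workload: location} map into on-premises and
    off-premises sets. The farm stays logically intact; only the
    physical placement of each workload differs."""
    on_prem = {w for w, loc in placements.items() if loc == "on-prem"}
    off_prem = {w for w, loc in placements.items() if loc == "cloud"}
    return on_prem, off_prem

# Hypothetical placement: move the stateless tiers, keep the data.
placements = {"web": "cloud", "app": "cloud", "data": "on-prem"}
on_prem, off_prem = split_by_location(placements)
print(on_prem)   # {'data'}
print(off_prem)  # {'web', 'app'}
```

The union of the two sets is still the complete three-tier farm – which is exactly the point: distribution changes where workloads run, not what the farm is.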

The idea of the server farm is so ingrained in IT architecture and data center design that being able to use that same model across distributed workloads and cloud deployments becomes a necessity for a truly virtualized data center. Virtual server farms allow us to create a new distributed application model in the cloud while still maintaining a server cluster paradigm we understand and are comfortable with.


Alan Murphy is technical marketing manager of management and virtualization solutions with F5 Networks (www.f5.com).



Edited by Stefania Viscusi