
Orchestrate Flawless Performance -- Tuning systems for Web 2.0 applications requires a new breed of system and networking management
[January 28, 2008]

(Information Week Via Thomson Dialog NewsEdge) Changes in the methods for building and deploying applications have rendered impotent many of the techniques historically used to manage application delivery infrastructures. Gone are the days when managing a database server's transaction performance equated to performance management. With the advent of Web services and now Web 2.0 technologies like mashups, today's applications are too complex to manage with last-generation tools and methodologies. New IT governance standards gaining acceptance among technology leaders also require that IT resources be managed more cohesively and proactively.



What's needed is a holistic approach to application performance management, one that employs systems that work across application layers as well as across distributed enterprises. Key to getting application performance right is understanding the bigger-picture needs of the organization as defined through practices such as ITIL and COBIT (see "Standards For IT Governance," Dec. 10, p. 35; informationweek.com/1166/governance.htm). These governance processes help IT understand what's important. In this Blueprint, we'll explore how to keep those important services up and running.

THE NEW MANAGEMENT FRONTIER


The need for APM aligns closely with macro trends confronting IT. Application decomposition may well enable organizations to leverage information stored in previously inaccessible silos, but the real-time Web services required to make that data available demand their own management techniques to function properly. What's more, some data and systems may be located outside IT's immediate purview.

So-called Webification of existing enterprise applications often brings to light the need for new management systems and mind-sets. NetForecast, an APM consulting firm, has found that on average, resources from six servers are required to compose a mashup on a user's desktop, says NetForecast president Peter Sevcik. Depending on how important that mashup is, each service, as well as the collection of services, requires monitoring and management.

Yet, managing those Web services by piecing together data from conventional point management products won't cut it. Polling individual devices for SNMP alerts can't provide sufficient information to control real-time process flows that by their nature are ephemeral. In short, guaranteeing the performance of tomorrow's distributed Web services applications won't be possible without monitoring and managing the entire application flow.

APM has other drivers, too. To extract additional value from IT investments and improve customer experience, executives are looking at managing IT end to end through governance and process specifications, such as COBIT and ITIL. While these specifications are excellent for pulling together IT business processes, they require tools to implement the ideas set out in them. APM closely aligns with ITIL because it postulates a unified system for analyzing application performance problems, notes Dennis Drogseth, VP at IT consulting and analyst firm Enterprise Management Associates.

In fact, APM aligns neatly with at least four of the 14 ITIL service operation activities, Sevcik says, ticking off Incident Management, Availability Management, Capacity Management, and Service Level Management. In short, APM can be viewed as the tool by which ITIL gets implemented in the network (see diagram on next page).

BUILDING APM

The APM architecture is built on three elements that enable testing and incident investigation capabilities: data collectors, analysis engines, and reporting stations. These elements come together to build a set of tools that proactively monitor systems and resolve application problems. In some cases, problems are diagnosed through active synthetic transaction monitors, while others may require passive agent or agentless monitoring.
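
How those three elements divide the work can be pictured as a simple pipeline: collectors record raw measurements near the monitored systems, analysis engines summarize them, and the reporting station pulls the summaries together. The Python sketch below is purely illustrative; its class names, fields, and sample values are assumptions, not drawn from any particular APM product.

    # Illustrative three-tier APM pipeline; every name and value here is hypothetical.
    from dataclasses import dataclass, field
    from statistics import mean

    @dataclass
    class Sample:                          # one raw measurement from a collector
        app: str
        metric: str                        # e.g. "response_ms"
        value: float

    @dataclass
    class Collector:                       # data collector: agent, probe, or synthetic monitor
        site: str
        samples: list = field(default_factory=list)

        def record(self, app, metric, value):
            self.samples.append(Sample(app, metric, value))

    class AnalysisEngine:                  # aggregates raw samples close to where they are gathered
        def summarize(self, collector):
            grouped = {}
            for s in collector.samples:
                grouped.setdefault((s.app, s.metric), []).append(s.value)
            return {key: mean(values) for key, values in grouped.items()}

    class ReportingStation:                # management console querying engines at many sites
        def report(self, summaries_by_site):
            for site, summary in summaries_by_site.items():
                for (app, metric), value in summary.items():
                    print(f"{site:12s} {app:12s} {metric:12s} {value:8.1f}")

    collector = Collector("branch-nyc")
    collector.record("order-entry", "response_ms", 420)
    collector.record("order-entry", "response_ms", 380)
    ReportingStation().report({"branch-nyc": AnalysisEngine().summarize(collector)})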

Synthetic transaction monitors measure application performance by simulating user activity with predefined transactions. They can identify many user-perceived performance problems, but often can't determine where the actual problem is occurring. What's more, they require unique programming for each application monitored. Perhaps their most important use is for reporting user-experience data, which can be trended over long periods and across application revisions. Such data can be extremely useful for reporting on IT's service-level agreements.
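
At its simplest, a synthetic monitor replays a scripted transaction on a schedule and records what a user would have experienced. The sketch below illustrates the idea for a single HTTP transaction; the URL and the two-second threshold are assumptions for illustration, and a production monitor would script a full multi-step transaction for each application.

    # Minimal synthetic transaction monitor (sketch). The target URL and the
    # 2-second threshold are illustrative assumptions, not values from the article.
    import time
    import urllib.request

    TARGET = "https://app.example.com/login"      # hypothetical predefined transaction
    THRESHOLD_SECONDS = 2.0

    def run_transaction(url):
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                ok = (resp.status == 200)
        except OSError:
            ok = False
        return ok, time.monotonic() - start

    if __name__ == "__main__":
        success, elapsed = run_transaction(TARGET)
        status = "OK" if success and elapsed < THRESHOLD_SECONDS else "DEGRADED"
        # The trend of these records over weeks is what feeds SLA reporting.
        print(f"{time.ctime()} {TARGET} {elapsed:.2f}s {status}")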

Alternatively, or in addition to synthetic transaction monitors, IT can capture application performance data passively by deploying software agents and hardware probes. While these provide a more detailed picture of the underlying application operation, they also can incur significant deployment and installation costs, and take more day-to-day attention. Such systems are likely to observe and record events that actually cause undesired application performance, but finding those events and correlating them back to an observed performance issue is an evolving science.
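
One simple way to start that correlation is to search the passively recorded events for anything that occurred within a short window around each slow transaction. The sketch below illustrates the idea only; the sample data and the 30-second window are assumptions, and real systems apply far more sophisticated correlation.

    # Sketch: correlate slow transactions with agent/probe events seen nearby in time.
    # The sample data and the 30-second window are illustrative assumptions.
    SLOW_TRANSACTIONS = [                 # (epoch seconds, transaction name)
        (1000.0, "checkout"),
        (1450.0, "search"),
    ]
    AGENT_EVENTS = [                      # (epoch seconds, host, event description)
        (995.0, "db01", "lock contention on ORDERS table"),
        (1440.0, "web03", "garbage-collection pause, 1.8s"),
        (2000.0, "web01", "configuration reload"),
    ]
    WINDOW_SECONDS = 30.0

    def correlate(transactions, events, window=WINDOW_SECONDS):
        findings = {}
        for t_time, name in transactions:
            findings[name] = [e for e in events if abs(e[0] - t_time) <= window]
        return findings

    for name, hits in correlate(SLOW_TRANSACTIONS, AGENT_EVENTS).items():
        print(name, "->", hits or "no candidate events in window")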

Hardware probes attach at key network junctures, such as Internet access points or switch monitoring ports, and are normally passive. They also connect to core switches and collect NetFlow statistics to gain a more complete view of the IP infrastructure. As such, these probes can gather a lot of data. To prevent that data from inundating the network, particularly WAN links, analysis engines must be deployed throughout the infrastructure. These systems aggregate and process the data from the various probes and, depending on the size of the organization, consolidate data from a number of sites.
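
The reason local aggregation matters is volume: forwarding every flow record across the WAN multiplies the very traffic the probes are meant to observe, while forwarding per-application summaries does not. The sketch below reduces hypothetical NetFlow-style records to per-application byte counts, the kind of summary an analysis engine might ship to a central console; the record layout and the port-to-application mapping are assumptions.

    # Sketch: an analysis engine reducing raw NetFlow-style records to a small
    # per-application summary before forwarding. The record format is a simplified assumption.
    from collections import defaultdict

    raw_flows = [      # (src, dst, dst_port, bytes) standing in for exported flow records
        ("10.1.1.5", "10.2.0.10", 443, 120000),
        ("10.1.1.7", "10.2.0.10", 443, 80000),
        ("10.1.1.5", "10.2.0.20", 1521, 400000),
    ]

    PORT_TO_APP = {443: "web", 1521: "oracle"}    # illustrative mapping only

    def summarize(flows):
        totals = defaultdict(int)
        for _src, _dst, port, nbytes in flows:
            totals[PORT_TO_APP.get(port, "other")] += nbytes
        return dict(totals)

    # Only this small summary, not every record, needs to cross the WAN to the console.
    print(summarize(raw_flows))                   # {'web': 200000, 'oracle': 400000}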

Finally, a monitoring station, or management console, enables staff to query these various components from a single location. Numerous functions and technologies are made available within this context to the IT manager. Most important is the ability to analyze and correlate results from many locations such as the user's desktop, the network, and the data center. The monitoring station should be capable of problem resolution functions and service-level monitoring and reporting.

An APM architecture should assess the technology's performance against the actual user experience. Work being done by the Apdex Group (www.apdex.org) aims to standardize these measures. The group, spearheaded by Sevcik, seeks to provide a numerical measure of user satisfaction with enterprise applications. Its specifications calculate a single number from many measurements, on a uniform scale of 0 (no users satisfied) to 1 (all users satisfied), that can be applied to any set of user-perception measurements.
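
The published Apdex method arrives at that single number by counting samples against a target response time T: responses at or under T count as satisfied, responses up to 4T count as tolerating at half weight, and anything slower counts as frustrated. A minimal sketch of the calculation, with made-up sample times:

    # Apdex-style score: (satisfied + tolerating/2) / total samples, on a 0..1 scale.
    # T is the target response time; samples between T and 4T count as "tolerating".
    def apdex(response_times, t):
        satisfied = sum(1 for r in response_times if r <= t)
        tolerating = sum(1 for r in response_times if t < r <= 4 * t)
        return (satisfied + tolerating / 2) / len(response_times)

    # With a 2-second target, these five illustrative samples score 0.6.
    print(apdex([0.8, 1.5, 3.0, 5.0, 9.0], t=2.0))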

At the same time, within the data center, metrics are needed to capture transaction performance as well as the performance of data center components. Across the network, traffic analysis using standards such as NetFlow, and, more granularly, classical packet analysis, lets engineers analyze the performance of the corporate network. When that traffic analysis is coupled with route analytics, which provides an understanding of routing's impact on application performance, IT managers gain a complete picture of application dependencies.

Ultimately, the goal is to gain a holistic view that accounts for the unique characteristics of each device and system required by the application. With a coherent view of the application's end-to-end performance, managers can better understand the implications of infrastructure changes on the application, support application life-cycle planning, and ultimately improve the ability to deliver what matters most: a satisfied end user.

Write to Dave Greenfield at [email protected].

http://informationweek.com

Copyright 2008 CMP Media LLC. All rights reserved.
