TMCnet - World's Largest Communications and Technology Community

TMCnews Featured Article

March 28, 2006

Becoming a Network Management Superstar May Require an Agent

By TMCnet Special Guest

Agent-based monitoring is enjoying a resurgence in popularity, but is it the right approach to move your network management solution to the A-list?
Historically, agents were used to augment the minimal instrumentation provided by early network devices; as built-in instrumentation improved, agent-based monitoring became less important.  With the advent of more demanding applications and increasing commercial dependence on IT services, the need for guaranteed, high-performance services has emerged.  When monitoring such services (and service level agreements), one needs to ensure correct and timely responses between clients and servers.  This is often referred to as “end-to-end” service management and encompasses the need to monitor applications, servers and the interconnecting networks.  Agents may be required to monitor these services in sufficient detail, or it might be possible to garner sufficient data by other means.  This article explores the issues associated with agent-based solutions and their alternatives, as well as where in the service model each technology should be employed.
Agents: Liberators or Subjugators?
Advances in networking technologies, particularly fault-tolerant dynamic routing, make prediction of end-to-end path characteristics and availability exceedingly difficult, a problem exacerbated by partial network visibility.  The only reliable way to accurately measure end-to-end characteristics is to monitor real application traffic flows or to use synthetic traffic that is indistinguishable from real application traffic.
There are different types of agents, varying in implementation and function; the choice of agent will determine the depth, timeliness and accuracy of the data, as well as the cost and complexity of the solution.  Three common agent implementations are:
- Hardware agents are typically network “appliances” queried by remote management stations (e.g. OptiView, PacketSeeker and NetDetector).
- Software agents are applications monitoring server and application “health” (remote and/or local application status and server health) (e.g. IxChariot, Vivinet and SystemEdge).
- Intrinsic agents are already available within existing assets (network devices, OSes and applications), providing a wealth of information, and are extensively leveraged by NMSs (e.g. Cisco IPSLA, ping MIB, RMON, NetFlow, SNMP agents).
Agents function in one of two ways:
- Active agents produce ‘synthetic’ transactions configured to emulate real transactions and report on the responsiveness of the server.  Examples include IxChariot, Vivinet, SystemEdge, and IPSLA.
- Passive agents monitor real transactions, quantifying the behavior of the server by observing communication between clients and servers.  Passive monitoring is less prevalent than active monitoring because ‘snooping’ real traffic flows and extracting useful data from them is difficult.  Examples include Psytechnics and NetFlow.
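As an illustrative sketch (not from the article), the core of an active agent reduces to issuing a synthetic transaction and timing it. The example below, using only Python's standard library, times TCP connection setup as a stand-in for a full application transaction; the host and port are assumptions, and a real active agent would emulate the complete application protocol:

```python
import socket
import time

def synthetic_probe(host, port, timeout=3.0):
    """One synthetic transaction: open a TCP connection to the service
    and time the setup.  Returns latency in milliseconds, or None if
    the service did not respond within the timeout."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        # Connection refused, unreachable, or timed out: probe failed.
        return None
```

Run periodically against each monitored service, the stream of latency samples (and failures) is exactly the kind of responsiveness data an active agent reports back to the management station.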
No single type of agent is ideal in all circumstances and the selection of an agent must be based on the advantages and disadvantages of the agent in the context of the environment in which it is to be deployed.
Agents to the Rescue
Agents can monitor the characteristics of applications, servers, and networks in significantly more detail than generic management tools providing enhanced visibility of the network and end-to-end services.
When assessing service availability, conventional techniques such as testing for connectivity to specific ports are inadequate; to conclusively determine whether or not a service is available, the agent must use the service as a client would.
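The distinction can be sketched in Python using only the standard library: the first function performs the conventional port test, while the second performs a real HTTP transaction and verifies the response, as an active agent would. The URL, hosts and thresholds here are illustrative assumptions:

```python
import socket
import urllib.request

def port_is_open(host, port, timeout=3.0):
    """Conventional check: can a TCP connection be opened to the port?
    This proves only that something is listening, not that the service works."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def service_is_available(url, timeout=5.0):
    """Agent-style check: perform a real client transaction (an HTTP GET)
    and verify the response, as an active agent using the service would."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # Available only if the service returns a valid, non-empty
            # 200 response; a merely open port is not enough.
            return resp.status == 200 and len(resp.read(1024)) > 0
    except OSError:  # covers URLError/HTTPError as well as socket errors
        return False
```

A server whose port accepts connections but which returns errors passes the first check and fails the second, which is precisely the gap the article describes.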
For comprehensive monitoring, in the event of network outage (where agents are temporarily unreachable from the management station), agents can continue to collect data and transfer the data when connectivity is restored.
Passive agents impose no additional application, server or network load, present fewer scaling issues, and reliably detect intermittent errors.
Agents can provide detailed, accurate, timely data about network and application performance; however, the disadvantages need to be weighed against the benefits when assessing the applicability of an agent-based solution.
Abandoned by Agents
Agents are often expensive: costs include initial purchase, additional hardware, rack space, OS licenses, maintenance, training and integration.
Most agents generate synthetic transactions and as such consume additional bandwidth and increase server and application loading. 
While agents are available for common applications and services (web, mail, database, etc.), they will not be available for bespoke applications; it may be possible to procure custom agents, but at significant additional risk and cost.
A common problem deploying agent-based solutions is scalability.  While using agents between small numbers of clients and servers is readily achievable, deploying, managing and monitoring any-to-any connectivity with large numbers of clients and servers rapidly becomes untenable. For highly meshed networks it might not be practicable to monitor all possible connections and a strategic subset might have to suffice.
Active agents can be poor at detecting intermittent faults if the testing frequency is significantly less than the real application transaction frequency.
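The sampling problem can be made concrete with a simple model (an illustrative assumption, not from the article): if a transient fault lasts d seconds and an active agent probes every T seconds, with the fault starting at a uniformly random point in the probe cycle, the chance that any probe lands inside the fault window is min(1, d/T):

```python
def detection_probability(fault_duration_s, probe_interval_s):
    """Probability that at least one periodic probe lands inside a
    transient fault window, assuming the fault starts at a uniformly
    random point in the probe cycle: min(1, d/T)."""
    return min(1.0, fault_duration_s / probe_interval_s)
```

Under this model, probing every five minutes catches a 30-second outage only 10% of the time, whereas real transactions occurring every few seconds would almost certainly encounter it.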
Passive agents often struggle to cope with throughput; analysing large numbers of transactions and large volumes of traffic at line-speed is difficult and often only achievable using dedicated hardware or by analysing a subset of the data flows. 
Passive monitoring is effective for performance monitoring but not for fault alerting: the first indication of a service fault is the failure of a real client-server transaction. 
Given the range of agents and options, there are some environments in which agents are strongly recommended and some in which agents are probably not necessary.
Sleuthing Your Agent Direction
There are circumstances in which the use of agents is highly recommended.  For SLAs based on a customer’s service availability, verifying that the service is available using alternative approaches is inadequate.  Certain applications, such as VoIP, have stringent network requirements, and measurement of bulk traffic characteristics is inadequate.  Call quality may be inferred from key network transport metrics, but accurate call quality measurement requires voice-aware agents.
Highly redundant or dynamic configurations make constructing a view of composite services from their component devices and links impractical and unreliable.  In such situations, agent-based in-band measurements provide significantly more accurate results.  Business-critical services mandate the use of active agents to provide the highest quality data about a service, in as timely a manner as possible.  If data gathering must continue even when network connectivity to the NMS is lost (as for SLA monitoring and billing), agents are required.
If agents are required, factors affecting the choice of agent include:
- Will intrinsic agents suffice? Modern agents embedded in the network infrastructure can provide a large range of application-level metrics. For example, Cisco’s IPSLA supports an ever-growing range of applications and protocols.
- If network bandwidth, router CPU/memory or server resources are heavily utilized then passive agents are preferred to avoid additional burden.
- If advance notification of availability issues is important then active agents should be used.
The Final Cut
An increasing number of environments suggest or mandate the use of agents, but this does not necessarily mean purchasing and managing large quantities of additional hardware and software.  Most modern network assets provide some form of embedded agent, and while the sophistication of these agents (and the quality of the data they produce) varies widely, they are often sufficient to avoid additional CAPEX and OPEX.
Agents can provide very high quality data in very large quantities, but one should always consider whether agents are really necessary and what will process the data they produce; without some automated means of processing it, the data will do little to improve the quality and availability of services.
For the lower layers of the OSI model agents are not normally necessary; as one moves to the higher layers, agents become increasingly relevant.  In general, a combination of agent-less monitoring for layers 2 and 3 with agent-based monitoring at layer 4 and above, coupled with other sources of data (such as SNMP traps and syslog parsing), enables management of the majority of enterprise networks and monitoring of end-to-end SLAs.
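As an illustration of those other data sources, the sketch below parses a classic BSD-style (RFC 3164) syslog line into facility, severity, host and message fields using Python's standard library. The sample line and field names are illustrative assumptions; production collectors handle many more format variants:

```python
import re

# Matches a classic BSD-style syslog line such as:
# <134>Mar 28 14:02:11 core-sw1 %LINK-3-UPDOWN: Interface Gi0/1, changed state to down
SYSLOG_RE = re.compile(
    r"<(?P<pri>\d{1,3})>"
    r"(?P<timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) "
    r"(?P<message>.*)"
)

def parse_syslog(line):
    """Decode one syslog line; the PRI value encodes facility * 8 + severity."""
    m = SYSLOG_RE.match(line)
    if not m:
        return None
    pri = int(m.group("pri"))
    return {
        "facility": pri // 8,
        "severity": pri % 8,
        "timestamp": m.group("timestamp"),
        "host": m.group("host"),
        "message": m.group("message"),
    }
```

Feeding parsed severities and hosts into the NMS alongside SNMP data is one inexpensive, agent-less input to the layered monitoring approach described above.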
J.J. Roper is a development research consultant for Entuity, Inc.

Technology Marketing Corporation

2 Trap Falls Road Suite 106, Shelton, CT 06484 USA
Ph: +1-203-852-6800, 800-243-6002


© 2021 Technology Marketing Corporation. All rights reserved | Privacy Policy