The standards-based service delivery infrastructures being implemented today appear to create complexity with difficult-to-define short-term benefit. Long-term benefit is easier to quantify, but does the need to migrate from existing silo architectures really exist?
Sins of the Past
Since computers came on the scene, every application has stood on its own. In fact, most applications were developed and managed by completely different teams of developers. With no single interface, such as a workstation, available to provide an integrated portal concept, there was little justification for considering the integration of multiple applications. In fact, the sheer complexity of creating applications in legacy languages on bulk processing platforms with cumbersome interfaces made it difficult to envision an overriding architecture or strategy for developing all applications.
As the interfaces to computers and services have improved, the desire to offer a single control portal has evolved. Why should a user have to log off, then back on to access multiple applications and multiple services?
At first, the trend was toward developing a standalone portal that acted as the integration point for all applications. Thus, the applications stayed independent with little or no architectural vision, and the portal acted as the glue.
But, with applications still externally integrated, the problem of having applications as independent silos was only aggravated by an integration portal. If the applications changed, then the portal had to change. The delivery schedules of all applications and services became wrapped around a single integration axle with hub-and-spoke dependencies.
The desire to have rules governing the development and integration of applications has been around as long as computers. It doesn’t take an epiphany to determine you don’t want to do things twice. The problem is most of the initial concepts involved simplifying the developer’s job, not the user’s tasks.
So there are two parallel, but uneven, evolutionary paths that have occurred in computer applications. The first evolutionary path is the development environment. Common sense dictated developers should reuse code whenever possible. Programming languages, linkers, execution environments, system support utilities, and device interfaces have all evolved to simplify the programmer’s task and to foster reuse.
The second evolutionary path is the user interface. This evolution has been slower than the development path since the evolution of devices and the way users utilize them comes more in bursts than in an ongoing evolution. There were initially switches to program memory locations followed by paper tape, cards, magnetic drums, magnetic tapes, dumb terminals, and finally, PCs. This is in addition to all the other devices with their ever-evolving operating systems that enhance the user experience.
Unfortunately, the development environment evolution requires only additive changes to existing applications, never wholesale rewrites. Essentially, code could be migrated into new programming paradigms over time with no absolute requirement that it be migrated at all. Thus, the cost of evolving could be spread over many years and absorbed as the application was enhanced.
The interface evolution, on the other hand, requires a negative change to existing applications. To take advantage of new, more powerful devices often requires a deconstruction of legacy application systems. And interface standards must be strictly enforced to allow integration.
Pay Me Now or Perish
User access devices have now evolved into easy-to-use keyboards front-ending powerful operating systems, which allow full control and integration capabilities. The development environments have evolved to the point that a small team can create unique applications in a very short period of time. This results in a highly volatile situation for legacy applications in an environment where new competition arises very quickly.
Legacy applications must evolve immediately in order to compete with cheaply developed applications from third parties. This legacy application evolution requires a full understanding of the business logic within the applications so that same business logic can be deconstructed and reconstructed without harming the current user community while providing the desired enhancement.
There is no choice any longer. Legacy applications must evolve or be replaced. It is no longer possible to take the short term position of slapping up another silo application and trying to integrate later. Competition dictates that old development methods mean extinction.
Many developers are still stuck in the standalone silo development world. The majority of these developers are pure technicians and have little knowledge of meeting customers’ overall business requirements.
Yes, structured development guided by enterprise architecture principles and interface guidelines is more expensive in the short term than standalone application development. But in the long term, it is significantly cheaper to maintain, enhance, and integrate applications when the architecture is in place from the beginning, because short-term solutions are not competitive. There is no “pay me later.”
There are some who worry that the complexity and the very rate of change allowed by standard architectures will create an unmanageable evolution. While the possibility of creating a Frankenstein is always present when evolving and integrating disparate parts, the key to managing the complexity is the definition and control of the interface points between the components.
In reality, standard architectures with full interface architectural control will allow a drastic acceleration of delivery of new services and capabilities. The life cycle management will actually be simplified because the schedule dependencies of a hub-and-spoke application structure are removed. If one component is completed ahead of other components it should not present a problem since each component stands alone.
Testing will actually be simplified since testing is done only to ensure compliance with the interface standard and the individual functions offered by the component. As long as the component fully complies with and supports the interface, the component can be tested independently.
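The compliance-only testing described above can be sketched in a few lines. This is a minimal illustration, not a real platform API: the component name, the `rate_usage` method, and the rate itself are all invented for the example. The point is that the test exercises only the published interface, so any component implementing it can be verified in isolation.

```python
class BillingInterface:
    """The agreed interface standard for a billing component (hypothetical)."""
    def rate_usage(self, minutes: int) -> float:
        raise NotImplementedError

class SimpleBilling(BillingInterface):
    """One independent implementation of the standard."""
    def rate_usage(self, minutes: int) -> float:
        return round(minutes * 0.05, 2)  # illustrative flat rate

def complies(component) -> bool:
    """Compliance test: touches nothing but the published interface."""
    if not isinstance(component, BillingInterface):
        return False
    charge = component.rate_usage(100)
    return isinstance(charge, float) and charge >= 0
```

Because `complies` knows nothing about billing internals, a replacement billing component can be certified the same way, with no other system present.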
Three Conceptual Layers
When considering the deconstruction of the applications within a service provider, there are three conceptual architectural layers: back office, device management, and service delivery. Each of these requires a controlling enterprise architecture with interface definitions both within and among them.
Back office systems can be decomposed and integrated with a Service Oriented Architecture (SOA) and an Enterprise Bus. At this layer, the applications themselves are the users or consumers; applications share and demand information from other applications. While there is some human interface, the major advantage of a standardized architecture within the back office is the ability to reuse major back office systems such as billing, rating, inventory management, and order management.
In back office architectures the most important factor is the enterprise bus. How do applications communicate among themselves? As long as communication is standardized, it is easy to plug-and-play applications, add enhanced features, or modify business processes. The key is management of the interfaces.
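A toy sketch makes the plug-and-play point concrete. The bus class, topic name, and payload fields below are invented for illustration, assuming a simple publish/subscribe style: applications never call each other directly, so either side of the exchange can be replaced without touching the other.

```python
from collections import defaultdict

class EnterpriseBus:
    """Minimal publish/subscribe bus: all inter-application traffic flows here."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers[topic]:
            handler(message)

bus = EnterpriseBus()
invoices = []

# The billing application depends only on the message format, not the sender.
bus.subscribe("order.completed", lambda msg: invoices.append(msg["order_id"]))

# Order management publishes to the bus; billing could be swapped out freely.
bus.publish("order.completed", {"order_id": "A-1001", "amount": 49.95})
# invoices is now ["A-1001"]
```

Swapping in a new billing system means registering a different handler for the same topic; order management never changes. That is the interface-management point in miniature.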
Device management can be decomposed into access and provisioning functions. Each device has its own idiosyncrasies, but virtually all devices are performing the same basic set of functions for back office integration and performance of services. There should be multiple layers within device management to separate order integration from physical provisioning, but one should be transparent to the other. Again, the key is management of the interfaces.
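The transparency between layers can be sketched with a per-device adapter behind a shared provisioning interface. The device types and commands below are invented for illustration: the order-integration layer sees only the abstract interface, while each device's idiosyncrasies stay inside its own adapter.

```python
class DeviceProvisioner:
    """Abstract provisioning interface shared by all device adapters."""
    def activate(self, subscriber_id: str) -> str:
        raise NotImplementedError

class DslamAdapter(DeviceProvisioner):
    def activate(self, subscriber_id: str) -> str:
        return f"DSLAM: port enabled for {subscriber_id}"

class SoftswitchAdapter(DeviceProvisioner):
    def activate(self, subscriber_id: str) -> str:
        return f"Softswitch: line registered for {subscriber_id}"

def fulfill_order(subscriber_id: str, device: DeviceProvisioner) -> str:
    """Order-integration layer: identical for every device type."""
    return device.activate(subscriber_id)

results = [fulfill_order("sub-42", d)
           for d in (DslamAdapter(), SoftswitchAdapter())]
```

Adding a new device type means writing one new adapter; the order layer above it never changes.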
Service delivery can be decomposed and integrated using standard architectures such as the IP Multimedia Subsystem (IMS) for IP services or a standard software delivery platform with OSS integration. Once more, the key is management of the interfaces.
Break It All Down
Once the three conceptual layers are known, each layer needs to have the integration points identified and interface standards defined. Strict compliance is critical. It would be unacceptable for one development team to violate the standards in order to “meet a schedule.” Development groups can bite the bullet now and absorb the cost of conforming to a standard integration strategy or the same groups can continue down the silo path and take the bullet between the eyes. The competition is definitely heading toward an integration strategy path. The choice is clear.
To succeed, a service provider must follow a logical deconstruction process that is manageable and allows maximum use of development resources. One such process involves the definition of an Integration Strategy. An Integration Strategy initially focuses on the integration points between high-level functional areas such as provisioning, billing, service delivery, and routing. Once the integration points are defined and the interfaces for each integration point fully architected, the legacy applications can be decomposed to fit the Integration Strategy model.
The existing applications are divided into distinct categories of function within the Integration Strategy. Using this division of function and the previously documented interface definitions, the developers can focus on reengineering each specific functional category without concern for the other categories. Following this approach, the provisioning functions can be developed independent of the billing functions and the service delivery functions can be developed independent of the core routing functions.
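One way to document an integration point is as a typed message contract. The sketch below assumes two of the functional categories named above, provisioning and billing; the event name and field names are invented for illustration. Once the contract is fixed, each team can reengineer its own side without any knowledge of the other.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceActivated:
    """Contract for the provisioning -> billing integration point (hypothetical)."""
    subscriber_id: str
    service_code: str
    activated_on: str  # ISO date

# The provisioning category produces the contract...
event = ServiceActivated("sub-42", "DSL-PREMIUM", "2006-03-01")

# ...and the billing category consumes it, knowing nothing else about provisioning.
def start_billing(e: ServiceActivated) -> str:
    return f"billing {e.subscriber_id} for {e.service_code}"
```

Freezing the dataclass reflects the strict-compliance rule: the contract is immutable from the consumer's point of view, and any change to it is an explicit, architected event rather than a side effect of one team's schedule.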
Benefits to Service Providers
Short-term cost is no longer the issue — survival is the issue. If the cost is higher to do it right, then the higher cost must be absorbed in the short term.
The benefits to service providers of following standardized, enterprise-level integration architectures are immense:
• Ability to compete with new competitors offering fully integrated services
• Shorter time to market
• Reduced development times
• Application reuse
• Platform for delivering services not yet visualized
• Easy third-party service integration
• Plug-and-play at all levels
• Custom service offerings on a per customer basis with little customization cost
Benefits to Customers
The benefits to customers are similar to those for service providers, but key ones include:
• Movement from one service provider to another with little or no change to existing services
• Transparent integration of diverse services from diverse service providers to a single management structure
• Ability to plug-and-play service providers, not just services
• Evolutionary path into new device capabilities without losing past investment
Using standardized architectures, such as SOA and IMS, is common sense. Evolving legacy applications into these architectures is more expensive in the short term than continuing silo-based development, but the market makes clear that silo development will lead to the extinction of the service provider.
Any time a standard is violated, the cost increases drastically. The key to implementing enterprise level, standardized architectures is management of the interfaces by defining and enforcing an integration strategy. Strict enforcement is critical.
Standardized architectures do not increase the complexity of required management and support activities if the architectures are implemented correctly. In reality, these architectures allow segmentation of responsibilities and the ability to escape from the hub-and-spoke application dependency of the past.
David Croslin is IT Chief Product Architect at MCI, Inc. For more information, please visit the company online at www.mci.com.