Middleware is one of those ambiguous terms that we all think we know, but probably don’t. Middleware actually exists at a “lower level” in a system than most people are aware. It’s not just a bundle of services or a kind of extension to an operating system. In IP Communications, it’s a sort of applications development framework — a set of services for building complex, highly available applications.
Yours truly first heard about middleware back in the computer telephony days of the 1990s, when you actually had to stretch an RS-232 cable between a PBX and a PC, and the “middleware” holding it all together ran under DOS.
Today, we hear a lot about middleware regarding IPTV and HDTV for Video on Demand [VOD]. For example, Kasenna (http://www.kasenna.com) offers a DHTML-based middleware management platform and VOD servers that deliver MPEG-4 capability, setting the stage for HDTV.
Enea (http://www.enea.com) is known for its Element™ high-availability middleware solution for telecom, automotive, industrial control, and medical instrumentation applications. Element provides a suite of middleware services that sits between the operating system and applications. It can run on DSPs, network processors, and 32-bit CPUs. Element provides core services for synchronizing, instrumenting, monitoring, and establishing communications between applications spread across multiple operating systems and processors. Element also provides fault management, network supervision, and shelf management services that make it easy to monitor, repair, configure, provision, and upgrade live systems in the field. Element runs under MontaVista Carrier Grade Linux, Red Hat Enterprise Linux, Fedora Core, and CentOS (Community ENTerprise Operating System). Element is also compatible with Kontron’s XL8000 AdvancedTCA system and offers interfaces for AdvancedTCA and the Service Availability Forum’s Hardware Platform Interface.
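To make the idea of “services above the operating system” concrete, here is a minimal sketch (in Python rather than the C typical of embedded systems) of the kind of name-based message routing and fault supervision such middleware provides. The `MessageBus` class and every name in it are hypothetical illustrations, not Enea’s actual API.

```python
# Toy, in-process stand-in for middleware-style messaging and supervision.
# All names here are invented for illustration; this is not Element's API.

class MessageBus:
    """Routes messages to services by name, so a sender need not know
    which blade or processor a service actually runs on."""
    def __init__(self):
        self.services = {}   # service name -> handler callable
        self.watchers = {}   # service name -> list of fault callbacks

    def register(self, name, handler):
        self.services[name] = handler

    def watch(self, name, on_fault):
        # Fault supervision: be notified when a named service fails.
        self.watchers.setdefault(name, []).append(on_fault)

    def send(self, name, payload):
        try:
            return self.services[name](payload)
        except Exception as exc:
            # The middleware intercepts the failure and notifies
            # supervisors, instead of each application handling it ad hoc.
            for callback in self.watchers.get(name, []):
                callback(name, exc)
            return None

bus = MessageBus()
bus.register("billing", lambda req: {"ok": True, "req": req})
bus.watch("billing", lambda name, exc: print(f"{name} faulted: {exc}"))
print(bus.send("billing", {"call_id": 42}))  # routed by name, not address
```

The point of the sketch is the division of labor: the application registers handlers and sends by name, while routing and fault notification live in the middleware layer.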
Enea’s VP of Product Management for Element, Terry Pearson, says, “The term ‘middleware’ is not very descriptive. It can mean anything. Historically, the term was associated with what I’ll call ‘enterprise application middleware’. Sun, IBM, and Microsoft were involved and you could apply the term to everything from Web Services to Microsoft foundation classes, and those sort of things. Our new offshoot of middleware is really targeted more at the traditional embedded computing environment, so I would call it the telecom/embedded/medical type of verticals, where you traditionally have an embedded controller and software development type of environment. Software gets developed for that type of environment in a way that’s very different than the way software is developed for an enterprise application.”
“On the embedded side, you typically see real-time operating systems,” says Pearson. “Even Linux has made a very big push into the telecom vertical, in particular — not so much in others. Basically, the software ‘powers’ the infrastructure devices making up the Internet, for example, in the telecom vertical. Looking at the way people have done software development in that environment, things have been pretty stagnant for about 15 years. It’s the same old reinventing the wheel over and over again — go out and find an operating system, maybe buy some protocol stacks and different management applications, but basically rolling a lot of your own stuff.”
“The whole concept of middleware, as defined in the enterprise applications space, has basically been reshaped to apply to the more traditional embedded environment as well,” says Pearson. “It’s the recognition of a set of services that are above the operating system but are very commonly needed and are built over and over again for different targeted embedded environments. It’s all about taking that layer and essentially formalizing it, standardizing it, abstracting it and delivering commercial implementations of it, so that companies such as network equipment providers that are building these pretty complex embedded devices having very high performance and reliability requirements can essentially source it as a commercial product, as opposed to building it themselves over and over again.”
“You may be familiar with the COTS [Commercial Off-The-Shelf ] hardware ecosystem that’s developing in the telecom world around the AdvancedTCA [ATCA] form factor,” says Pearson. “What I’m talking about is like ATCA, but for the software. When you look at ATCA, you see a bunch of equipment providers and hardware suppliers who recognize that they’ve been reinventing the wheel for many years and that it’s a very inefficient model. Defining standard interfaces for different hardware building blocks allows them to mass produce hardware more like a commodity. The hardware guys went from proprietary to commercial form factors having real economies of scale, and leveraged the R&D dollars that otherwise get invested into building those solutions across a lot of customers doing custom work themselves.”
“Now, take those same concepts and apply them to software,” says Pearson. “That’s what’s going on in the middleware space. It’s a set of software that has been very proprietary and developed over and over again, but now it’s subject to the same types of forces where standardization and commercialization are playing a big role. Frankly, that’s what our customers are telling us — they want to get out of the business of building the platform infrastructure, the hardware, the OS, and the middleware. They want to source that as a complete integrated solution.”
“Enea’s middleware product has two major functions,” says Pearson. “One is to simplify the job of the application developer by providing a really rich foundation of services for complex computing environments where you have distributed chassis-based systems with lots of hot-swappable blades and you have to achieve five or six ‘nines’ of reliability and availability. When you start heaping all those requirements onto an application developer, the app gets pretty complicated rather quickly. The developer ends up writing a lot of code to solve all of those problems. What the middleware does in the first case is to provide a rich set of programming services that abstracts a lot of the difficulties of those hard problems found in a telecom environment. That makes it easier to build the applications.”
Pearson continues: “The second major thing Enea’s middleware does is to leverage standard interfaces in the whole ecosystem around ATCA, carrier-grade Linux and the Service Availability Forum [a consortium of companies that decides how to make middleware fault tolerant] to provide a pre-integrated platform. What that means is that equipment manufacturers spend a significant amount of their project time and resources building the base platform, which is the hardware, the OS, and the middleware. There’s another part devoted to application development. The goal here is to deliver a pre-integrated platform across these standard interfaces so that the equipment providers can really focus on the higher layers of the software, or application.”
Middleware plays a major role in high-end systems used by service providers and network operators to bundle together and deliver triple- and quad-play services.
A premier form of such middleware is Microsoft’s Connected Services Framework (CSF), which is both a middleware “glue” and a development environment for common service management components and for automating interaction between existing services. CSF leverages Web service interfaces and a Service Oriented Architecture (SOA), so that operators can aggregate, provision, and manage converged communications services for their subscribers, regardless of the network or device (cell phones, set-top boxes, TVs, PCs, etc.).
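As a rough illustration of the SOA-style aggregation described above, the sketch below composes two component services into a single converged response. The service names, functions, and data are invented for illustration and are not part of CSF.

```python
# Hedged sketch of SOA-style service aggregation: several independent
# component services are merged into one converged response, so the
# subscriber sees a single service regardless of which back end supplied
# each piece. All services and data here are hypothetical.

def presence_service(user):
    # Stand-in for a presence back end (e.g., an IM network).
    return {"user": user, "status": "online"}

def location_service(user):
    # Stand-in for a network-side location component.
    return {"user": user, "zip": "07728"}

def aggregate(user, services):
    """Call each component service in turn and merge the results into
    a single response dictionary."""
    result = {"user": user}
    for service in services:
        result.update(service(user))
    return result

profile = aggregate("alice", [presence_service, location_service])
print(profile)  # one response assembled from multiple component services
```

In a real SOA deployment the component calls would be Web service invocations across the network rather than local functions, but the aggregation pattern is the same.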
Andy Chu, Group Manager of Planning and Strategy for Microsoft Communications Sector, says, “We recently announced Microsoft’s Telco 2.0 vision, in terms of how we see the world of network services and Web services coming together, and how Microsoft is enabling that through three major pillars. First is the screen play, with our mobile device, IPTV, and client software. Second is our service delivery platform, primarily our CSF. The third is bringing services together, ranging from our Hosted Messaging Collaboration services to Live Services to Xbox services. It all helps the telco evolve to the next generation, so we call it Telco 2.0.”
“As part of all this, we also announced the Connected Services Sandbox (CSS),” says Chu. “One of the key promises of CSS is enabling operators or service providers to aggregate Web services together very quickly and efficiently. The ‘Sandbox’ environment is one in which developers, ISVs [Independent Software Vendors], and network equipment providers can contribute applications in a controlled environment. They can play with other Web services components and aggregate and create new services. In return, the operator or service provider can pick and choose what type of services they find interesting, and then they can leverage those services and deploy them into their network environment.”
“The whole idea is to cut down service deployment time for operators,” says Chu. “Traditionally, a so-called ‘fast’ rollout of a service by an operator can take six to ten months, or even a year or more. We want to cut that time by at least a third by doing a lot of the ‘up front’ work for them in the Sandbox environment. We want to bring in the developer community of Microsoft, and the ISV community of Microsoft, to contribute applications in the Sandbox environment.”
“For the Sandbox, we have a number of what we call Foundation Services,” says Chu. “Many Microsoft services have an open API, such as the Live Service, that we actually expose as a component element in the Sandbox. The idea is that if you’re a developer, you can enter the Sandbox environment and see what services are available there. You can see an Xbox or whatever that has the API, and you can create a service based on your own idea or service. You have three ways of contributing to the Sandbox. One way is to contribute source code. Another way is by exposing an API. A third way is by exposing an executable file that can be placed in the Sandbox environment.”
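The three contribution paths Chu describes (source code, an exposed API, or an executable) can be sketched as a tiny registry. Everything below is invented for illustration; it is not Microsoft’s Sandbox API.

```python
# Illustrative-only model of a sandbox registry that accepts the three
# contribution kinds described in the article. Names are hypothetical.

registry = {}

def contribute(name, kind, artifact):
    """Register a service in the sandbox by one of three kinds:
    'source' (code others can read and extend), 'api' (a callable
    endpoint), or 'executable' (an opaque runnable)."""
    if kind not in ("source", "api", "executable"):
        raise ValueError(f"unknown contribution kind: {kind}")
    registry[name] = {"kind": kind, "artifact": artifact}

# A developer exposes an API; another contributes readable source code.
contribute("geo-lookup", "api", lambda query: {"query": query, "hits": []})
contribute("ringtone-mixer", "source", "def mix(a, b): ...")

print(sorted(registry))  # services now discoverable by other developers
```

An operator browsing the registry could then pick a contributed service and call or extend it, which is the collaboration loop Chu describes next.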
“So, if you’re a developer, you may have come up with a sexy service, by aggregating three or four other services,” says Chu. “That aggregated service would reside in the Sandbox, in which other people — including operators and other developers — can play around with it. If some other entity decides they want to add features on top of yours, they can do that too. It’s a sort of collaboration environment for various parties to come together, build on each other’s great ideas, and then deploy the result in an operator environment. So the Sandbox really is a ‘sandbox’.”
“Telecom, in general, is evolving from very monolithic silo approaches into services,” says Chu, “and embracing the new IT Web services of the world, and actually bringing in many components, some of which are not controlled by the operators. The operators want to open up a lot of their existing environment, such as the network environment. One great example of that is British Telecom with the Web 21C initiative, in which they want to expose much of their network assets to third-party developers, so that the developers actually build things on top of those network assets, such as location-based services, since they can expose location elements as a service component. This allows a developer from anywhere, say Russia, to build a service on top of it, which BT can then leverage and deliver to their end users.”
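The Web 21C-style pattern Chu outlines, where an operator exposes a network asset and a third-party developer layers a new service on top, might look like the following sketch. The functions, phone number, and coordinates are all invented for illustration; this is not BT’s actual API.

```python
# Hedged sketch: operator exposes subscriber location as a service
# component; a third-party developer builds a location-based service
# on top of it. All names and data here are hypothetical.

def operator_location_api(msisdn):
    """Operator side: expose the network's knowledge of a subscriber's
    location as a simple call, hiding the underlying signaling."""
    known_locations = {"+15551234": (40.7, -74.0)}  # fake sample data
    return known_locations.get(msisdn)

def nearby_offer_service(msisdn):
    """Third-party side: a location-based service built entirely on
    the operator's exposed component."""
    location = operator_location_api(msisdn)
    if location is None:
        return "location unavailable"
    lat, lon = location
    return f"offer for subscriber near ({lat}, {lon})"

print(nearby_offer_service("+15551234"))
```

The developer never touches the network itself; the operator’s exposed component is the only interface, which is what lets a developer “from anywhere” build on it.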
“A key trend for operators is definitely one moving away from the ‘not invented here’ approach,” says Chu. “Rather, the trend involves truly embracing an ecosystem environment that’s outside of the world of the traditional network boundary, firewall, and what-not. Operators can now add value, because they do control the last mile and they can often guarantee the QoS [Quality of Service]. From their perspective, one can make some interesting points. One is that they can offer hosting type services. If there’s a developer or third party that wants to develop a service, they can go to a BT, AT&T, or a Verizon and have their service reside there in a sort of Web hosting type of environment.”
“A second trend is end-to-end QoS for end users,” says Chu. “If you look at the Web today, or Web 2.0, sometimes if you have too many components working together, the quality still may not be any good. From an operator standpoint, by bringing all of these environments together (having developers and ISVs in their data center, combining their network assets and their network services, and aggregating everything), they can provide essentially end-to-end QoS. That can help differentiate the operator versus some of the other Web players.”
With the telecom industry at a historic turning point, perhaps middleware will be the pivot upon which the world will swing into pure IP Communications. That being the case, it’s likely that we’ll be hearing a lot more about Microsoft’s Telco 2.0 in the years to come.
Richard Grigonis is the Executive Editor of TMC’s IP Communications Group.