
February  2000


Online Exclusive: Scalable Linux Development Environments


Scalable is a buzzword in application development today, and for good reason. A scalable environment adapts to a wide range of requirements, and is therefore more efficient for building applications. The Linux development environment can cover the full spectrum of current telecom software development. Many commercial development platforms today build on open software, while others remain closed and proprietary.

Some commercial environments are targeted at uses that may not suit a software group. High cost is just one reason why, along with unfamiliar tools and licensing restrictions that may require royalty payments for use of the package's libraries. These systems may also carry other restrictions, such as non-compete agreements.

Starting a development environment from scratch is very expensive, but thankfully isn't necessary. We can use the same public domain systems on which many current tool sets are based. Developers of industrial telecom products may buy commercial development environments, or may choose an open source system (such as Linux) and gain the benefit of public domain code and tools.

Increasingly, development environments in the telecom and communications industry are diverging from proprietary systems and closed architectures toward open systems or hybrids. Developers of telephony systems and applications no longer wish to be constrained by an environment that is unfamiliar and sometimes hostile. The modern programmer wants the flexibility to use familiar tools. Because development platforms may undergo many unforeseen changes -- requirements cannot be known ahead of time and may be dynamic -- development environments must be scalable and flexible.

In 1984, the GNU Project was launched to develop a free Unix-like operating system. In 1991, Linus Torvalds developed a Unix-compatible kernel and dubbed it Linux. When Torvalds decided to standardize on the GNU tool set and file system structure, the result was a complete, free operating system. Linux is written and distributed under the GNU General Public License, which means that its source code is freely distributed and available to the general public.

Linux started as a small personal project and evolved over time into today's general-purpose system. Programmers who needed a Unix-like, kernel-based system -- without the high price tag and closed nature of a commercial system -- found Linux useful. The Linux kernel source code has been viewed by tens of thousands of programmers, who had their own opinions and ideas of what should be done with the code and influenced its ongoing development. With this many pairs of eyes searching the source code, bugs would have to be very clever to survive.

The requirements and features of Linux have expanded dramatically, yet it has maintained its stability. Every release of the Linux kernel has thousands of people testing and debugging it -- this makes it an extremely stable environment.

Linux can scale from very small embedded systems, such as a Palm PDA or mobile phone, to very large systems, such as a telco central office switch with thousands of processors.

There are projects within the Linux community to develop telecom applications, Internet telephony software, broadband communications solutions, and more. The telephony standards of the International Telecommunication Union (ITU), the Institute of Electrical and Electronics Engineers (IEEE), and the Internet Engineering Task Force (IETF) provide guidelines for this development. These groups are generating common applications that operate according to standards, and therefore interoperate with the commercial systems in use. This is peer-reviewed code that can be publicly analyzed and utilized, usually without restriction.

Scalability implies portability. To a degree, you can speed up a process simply by speeding up the hardware. For instance, in a Symmetric Multiprocessor (SMP) environment, you can add more CPUs to work on individual processes simultaneously, and those processes will complete faster. The telecom switch problem of call processing, in particular, has a very small code base and a relatively simple set of machine states that benefit greatly from many processors. The levels of processor coupling and memory coupling impose differing requirements on an operating system, but the same kernel design and architecture is used across a wide range of processors. This is very good for portability of programs and programmers: a program can be developed, a model can be built to prove a concept, and then a full version can be implemented within similar structures.

The open software groups, and there are several, all share the goals of portability and functionality. In the early 1980s, the Air Force conducted studies that found 60 percent of the cost of a program was maintenance -- work done after the application was "complete" and delivered. It is doubtful that the figures have improved since then for most systems, because as the technology improves, we ask it to do more.

What was acceptable in terms of delivery time and life span before seems like a luxury today. A board or system sold today almost always becomes obsolete and is replaced more quickly than those from years ago. That said, the applications running on those systems might remain unchanged. Imagine using a 1969 workstation today! Laughable, perhaps? Not really. Think of your accounting department's code or source management systems, which may be very old. The hardware running these ancient programs is replaced, but the same software often remains in use. So, which system element is more durable: software or hardware?

Programmer Portability
Using an open development environment means programmers can stick with one interface. They don't need to learn a new model of interfaces each time they wish to move to another processor or borrow code from somewhere else.

Linux contains a great many programs: drivers, file systems, and user interfaces. The tools used are similar and independent of the hardware. Since the same commands perform the same functions, retraining for similar tools and paradigms is minimized. When the time you spend learning the interface is diminished, the time you have to solve "real" problems is increased.

Hardware Portability
Hardware is really only an accessory for software, after all. So, moving (porting) software from one box to another becomes easier. Many of the tools you use will move from your workstation to a server with little effort. Tools that require license servers or have other system dependencies may be more difficult.

Closed software groups, if you can call them that, are the folks with proprietary standards and partially hidden systems. For portability, they may publish an interface. Apple, for example, has a published interface, yet all of the details internal to the system are closed. Companies that try to enforce and market proprietary systems may do so to protect their position with their customer base. They often wind up spending more on marketing and legal work than on engineering and development.

Current open software mechanisms build on other open software development, so as development continues, it adds to the mass of existing software. Take this example using the open model: if you are using a tool and need it to do something it doesn't currently support, you can publish a request for assistance on the Internet, and accept or reject the modifications suggested by the public. If you're interested in a less collaborative route, several companies that specialize in maintaining open software sources will modify them to suit your needs, for a fee. This modified code could be reintroduced to the open world, if you wish, and you would then have public assistance in maintaining it.

In this same situation in the closed model, you might need to hire more people or hire a software company to write it for you. You're now the proud owner of a one-off tool from which no one else will benefit, and for which you paid full development costs. Then, you have to add in maintenance costs, which can be very expensive.

The open model may require that you donate some of the developed code, but you only pay a small percentage of what you otherwise would for your complete system. The savings can be spent on marketing your final products, or on additional tools. This model of development is in its infancy, but should continue to grow as the use of open source development grows.

Telecom systems are complex enough, without adding the complexity of closed development environments. The ability to build telecommunications applications on open source development systems benefits both the programmer and the customer in time to market, cost, and application scalability and stability.

John K. Stevenson is a systems programmer for Motorola Computer Group. Motorola Computer Group is the leading supplier of embedded computing platforms to OEMs for use in telecommunications, imaging, and industrial automation applications worldwide. MCG provides best-in-class solutions by combining its advanced design engineering capability with responsive, world-class manufacturing operations.
