The Stupid Network versus Intelligent Network debate rages on.
Ostensibly, the debate is about creating a new kind of network, a network unencumbered by
outdated assumptions; a network capable of starting afresh, free to explore new paradigms.
And yet the tendency to define the new (or Stupid) network as something in opposition to
the old (or Intelligent) network admits much that is itself dated.
If the frustrations occasioned by the existing network are old, so too are the
expedients meant to compensate for the existing network's limitations. So, if the
Stupid Network were no more than the ultimate compensation for the old network, how new
would it be, really?
COUNTERPARTS, IF NOT COMPLEMENTS
The partisans in the Stupid versus Intelligent debate tend to fall into two camps. In
the first camp, we have people with the user's perspective; in the second, people
with the provider's perspective, experts practiced in designing, building,
maintaining, and regulating networks.
Once, the most compelling example of networking was the public switched telephone
network (PSTN), and experience with the PSTN indicated that designing, building, and
running a network required a fair amount of ingenuity and large amounts of capital. Data
communications were provided through either open or proprietary solutions. Carriers and
service providers made substantial investments to deliver successful and ubiquitous
services. They had to design and build the switching and transmission infrastructure, as
well as provide the operations, management, administration, and provisioning services
needed to keep it running.
It was difficult, time consuming, and expensive to introduce new services. Furthermore,
the network provided only basic transport and telephony services. Anything beyond this was
up to the users to provide between themselves.
Not surprisingly, providers and users both found this state of affairs frustrating. The
network evolved, however, because processors and memory became more powerful, and digital
technologies advanced over analog technologies. New bandwidth became available; new
protocols were defined. Services running on switches moved to platforms separate from the
switches. This was the beginning of the Intelligent Network.
At the same time, people using the network began bundling technology to create new
services at the desktop based on the assumption that the network provided basic transport
and telephony services. This was the beginning of computer telephony.
To date, progress has been slow on both the Intelligent Network and the computer
telephony fronts. A thorough examination of why this should be so is beyond the scope of
this article; however, it may help to keep the user's perspective in mind, to
consider, from the user's point of view, what makes applications and services
attractive and successful. Basically, users want applications and services that are easy
to use, widely and readily available, inexpensive, and reliable. If neither the
Intelligent Network nor computer telephony has produced a "killer app," it may
be because neither has yet delivered an application or service that meets all these criteria.
ENTER IP: CONVENIENCE OR CHALLENGE (OR BOTH)?
Whenever a new technology comes along, it can stimulate at least two kinds of
responses: It can be seen as an elaboration on an existing pattern (which can be
convenient), or it can be seen as something with the potential for elaborating patterns of
its own (which can be a challenge). Occasionally, both responses are justified, as is the
case with the Internet.
The Internet, unlike the PSTN, is based on connectionless, packet-switched protocols.
These protocols are known collectively as the Internet Protocol (IP), which is independent
of the underlying physical media and the protocols associated with these media. IP, thanks
to these properties, essentially expands the traditional scope of the network. Users now
have the option of controlling the network. Infrastructure can grow incrementally and
doesn't require the same initial massive investment as the telephony network.
IP's potential for expanding the scope of the network was described in memorable
terms by Internet essayist David S. Isenberg, who introduced the idea of the Stupid
Network in his essay, "Rise Of The Stupid Network: Why The Intelligent Network Was
Once A Good Idea, But Isn't Anymore. One Telephone Company Nerd's Odd
Perspective On The Changing Value Proposition" (www.isen.com).
The Stupid Network, in theory, makes the network easier to use with features and
optimizations extrinsic to the network. The Stupid Network may allocate more bandwidth,
memory, and processing power to give users the ability to monitor and control the network.
There is also no controlling authority such as a carrier or service provider, and the
Stupid Network is broadly specified.
The Stupid Network, in its emphasis on the user's role, and its tendency to
diminish the service provider's role, superficially resembles computer telephony. Or,
it at least seems to be in sympathy with the "do it yourself" spirit often
expressed by traditional, desktop-oriented computer telephony. But is the Stupid Network,
or any other IP-inspired scheme, no more than an opportunity to replay an old debate,
albeit on a grander scale? The answer may depend on your point of view.
DIFFERENT OUTLOOKS, DIFFERENT GRAPHS
All network models, including the Intelligent Network, computer telephony, and
the Stupid Network, are really different graphs using the same basic network
elements, such as physical and virtual nodes and connectors. Physical nodes are real
devices and equipment; virtual nodes consist of more abstract elements such as programs
and their information. Connectors are the channels, such as fiber, radio, or even the
interprocess communication (IPC) mechanisms of an operating system, that enable
communications between the nodes.
In any of these networks, the development and deployment of a service depends on the
relationships between the nodes and connectors. Two graphs can represent these
relationships. One graph represents where these components reside physically (that is,
customer premises, access exchange, carrier, service provider, etc.); the other graph
represents where the components reside virtually (that is, what are the interactions
between components and who controls and monitors them). The physical and virtual graphs
for the same service can be very different from each other, although each is correct.
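The two-graph idea can be sketched in code. In this hypothetical Python sketch (the node names, sites, and controllers are illustrative assumptions, not a real topology from the article), each node records both where it physically resides and who controls it, and the same set of elements projects onto two different, equally correct graphs:

```python
from collections import defaultdict

# Each node records where it physically resides and who controls it.
# Names and attributes here are illustrative, not a real deployment.
nodes = {
    "desktop-app":   {"site": "customer premises", "controller": "user"},
    "edge-switch":   {"site": "access exchange",   "controller": "carrier"},
    "name-server":   {"site": "service provider",  "controller": "service provider"},
    "voice-gateway": {"site": "carrier",           "controller": "carrier"},
}

# Connectors (channels) join pairs of nodes, regardless of who owns them.
connectors = [
    ("desktop-app", "edge-switch"),
    ("edge-switch", "voice-gateway"),
    ("desktop-app", "name-server"),
]

def group_by(attribute):
    """Project the same node set onto one of the two graphs."""
    groups = defaultdict(list)
    for name, attrs in nodes.items():
        groups[attrs[attribute]].append(name)
    return dict(groups)

physical_graph = group_by("site")        # where components reside physically
virtual_graph = group_by("controller")   # who controls and monitors them

print(physical_graph)
print(virtual_graph)
```

The point of the sketch is that nothing about the nodes or connectors changes between the two views; only the grouping attribute does, which is why both graphs can be correct at once.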
The people that design, build, use, run, and regulate the network have different
requirements that drive the organization of these graphs. During the early history of the
network, the people who designed, built, and ran the network were responsible for putting
these pieces together. Although this still happens, more users are becoming involved in
these activities. Not all users, however, will be able, or will want, to have this sort
of involvement, whether because of limited experience or limited knowledge.
In the future, as feedback mechanisms such as collaborative filtering are introduced
and the semantic content of information becomes available through languages such as the
Extensible Markup Language (XML), the network itself will be able to organize information
and relationships between nodes.
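What "semantic content" buys the network can be shown with a minimal sketch. In this hypothetical example (the element names are made up for illustration, not a real schema), a service listing marked up in XML names its own content, so a program can organize it without human interpretation:

```python
import xml.etree.ElementTree as ET

# A hypothetical service listing with semantic tags; the element
# names are illustrative assumptions, not a standardized schema.
listing = """
<service>
  <name>voice-mail</name>
  <provider>example-carrier</provider>
  <reach>global</reach>
</service>
"""

root = ET.fromstring(listing)

# Because each element names its content, software (or the network
# itself) can extract and organize the fields mechanically.
record = {child.tag: child.text for child in root}
print(record)
```

The same text rendered as plain prose would carry the same facts but give a program nothing to organize by; the tags are what make the information machine-usable.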
PERPETUAL DECONSTRUCTION AND CONSTRUCTION
The common thread during the history of the telephone network and the Internet is that
as simple elements and concepts cross thresholds of complexity, they organize and become
new abstractions. These new abstractions absorb the complexity of the previous elements
and become the foundation for future services. At some point, these abstractions typically
become insufficient and quite often deconstruct and then reconstruct with new elements and
concepts that have since appeared to become, yet, new abstractions.
As an example, the future integration of Internet telephony with telephony services
will require both the further deconstruction of Intelligent Network and computer telephony
platforms and reconstruction with Internet elements such as servers, gateways, and
gatekeepers, as these various platforms move toward a unified service architecture.
CONCLUSION
Someday, the telephone network and the Internet will become one network, called the
Network or the Net. The definition of this Network, whether stupid, intelligent, or
something else, will depend on the perspective of the person talking about it.
Intelligence, that is, the programs and information, will reside in many different
places across the network, including the customer premises (either at the desktop or
within a data center), at an edge switch, or within service platforms. If the
intelligence physically resides with the user, it is the so-called Stupid Network. If it
resides with the carrier or service provider, it is the so-called Intelligent Network. In
either case, the network will use similar network resources.
The stupid and intelligent labels do not do justice to the complexity and diversity of
the network. These labels also can create mental boundaries to future possibilities. Some
of the confusion arises from the use of the word network. "Network" is typically
used to describe the physical nodes and connectors monitored and controlled by the
carriers or service providers. If we accept this usage, the Stupid Network is a concept
that makes sense.
If the usage of "network" is broadened to encompass everything between users
of a service, including devices, equipment, programs, or information, then the Stupid
Network is only part of the picture. There are certain applications and services that need
the ability to find people or information on a global scale. This information in many
cases will only be maintained and available from the carriers or service providers and be
part of what has been called the Intelligent Network.
The cycle of network construction and deconstruction is creating a new paradigm for the
development and deployment of network applications and services. Depending on the skills
and knowledge of the user, and depending on the scope of the desired applications and
services, either users, service providers, or the network will organize elements and
relationships to provide the necessary services to reach people and information. At any
time, the network itself will have services that are based on many complementary and
seemingly contradictory concepts. The one thing that is certain is that the network will
never be complete and will be in the perpetual and ongoing process of construction and
deconstruction.
Jeff Lawrence is president and CEO of Trillium Digital Systems, Inc., a leading
provider of communications software solutions for computer and communications equipment
manufacturers. Trillium develops, licenses, and supports standards-based communications
software solutions for SS7, ATM, ISDN, frame relay, V5, IP, and X.25/X.75 technologies.
For more information, visit the company's Web site at www.trillium.com.