Yesterday’s Dreams and Today’s Reality in Telecom

Published in Technology in Society, April 2004


Twenty-five years ago the telecom industry had its vision of the future. There were three dreams that, given the right technology, the industry believed it could achieve: (1) the Dick Tracy wristwatch phone, (2) the Picturephone, and (3) the set-top box for the home television that could access an information store maintained by the telephone company. None of these dreams turned out as planned, and it is interesting to compare the original conceptions with what actually transpired over the last quarter century. Today, having achieved some facsimile of each of these three dreams, the industry seems to have no similar vision looking ahead to tomorrow’s technology.


1. Introduction

The telecommunications industry today is mired in what has been called the “telecom winter”. The industry’s principal source of revenue, the traditional landline voice telephone business, has been steadily declining. Long distance providers are facing overcapacity and the challenge of selling a commodity product into a highly competitive market [1]. Many service providers are carrying a heavy debt for infrastructure equipment that has become too quickly obsolete. Among other factors in this perfect storm, there has been considerable technological disruption.

At the bottom of the industry food chain, research in traditional telecom companies has been sharply curtailed. As engineers look to revive the sagging fortunes of telecom, the question gets asked: what is our vision of the future? Other than the technical answers of greater penetration for broadband access and higher data rates, no one seems to have a good answer to this pervasive question. What is the vision for services, applications, and social gain that telecom can bring?

We were not always without a simple vision for the future of communications. Twenty-five years ago, when Technology in Society began its publication, we had such a vision. At that time there were three prevalent dreams about what we could accomplish with communications if only we had the technological capability.

1. The Dick Tracy wristwatch phone
2. The Picturephone
3. The set-top box to adapt TVs for information access in the home

The intervening twenty-five years have given us the technology to accomplish each of these aspirations of yesterday. But none of them came out as we had envisioned. In the following sections I will give some perspective on each of these aims. In the case of the wristwatch phone, we could say that we have fulfilled the promise almost exactly, though not without both social and technological surprises. The Picturephone, on the other hand, remained stubbornly resistant to universal adoption. Finally, the industry’s concept of information access through telephony failed, only later to become a transcendent force in society through the external imposition of the World Wide Web.

2. The Wristwatch Phone

Dick Tracy first displayed his wristwatch phone in a cartoon published in 1946. That universally familiar picture has had a lasting and unique appeal. It is perhaps like recognizing something buried in your subconscious. “Yes, that’s exactly what I’ve always wanted,” you think. That miniature, portable phone has long been seen as the ultimate aspiration for communications.

Well, we did it. Today, at least in Japan, you can buy from DoCoMo a 4-ounce, wristwatch phone made by Seiko. Except for fulfilling the Dick Tracy image, however, it’s not big news in the rest of the world, because we already have a plethora of miniature phones of all shapes, sizes and fashions. The cell phone, or “mobile”, has become almost the symbol of a modern, connected society.

There is both a technological and a societal story in the wireless revolution. On the technology side the principal issues were the evolution of wireless standards and the irresistible force of Moore’s Law. Twenty-five years ago we already knew how to design cellular telephones. In fact, AT&T had asked the Federal Communications Commission to allocate spectrum for cellular telephony back in 1947. The principles of system design using analog modulation and cells with staggered frequency allocations were well known. What was left to do? We needed cellular spectrum from the government, we needed to convince business leaders that there was a market, and we needed to invest in the cellular infrastructure.

It was exactly twenty-five years ago, in 1978, that AT&T conducted the first trial of a prototype cellular service in Chicago. Then, finally, in 1982 the FCC authorized service, which began officially in 1983. There were no great expectations in those early days. AT&T had commissioned a study that predicted a small market for mobile phones, and one that would soon be saturated. The only people who would want such phones would be doctors and rich people who had to show off the latest gadget. That study serves today as a reminder of the limitations of market studies when the people being surveyed have not had a chance to experience the actual service at scale.

2.1 Standards

As service was first offered in the United States, there was a conference among European national Posts and Telegraphs to address the issue of wireless standards. Incompatible systems were being developed and deployed in the various European states. A working group, the Groupe Spécial Mobile (GSM), was convened to study the problem and come up with a common European standard [2]. That group decided in 1987 upon a digital system, and thirteen European nations signed a memorandum of understanding on its adoption. Limited GSM service first became available in 1991, and two years later there were already more than a million subscribers. The GSM system was an immediate and resounding success.

Meanwhile, the United States went its own way, largely ignoring GSM. Neither the government, nor the equipment vendors, nor the market itself seemed motivated to settle upon a single standard. In fact, there were three standards: the original analog system, AMPS; a newer time-division digital system (marketed in the US as PCS); and a code-division multiple access system, CDMA. These systems were incompatible with each other and with the European GSM system.

How many times I have heard people complain that their US phone wouldn’t work in Europe! Why didn’t we have a single standard? Perhaps we should have, but there are counterarguments. Communications is ruled by standards, and users benefit greatly from the network effects of having a common interconnection. Nonetheless, standards can stifle innovation by freezing progress. There is a constant tension between the need for standardization and the need for innovation. The case of cellular standards is particularly interesting, because there has been a strong desire for a worldwide standard for the next generation of wireless, 3G, and the standards (there are several variations) that have been chosen are based on CDMA. If the US had standardized on GSM in the first place, it is likely that CDMA would never have been allowed to develop.

The issue of how to handle patents in standards is a difficult one. ANSI recommends the RAND (Reasonable and Non-Discriminatory) criterion for the inclusion of patented content in specifications, while the World Wide Web Consortium has required that patents be declared royalty-free. Corporations argue that this gives them little incentive to innovate and allows free-riders to prosper, particularly in nations with lower costs and faster times to market.

2.2 Moore’s Law

Moore’s Law is the central driving force in information technology. It says simply that digital electronics doubles its power and cost-effectiveness every 18 months: smaller, cheaper, and faster. Anyone who has ever bought computer equipment has experienced that steady, guaranteed obsolescence. But while we have an innate sense that technology always improves, we seldom understand and truly believe that progress under Moore’s Law is exponential. Instead, we seem always to think in terms of linear behavior. Exponentials, in contrast, begin slowly and go unnoticed until one day they suddenly become overwhelming.

Moore’s Law predicted exponential progress in digital electronics. It also told us exactly how fast that progress would occur. If we had known about Moore’s Law twenty-five years ago, we could have predicted almost exactly when the Dick Tracy phone would become a reality: sometime around the year 2000. Moreover, looking at the other side of Moore’s Law, we could have been certain that a Dick Tracy phone was impossible before that time. Everything has its time in digital electronics, and cannot be accomplished sooner.
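The arithmetic behind that prediction is simple compounding. A minimal sketch, using the 18-month doubling period as the rule of thumb (the dates are from the text; the doubling period is the conventional figure, not a physical constant):

```python
# Moore's Law as simple compounding: capability doubles every 18 months.
def moores_law_factor(years, doubling_period=1.5):
    """Improvement factor after `years` of exponential progress."""
    return 2 ** (years / doubling_period)

# From the 1978 cellular trial to the year 2000: 22 years,
# or about 14.7 doublings, roughly a 26,000-fold improvement.
print(f"{moores_law_factor(2000 - 1978):,.0f}")
```

The same compounding run backward explains why the phone could not have existed much earlier: just ten years before, the available factor would have been about a hundred times smaller.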

But we did know about Moore’s Law in 1978! Gordon Moore had made his observation, which became a prediction, in 1965 – considerably before the dawn of wireless telephony [3]. However, as an engineer working in the telecommunications field, I can assure the reader that we did not really know or understand Moore’s Law at that time. Intellectually, perhaps we had heard of it, but we neither appreciated its monumental significance nor really believed that it was true. That same statement could probably be made about any year between then and now. Even today, aside from a few futurists such as Ray Kurzweil [4], few engineers have any sense of what Moore’s Law could mean in the future.

Part of our innate difficulty with Moore’s Law is the inability to deal with exponential behavior. But another fundamental difficulty is the fact that Moore’s Law is not a law of physics. It is simply an observation of past behavior, and as they say, past performance is no guarantee of future success. While Moore’s Law has now held true for nearly four decades, there have always been predictions of its imminent demise. If you choose to disbelieve, there are always cogent arguments to back your disbelief. Given the profound implications of the law, choosing to disbelieve has often been the easy choice.

Even though Moore’s Law is not a law of physics, it is stronger than a simple extrapolation, because it has become a self-fulfilling prophecy for the industry. Everyone in the semiconductor industry knows what benchmarks must be met, and they plan and spend accordingly. Indeed, the economics of Moore’s Law for the industry have been a subject of considerable interest.

Finally, I would like to note that the discovery of exponential progress in digital electronics may only be indicative of exponential progress throughout technology. We may only have noticed that behavior in Moore’s Law because we had a concise metric for progress in semiconductors – the feature size in chip design. In other aspects of telecommunications where we have metrics, such as the data rate on fibers, capacity of cellular systems, and growth of the Internet, to cite a few examples, we also find exponential progress. Perhaps this is a law of technology in general.

2.3 Social Consequences

When cellular systems were first deployed, the service providers believed that they would be used for emergency and commercial applications. Instead, the public took over en masse. There was a kind of social lock-in, perhaps akin to the action of a strange attractor in chaos theory. Mobility itself turned out to be essential. Voice quality, the mantra of telephone engineers over the previous century, turned out not to be so important. The idea of being always accessible turned out to be attractive to a large number of people, particularly the youth. It became fashionable to have a cell phone. Then it became essential.

Now a great number of consumers are giving up their landline telephones and relying primarily on their mobile phones. The world has turned on its head. Though we dreamed of Dick Tracy, we had no vision of the revolution it would cause.

3. The Picturephone

In every prediction about the future of communications, throughout the years, the videotelephone has been featured. This vision has been promulgated in books, movies, and television shows. It has always been something about to happen. But it never really has.

3.1 The history of Picturephone

It was forty years ago, in 1964, that AT&T demonstrated the Picturephone at the New York World’s Fair. Visitors could make video calls to tourists at Disneyland on the other coast. Afterwards they were interviewed by marketing people, and presumably they said how much they had enjoyed the experience.

AT&T began the development of Picturephone in the mid-1960s and continued it through the early 1970s. During the development Arthur C. Clarke visited Bell Labs and became enamored of the Picturephone. In 1968 Clarke put the Picturephone into Stanley Kubrick’s movie “2001: A Space Odyssey” [5]. It has become one of the more familiar and enduring images of how we once thought about the future.

In its 1969 annual report AT&T predicted that by 1980 there would be over a million Picturephone subscribers and that revenue would exceed one billion dollars. To realize this dream AT&T spent about half a billion dollars on the development of Picturephone service. The Picturephone employed an analog bandwidth of 1 MHz, which required three wire pairs to the terminal and a special switch adjunct in the central office. This transmission capacity was several hundred times that needed for a voice telephone call.

In 1970 the first commercial Picturephone service was offered in downtown Pittsburgh at a cost of $125 per month. And that is just about the end of the story of Picturephone; shortly thereafter it was judged a market failure.

3.2 Why Picturephone failed

In retrospective writings about Picturephone, several reasons are frequently cited for its failure. Chief among these reasons are: that it was too expensive, that no one wanted to be seen in his or her pajamas, and that there was little reason to be the first to own a Picturephone when there was no one else to call. In fairness to the concept it should be mentioned that there were also regulatory restrictions on its introduction, but the two most important issues were the small number of other Picturephone owners and the questionable value of video itself.

The value of network connectivity according to Metcalfe’s Law (network externalities in economic theory) grows with the square of the number of users, since each user’s utility is proportional to the number of potential interconnections [6]. For large networks this means great value for each user; for small networks there is very little value for anyone. Getting started is really hard. A good contrasting example is the case of wireless telephony just discussed. Since new wireless users could instantly communicate with a whole universe of wired users, there was no barrier of network effects to stop the penetration of cell phones. There was no such backward compatibility in the case of Picturephone.
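A toy calculation makes the cold-start problem concrete. The quadratic form below is the standard statement of Metcalfe’s Law; the specific user counts are illustrative:

```python
# Metcalfe's Law: with n users, each can reach n - 1 others,
# so the number of possible user-to-user connections grows as ~n^2.
def metcalfe_value(n):
    return n * (n - 1)

# Growing the user base 100-fold multiplies connectivity ~10,000-fold.
small = metcalfe_value(10)     # 90 possible connections
large = metcalfe_value(1000)   # 999,000 possible connections
```

This is why a Picturephone network with a handful of subscribers offered almost no value to anyone, while a brand-new cell phone could immediately reach every wired telephone in existence.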

As an early user of Picturephone myself, I would like to add another consideration. My Picturephone was on my desk at work, and since my appearance had to be presentable in that environment, there was never a question of being seen in anything less than full attire. There was even a “privacy” button on the Picturephone, but no one in my experience ever pushed that button. There was a fear that your caller would believe that you had something to hide.

Instead of the worry about privacy, my own concern was that the Picturephone demanded too much of my attention. It was intrusive. While on the telephone I could doodle or even type while talking. On the Picturephone I had to look straight ahead at the speaker. It was tiring. Contrast this with the usual use of the cell phone, which is the ultimate in multitasking. People do everything else while talking on their cell phones.

Most troubling about video telephony, though, is the realization that video seems to offer little extra value over audio, in spite of requiring hundreds of times more bandwidth. A picture may be worth a thousand words, but in ordinary face-to-face conversations, the information content is almost all in the words. There is, indeed, a nuance in the video, but is that nuance worth the cost and intrusion of the video? We still don’t know, but it seems not to be the case.

3.3 Video comes to the Internet

As in the case of the wristwatch phone, progress in digital electronics has made the video telephone relatively inexpensive. There has also been significant progress in both video and speech compression, and with the infrastructure of the Internet, there is now little barrier to pervasive video telephony.

The history of video on the Internet has been most curious. The first “netcam” is said to have been the Trojan Room Coffee Pot Cam in Cambridge, England in 1991. A group of scientists focused a camera on their coffee pot and made the periodic images available on their network. Later, this was made available on the Web, so that anyone could remotely view the state of the Cambridge coffee pot.

The wonderful thing about the Internet is that anyone can try experimental things and see what happens. You don’t need to prove a business case to try a new service. In 1995 engineers at Netscape put a camera on their aquarium and made the images available as the Amazing Netscape Fish Cam. (It’s still there.) The fish cam drew about 90,000 web surfers a day and was featured in The Economist.

Webcams started to proliferate and were aimed at all kinds of pointless, amusing, interesting, and useful things – blank walls, turtles, Antarctica, highways, Mars, and so forth. There was no lack of imagination in their employment, and directories like Yahoo began keeping listings of webcam sites.

Among the early sites available on the Internet were webcams in peoples’ homes that monitored their daily existences. Jennifer Ringley became famous with her Jennicam, getting an astonishing 100 million hits a week, and Steve Mann at MIT wore a networked camera on his head for years. The popularity of these webcams may well have engendered the current spate of reality shows on television. It seems that the net got there first.

In 1992 Tim Dorcey at Cornell wrote a program called CU-SeeMe to enable video conferencing on Mac computers. Free downloads of his program were available for several years, and it became widely used. Today’s versions of that program are a commercial product, as is Microsoft’s NetMeeting. With inexpensive webcams and such software, it is now easy to make video calls and to hold video conferences on the Internet.

The proliferation of inexpensive miniature webcams has led to the social quagmire of potentially ubiquitous surveillance. When is it ethical and legal to spy on unaware passers-by? In the UK the government has installed video cameras in many public places, and in the interests of security they are accepted by the populace; in the United States this idea has so far been rejected. David Brin has called this vision of the future the “Transparent Society”, and has said that the only recourse for the public is to “watch the watchers” [7].

So finally anyone can have a video telephone. They seem to be used for everything on the Internet except person-to-person telephony, which of course was the original intention of the Bell System Picturephone.

4. Home Information Systems

Since the time of the fabled Library of Alexandria in the third century BC, the vision of information access has been embodied in the library. With the advent of computer technology the natural inclination was to recreate the library, using the computer to store digitized versions of the books on its shelves. The library was a centralized store for archived information, and computer systems in those early days were thought of in the same way. Expensive mainframes maintained the databases, and users approached them only through the intermediary acolytes who guarded the precious machines, standing like a kind of Stonehenge behind glass walls.

Twenty-five years ago modems were becoming available to send data at relatively low speeds over the voice telephone network, and the idea took hold of bringing the library into the home via those modems. Unlike most libraries, access to this remote library would be a for-profit endeavor of industry. Of course, at that time almost no one had a personal computer. Engineers were focused on something they called “Home Information Systems”. The idea was to turn the home television into a display device using a set-top box that could access a centralized database owned and maintained by the telephone company. Users could access news, weather, and transportation schedules – maybe even an encyclopedia. A number of trials of such systems were undertaken, including a well-publicized AT&T trial in conjunction with Knight-Ridder in Coral Gables, Florida.

Trials of home information systems were deemed successful, since that success itself was ultimately the sole purpose of the trials. However, it soon became apparent that consumers were unwilling actually to pay for such a service, and the idea of accessing information from the home languished for about two decades. During almost that entire time the technical community remained focused on centralized systems where the information was owned and controlled by corporate entities. It was an incredible leap from that thinking to the web pages of today.

4.1 The rise of the Internet

By 1984 the Internet was gaining momentum in the technical community [8]. That was the year that the domain name system was created, and the year that William Gibson wrote Neuromancer, the prescient description of a cyberspace society. There were then one thousand hosts on ARPANET. By 1987 the number of hosts had exceeded 10,000, and was seen to be doubling annually. But as long as the number of users was relatively small, that remarkable growth rate failed to attract the attention of telephone executives. However, if something continues to double every year, it soon becomes significant, and that is exactly what the Internet did for almost 25 years. Only in the last three or four years has the growth of users slowed to a 60% annual rate, while the traffic on the net is still thought to be doubling annually [9]. Today it is believed that there are about a billion users on the Internet.

The proliferation of personal computers and the infrastructure enabled by Internet growth created a fertile environment on which information access could be overlaid. But more than just providing a technological infrastructure, the Internet fostered a world-changing underlying architecture called the end-to-end principle [10]. This profound idea deserves much more attention in the study of system design. Simply put, the center of the network should be transparent and devoid of intelligence, while the periphery should be empowered and intelligent. A well-known paper by David Isenberg went so far as to call this principle the “stupid network” [11]. It is a principle that contradicts traditional thought about economic optimization. The original idea of the telephone network was to support the cheapest terminal in the world – the dial telephone – with an expensive and intelligent central network. Since there are so many telephones, this is clearly the preferable design from an economic standpoint.

In the Internet the intelligence is inverted. Users have expensive, intelligent, and programmable terminals in the form of PCs, while the central network serves only to forward packets transparently from node to node. The original designers of the Internet – whether through design or happenstance – chose a protocol, IP, for the transfer of packets that was minimal in its functionality. Because of the resulting intelligence inversion, users at the periphery are empowered to create new applications without any network functionality or intelligence serving as a roadblock. Instead of a small group of engineers within the telephone companies having the sole power to create new services, millions upon millions of users have been empowered to be creative with their uses of the system. It might have cost more, but everyone benefited enormously from the massive innovation.

4.2 The World Wide Web

User empowerment came startlingly to the fore with the development of the World Wide Web, which was pioneered by Tim Berners-Lee at CERN in 1989 [12]. The standards for hypertext documents and document transfer, HTML and HTTP, were half of the story. The other half was the development of the first browser, Mosaic, at the University of Illinois in 1993. It is surprising to think that the browser is only ten years old, and to look at what has happened in that mere decade since its invention!

The World Wide Web created an open library. Anyone could put books on the shelf of this library, and the library could be visited by anyone in the world at any time. You didn’t have to ask permission of the librarian or anyone else either to place material or to take material. At first there were only a relatively small number of web sites, and the principal directory of these sites was through the “What’s New” listing from the Mosaic team at NCSA (National Center for Supercomputing Applications). That quaintness didn’t last long, however, as web pages multiplied seemingly without bound.

Today no one knows how many web pages are extant. Google searches more than three billion documents, and this is probably about half of the public web. Then there are a great many additional documents hidden behind corporate firewalls. One estimate is that there are about 30 billion pages total on the web.

What an incredible resource! Once we dreamed of making the Library of Congress accessible on the net. As work on digital libraries progresses we hope one day to achieve that goal, but in the meantime we have something out there in the World Wide Web that far surpasses the Library of Congress in its size, scope, and volatility. It differs palpably from our original conception of an on-line library, and it is shocking to think that we once believed that the only way to provide information access was to have centralized stores. No one ever dreamed that the users themselves would provide the documents that would be accessed. Now it seems obvious, but it has been a truly monumental social discovery.

4.3 Reaching for wisdom

With the World Wide Web we have an enormous library of ever-changing material. But what does it all mean? With the vast number of web pages (and even the adjectives “enormous” and “vast” seem inadequate) comes the concomitant pollution of useless, wrong, misleading, outdated, and illegal material – just to mention a few categories of info-junk. How do we rise above the muck and achieve wisdom? In 1934 T. S. Eliot memorialized this problem in “The Rock”:

“Where is the Life we have lost in living?
Where is the wisdom we have lost in knowledge?
Where is the knowledge we have lost in information?”

A library obviously needs something more than books on the shelves. It needs a catalog and it needs librarians, and if we are to achieve wisdom, it needs even more than that. The first step is an intelligent card catalog, and that is what the first search engines tried to emulate.

The earliest search engines on the Internet pre-dated the World Wide Web. Archie and Gopher enabled the finding and transfer of documents from selected Internet addresses. In the early days of the web, Matthew Gray’s World Wide Web Wanderer was the first engine to crawl the web compiling a list of reachable pages. Soon the web was full of spiders or robots that went from site to site finding what was out there. In fact, all this crawling noticeably slowed the web in the early days, before rules were agreed upon for the behavior of robots.

The subsequent evolution of search engines had users quickly jumping from one favorite system to another. People would ask each other what search engine they used, and word-of-mouth would spread the news about the superiority of the latest entrants in the sweepstakes. In 1994 we had Yahoo from Stanford and Lycos from Carnegie Mellon. The next year it was AltaVista from Digital Equipment, and then HotBot from Berkeley, among others. Today about 75% of searches are handled by Google, which was developed by Larry Page and Sergey Brin at Stanford in 1998.

Google goes beyond the metaphor of a library by incorporating a quality judgment in its ordering of responses, using a page rank algorithm whose metric depends on which pages link to which other pages, and on how important those referring pages are judged to be. Currently there are about 200 million searches a day on Google, so it is obvious that, at least for now, users value its particular judgment of quality.
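The essence of the idea can be sketched in a few lines. This is a toy version of the page-rank notion just described, not Google’s production algorithm; the damping factor, iteration count, and three-page graph are purely illustrative:

```python
# Toy page rank: a page's score depends on which pages link to it
# and on how important those linking pages themselves are.
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal scores
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:                             # split p's score among its links
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
        rank = new
    return rank

# Tiny web of three pages: A and C both link to B.
ranks = pagerank({"A": ["B"], "B": ["C"], "C": ["B"]})
```

In this tiny web, B is linked to by both A and C and so ends up with the highest rank, even though every page has the same number of outgoing links.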

Looking forward, we need better ways for search engines to conduct a dialog with users, and better, unbiased judgments of quality. Moreover, there is a difficult social issue in the ever-changing nature of the web. Should we preserve its entire history, or is it better to let the past disappear gracefully? Currently the Wayback Machine at the Internet Archive is attempting to save the daily content of the web, and the British Library has sought and been granted permission to archive the web. There are obviously thorny issues of copyright and privacy in such systems.

Remembering our early focus on Home Information Systems, what has happened has gone far, far beyond those naïve dreams. Even so, we have the strong feeling that this revolution has only just begun.


5. Conclusion

It is almost impossible now to recreate the context for the vision in telecommunications that we had twenty-five years ago. At that time the Apple I computer had just been introduced. Bill Gates had just left Harvard to form Microsoft. The Internet protocol was just being written, and there were only a few hundred hosts on the ARPANET.

Nonetheless, telecom had its vision, and it had the resources and the will to realize that vision. Of course, history has a way of making yesterday’s visions look misguided, and that certainly was the case for telecom. Yet today there is no compelling vision of the future, there are no resources available to realize a vision if we had one, and the industry has been split into so many competing entities that there would be no collective will to implement a vision. Today the power to dream the future lies largely outside the traditional telecom industry. Perhaps some teenager, like a Shawn Fanning of Napster fame, has the future in his school notebook now.


[1] Noam E. Too Weak to Compete. Financial Times, July 19, 2002.
[2] Scourias J. A Brief Overview of GSM.
[3] Moore G. Cramming More Components into Integrated Circuits. Electronics, April 19, 1965.
[4] Kurzweil R. The Age of Spiritual Machines. Penguin, 2000.
[5] Clarke, Arthur C. 2001: A Space Odyssey. Roc, Reissue 2000.
[6] Gilder, G. Metcalfe’s Law and Legacy. Forbes ASAP, Sept 13, 1993.
[7] Brin D. The Transparent Society. Perseus, 1999.
[8] Dyson, G. Darwin Among the Machines. Perseus, 1998.
[9] Odlyzko A. Internet Traffic Growth: Sources and Implications. Proc. SPIE, vol. 5247, 2003, pp 1-15.
[10] Saltzer JH, Reed DP, Clark DD. End-to-End Arguments in System Design. ACM Transactions on Computer Systems, Nov. 1984, pp 277-288.
[11] Isenberg D. Rise of the Stupid Network. Computer Telephony, Aug. 1997.
[12] Berners-Lee, T. Weaving the Web. Harper, 2000.