Bits Versus Atoms—The Future of Information Technology



Robert W. Lucky

Telcordia Technologies, Inc.

Red Bank, New Jersey


Earlier this year, I was asked to be on a National Academy of Engineering committee selecting the greatest engineering achievements of the last century.  I thought it was going to be fun, but it wasn’t.  Everyone argued in favor of his own favorite thing and it was fairly acrimonious in many ways.  At first the committee was not going to order the achievements, but I said, “You can’t have a list and not order them.  Nobody would care.”  So in the end, we fought it out and did order them.

The criterion was how much the achievement bettered society.  Number one was electrification.  People pretty much agreed on that, because without electricity you really can’t do much of anything.  People take electricity for granted until there is a power failure, and then you’re thrown back into the last century, and you realize how important it is.  Number two was the automobile, which dramatically changed how and where we live.  Number three was the airplane.  Some argued that a lot of people don’t ride on airplanes, and that therefore it wasn’t something experienced first-hand by everyone.  However, the airplane brought the continents together, and it was such a dramatic invention—the idea of men taking wings—that I think that helped bring it up on the list.  Number four was pure drinking water.  Then electronics, radio and television, the computer, telephone, air conditioning, the highway system, spacecraft, health technologies, petrochemical technologies, fiber optics, and on down the list.

A couple of weeks ago I got a call from CNN, and they wanted to know why the Internet, which was ranked thirteenth, was so low on the list.  They did a couple of interviews with people who said the committee didn’t know what they were doing and that the Internet should have been ranked much higher.  So I had to do an interview and defend a decision that I didn’t believe in, because I had actually voted the Internet much higher.  But the truth is that while there are certainly more than half a billion people on the Internet, with six billion people in the world that’s less than 10 percent, and the growth has slowed.  For many years the number on the Internet doubled every year, and then it grew 60 percent per year for the last two years.  Now we track it every day, and it’s growing about 50 percent a year.  That’s still very significant, but the truth is that most of the world is not connected yet.  However, I have thought about the Internet versus the other things on the Greatest Achievements list, and that leads into what I want to talk about today, that is, the nature of information technology.

Everything else on the list is a big physical thing that took 100 years to develop.  The Internet is not a physical thing at all, and it’s very recent.  With electrification, for example, it took 50 years or so to bring electricity everywhere.  The first electric power plant was in 1882.  The Rural Electrification Act goes back to the 1930s, when electricity was brought to rural areas.  There were automobiles at the turn of the century, and highway systems—even the Romans had highways.  What is the Internet after all?  It’s an idea.  It’s just a set of rules, protocols.  It’s not a thing at all.

Now what kinds of things will be on a list at the turn of the next century?  Are there going to be big things like dams and great bridges?  Last week I went through the computer museum in Silicon Valley at Moffett Field.  (They’re building a new building, which will open three years from now.)  We were looking at all the old computers, such as the Sage system, the Stretch computer, the Enigma machine, and so on, and someone said, “Where’s the software?”  How do you show software in a museum?  And yet that is what it’s all made of.  People gather round the old vacuum tubes and the old core memory, and they look at all this stuff and say, “Wow!  That is neat.”  But the real stuff isn’t shown there at all.  Perhaps the list 100 years from now will contain not big things at all but virtual things—the ideas, the concepts, the software, whatever form that might take in the next 100 years.

Back when I was a kid in Pittsburgh, I grew up on a dead-end street in one of the suburbs.  My dad went off to work every day, and I had no idea what he did.  The only adults who I saw working were the carpenters building houses along my street.  At the end of the street was a big farm, and I saw the farmers working away and growing things, and I thought, “That’s what adults do.  They make things, they grow things.”  Then one day after I grew up, I looked around and realized that I didn’t know anybody who makes or grows anything.  How do we get away with this?  I walk down the halls of my company and other companies I visit, and I see silent rooms of people staring at CRTs and say, “What are they doing?”  What is the nature of work today?  Some of them look like they’re sleeping!  The whole nature of what we do and the way we do it has changed.

Last fall I was commissioned to write a paper for The New Republic.  Apparently, the editors got into an argument and said, “Look.  What makes you think information technology is going to last?  There are lots of promises being made about information technology and how it’s going to shape the economics of the future.  But 100 years ago, those were the things they were saying about the railway.  The railway was going to determine where commerce went, it was going to shape the nation, it was going to determine who made money and who didn’t, and look, today, who cares about railroads anymore?  What makes you think that information technology isn’t going to be the same kind of thing?”  Naturally, I had to defend this and say, “A hundred years from now information technology will still be big.”  But who knows?  We don’t have any clue what it’s going to be a hundred years from now.

I thought about those trains.  You see, a giant, powerful train is the epitome of the physical world.  It throws out steam; the feeling of power, weight, force, and mass is there in the railroad.  Information technology, on the other hand, is nothingness personified.  It weighs nothing, is created from nothing, and is indestructible in many ways.  The train is not that much different from what it was 100 years ago.  How much better can it be?  How much faster can it go?  A train is subject to the rules of physics.  Information technology, on the other hand—what rules is it subject to?  We don’t know.  Is it infinitely expandable?  It doesn’t take any energy to create information.  Information is nonrivalrous—I give it to you and I still have it, which is a troublesome thing I’ll return to later.

A couple years ago I visited Microsoft in Redmond, and one of their executives took me into their company store.  It’s about the size of a convenience store, like a 7-11, and they have a rack that has all their products, and they sell them to their employees for something like $19.00.  I told the executive I was with, “I wish I was an employee so I could buy that product at this price.”  He nudged me and said, “We still make a profit.”  That is because those are empty boxes that they are selling!  This company at that time had the largest valuation of any corporation on earth, and they sold empty boxes.  This is scary.  In fact, I have to tell you, I met with Bill Gates, and he was predicting all the things that would happen, and I wrote them all down.  I have them somewhere, and they are all absolutely wrong.  I wish I could be wrong like that.

Here is another story that is indicative of the way the world works right now:  Gates wrote the book The Road Ahead.  One of my ex-bosses said that he visited a bookstore in Dallas, and there was a stack of Gates’ book near the checkout counter.  He noticed that affixed to the cover of every book was a sticker that read, “Recently updated to include the Internet.”  The road ahead.

This transformation of society and the way we work from the world of atoms to the world of bits is something that I have a hard time understanding.  One of the things that epitomizes this to me is the death of the Heathkit.  A lot of you are too young to have built one.  I gave a talk in New Orleans to heads of about 300 electrical engineering schools, and I showed pictures of old Heathkits and asked how many had built one.  Every single person had.  There was something about working with your hands and building physical things.  What happened to Heathkits?  They just don’t exist anymore.  I’ve written a column for many years, and I’ve written a couple on Heathkits.  I got more mail on those than anything else.  People say “They took it away from me.”  This sense of loss comes from having been able to create a physical thing with your own hands, understand it, and take pride in it, and now all that is gone.  The beauty of how something works has sunk beneath the complexities.  I hate it now when I have to go out and buy a new computer without caring what’s inside it anymore.  It’s a tragedy to me that it’s sunk into nothingness, it’s disappeared in the misty veils of complexity and microcircuits, and it’s no longer a physical, tangible thing.

Now, how does this new world work?  There are a couple of laws I think about and try to understand.  They’re familiar to you, but you probably don’t understand them either:  Moore’s Law and Metcalfe’s Law.  Now, everybody knows Moore’s Law.  It is incredible that progress in semiconductor technology has been at a constant exponential—a doubling of effectiveness every 18 months for 25 years.  Why?  Why is it exponential, and why is the period 18 months?  To me this is critical, because progress is balanced on the precipice of chaos or stagnation.  Think of a world in which the doubling period of Moore’s Law was only 6 months.  It’s bad enough now that the computer you buy is obsolete in a couple years, but imagine if it were obsolete so quickly that there was no stability in the world.  Or think of a world where Moore’s Law stopped, and computers stopped getting faster.  You can see, then, vastly different scenarios of how the world would work if Moore’s clock ran at different rates.  On one hand you would have stagnation, and your computer would be like an antique, perhaps coming in a mahogany case.  On the other hand you would have utter chaos—maybe the software wouldn’t last because the platforms would disappear so quickly.  Why that particular constant?  Engineers and scientists deal with exponentials all the time, but I really feel that none of us has an intuitive feel for an exponential.  Many times in the past I have made serious management mistakes by underestimating Moore’s Law and the power that doubling upon doubling upon doubling has.
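To get a feel for how much the doubling period matters, the arithmetic is easy to play with; here is a minimal sketch (the 6-, 18-, and 36-month periods are just illustrative choices, not figures from the talk):

```python
# Growth factor accumulated over a span of years for a given doubling period.
def growth_factor(years, doubling_months):
    """How many times over a quantity multiplies when it doubles every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

for months in (6, 18, 36):
    factor = growth_factor(25, months)
    print(f"doubling every {months:2d} months -> x{factor:,.0f} over 25 years")
```

At 18 months the factor over 25 years is on the order of a hundred thousand; at 6 months it would be on the order of 10^15, the chaos scenario; at 36 months, only a few hundred, edging toward the mahogany-case scenario.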

There’s an old parable about the inventor of chess.  The emperor of China wanted to reward the inventor of chess, so he asked him what he’d like for his reward.  The inventor said, “I’d like a single grain of rice in the first square of the chessboard, and you can double it in the second and third and so forth.  That’s all I want.”  The emperor said, “Well, sure.  No problem.”  Is it a problem?  You all know mathematically that it’s a problem, but do you know intuitively how much of a problem it is?  I did a little Web research on this.  At the University of Maryland they have a physics experiment for freshmen where they give them rice and they say “You do it.  It’ll give you a feel for how fast exponentials build up.”  It is quite easy in the first and second squares and so forth.  In fact, the first half of the chessboard is really pretty easy.  When you get to about square number 30, it takes a wastebasketful of rice, and when you finish the first half of the chessboard at square 32, it takes a closetful of rice.  From then on, you’re in trouble.  By the time you get to the last square, the amount of rice would cover the entire area of the earth.  Now here’s the scary thought.  So far, since the invention of the transistor in 1947, with doubling of power every 18 months, we have exactly covered the first half of the chessboard.  The second half awaits us right now.
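The parable’s arithmetic is easy to verify; a small sketch:

```python
# One grain on square 1, doubling on each subsequent square of the chessboard.
def grains_on(square):
    """Grains on a single square; square 1 holds 2^0 = 1 grain."""
    return 2 ** (square - 1)

def total_through(square):
    """Cumulative grains on squares 1 through `square` (sums to 2^square - 1)."""
    return 2 ** square - 1

print(total_through(32))                  # first half of the board: ~4.3 billion grains
print(total_through(64))                  # whole board: ~1.8 * 10^19 grains
print(grains_on(64) > total_through(63))  # the last square alone beats everything before it

# The talk's analogy: at one doubling every 18 months since 1947, the number
# of doublings by the year 2000 is roughly half the board's 64 squares.
print((2000 - 1947) * 12 / 18)            # about 35 doublings
```

The striking property of the second half of the board is that each new square exceeds the sum of everything that came before it.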

So can this continue?  All the papers that are written say that Moore’s Law has to stop, because it runs into quantum limits.  People have been writing these papers for 20 years, and they’ve all been wrong, because some fundamental assumption is found to be wrong.  It could be that the drive behind Moore’s Law has such momentum, perhaps because the largest industry on earth depends on its continuing, that the industry will make it continue.  Moore’s Law is not necessarily a law simply of feature sizes and semiconductors.  Rather, it’s probably a law of functionality, an envelope of a succession of curves, each of which may top out.  The envelope of all those curves, however, continues to go up in functionality.

I have a secret thesis that all of technology is exponential in progress.  The reason we first noticed it with Moore’s Law is because we had a way of measuring the progress in that particular field by the feature sizes in circuits.  It’s hard to think of many other cases where there is a quantitative measure of the progress of technology, but there are some.  Take the capacity of wireless communication or optics—there is a number.  Each of those is also exponential, with different time constants.  With optics it’s 12 months doubling, and with wireless it’s 9 months doubling.  All measurable technologies, it seems, are exponential.  Moore’s Law may be some kind of a law of nature.

There’s an episode in Carl Sagan’s book Contact in which the character played by Jodie Foster in the movie goes to visit the aliens.  Before she leaves them, the aliens say, “All right.  We will answer one question.  What one question do you want to ask to take back to earth?”  She thinks and says, “Do you believe in God?”  The aliens say, “Not as you define it.  But there is something that bothers us.  You have a number, you call pi, 3.14159, and you’ve run it out to thousands and thousands of places.  But our computers are much, much better than yours, and we’ve run it out to millions and billions of places, and something strange happens.  When you get far out, the decimal digits turn to ones and zeros.”  She says, “What does it mean?”  The alien says, “We don’t know.  But there’s a message encoded in the structure of the universe.”  I told this to a mathematician friend, and he said, “Well, it’s an irrational number.  All sequences are in there, so it’s to be expected.”  But I think there’s a mysterious law here of exponential progress in technology.

I think Moore’s Law is so fundamental to what’s happening in information technology that it’s going to continue for another 100 years.  How could it go past the next half of the chessboard?  And if it stops, what happens?  What happens to the industry?  What happens to all the edifices that we have built on the continuation of this growth?  Perhaps, like the train, it stops getting better, and that’s it.  I don’t think so.  I think progress will continue, and I have no idea what this awesome exponentiation will realize as we go ahead.

Let me talk about the other great law, Metcalfe’s Law.  Economists know it as the law of increasing returns, or network externalities, but the idea is that the more people are connected to a network, the more valuable it is.  The value to each user is measured by how many people can be reached out there, so the total value of the network grows as the square of the number of users.  Now, what this means is that a small network has almost no value, and a large network has a huge value.  What it gives you is the lock-in phenomenon of winner takes all.  You want to have the same thing as everybody else.  Examples include the classic case of Betamax versus the VHS standard in VCRs, or Microsoft Word versus WordPerfect.  You don’t want to have something different from other people because it has less value.  There are so many things like that in networks, where everybody wants to have the same thing because that’s where the maximum value is.  Whatever tips the balance and becomes what everybody wants gets a monopoly.  This phenomenon stops a lot of applications.  The videotelephone is a classic example, because if only one person has a videotelephone, it’s useless.  The videotelephone only becomes useful when a lot of other people have it.  I remember back in the 1960s, when the AT&T Picturephone was first developed.  The mathematicians did a study of the rate at which the market would adopt the Picturephone, and they predicted a curve that was very much like Metcalfe’s Law.  They modeled it after the spread of the plague.  The idea is that you don’t want to be the first person on your block to get the plague.  But when all your friends get it, you think about getting it.  The more people have it, the more you’re likely to get it, and suddenly there is this capture effect where everybody has it.
This law of network externality governs so much of the business and is at the heart of the Microsoft trial.  Why does Microsoft have a monopoly?  Is this a natural phenomenon that has to do with networks?

I heard a talk recently by Brian Arthur, an economist at the Santa Fe Institute, who is credited with writing the original paper on network externalities.  He’s got a new law that he unabashedly calls Arthur’s Law, and it is, “Of networks there will only be a few.”  This really goes to the law of network externalities, but the kind of networks he’s talking about are not AT&T versus AOL or anything like that.  He’s talking about customer association-type networks, for example, how many credit cards will there be?  The answer is, only a few, because you want to have the same thing that’s accepted everywhere else.  Arthur’s Law doesn’t apply to everything, but he gives two specific examples that would be familiar to all of us.  One is Amazon.  If Amazon doubles its network, or number of users, it isn’t worth a lot more to the individual consumer, because all they may do is add a few more products to meet the needs of this expanded number of users.  However, if eBay doubles its network, is it worth more to you?  Absolutely yes.  Yahoo is trying to compete with eBay, but the critical mass is with eBay.

David Reed coined another law—Reed’s Law—that says there’s something beyond Metcalfe’s Law.  There are three kinds of networks.  First, there’s broadcast like radio and TV, which we’ll call a Sarnoff network.  The value of that network is proportional to the number of people receiving the broadcast.  Amazon would be this type of network, because people shop there but don’t interact with each other.  Then there’s the Metcalfe’s Law-type network where people talk to each other, for example, classified ads.  Reed said that the important thing about the Internet is neither of those.  The Internet exhibits a third kind of law—where communities with special interests can form.  The thing about communities is there are 2^n of them, so in a large network the value of having so many possible communities and subnetworks is the dominant factor.  He predicts a scaling of networks, starting with small networks having only the Sarnoff linear factor, larger networks dominated by the square factor, and giant networks dominated by the 2^n factor of the formation of communities.
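The three scalings can be put side by side; a minimal sketch (Reed’s count is often quoted with the trivial subsets removed, but the plain 2^n is used here, as in the talk):

```python
# Sarnoff (broadcast), Metcalfe (pairwise), and Reed (group-forming) scaling.
def sarnoff(n):
    return n        # value ~ number of receivers

def metcalfe(n):
    return n * n    # value ~ number of possible pairwise connections

def reed(n):
    return 2 ** n   # value ~ number of possible sub-communities (subsets)

for n in (10, 20, 30):
    print(n, sarnoff(n), metcalfe(n), reed(n))
```

Even at n = 30 the 2^n term (about a billion) already swamps n^2 (900), which is the point about giant networks being dominated by community formation.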

Napster is another example of what’s going on in information technology.  First, it’s an example of the kind of network where winner takes all.  Napster is where all the songs are, so that’s where everybody else is.  If and when Napster goes under, all the little sites won’t be able to replace it, because people won’t find what they want there.  Napster also brings up another property of information, which is troublesome and is going to shape our society in the coming years—the idea that information can be copied perfectly at zero cost.  That flies in the face of so much of what we believe about commerce.  As my friend Douglas Adams said to me, we protect our intellectual property by the fact that it’s stuck onto atoms, but when it’s no longer stuck onto atoms, there is really no way to protect it.  He would like to sell his books at half a cent a page, the idea being that for every page you read, you pay him half a cent.  If you get 20 pages into the book and you say, “This book is really bad,” you don’t pay anymore.  That would eliminate the “copying of information at zero cost” issue that he experiences as an author.  He says people come up to him in the street and say, “I’ve read your book 10 times,” and he says, “Yes, but you didn’t pay 10 times.”

So these are some of the things that trouble me about the future of information technology.  What are its limits?  Will the laws of network effects doom us all to a shared mediocrity?  What will happen to intellectual property and its effect on creativity?  Is it like the railroads, or is this something fundamentally different that will last through the next century?

Whatever the answers are, I’m sure that when we make a list 100 years from now, it’s going to be dramatically different from the list that we made this year.  The big things of the last century – such as spacecraft, highways, the automobile, and the airplane – may not characterize progress in this new century.