In the most recent season of the TV comedy “Silicon Valley,” Richard Hendricks is obsessed with trying to create his dream vision of a decentralized Internet. This theme did not go unnoticed in the print media, as a number of publications, including IEEE Spectrum, noted that there are indeed ongoing efforts to build a decentralized Internet. All this started me thinking about the relative advantages of centralization and decentralization.
Until the 1980s, most communication networks were relatively centralized. In the US, AT&T had a monopoly that was regulated by the government. Connections were dedicated, point-to-point channels, and the attachment of foreign devices (meaning those not manufactured by AT&T’s Western Electric) was forbidden. I remember with embarrassment my testimony at the federal antitrust trial, trying half-heartedly to justify AT&T’s fall-back strategy of allowing such connections only through a Western Electric “data coupler.”
The antitrust trial resulted in the dismantling of AT&T, and at the same time the rise of the Internet unraveled the centralization of the network. The original Internet was designed to be open and decentralized. Packet switching ensured robustness, and the IP protocol enforced open and universal interconnection. The end-to-end principle became a guiding philosophy of network transparency.
After almost a half-century, the plumbing of the Internet is still decentralized, but the overlay of the World Wide Web is not. A handful of giant companies has acquired virtual monopoly control of traffic and commerce. Governments censor the web, control access, and are one of many sources of surveillance. But, I ask myself, is Hendricks’s peer-to-peer network the answer to these ills?
We technologists have some powerful tools for creating distributed systems, including mesh networks, peer-to-peer protocols, cryptography, and blockchain. Ongoing efforts like the blockchain-based Ethereum offer exciting potential. “Imagine facebook without Facebook, twitter without Twitter, and uber without Uber,” raves one review. Of course, this is possible, but it does seem rather unlikely.
In a way, we’ve been there, done that. Napster was a peer-to-peer network not unlike Hendricks’s. For a while it was a supernova on the net, but it had a central directory and someone to sue, and it is no more. BitTorrent, however, also employs a peer-to-peer protocol and has enjoyed substantial use for quite a few years.
Richard Hendricks envisioned his peer-to-peer network storing data on a distributed swarm of cell phones. I’ve been wondering about this — do I want my data stored on random cell phones? I’d have no problem if it were transient data or entertainment files, but what about my family photos? Ten years from now, when they might be wanted, all those cell phones would be gone, and the data probably would have been erased years earlier. The problem with systems implemented with peers is that no one is responsible. Whom do you call when your mesh network is out? Who, ultimately, stands behind a blockchain?
On the other hand, distributed architectures have an engineering appeal. They offer organic growth and resilience, and they are a cradle for democratic usage and control. Sometimes having no one in charge can be a good thing. However, as Larry Lessig has pointed out, regulation is achieved through four means — law, norms, the market, and architecture. We engineers usually control only one of these, and that is something we should always keep in mind.