Jaron Lanier, from You Are Not A Gadget, 2010

The design of the web as it appears today was not inevitable.  In the early 1990s, there were perhaps dozens of credible efforts to come up with a design for presenting networked digital information in a way that would attract more popular use.  Companies like General Magic and Xanadu developed alternative designs with fundamentally different qualities that never got out the door.

A single person, Tim Berners-Lee, came to invent the particular design of today’s web.  The web as it was introduced was minimalist, in that it assumed just about as little as possible about what a web page would be like.  It was also open, in that no page was preferred by the architecture over another, and all pages were accessible to all.  It also emphasized responsibility, because only the owner of a website was able to make sure that their site was available to be visited.

Berners-Lee’s initial motivation was to serve a community of physicists, not the whole world.  Even so, the atmosphere in which the design of the web was embraced by early adopters was influenced by idealistic discussions.  In the period before the web was born, the ideas in play were radically optimistic and gained traction in the community, and then in the world at large.

Since we make up so much from scratch when we build information technologies, how do we think about which ones are best?  With the kind of radical freedom we find in digital systems comes a disorienting moral challenge.  We make it all up—so what shall we make up?  Alas, that dilemma—of having so much freedom—is chimerical.

As a program grows in size and complexity, the software can become a cruel maze.  When other programmers get involved, it can feel like a labyrinth.  If you are clever enough, you can write any small program from scratch, but it takes a huge amount of effort (and more than a little luck) to successfully modify a large program, especially if other programs are already depending on it.  Even the best software development groups periodically find themselves caught in a swarm of bugs and design conundrums.

Little programs are delightful to write in isolation but the process of maintaining large-scale software is always miserable.  Because of this, digital technology tempts the programmer’s psyche into a kind of schizophrenia.  There is constant confusion between real and ideal computers.  Technologists wish every program behaved like a brand-new, playful little program, and will use any available psychological strategy to avoid thinking about computers realistically.

The brittle character of maturing computer programs can cause digital designs to get frozen into place by a process known as lock-in.  This happens when many software programs are designed to work with an existing one.  The process of significantly changing software in a situation in which a lot of other software is dependent on it is the hardest thing to do.  So it almost never happens.


One day in the early 1980s, a music synthesizer designer named Dave Smith casually made up a way to represent musical notes.  It was called MIDI.  His approach conceived of music from a keyboard player’s point of view.  MIDI was made of digital patterns that represented keyboard events like “key-down” and “key-up.”

That meant it could not describe the curvy, transient expressions a singer or a saxophone player can produce.  It could only describe the tile mosaic world of the keyboardist, not the watercolor world of the violin.  But there was no reason for MIDI to be concerned with the whole of musical expressions, since Dave only wanted to connect some synthesizers together so that he could have a larger palette of sounds while playing a single keyboard.
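The tile-mosaic scheme Lanier describes can be made concrete with a short sketch.  The status bytes 0x90 and 0x80 are the standard MIDI codes for note-on and note-off; the helper functions themselves are illustrative, not part of any real MIDI library:

```python
def note_on(note: int, velocity: int, channel: int = 0) -> bytes:
    """Encode a key-down event as three MIDI bytes:
    status (0x90 + channel), note number, strike velocity."""
    return bytes([0x90 | channel, note, velocity])

def note_off(note: int, channel: int = 0) -> bytes:
    """Encode a key-up event (release velocity 0, a common convention)."""
    return bytes([0x80 | channel, note, 0])

# Middle C (note number 60) struck firmly, then released: the whole
# gesture is captured in six bytes, with no room for the bends, swells,
# or vibrato a singer or violinist would produce between those moments.
events = note_on(60, 100) + note_off(60)
```

Everything between the key-down and the key-up is simply absent from the representation, which is exactly the discreteness Lanier is pointing at.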

In spite of its limitations, MIDI became the standard scheme to represent music in software.  Music programs and synthesizers were designed to work with it, and it quickly proved impractical to change or dispose of all that software and hardware.  MIDI became entrenched, and despite Herculean efforts to reform it on many occasions by a multi-decade-long parade of powerful international commercial, academic, and professional organizations, it remains so.

Standards and their inevitable lack of prescience posed a nuisance before computers, of course.  Railroad gauges—the dimensions of the tracks—are one example.  The London Tube was designed with narrow tracks and matching tunnels that, on several of the lines, cannot accommodate air-conditioning, because there is no room to ventilate the hot air from the trains. Thus, tens of thousands of modern-day residents in one of the world’s richest cities must suffer a stifling commute because of an inflexible design decision made more than one hundred years ago.

But software is worse than railroads, because it must always adhere with absolute perfection to a boundlessly particular, arbitrary, tangled, intractable messiness.  The engineering requirements are so stringent and perverse that adapting to shifting standards can be an endless struggle. So while lock-in may be a gangster in the world of railroads, it is an absolute tyrant in the digital world.


The fateful, unnerving aspect of information technology is that a particular design will occasionally happen to fill a niche and, once implemented, turn out to be unalterable.  It becomes a permanent fixture from then on, even though a better design might just as well have taken its place before the moment of entrenchment.  A mere annoyance then explodes into a cataclysmic challenge because the raw power of computers grows exponentially.  In the world of computers, this is known as Moore’s law.

Computers have gotten millions of times more powerful, and immensely more common and more connected, since my career began—which was not so very long ago.  It’s as if you kneel to plant a seed of a tree and it grows so fast that it swallows your whole village before you can even rise to your feet.

So software presents what often feels like an unfair level of responsibility to technologists.  Because computers are growing more powerful at an exponential rate, the designers and programmers of technology must be extremely careful when they make design choices.  The consequences of tiny, initially inconsequential decisions often are amplified to become defining, unchangeable rules of our lives.

MIDI now exists in your phone and in billions of other devices.  It is the lattice on which almost all the popular music you hear is built.  Much of the sound around us—the ambient music and audio beeps, the ring-tones and alarms—is conceived in MIDI.  The whole of the human auditory experience has become filled with discrete notes that fit in a grid.

Someday a digital design for describing speech, allowing computers to sound better than they do now when they speak to us, will get locked in.  That design might then be adapted to music, and perhaps a more fluid and expressive sort of digital music will be developed.  But even if that happens, a thousand years from now, when a descendant of ours is traveling at relativistic speeds to explore a new star system, she will probably be annoyed by some awful beepy MIDI-driven music to alert her that the antimatter filter needs to be recalibrated.


Before MIDI, a musical note was a bottomless idea that transcended absolute definition.  It was a way for a musician to think, or a way to teach and document music.  It was a mental tool distinguishable from the music itself.  Different people could make transcriptions of the same musical recording, for instance, and come up with slightly different scores.

After MIDI, a musical note was no longer just an idea, but a rigid, mandatory structure you couldn’t avoid in the aspects of life that had gone digital.  The process of lock-in is like a wave gradually washing over the rulebook of life, culling the ambiguities of flexible thoughts as more and more thought structures are solidified into effectively permanent reality.

We can compare lock-in to the scientific method.  The philosopher Karl Popper was correct when he claimed that science is a process that disqualifies thoughts as it proceeds—one can, for example, no longer reasonably believe in a flat Earth that sprang into being some thousands of years ago.  Science removes ideas from play empirically, for good reason.  Lock-in, however, removes design options based on what is easiest to program, what is politically feasible, what is fashionable, or what is created by chance.

Lock-in removes ideas that do not fit into the winning digital representation scheme, but it also reduces or narrows the ideas it immortalizes, by cutting away the unfathomable penumbra of meaning that distinguishes a word in natural language from a command in a computer program.

….If it’s important to find the edge of mystery, to ponder the things that can’t quite be defined—or rendered into a digital standard—then we will have to perpetually seek out entirely new ideas and objects, abandoning old ones like musical notes.  Throughout this book, I’ll explore whether people are becoming like MIDI notes—overly defined, and restricted in practice to what can be represented in a computer.  This has enormous implications: we can conceivably abandon musical notes, but we can’t abandon ourselves.

When Dave made MIDI, I was thrilled.  Some friends of mine from the original Macintosh team quickly built a hardware interface so a Mac could use MIDI to control a synthesizer, and I worked up a quick music creation program.  We felt so free—but we should have been more thoughtful.

By now, MIDI has become too hard to change, so the culture has changed to make it seem fuller than it was initially intended to be.  We have narrowed what we expect from the most commonplace forms of musical sound in order to make the technology adequate.  It wasn’t Dave’s fault.  How could he have known?

Trap for a Tribe

The intentions of the cybernetic totalist tribe are good.  They are simply following a path that was blazed in an earlier time by well-meaning Freudians and Marxists—and I don’t mean that in a pejorative way.  I’m thinking of the earliest incarnations of Marxism, for instance, before Stalinism and Maoism killed millions.

Movements associated with Freud and Marx both claimed foundations in rationality and the scientific understanding of the world.  Both perceived themselves to be at war with the weird, manipulative fantasies of religions.  And yet both invented their own fantasies that were just as weird.

The same thing is happening again.  A self-proclaimed materialist movement that attempts to base itself on science starts to look like a religion rather quickly.  It soon presents its own eschatology and its own revelations about what is really going on—portentous events that no one but the initiated can appreciate.  The Singularity and the noosphere, the idea that a collective consciousness emerges from all the users on the web, echo Marxist social determinism and Freud’s calculus of perversions.  We rush ahead of skeptical, scientific inquiry at our peril, just like the Marxists and Freudians.

Premature mystery reducers are rent by schisms, just like Marxists and Freudians always were.  They find it incredible that I perceive a commonality in the membership of the tribe.  To them the systems Linux and UNIX are completely different, for instance, while to me they are coincident dots on a vast canvas of possibilities, even if much of the canvas is all but forgotten by now.

At any rate, the future of religion will be determined by the quirks of the software that gets locked in during the coming decades, just like the futures of musical notes and personhood.


The Only Product That Will Maintain Its Value After the Revolution 

There is, unfortunately, only one product that can maintain its value as everything else is devalued under the banner of the noosphere.  At the end of the rainbow of open culture lies an eternal spring of advertisements.  Advertising is elevated by open culture from its previous role as an accelerant and placed at the center of the human universe.

There was a discernible ambient disgust with advertising in an earlier, more hippielike phase of Silicon Valley, before the outlandish rise of Google.  Advertising was often maligned back then as a core sin of the bad old-media world we were overthrowing.  Ads were at the very heart of the worst of the devils we would destroy, commercial television.

Ironically, advertising is now singled out as the only form of expression meriting genuine commercial protection in the new world to come.  Any other form of expression is to be remashed, anonymized, and decontextualized to the point of meaninglessness.  Ads, however, are to be made ever more contextual, and the content of the ad is absolutely sacrosanct.  No one—and I mean no one—dares to mash up ads served in the margins of their website by Google.  When Google started to rise, a common conversation in Silicon Valley would go like this:  “Wait, don’t we hate advertising?”  “Well, we hate old advertising.  The new kind of advertising is unobtrusive and useful.”

The centrality of advertising to the new digital hive economy is absurd, and it is even more absurd that this isn’t more generally recognized.  The most tiresome claim of the reigning official digital philosophy is that crowds working for free do a better job at some things than paid antediluvian experts.  Wikipedia is often given as an example.  If that is so—and as I explained, if the conditions are right it sometimes can be—why doesn’t the principle dissolve the persistence of advertising as a business?

A functioning, honest crowd-wisdom system ought to trump paid persuasion.  If the crowd is so wise, it should be directing each person optimally in choices related to home finance, the whitening of yellow teeth, and the search for a lover.  All that paid persuasion ought to be mooted.  Every penny Google earns suggests a failure of the crowd—and Google is earning a lot of pennies.

If you want to know what’s really going on in a society or ideology, follow the money.  If money is flowing to advertising instead of musicians, journalists, and artists, then a society is more concerned with manipulation than truth or beauty.  If content is worthless, then people will start to become empty-headed and contentless.

The combination of hive mind and advertising has resulted in a new kind of social contract.  The basic idea of this contract is that authors, journalists, musicians, and artists are encouraged to treat the fruits of their intellects and imaginations as fragments to be given without pay to the hive mind.  Reciprocity takes the form of self-promotion.  Culture is to become precisely nothing but advertising.