Note: I’ve made some podcast appearances recently, where I’ve talked about The Machine War, the Mars Review of Books (available at store.marsreview.org), Urbit, and sundry other topics.
Now back to the book:
Behind the Throttle
In 2021, it would seem that most of us are able to fill out the rest of the story by ourselves. Jobs & Woz, Gates & Ellison, Zuck & Jack. It’s a triumphalist story, in which the computer revolution spreads across the globe, for now anyone can purchase a device, more powerful than the greatest supercomputers of decades past, which fits snugly in the palm of one’s hand.
This is a seductive story, and it has some truth to it. Certainly mass adoption of personal computers has changed the way humanity goes about being human. But there’s another way of looking at the history—as there so often is—which suggests more of a coup than a revolution. To wit: what if the computer revolution was a complete failure? What if, today in 2021, no one actually uses a personal computer?
Think about it. What are the top 5 things you do on your computer? If you’re the median computer user, then the answer is: on your laptop you open up your web browser, and go to one of a dozen or so popular websites; on your smartphone you tap the icon of one of a dozen or so popular apps. None of these activities has all that much to do with the device at your fingertips.
This was apparent as early as the 1960s. In his epoch-defining book Hackers, Steven Levy notes how the use of applications was considered “fine for Users, but it was sort of a waste in the minds of the hackers (meaning, in this case, people who love to program computers). What hackers had in mind was getting behind the console . . . much in the same way as getting behind the console of a plane.” {1}(Steven Levy, Hackers, 19)
This difference—between the “users” and the “hackers”—has always been there; what has changed is that today there are over a billion users. But back in the 1960s, when select groups of people were first discovering the enormous power of modern computers, the main passion was for programming these machines—not simply using them. The magic of a computer was the sense of control it gave the person behind the terminal. While early computers and computer programs may have been complicated, they were far less complicated than life itself. That is, a dedicated hacker could seek to understand all the moving parts in toto, like a modern god counting the hairs on the heads of his newly sculpted golems.
The culture that grew out of this ethos was one in which code was king. This was certainly the case at the computer lab run under the leadership of Marvin Minsky at MIT. As with any community that's passionate about its raison d'être, the early hackers could be totally selfless in the service of the activity they’d come to love—disdaining wealth, love, even basic creature comforts for the sake of the work—and also passionately partisan toward their own way of seeing the world of computing, and the world at large. This ethos was certainly double-edged, but one of its clearly positive aspects was that it encouraged computer usage as something expressive and personal. As Gerald Sussman, one of the early MIT hackers, put it, “the important thing about a program is that it’s something you can show to people, and they can read it, and they can learn something from it. It carries information. It’s a piece of your mind that you can write down and give to someone else just like a book.” {2}(Levy, 112)
A negative aspect was an indifference, or willful ignorance, toward what might happen to non-programmers when they interacted with this artfully constructed code. The canary in the coal mine on this question was an MIT computer science professor named Joseph Weizenbaum. Weizenbaum developed a program called ELIZA, an early chatbot, which was designed to mimic the open-ended questions common among psychotherapists. Once the program went live, Weizenbaum was aghast to discover just how seriously ELIZA’s interlocutors would take the thing: people were willing to reveal their secrets to the bot, and sometimes reacted with emotions traditionally reserved for conversations with actual human beings. {3}(Levy, 127)
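The mechanism behind ELIZA was far simpler than its users imagined: scan each input for a keyword, then emit a canned, open-ended reply keyed to it. Here is a minimal sketch of that loop in C. It illustrates the general technique, not Weizenbaum’s actual code (the original was written in MAD-SLIP), and the keywords and responses here are invented for the example.

```c
/* A minimal sketch of an ELIZA-style keyword-and-template loop.
   Illustrative only: the rules below are made up for this example. */
#include <stdio.h>
#include <string.h>
#include <ctype.h>

struct rule {
    const char *keyword;   /* substring to look for in the input */
    const char *response;  /* open-ended, therapist-style reply  */
};

static const struct rule rules[] = {
    { "mother",  "Tell me more about your family." },
    { "always",  "Can you think of a specific example?" },
    { "i feel",  "Why do you feel that way?" },
    { "because", "Is that the real reason?" },
};

/* Lowercase the input in place so keyword matching is case-insensitive. */
static void lowercase(char *s) {
    for (; *s; s++) *s = (char)tolower((unsigned char)*s);
}

int main(void) {
    char line[256];
    printf("How do you do. Please tell me your problem.\n> ");
    while (fgets(line, sizeof line, stdin)) {
        lowercase(line);
        const char *reply = "Please go on.";  /* default when nothing matches */
        for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++) {
            if (strstr(line, rules[i].keyword)) {
                reply = rules[i].response;
                break;
            }
        }
        printf("%s\n> ", reply);
    }
    return 0;
}
```

Run it and type “I feel trapped”: the program answers “Why do you feel that way?” The illusion of a listener begins with nothing more than a substring match.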
Another ambivalent coder was a Berkeley dropout named Lee Felsenstein. Felsenstein, who had been denied his dream job at NASA because his parents were professed communists, swayed uneasily throughout his early life between the poles of left-wing activism and hackerdom. Felsenstein was interested in computers only insofar as he could bring computing power to the people, whoever they might be. He appeared to have hit his sweet spot with the Community Memory project, a public terminal in the streets of Berkeley that functioned more or less like a single-computer Craigslist. The project was an overall success, with users highlighting the locations of health clinics, finding collaboration partners for art projects, even impersonating characters from William S. Burroughs’s Naked Lunch. What they didn’t do, however, was keep the computer running. {4}(Levy, 178, 280) The terminal closed in 1975. Felsenstein’s follow-up project, a computer called the Sol-20, which ran on an explicitly convivial design called the “Tom Swift Terminal,” would be a failure, and the personal computing market would come to be dominated by people who didn’t share Felsenstein’s zeal for conviviality. {5}
The tensions between the Minsky crowd and the Felsenstein crowd never really went away. And yet in some ways we have the worst of both worlds. Ironically, it appears that what you get when you combine the philo-engineering idealism of the early MIT hackers with the power-to-the-people ethos of Felsenstein and other activist-programmers is a hybrid system in which computer programmers engineer the desires of the people, who are bombarded with propaganda persuading them that their thralldom to well-written code is equivalent to power. Sometimes, it would appear, two positives add up to a negative. And sometimes strains of idealism lead to a situation far from ideal.
Another situation in which positive aspirations led to negative outcomes is the history of the operating systems that would culminate in Linux, a system that has taken over the interconnected world despite being invented by a student, as a side project, and given away for free. It all started with a bit of fun. Then as now, people loved to play games.
Flipping a Finger
Early MIT hackers, operating under the aegis of Marvin Minsky, loved their machines for the sense of creativity and control that the devices unlocked. These were people who were interested in understanding systems from the ground up: many of them had come together, before the advent of personal computing, as model railroad enthusiasts. Nor were they above pilfering scrap electronics from unsuspecting businesses, in order to keep the model trains running on time. What they did not love was people who used computers simply as a means to an end, and clumsily at that: incompetent students hogging computer time to calculate sums, or big corporations like IBM, which let bureaucratic red tape get in the way of designing and implementing programs the right way.
So they especially did not love Multics (Multiplexed Information and Computing Service), a system created under the oversight of General Electric, and later Honeywell, and written in PL/I, a clunky language created by IBM. Multics was designed to bring computer time-sharing to the general public. But the early MIT hackers, who were working on the very same floor of the same building where Multics was being developed, felt that the system’s architecture violated their sense of good systems design. They even coined the term “brain-damaged,” which became a common slur against bad computer systems among hackers, to describe the way Multics ran. {6}(Levy, 114)
Around the same time, just a hop and a skip down I-95, at the Murray Hill, NJ offices of Bell Laboratories, Multics’s failures were being acutely felt. And although the problems with Multics were there for all to see, at least the system provided the Bell Labs programmers with a collaborative computing environment where they could work and learn together. As Bell Labs pioneer Dennis Ritchie put it, “We knew from experience that the essence of communal computing, as supplied by remote-access, time-shared machines, is not just to type programs into a terminal instead of a keypunch, but to encourage close communication.” {7}
Ritchie and his cohort lobbied Bell Labs to give them a machine on which they could write a new operating system, but they never got official approval. So, in the hacker tradition, they threw something together, because there was a problem they needed to solve. In 1969, Ritchie’s colleague Ken Thompson wrote a program for Multics called “Space Travel,” which allowed the player to navigate his ship through a digital simulation of the cosmos. But the machines it ran on were both clumsy and costly. So Thompson hauled out a PDP-7 minicomputer—the same sort of device the MIT hackers favored for creativity and control—and wrote a new operating system to create a cozy environment for his game. Fellow Bell Labs engineer Brian Kernighan suggested that he call this operating system “Unix,” i.e., Uniplexed Information and Computing Service, a play on the Multics acronym and a sly homophone of eunuchs.
Unix was built with universality and flexibility in mind. Ritchie would note that “the success of Unix follows from the readability, modifiability, and portability of its software that in turn follows from its expression in high-level languages.” {8} “High-level” here refers to the degree of abstraction from the machine’s hardware: most computer programmers these days, such as web developers, work in high-level languages, but back in the late ‘60s an operating system was more likely to be written in assembly language, which was closer to the binary sequences at the computer’s core.
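To make the distinction concrete, here is a toy C function of my own devising (nothing from the Unix sources). The logic is stated once, readably, and a compiler translates it for whatever machine is at hand; the equivalent assembly would name one particular machine’s registers and opcodes, and would run nowhere else.

```c
/* A toy example of "high-level" code: the intent is legible, and the
   same source compiles for any machine with a C compiler. */
#include <stdio.h>

/* Count how many bytes in a buffer match a given byte value. */
static int count_byte(const unsigned char *buf, int len, unsigned char target) {
    int count = 0;
    for (int i = 0; i < len; i++)
        if (buf[i] == target)
            count++;
    return count;
}

int main(void) {
    const unsigned char data[] = "unix unix unix";
    printf("%d\n", count_byte(data, sizeof data - 1, 'u'));  /* prints 3 */
    return 0;
}
```

This is the portability Ritchie is pointing at: recompile the same source on a PDP-11, a VAX, or a modern laptop, and it behaves identically.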
Unix worked, Unix was relatively simple to use, and, in the 1970s, Unix was cool. As systems developer and computer historian Eric S. Raymond would put it in his history of Unix, Unix programmers “delighted in playing with an operating system that not only offered them fascinating challenges at the leading edge of computer science, but also subverted all the technical assumptions and business practices that went with Big Computing. Card punches, COBOL, business suits, and batch IBM mainframes were the despised old wave; Unix hackers reveled in the sense that they were simultaneously building the future and flipping a finger at the system.” {9} Adding considerably to Unix’s popularity was the fact that the U.S. government chose it as the system on which to test out a new project that would come to be known as the internet.
By 1980, this network of geographically disparate computers already included the computer science department at the University of California, Berkeley; the Berkeley hackers had developed their own dialect of Unix, and they had a close relationship with Bell Labs, due to Bell impresario Ken Thompson’s visiting professorship at the school. The folks at the Defense Advanced Research Projects Agency chose Berkeley Unix for their internet prototype specifically because, in accord with the general Unix vibe, it was open source, and it was also young and unencumbered with excess code. Notably, TCP/IP, the networking protocol used by this new internet thing, had no specific Unix compatibility built into it from the start. But as soon as the U.S. government decided it wanted TCP/IP on Unix, the protocol and the operating system of its choice became intimate bedfellows—and they have remained so ever since, for better or for worse, till death do them part.
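One durable artifact of that marriage is the Berkeley sockets interface, which grew out of BSD Unix and remains the way programs on Unix-descended systems speak TCP/IP. Below is a minimal sketch of a TCP client, assuming a POSIX system; the host and port are placeholder values, and the code illustrates the interface rather than reproducing anything from the BSD sources.

```c
/* A minimal TCP client using the Berkeley sockets interface.
   Illustrative sketch; "example.com" and port 80 are placeholders. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void) {
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;  /* TCP */

    /* Resolve the host name into an address TCP can connect to. */
    if (getaddrinfo("example.com", "80", &hints, &res) != 0) {
        fprintf(stderr, "name lookup failed\n");
        return 1;
    }

    /* socket() and connect() are the calls 4.2BSD introduced. */
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
        perror("socket/connect");
        return 1;
    }

    /* Once connected, the network is just a file descriptor. */
    const char *req = "HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n";
    write(fd, req, strlen(req));

    char buf[512];
    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n > 0) { buf[n] = '\0'; printf("%s", buf); }

    close(fd);
    freeaddrinfo(res);
    return 0;
}
```

Note the last step: once the connection is made, the network is just a file descriptor to read and write, the classic Unix abstraction stretched over TCP/IP.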
Yet soon after the TCP/IP-Unix merger, all was not well in the world of Unix. As with anything popular, people began to look for ways to profit from it. In the early days of Unix’s development at Bell Labs, AT&T, which was then Bell’s parent company, was prohibited from marketing Unix as a product. But that all changed after regulators broke up AT&T in 1983. This seemed like an event that would have a liberatory effect on the Unix community; but what actually happened was that it set off a race among developers to make their own Unix designs proprietary, thus ending the Unix tradition of open collaboration. Meanwhile, personal computers running Microsoft’s software were gradually taking over the world, while the Unix community spent more and more time on internecine squabbles.
Just for Fun
Then on August 26, 1991, a Finnish computer science student named Linus Torvalds posted a laconic note to a little Usenet forum. “I'm doing a (free) operating system,” he wrote, “(just a hobby, won't be big and professional like gnu) . . . I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat.” {10} The operating system, created as “just a hobby” by this unassuming and, by his own assessment, “lazy” {11} Finnish engineer was called Linux, a portmanteau of the inventor’s first name and Unix. Today it is used by 96.3% of the world's top 1 million servers, 90% of all cloud infrastructure, and 100% of supercomputers. {12} While the median computer user may think of MacOS and Windows as the two dominant operating systems, Linux is like the sunshine that makes the grass grow, powering the large-scale operations that hold the net together. Without it, the internet as we know it truly would not exist.
For all its technical elegance, Linux's main advance would come in the form of the people who loved it. As Eric S. Raymond puts it, “the most important feature of Linux . . . was not technological but sociological.” {13}(Eric S. Raymond, The Cathedral and the Bazaar, 16) That is, the social structure of the team that worked on it was something like the opposite of top-down management. Rather than a boss or manager deciding what needed to be worked on and instructing his workers what to do, Torvalds’s development cohort simply ran the latest version of Linux, and when one of them happened to notice something was wrong, he or she said so. Really what was going on here was that the new culture of the internet was taking shape: it was becoming apparent that certain types of projects really did work better when they were crowdsourced. Raymond would go on to describe this shift in modes of work as the cathedral mode vs. the bazaar mode—with the bazaar representing the horizontally distributed system that would characterize Linux development, and the emergent open source movement as a whole. This process gave rise to what Raymond dubbed Linus’s Law: “given enough eyeballs, all bugs are shallow.” {14} Meaning: a large enough number of people looking at a coding problem should be sufficient to fix it, no matter how difficult the issue may appear. (Support for this notion would arrive in 2020, when research showed that popular Google projects were much more likely than unpopular ones to have a high proportion of bug fixes.) {15}
Linux took off like a rocket in the mid-’90s, and in June 1998 it scored its biggest win yet, when IBM announced they would run Linux on their web servers. By 1999, Dell, Intel, Hewlett-Packard, and Oracle had also put their muscle behind Linux. {16} Part of this story is that these companies wanted to stick it to Microsoft. Another part of the story mirrors earlier Unix adoption pretty closely. That is, Linux was free and open for anyone to use, and therefore free of any of the difficulties that might come from using proprietary software. As Torvalds himself put it in his autobiography Just for Fun: The Story of an Accidental Revolutionary, “I think one of the reasons {IBM} liked Linux was because they could just do what they wanted to do without having to deal with licensing issues.” {17}
As with Unix back in the early ‘70s, the system was in its youth. It was full of potential, and everyone—from IBM to the free software enthusiasts to Steve Jobs, who called a befuddled Torvalds into his office to explain why Torvalds should really be getting behind MacOS—could project their ambitions onto it.
But then comes awkward adolescence—to say nothing of full-on thick-around-the-midriff adulthood and chronically painful middle age. Fast forward to 2009, and here is a direct quotation from Torvalds: "I mean, sometimes it's a bit sad that we are definitely not the streamlined, small, hyper-efficient kernel that I envisioned 15 years ago . . . . The kernel is huge and bloated, and our icache footprint is scary. I mean, there is no question about that. And whenever we add a new feature, it only gets worse. . . . It's unacceptable but it's also probably unavoidable." {18}
Linux’s founder is not the only one who has noticed. One anonymous Linux user’s blog, humbly titled “Main Linux Problems on the Desktop,” {19} provides an exhaustive and exhausting list of the various ailments that plague this once sprightly system. The gist of it is that once an open source project becomes large, the principles that guided it when it was small are no longer sufficient. “Microsoft,” the anonymous blogger states, “reports that Windows 8 received 1,240,000,000 hours of testing whereas new {Linux} kernel releases get, I guess, under 10,000 hours of testing - and every Linux kernel release is comparable to a new Windows version. Serious bugs which impede normal workflow can take years to be resolved.” Another clear problem, at Linux’s current scale, is the lack of coordination: “There's no central body to organize the development of different parts of the open source stack which often leads to a situation where one project introduces changes which break other projects.” And finally, Linux was always based around the notion that hacking is fun, and that the concepts of fun and community were sufficient to keep a project going. (Torvalds named his autobiography Just for Fun, after all.) Yet this ethos may not be the panacea Torvalds and his fellow open sourcers would like it to be. “The fun is over,” the Linux user at “Main Linux Problems” writes, “developers want real money to get the really hard work done.”
To everything there is a season, it would seem. There is a time to live and a time to die, a time to plant and a time to pluck, a time for open source development and a time to close ranks. Of course, as of early 2021, Linux is not dead, nor is it in any immediate danger of dying. After all, it’s the core system behind the servers that hold up the web. But what is dead is the idea that people are going to use Linux, or any other widely known current system, for truly personal computing. Back in the early days when Unix was first being developed, and as recently as the late ‘90s in the heyday of Linux, there was still a potent notion that this free and openly developed software might be used primarily for peer-to-peer services. Linux users might, in this vision, be able to use Linux to hang out with other Linux users over a network. Certainly, Torvalds and his cohort were not dreaming of a world where Linux boxes would run web servers, which everyone would access using MacOS (Torvalds was not a fan of its kernel) or Windows (widely despised by hackers of all stripes).
But this is what ended up happening. This is the story of computers that is most familiar to us, though it tends to be dressed up in garments of freedom and revolution. That story—of the rise of proprietary software on Windows PCs and Macs, and the rise of centralized services like Facebook and Google, which are run on Linux servers, but which we log into remotely from our laptops or phones—is really the story of cunning business deals, good marketing, and amoral social engineering.
But that story has been told a dozen times already, in a dozen different ways. Even Justin Timberlake and Ashton Kutcher got in on the action. For our story, it’s time to return to 2021. And, now that we’ve learned something from the elder statesmen of computing’s past, we might, as the poet once said, “go to school to youth to learn the future.”