Chapter 4
Pascal's Fearful Calculating Machine. The Only Thing He Couldn't Count. Symbiosis or Parasitism?
This is the fourth chapter of The Machine War; or, Ab Urbit Condita, a book about Urbit and the history of computing. To support this and other projects, you can buy a beautiful print copy of the Mars Review of Books at store.marsreview.org.
Pascal’s Fearful Calculating Machine
In 1951 the Argentinian writer Jorge Luis Borges penned a short essay called “La esfera de Pascal,” rendered into English in 1962 as “The Fearful Sphere of Pascal.” The essay begins, “It may be that universal history is the history of a handful of metaphors. The purpose of this note will be to sketch a chapter of this history.” Borges goes on to note the recurrence, throughout recorded time, of the concept of God as sphere—more specifically, according to a line in the ancient Corpus Hermeticum: “God is an intelligible sphere, whose center is everywhere and whose circumference is nowhere.” But whereas this was a joyous concept for a hermeticist like Giordano Bruno, it had become an entirely gloomy, or fearful, one for the religious philosopher and mathematician Blaise Pascal, who also happens to be one of the people often credited with inventing the computer. Borges merely reports the facts. He offers no explanation for how it might be that a concept like this could spontaneously recur in different eras, and be inflected so differently. The concept is simply there, in Egypt, Italy, or France, just as the naturally occurring form of a sphere is just there.
In some ways, we could say the same thing about the concept of a computer. What is a computer? And what would somebody do with it? Is it an adding machine, the next step in the course of human evolution, a device for pressing the “Like” button, or merely one of a handful of metaphors that comprise universal history?
These days a lot of people tend to think of computers as the sleek laptops that come in shiny boxes at the local Apple Store or Best Buy. But really, a computer is just something that does calculations. Various more sophisticated definitions can be given, but these are like descriptions of the leaves or bark or fruit of a tree. A tree is just a tree. And a computer is just a computer: something that computes.
One thing a computer might be is a person. That, at least, was the most common use of the English word computer up until the end of the 1940s. As the American Captain Herman Heine Goldstine wrote in a post–World War II report on how digital computers came to exist: “No amount of augmentation of the staff of human computers—then around 200—would suffice. It was accordingly decided . . . to sponsor the development of a radically new machine, the ENIAC, which if it were successful would reduce the computational time consumed in preparing a firing table from a few months to a few days.” [1](George Dyson, Turing's Cathedral, 70)
Goldstine and the rest of the Ballistic Research Laboratory at the Aberdeen Proving Ground in Maryland were trying to build better weapons. Necessity is the mother of invention, and winning a World War is about as necessary as it gets. (Or, at least, it certainly feels that way when you’re in the middle of one.) Specifically, the BRL was trying to figure out how shock waves would behave, and what the behavior of various nuclear bombs might be. From the beginning, the idea of a computer has been pretty closely tied to people who are trying to do things. That is, a computer is a (necessarily imperfect) tool for navigating an imperfect world. And as with art, or design, or human relationships, the history of computers is the history of deciding which imperfections are best.
If we take as a given that a person can be a computer, or part of a computer, then we could say that one of the first known computers is the system of “peasant multiplication”2 in use as early as the 18th Century BC in Egypt. In this case, the software is simply the binary-like system for multiplying numbers, and the hardware is the papyrus and reed pen. (This may not sound like much, but it sure beats multiplication by, say, counting out beads, the same way using =SUM in Excel beats adding up your expenses by hand.) An example of ancient computing that might hit a little closer to home is Archimedes’s use of various orders of myriad3 (a myriad = 10,000, a myriad of myriads = 10 to the 8th), because Archimedes invented this system to try to complete a practical task that would have been impossible without it. That is, he was trying to count the number of grains of sand in the universe. “Some people believe,” Archimedes begins the text now known to us as The Sand Reckoner,4 “that the number of sand is infinite in multitude . . . . I will attempt to prove to you through geometrical demonstrations, which you will follow, that some of the numbers named by us . . . exceed . . . the number of the sand having magnitude equal to the world.” As is the case with our modern computers, for some problems you just need a good enough calculating machine. In this case, to give an approximate number of grains of sand, Archimedes needed to invent better numbers.
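To see why peasant multiplication counts as software, here is a minimal sketch of the doubling-and-halving procedure in modern Python. The code and its names are our own illustration of the method, not anything found on a papyrus.

```python
def peasant_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers by repeated halving and doubling,
    the method recorded in Egyptian papyri ("peasant multiplication")."""
    total = 0
    while a > 0:
        if a % 2 == 1:   # this doubling lines up with an odd half, so it counts
            total += b
        a //= 2          # halve one column (dropping any remainder)...
        b *= 2           # ...and double the other
    return total

assert peasant_multiply(13, 21) == 13 * 21
```

Halving a number and keeping track of which steps land on an odd value is, in effect, reading off its binary expansion, which is why the method is described above as binary-like.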
About 1800 years later, in 1642, Blaise Pascal had all the numbers he needed; in fact, he had too many of them. Pascal’s father Etienne was an overworked tax assessor in Rouen, France5 (the same city where Gustave Flaubert would later contribute his own augury of the information age: the novel Bouvard and Pécuchet, about two tireless fools who attempt to learn every possible fact about every possible subject). To aid his father, Blaise Pascal set out to create a machine that would do the math for him. In short, the result was a calculator (which could only do addition). While Pascal is often credited with this invention, it turns out that a German named Wilhelm Schickard beat him by about two decades, though his machine was never completed, and his plans for the project were only unearthed in the 20th century.
Schickard called his invention the “Calculating Clock,” and Pascal was also influenced by the idea of a timepiece. The concept of time is central to the concept of a computer, although we don’t tend to think about it that way these days. The second hand of a mechanical clock is in one place one second; one second later it’s somewhere else. In a sense, a mechanical clock is a computer that runs only one function, answering the question: given that it is a certain time, what time will it be exactly one second from now? Then the hardware runs this would-be software, and 11:11:11 becomes 11:11:12. A fundamental difference between Pascal’s calculating machine and a mechanical clock is that you can ask Pascal’s machine more than one question. A mechanical clock has no idea what time it will be 20 minutes from now (and you wouldn’t either, if you couldn’t run the y = x + 20 software in your mind).
Later that century the German philosopher Gottfried Wilhelm Leibniz took the concept behind Pascal’s invention considerably further. On Pascal’s contraption, any function other than addition was a pain. Leibniz, however, was able to come up with a scheme for a machine that would perform all the functions of arithmetic. But first, he needed to formulate the system we now know as binary. As Leibniz put it in his 1703 explanation of the system, “But instead of the progression of tens, I have for many years used the simplest progression of all, which proceeds by twos, having found that it is useful for the perfection of the science of numbers. Thus I use no other characters in it bar 0 and 1, and when reaching two, I start again. . . . [R]eckoning by twos, that is, by 0 and 1, as compensation for its length, is the most fundamental way of reckoning for science, and offers up new discoveries, which are then found to be useful, even for the practice of numbers and especially for geometry. The reason for this is that, as numbers are reduced to the simplest principles, like 0 and 1, a wonderful order is apparent throughout.”6 Leibniz drew up a table of the binary numbers, from 0 to 1 to 10 to 11 to 100 and so on, and noticed that these digits could be represented by something as simple as a gate being either open or shut. Given a set of marbles, and a series of opening and closing mechanical gates—the first instances of what are now known in computing as logic gates—one could perform any arithmetical operation, the same way one could slide one’s hand down the rows of a binary table to add, subtract, multiply, or divide.
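To make the open-or-shut idea concrete, here is a small sketch in Python of addition built from nothing but logic gates. It illustrates the general principle Leibniz noticed rather than reconstructing his marble-and-gate design, and the gate and function names are ours.

```python
# Single-bit gates: each input is either "shut" (0) or "open" (1).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    """Add three single bits; return (sum_bit, carry_out)."""
    partial = XOR(a, b)
    return XOR(partial, carry_in), OR(AND(a, b), AND(partial, carry_in))

def add_binary(x_bits, y_bits):
    """Ripple-carry addition of two equal-length bit lists, least significant bit first."""
    carry, result = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

# 6 (binary 0110) + 3 (binary 0011) = 9 (binary 1001), written least significant bit first
assert add_binary([0, 1, 1, 0], [1, 1, 0, 0]) == [1, 0, 0, 1, 0]
```

Subtraction, multiplication, and division can all be assembled from the same handful of gates, which is one sense in which reckoning by 0 and 1 is, as Leibniz put it, the most fundamental way of reckoning.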
Leibniz did not invent binary merely to do arithmetic. He had a profound sense of the numinousness of numbers, and believed that all the thorniest questions of man and God and law could be perfectly calculated, if only one had the right techniques. All throughout history the concept of computing seems to have been intertwined with notions of finiteness and infinity, or the calculability of the transcendent by mundane means. In a way, the modern computer could be said to derive from the imperfection of mathematics—or, perhaps more accurately, the imperfect compatibility between the human mind and mathematics. And no one person better embodies this tension than the 20th century polymath John von Neumann.
The Only Thing He Couldn’t Count
John von Neumann, born in 1903 as Margittai Neumann János Lajos to a prosperous Jewish family in Budapest, seems to have been recognized by nearly everyone who knew him as a genius. A mathematical Wunderkind, von Neumann made outstanding contributions to the mathematical fields of logic, set theory, group theory, ergodic theory, and operator theory; more or less invented the field of game theory;7 and is said by several sources to have had near-perfect recall, being able to recite back the contents, word for word, of books he had recently read. As von Neumann’s colleague Edward Teller (no slouch himself) put it, “if a mentally superhuman race ever develops, its members will resemble Johnny von Neumann.”[8](TC, 45)
Von Neumann was a mathematician at heart. But, by his own admission, a mathematician’s skill, like that of a chess player or a running back, is likely to decline in his late twenties. Unlike the stereotype of the pure mathematician, however, von Neumann seemed to operate best when he had a practical task toward which his immense powers of abstract thought could be applied; he was interested in weather prediction, optimal poker strategy, outsmarting stock traders on the open market, and, of course, computers.
His passion for computers might be seen to stem from one of the great mathematical quandaries that he didn’t solve. Like many mathematicians of his generation, von Neumann was deeply interested in the great German mathematician David Hilbert’s Entscheidungsproblem or “decision problem”—which was inspired by Leibniz’s dream of a machine for calculating the universe. The decision problem asks “whether provable statements can be distinguished from disprovable statements by strictly mechanical procedures in a finite amount of time.”[9](TC, 94) In a way, Hilbert was asking the same question Leibniz had asked. That is, what are the limits to what is calculable? One significant response to Hilbert’s program, and one of the more important contributions to mathematics of all time, was Austrian logician Kurt Gödel’s incompleteness theorems. As Turing’s Cathedral author George Dyson puts it, “Gödel proved that within any formal system sufficiently powerful to include ordinary arithmetic, there will always be undecidable statements that cannot be proved true, yet cannot be proved false.”[10](TC, 50) Hilbert had actually been von Neumann’s mentor back in Göttingen; now that the Hilbertian program was dashed (and Europe along with it), what was left to do but explore the earthly world of finitude? And what better place to do it than America?
Von Neumann arrived in the New World in 1930. And after being brought into the Manhattan Project at Los Alamos during World War II, he got to work with some of the punched-card IBM machines they had assembled there in the desert of New Mexico, in order to model the behavior of this new type of bomb. From then on, computers were a ruling passion. And once the war was over, von Neumann set out to build what would be the progenitor of the personal computers we use today. Of course, von Neumann did not do this alone. And he was not the first to think about how to store both program instructions and data in the same memory. All the same, the design on which modern personal computing rests is known to this day as the “von Neumann architecture.”
The von Neumann architecture consists of the following functional elements: “a hierarchical memory, a control organ, a central arithmetic unit, and input / output channels.”[11](TC, 78) One way of thinking about this is to think back on Leibniz’s machine. It was pretty versatile in that it could do whatever arithmetic you wanted it to do. But it still knew only one thing at a time. And that thing was: what is my current state? Like a stopwatch or a self-help guru, Leibniz’s calculator isn’t interested in the past; it only knows now. But what if it knew what its state was two hours ago? And what if the machine were robust enough to access that knowledge on its own, while also containing the instructions for what to do with that knowledge? One of the major breakthroughs von Neumann and his associates came up with was to break up a string or “word” of this knowledge (aka data) into its component parts, and spread those component parts (aka bits, meaning binary digits) into different containers within the computer.[12](TC, 105) By breaking information down in this way, a computer’s processor is able to access all the bits of a word simultaneously and operate on them in parallel, rather than one at a time.
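One way to picture the arrangement is with a toy stored-program machine, sketched here in Python: a single memory holds both the instructions and the data, a control loop fetches and decodes, and a bare-bones arithmetic unit does the work. The instruction names and memory layout are invented for illustration; they are not the IAS machine’s actual order code.

```python
def run(memory):
    """A toy fetch-decode-execute loop over a single shared memory."""
    acc, pc = 0, 0                        # accumulator and program counter
    while True:
        op, addr = memory[pc]             # fetch and decode the next instruction
        pc += 1
        if op == "LOAD":                  # memory -> accumulator
            acc = memory[addr]
        elif op == "ADD":                 # the "arithmetic unit"
            acc += memory[addr]
        elif op == "STORE":               # accumulator -> memory
            memory[addr] = acc
        elif op == "HALT":
            return memory

# Program and data live side by side: cells 0-3 hold instructions, 4-6 hold data.
memory = {
    0: ("LOAD", 4),
    1: ("ADD", 5),
    2: ("STORE", 6),
    3: ("HALT", 0),
    4: 2, 5: 3, 6: 0,
}
assert run(memory)[6] == 2 + 3
```

Because the program is just more data sitting in memory, a program can in principle read or rewrite other programs, or even itself, which is part of what makes the stored-program design so much more than a fast adding machine.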
Much of von Neumann’s insight about computers can be tied directly to his youthful interest in number theory. Von Neumann was shaken when Gödel’s proof came out. “I know myself,” he said, “how humiliatingly easy my own values about absolute mathematical proof changed during this episode . . . .”13 It’s hard not to see a kind of parable in the differing attitudes of von Neumann and Gödel, and the different ways in which they went about their lives after Gödel’s shocking discovery.
In addition to his deep love of powerful machines, von Neumann loved fine food and sought it out lustily (“the only thing he couldn’t count was calories,” his wife quipped); prided himself on being able to drink his guests under the table at the weekly salons he held; drove fast cars (sometimes while reading a book) and crashed them so often that a particularly tricky intersection in Princeton became known as “von Neumann’s corner”; preferred working in noisy environments to quiet ones and sometimes annoyed his Princeton colleagues by blasting German march music in Fuld Hall; and liked to play practical jokes on Einstein, once escorting poor Albert onto a train hurtling in the wrong direction.14
Meanwhile, after writing his great proof, Gödel became fixated on mathematical Platonism: the notion that numbers exist objectively outside human perception of them. Although it was a rich and fascinating idea, very few scholars took these airy notions seriously. After some ridiculous bureaucratic bullying (the Nazis were suspicious of his consorting so much with Jewish mathematicians; the US Government was suspicious of his “Germanness,” even though he was Austrian) Gödel’s depression and tendency toward hypochondria worsened. Eventually he became so paranoid that he would not eat a morsel of food unless his wife had prepared it, for fear of being poisoned. After his wife was hospitalized in 1977, Gödel refused all nourishment and starved himself to death.
Symbiosis or Parasitism?
Pretty soon after von Neumann helped build the EDVAC, another American had a vision for the future of computers, and how they might affect the people who use them. A soft-spoken preacher’s son from Missouri, Joseph Carl Robnett Licklider, “Lick” to his friends, came up in the world of psychoacoustics, where he investigated questions like how altitude affects the ability of two parties to communicate. Licklider was already thinking about how human communication changes depending on the technology people use to communicate. It made sense, then, for Licklider, who was by all accounts a restlessly curious and intuitive thinker, to be ensorcelled by the idea of a network of computers. Like Archimedes with his grains of sand, Licklider was at MIT working on a specific problem—a radar system that could provide early warning to Washington in case of Soviet attack[15](Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late, 31)—when he happened on the 250-ton IBM-manufactured “Q7” computerized command and control system.
Soon after his experience with the Q7, Licklider crossed paths with physicist and engineer Wesley A. Clark, who was then working on the TX-2 computer at MIT’s Lincoln Labs. This was one of the first ever computers to utilize a graphical interface: the Sketchpad program, designed by Ivan Sutherland as his PhD thesis at MIT16, allowed users to draw interactively on the screen of the computer17 just as one might draw with one’s finger on an iPad today. Not only that, Sketchpad allowed its users to augment their own intuitive drawings, just as a modern graphic design program might help a user repeat a certain pattern, or “snap” a line to a parallel angle with another line.
From that point on, Lick was hooked. Earlier in his academic career, Licklider had spent time in the Cambridge, MA circle of Norbert Wiener,18 a fellow Missourian and former child prodigy, who went on to create the field of cybernetics, the study of “communication and control systems in living organisms and machines.”19 Now with a computer in his hands, it was evident to Lick, if not yet to the rest of the world, that a whole lot more communication between living organisms and machines was about to take place. His seminal 1960 paper “Man-Computer Symbiosis” related how “a fantastic change has taken place during the last few years. ‘Mechanical extension’ has given way to replacement of men, to automation, and the men who remain are there more to help than to be helped.”20 It is an incredibly prescient document, written at a time when the concept of widespread computer use, to say nothing of a system of interlocking computers, hadn’t crossed most people’s minds.
Licklider was an optimist. He envisaged something very much resembling the present-day internet, but it doesn’t seem to have crossed his mind that making the barrier between human and machine more permeable might leave humans far more susceptible to being manipulated and controlled by whoever programs the machine, or by machines that learn, in some form or another, to program themselves. Licklider used the term “symbiosis” in his paper, confident in his prediction of a society in which computers and humans are mutually beneficial to one another. However, the most prevalent relationship between living organisms is actually parasitism, not symbiosis.21 And it’s worth at least wondering which of the two man’s present-day relationship with computers most resembles.
Yet it’s hard to fault Lick for his optimism. When he was thinking over these questions, everyone who used a computer was a computer programmer. It was uncontroversial that the ability to share research between computers was a good thing; and so it seemed natural to generalize out from that concept toward a world where an entire global computer network is “open” in such a manner. But first, Licklider and a select team of computer pioneers began by connecting just four computers.
After a stint at MIT and then at the influential Cambridge, MA psychoacoustics company Bolt Beranek and Newman, Licklider was tapped to head the Information Processing Techniques Office of ARPA (the Advanced Research Projects Agency) in the United States Department of Defense. ARPA was meant as a Cold War vehicle, but after the launch of NASA it became something of an agency without a raison d’être, and Lick was able to use the vacuum to steer the division toward a realization of his dream of networked computing. To that end, ARPA enlisted some of the brightest minds at Bolt Beranek and Newman to develop Interface Message Processors, the primogenitors of modern-day routers, which were able to connect geographically disparate computers on what came to be known as the ARPAnet. The first computer researchers on the ARPAnet just wanted to do simple things that we take for granted, like try out software that was developed on a computer other than their own. And so the “IMP guys” at BBN first connected computers at UCLA, the Stanford Research Institute, UC Santa Barbara, and the University of Utah. Other research universities with strong computer science departments followed. Pretty soon it became a major disadvantage not to have a connection to the ARPAnet—not only because one was missing out on using other computers’ programs, but because the net had become a Schelling point where useful information was distributed.
Communication became one of the core features of the ARPAnet. At first, the network wasn’t really meant for non-professional messages. But human beings are full-time communicators, and only part-time professionals, so the use of what came to be known as email simply branched out from the professional to the natural. ARPAnet user and UCLA professor Len Kleinrock described the downright illicit feeling of using his Teletype connection in 1973 to ask a colleague to retrieve an electric razor that he had left across the ocean in Brighton, England. [22](Wizards, 189)
It’s interesting to note the ad hoc manner in which both computers and computer systems developed. The world of computing was built on technology, but it was also built on ideas—and it was anybody’s guess whether the first-order ideas of the people building these things were good or bad. One prominent example came when a computer programmer at Carnegie Mellon University altered a widely used program for the sake of what he considered civil liberties. According to Katie Hafner and Matthew Lyon’s Where Wizards Stay Up Late, “In an effort to respect privacy, Ivor Durham at CMU changed the FINGER default setting; he added a couple of bits that could be turned on or off, so the information could be concealed unless a user chose to reveal it. Durham was flamed without mercy. He was called everything from spineless to socially irresponsible to a petty politician, and worse—but not for protecting privacy. He was criticized for monkeying with the openness of the network.” [23](Wizards, 216)
No one who was flaming Durham did so because they felt it would be important, in forty years’ time, for multi-billion-dollar corporations to make their profits by tracking their users’ every move and using that information to sell ads. In fact, the programmer who called Durham “spineless” did so, it turns out, for a reason that might even be more sympathetic in 2021 than it was at the time: he felt his colleagues were spending too much time glued to the computer. “We can go days without seeing one another,” Brian K. Reid wrote in 1979, “each hiding in our office or at home with our terminal, sending mail instead of talking. And now I can't even find out whether or not somebody has ‘come in’ to the virtual research lab today, simply because he's too lazy or too preoccupied to tweak a couple of damn bits. Every communication path, every bit of information, is vital.”24
The debate over FINGER eventually fizzled out. But the fundamental questions are baked into the software we use today. Is every bit of information we transmit over the web vital? Who owns it, and where should it be stored? The way such questions get answered will determine the course of history, whether we pay attention to it or not. In some sense, it already has. In 2002 Reid became director of operations at what was then a little start-up called Google.