Chapter 3
Total Aesthetic Environment, Redux. Fake Engagement Team: World Police. Higher Ground.
Total Aesthetic Environment, Redux
Of course, nasty rumors and yellow journalism antedate Facebook. And media manipulation has been around at least since the time of Emperor Qin Shi Huang, who declared that information from every age before him was irrelevant and, on that logic, gave an order for all books to be burned. What’s new is the degree to which the exchange of information can be controlled—both in terms of big tech’s ability to censor or hide information it doesn’t like, and to flood the gates with information that it wants its users to see. Whereas only twenty years ago or so it might have seemed salutary, even noble, to turn off the television and shut out its attendant pleas for consumption, today such an attitude is far more likely to be seen as nihilistic, or depressive, or irresponsible. The essayist Mark Greif wrote, in a prescient 2006 essay, about the tangle of dramatic stories, advertisements, violent images, and coercive propaganda that have all threaded together into a web from which we seem unable to free ourselves. Greif called this mad tapestry the total aesthetic environment.1
With the rise of twenty-four-hour channels, news has become the core and most general case of the total aesthetic environment, because twenty-four-hour news does not play the old game of pretending you can choose to turn it off. Rather, it uses the conceit that there is always something “happening,” an experience—though somebody else’s—that you must also know about, and that the TV is only connecting you transparently to phenomena that should be linked to you anyway. This lie is predicated on notions of virtue, citizenship, responsibility.
If you've spent any part of this past decade on Facebook, Twitter, or Instagram, there's a good chance you've noticed just how much more deeply this "lie," predicated on notions of virtue, citizenship, and responsibility, has become ingrained in our lives. Social media has given the total aesthetic environment a whole new meaning. Messages now arrive 24/7, not from some anonymous news source, but from friends and loved ones: exhortations to resist, to register, to donate, to educate yourself, to post a black square, to not post a black square, to call your local representative, to mobilize, to actualize, to get involved. Crises demand not only our attention but our outrage and our active contribution to the machine. Whatever else we have to say about corporate media, social media, and politics, reasonable people can all agree that the problem of the total aesthetic environment has become more extreme by several orders of magnitude since Greif coined the phrase in 2006.
And the ill effects of what was, in Greif’s essay, a rather amorphous problem, have now been pinned down by thorough research, much of which Jonathan Haidt and Tobias Rose-Stockwell strikingly summarized in their 2019 Atlantic article, “The Dark Psychology of Social Networks.”2 If, for instance, it seems to you that social media has made the people you know more vitriolic and self-righteous than they used to be, that’s because it has. Haidt and Rose-Stockwell cite a 2017 study that “measured the reach of half a million tweets and found that each moral or emotional word used in a tweet increased its virality by 20 percent, on average,” and a similar 2017 study from Pew, demonstrating that “posts exhibiting 'indignant disagreement' received nearly twice as much engagement.”
It might be said that indignation was a good way to get attention before social media, too. But one key difference is the creation on social networks of hellish positive feedback loops of trolling and actual rage. Clever people figured out pretty quickly that the way to garner social media followings was either to generate outrage or to express it; this set up an infinite loop in which opponents' outrage at one another’s increasingly outrageous comments brings on more and more, you guessed it, outrage. In person, we have corrective measures against such feedback loops, including shame and the threat of physical violence. Online, less so. As Haidt and Rose-Stockwell put it, “If you constantly express anger in your private conversations, your friends will likely find you tiresome, but when there’s an audience, the payoffs are different—outrage can boost your status.”
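The arithmetic behind the 2017 virality finding compounds quickly: if each moral or emotional word multiplies a tweet's expected reach by roughly 1.2, a handful of charged words more than doubles it. A toy calculation (the baseline reach of 100 retweets is an invented number, used only to show the compounding):

```python
# Toy model of the 2017 finding cited by Haidt and Rose-Stockwell:
# each moral or emotional word in a tweet raised its virality by
# ~20% on average. The baseline of 100 retweets is made up.

def expected_reach(baseline: float, moral_emotional_words: int) -> float:
    """Multiply the baseline reach by 1.2 per moral/emotional word."""
    return baseline * (1.2 ** moral_emotional_words)

if __name__ == "__main__":
    for n in range(6):
        print(f"{n} charged words -> expected reach {expected_reach(100, n):.0f}")
```

Four charged words turn 100 retweets into roughly 207; the incentive to escalate is built into the multiplier.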
The world seems to be catching up, too, to the ill effects of social media addiction. In 2019 Rookie Magazine founder and influencer Tavi Gevinson detailed her journey down the rabbit-hole of an Insta-centric life. “With Instagram,” writes Gevinson,3
self-defining and self-worth-measuring spilled over into the rest of the day, eventually becoming my default mode. If I received conflicting views of my worth or, looking at other people’s accounts, disparate ideas about how to live, the influx of information could lead to a kind of panic spiral. I would keep scrolling as though the cure for how I felt was at the bottom of my feed. I’d feel like I was crawling out of my skin, heartbeat first, for minutes and hours. Finally, I’d see something that made me feel bad enough to put my phone away.
Gevinson’s essay details an effect of social media addiction that often gets buried under lurid discussions of violence, suicide, and bullying: “I think I am a writer and an actor and an artist. But I haven’t believed the purity of my own intentions ever since I became my own salesperson, too.”
Co-editor of n+1 Dayna Tortorici takes a different approach in her essay "My Instagram,"4 putting more emphasis on the uncanny process of watching oneself be outwitted by an amoral algorithm. "In truth," she writes,
my self-image began to prune from swimming so long in the sea of fitstagram. . . . My Explore page, which drives users via algorithm toward content similar to what they’ve seen or liked, became a mosaic of increasingly extreme exercisers. Looking at competitive bodybuilders, I caught myself thinking they didn’t look all that weird. This is how dysmorphia works, I thought; the algorithm only encourages it, nudging you toward extremity.
And yet, even in this introspective and shrewd essay, Tortorici admits the difficulty of responding to a phenomenon that seems to exist outside the scope of what can be assimilated into normal human experience. We have the words to describe tyranny from without, and the inner tyranny of neuroses or a guilty conscience. But the tyranny of the algorithm? What do we have to blame, other than the hackable nature of our own minds? "Instagram," Tortorici writes, "grows on subjectivity like a fungus whose shape and color varies from person to person, and to describe what it feels like is not to describe how it works."
Fake Engagement Team: World Police
So how does it work? A slew of articles have appeared in the last few years, detailing the remorse that certain social media engineers feel about the features they have created, and their desire to make amends. The most thorough of these is a 2017 piece5 from the Guardian, featuring testimony from former Google engineer Tristan Harris and former Facebook engineer Justin Rosenstein, among others. “All of us are jacked into this system,” Harris opines, “all of our minds can be hijacked. Our choices are not as free as we think they are.” Harris lays emphasis on the way Facebook notification dots were colored red when it was discovered that this color subconsciously influenced user behavior, and on the pull-to-refresh mechanism, which, he explains, exploits the same variable intermittent rewards system used by slot machines. The Guardian piece also references a leaked Facebook memo claiming that the company receives data "in real time to determine when young people feel 'stressed', 'defeated', 'overwhelmed', 'anxious', 'nervous', 'stupid', 'silly', 'useless' and a 'failure'." Given that anxiety in America has risen precipitously in the last decade,6 and that Advertising 101 stipulates that you want to create in the consumer a feeling of lack which your product will fill, it is not a stretch to imagine that Facebook and other social networks know that their products induce anxiety in teens and children, and that they exploit this feature to sell ads.
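The slot-machine comparison Harris draws is concrete: pull-to-refresh is a variable-ratio reinforcement schedule, in which a reward arrives on an unpredictable fraction of pulls. A minimal sketch of that schedule (the 30 percent reward probability is an invented parameter; real feeds tune it dynamically):

```python
import random

# Toy model of the variable-ratio reward schedule behind pull-to-refresh.
# The 0.3 reward probability is an invented illustrative parameter.

def pull_to_refresh(rng: random.Random, reward_probability: float = 0.3) -> bool:
    """One 'pull' of the feed: new content appears unpredictably.
    The unpredictability of the payoff, not its average size, is what
    keeps users pulling -- the same schedule a slot machine uses."""
    return rng.random() < reward_probability

def session(pulls: int, seed: int = 0) -> int:
    """Count how many pulls in a session are 'rewarded'."""
    rng = random.Random(seed)
    return sum(pull_to_refresh(rng) for _ in range(pulls))
```

Because the user can never predict which pull pays off, extinction of the habit is slow: every unrewarded pull is consistent with the next one being the jackpot.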
In January 2020, Harris, Rosenstein, and others appeared in a Netflix documentary, The Social Dilemma, which drives home the ill effects of today’s social networks with relentless force. In scene after scene we hear testimony about the ways these technologies reduce our common humanity and addict us to their services. The film is especially valuable for the way some engineers drill down into the problem more deeply than is common. The phrase "If the product is free, you are the product" has become a truism. But in The Social Dilemma we find out, "that's a little too simplistic. It's the gradual, slight, imperceptible change in your own behavior and perception that is the product." We also get a refinement of the common perception that the algorithm merely reflects our own preferences back at us. "People think that the algorithm is designed to give them what they really want," states former YouTube algorithm designer Guillaume Chaslot, "only it's not. The algorithm is actually trying to find a few rabbit holes that are very powerful and try to find which rabbit hole is closest to your interest. And then if you start watching one of those videos, then it will recommend it over and over again." The catalog of degradation ends with what is meant as a clarion call from Harris: “We can demand that these products be designed humanely. We can demand to not be treated as an extractable resource.” But to demand implies a certain amount of might. And the question remains: Who is it that possesses this might by which to change the way that core developers at big companies design their products?
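Chaslot's description can be paraphrased as a small piece of logic: rather than matching your interests directly, the system keeps a fixed menu of high-retention "rabbit holes," picks whichever lies closest to what you have already watched, and then keeps serving it. A deliberately simplified sketch (the rabbit-hole topics and keywords are all invented):

```python
# Invented illustration of the rabbit-hole selection Chaslot describes.
# The recommender does not model your interests; it measures which of a
# few pre-identified rabbit holes is nearest to them, then repeats it.

RABBIT_HOLES = {  # topic -> associated keywords (all invented)
    "conspiracy": {"secret", "truth", "exposed"},
    "fitness-extreme": {"shred", "transformation", "workout"},
    "doom-politics": {"collapse", "outrage", "scandal"},
}

def similarity(history: set, keywords: set) -> float:
    """Jaccard similarity between watch-history terms and a rabbit hole."""
    union = history | keywords
    return len(history & keywords) / len(union) if union else 0.0

def recommend(history: set) -> str:
    """Pick the rabbit hole closest to the user's viewing history."""
    return max(RABBIT_HOLES, key=lambda t: similarity(history, RABBIT_HOLES[t]))
```

A history containing "workout" lands the user in the invented "fitness-extreme" hole, after which, on Chaslot's account, the same hole is recommended over and over.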
The solution put forward in The Social Dilemma turns out to be: a demand for government oversight. And yet this proposition is rather like waiting inside a burning building, and demanding that the fire trucks arrive (while adding, as the flames rise and the flesh begins to singe, a recommendation that fire-retardant walls and floors be installed in all future buildings). The government solution seems immediately strange for a few reasons—the first being, which government? The internet is global, and there’s no one government that can regulate the world. The second reason is that it would require government employees who understand the technology well enough to oversee it. But the third problem is the most glaring: What if governments would prefer to simply use these methods of mass manipulation for themselves?
In September 2020, a data scientist at Facebook circulated a memo7 which gave ample proof that governments are no better actors than private companies when they’ve got such insidious powers at their fingertips. Sophie Zhang had worked for about three years as a “data scientist for the Facebook Site Integrity fake engagement team” before leaving the company. The “fake engagement team,” which has a piquant dystopian ring all of its own, refers to the group of employees who are paid to root out fake accounts on Facebook’s platforms. The primary purpose of the team Zhang worked on appears to have been to prevent bots or sock puppet accounts—that is, fully automated accounts, or accounts operated by people other than the users they purport to represent, respectively—from influencing political situations. This is a scenario familiar to anyone who followed the U.S. news cycle in 2017, when allegations of Russian interference in the 2016 Presidential election dominated the national media. And yet the frenzy over Russian interference obscured a more obvious problem. The Russian government was never even accused of manipulating any actual technology. Rather, they were accused of doing what modern governments do to their own citizens all the time: using propaganda to mislead the citizenry.
Zhang’s memo contains a laundry list of national governments that used bots to manipulate the extent to which their own parties appeared popular, and to harass opponents. In Azerbaijan, Zhang discovered “a large network of inauthentic accounts used to attack opponents of President Ilham Aliyev of Azerbaijan and his ruling New Azerbaijan Party.” In Honduras she unearthed a political operation that “used thousands of inauthentic assets to boost President Juan Orlando Hernandez . . . on a massive scale to mislead the Honduran people.” In Bolivia she found “inauthentic activity supporting the opposition presidential candidate in 2019” which contributed toward President Evo Morales’s resignation, and to what Zhang referred to as “mass protests leading to dozens of deaths.” In Ecuador Zhang chose to let slide certain inauthentic activity from the government, which she later worried contributed to the government’s botched handling of COVID-19.
“I have made countless decisions in this vein,” Zhang wrote in her memo, “from Iraq to Indonesia, from Italy to El Salvador.” But because Facebook heavily prioritized combating fake engagement in the United States and Western Europe, the bots and sock puppets of the Third World roamed free. The Buzzfeed report which published excerpts from Zhang’s memo related that “the amount of power she had as a mid-level employee to make decisions about a country’s political outcomes took a toll on her health”—and no wonder. Of all the revelations of the memo, it is the sad portrait of Zhang and her role, both at Facebook and in world politics, that lingers in the mind. Here we have the most elegant possible distillation of global bureaucracy’s symbiosis with 21st century technological advances: the success or failure of large-scale psychological operations, which themselves could determine the fate of nations, depended on the workflow of a stressed-out millennial woman, handpicking regimes to live or die from her corporate office in Menlo Park.
With the revelations of Zhang’s memo in mind, the idea that benign governmental bodies will be set up to benignly regulate all internet activity seems to fall somewhere between hopelessly naïve and built to fail. Even if governments are not actively using persuasive computing to manipulate their citizens, as in the above examples, the engineers who are interviewed in The Social Dilemma must understand the impossibility of regulating the way the internet is accessed across the globe. Even a powerful and sovereign authority, such as the Cyberspace Administration of China, which is in charge of preventing wrongthink from appearing on the screens of Chinese citizens, can only play whack-a-mole with the viral content it intends to keep buried. And that is in the case of a nation under relatively efficient autocratic control. Now imagine what efforts in the United States might look like.
The very idea of governmental oversight of tech is, in some sense, a contradiction in terms: it assumes a coterie of overseers would have to be experts in the technology they’d be meant to rein in. Though the engineers interviewed in The Social Dilemma never explicitly mention the United States, it is implied that the United States government is the one they have in mind; and the vision of a U.S. government staffed with employees of equal computer programming prowess as the best engineers at Facebook and Google is a distant prospect indeed. The only way this might make sense is if the web developers at Google, Facebook, and the like, were, by some extraordinary incentive, lured to the Capitol. And regardless of your political affiliations, you'll likely admit that the hypothetical solution that Harris et al. favor would be the rare case in which a special interest group became less corrupt through its proximity to Washington.
Nor does there seem to be much evidence that “starting a conversation” about the effects of manipulative tech companies, as The Social Dilemma encourages us to do, is going to bring on a solution. The nexus of this problem is that humans have proved no match for the machines that have subtly altered their behavior; the escape route is not likely going to be crowdsourced, just as bridges or skyscrapers or aircraft are built by teams of engineers with specialized knowledge, and not by the general public.
The engineers who are interviewed in The Social Dilemma give the general impression of being honest, well-intentioned people who genuinely wish to retract some of the worst features of the global machine they’ve helped create. And yet their position on this existential issue brings to mind Freeman Dyson’s quip about J. Robert Oppenheimer, the physicist who led the Manhattan Project before later coming out against nuclear weapons: “He wanted to be on good terms with the Washington generals and to be a savior of humanity at the same time.”8
Higher Ground
A more reasonable recommendation comes from long-time refusenik Jaron Lanier, who inveighs, in response to a question about deleting social media, “Do it! Get out of the system. Delete, get off the stupid stuff. The world’s beautiful. Look, it’s great out there.”
Lanier, a large, jolly man with roving blue eyes and floor-length dreadlocks, stands out as a figure separate from the rest of the programmers who are interviewed in The Social Dilemma, and not just because he is filmed in what looks like a home for a human instead of a warehouse, or because of his arresting physical appearance. Unlike the other interviewees, he does not appear to want to reform the current internet, but rather to abandon it. Perhaps this has something to do with real-life experience as an outsider: when he was eleven years old Lanier designed a geodesic dome for himself and his newly widowed father to live in, in the deserts of New Mexico.9 A quick perusal of LinkedIn also reveals that Lanier is rare among his cohort in having spent a significant amount of time outside elite educational institutions and well-heeled tech companies. A practiced homesteader and a one-time professional musician,10 Lanier seems to possess a greater capacity to outline the necessary steps for answering this most pressing social dilemma.
He pulls no punches, at least, with the title of his most recent book Ten Arguments for Deleting Your Social Media Accounts Right Now. "I know perfectly well that I'm not going to get everybody to delete their social media accounts," Lanier explains, "but I think I can get a few. And just getting a few people to delete their accounts matters a lot. And the reason why is: that creates the space for a conversation, because I want there to be enough people out in society who are free of the manipulation engines to have a societal conversation that isn't bounded by the manipulation engines."
With Lanier's words ringing in one's ears, it begins to seem odd that none of the other developers in the documentary have mentioned the possibility of simply building something outside of this corrupt infrastructure, instead of positing hypothetical reforms to it. After all, these engineers are all highly intelligent and hardworking people who have made their fortunes by building things, not by lobbying governments. Granted, different people have different opinions about what the internet should be like. But if there's one thing it appears that everyone can agree on, it's that we’d all be so much better off if we could just reduce our current computing stack to rubble and build something new on top of it. The funny thing is, we absolutely can.
Think of an ancient city, like Rome. Its radius extends horizontally, across space, but also vertically, across time. The city is the same city that it was in the time of the Republic; it is also a completely different city. Some useful ancient features, like aqueducts, have remained in some form up through today. Other ancient features, like the Colosseum, have remained in large part because they're beautiful. Still others, like the human waste that citizens would throw from their windows out into the street, have long been buried—and for good reason. As the computer game designer and amateur archaeologist Ernest R. Adams put it, “in the old days the outdoor ground level slowly rose as people threw their garbage and sewage out in the street and it gradually turned to dirt. I’ve seen a house in Egypt where the occupants eventually blocked off the door and started going in an upstairs window.”11 But Rome wasn’t re-built in a day, any more than it was built in a day. Its occupants, let's imagine, only moved to higher ground when the lower level really started to stink; nor were the higher levels immediately fully separate from what was below. There were still things Rome’s citizens might want down there—land mammals, for instance. The important thing is that they stopped swimming around in the landfill.
As in Rome, so it may be on the internet. At least that’s what a community of people working on the project called Urbit believe.
Their story is a strange one. It begins with 33 lines of an invented language—like a Martian’s first attempt at human speech, or a Principia Mathematica written by someone who had never laid eyes on a math textbook—which is so compact it can fit onto a T-shirt. It involves the elevation of conferences on Lambda Calculus and the Haskell programming language—not usually fire starters—into events that became national news. Its first blueprints were drawn up by one of the most eccentric and incendiary American political-philosophical firebrands this side of Tom Paine, and yet it rests on an idea so straightforwardly benign that it has brought a nod of interest and excitement to anyone I’ve ever explained it to—from novelists to carpenters to activists to film producers. It is a project that has drawn into its orbit architects (of physical buildings), lawyers, survivalists, neo-luddites, Buddhist monks, and the man who became a meme when he held a sign behind the head of Federal Reserve Chair Janet Yellen, exhorting those watching on C-SPAN to “Buy Bitcoin.” The effort of these people has been called a “hoax,” an “art project,” a “stroke of genius”12 and “something out of a Neuromancer world.” One thing that’s impossible to deny is that it is not only among the most ambitious computing projects of the 21st century, but among the most ambitious social projects, and that if it takes off, it has the power to change how the world communicates for good. Its enthusiasts sometimes refer to themselves as Martians, and in that spirit, I invite the reader to buckle up before we zoom around the globe a bit, and see the world we live in from a new perspective.
But before we try space travel, we’ve got to go back to the beginning. And as with anything related to digital computing machines, this boils down to a matter of one or zero, on or off, false or true.