A new book brings stark clarity to the formulas that guide our behavior online.

In 1973, the writer Arthur C. Clarke formulated an adage meant to capture the relationships humans were building with their machines: “Any sufficiently advanced technology is indistinguishable from magic.”

The line became known as Clarke’s Third Law, and it is regularly invoked today as a reminder of technology’s giddy possibilities. Its true prescience, though, lay in its ambivalence. Technology, in Clarke’s time, encompassed cars and dishwashers and bombs that could take millions of lives in an instant. Technology could be awe-inspiring. It could also be cruel. And it tended to work, for the typical person, in mysterious ways—an opacity that, for Clarke, suggested something of the spiritual. Today, as technology has expanded to include self-driving cars and artificial intelligence and communications platforms that divide people even as they connect them, his formulation suggests a darker form of faith: a creeping sense that technological progress amounts to human capitulation. To exist in an ever more digitized world is to be confronted every day with new reminders of how much we can’t know or understand or control. It is to make peace with powerlessness. And then it is, very often, to respond just as Clarke suggested we might—by seeking solace in magic.

Because of that, there’s power in plain language about how technology functions. The plainness itself acts as an antidote to magical thinking. That is one of the animating assumptions of Filterworld: How Algorithms Flattened Culture, the journalist and critic Kyle Chayka’s new book. “Filterworld,” as Chayka defines it, is the “vast, interlocking, and yet diffuse network of algorithms that influence our lives today”—one that “has had a particularly dramatic impact on culture and the ways it is distributed and consumed.” The book is a work of explanatory criticism, offering an in-depth consideration of the invisible forces people invoke when talking about “the algorithm.” Filterworld, in that, does the near impossible: It makes algorithms, those dull formulas of inputs and outputs, fascinating. But it also does something that is ever more valuable as new technologies make the world seem bigger, more complicated, and more difficult to understand. It makes algorithms, those uncanniest of influencers, legible.

Algorithms can be teasingly tautological, responding to users’ behavior and shaping it at the same time. That can make them particularly challenging to talk about. “The algorithm showed me,” people commonly say when explaining how they found the TikTok they just shared. “The algorithm knows me so well,” they might add. That language is wrong, of course, and only in part because an algorithm processes everything while knowing nothing. The formulas that determine users’ digital experiences, and that decide what users are and are not exposed to, are elusively fluid, constantly updated, and ever-changing. They are also notoriously opaque, guarded like the trade secrets they are. This is the magic Clarke was talking about. But it hints, too, at a paradox of life in an age of digital mediation: Technology is at its best when it is mysterious. And it is also at its worst.

One of Chayka’s specialties as a critic is design—not as a purely aesthetic proposition, but instead as an influence so omni-visible that it can be difficult to detect. He applies that background to his analyses of algorithms. Filterworld, as a term, conveys the idea that the algorithms of the digital world are akin to the architectures of the physical world: They create fields of interaction. They guide the way people encounter (or fail to find) one another. Architectural spaces—whether cubicles or courtyards—may be empty, but they are never neutral in their effects. Each element has a bias, an intention, an implication. So, too, with algorithms. “Whether visual art, music, film, literature, or choreography,” Chayka writes, “algorithmic recommendations and the feeds that they populate mediate our relationship to culture, guiding our attention toward the things that fit best within the structures of digital platforms.”

Algorithms, Filterworld suggests, bring a new acuity to age-old questions about the interplay between the individual and the broader world. Nature-versus-nurture debates must now include a recognition of the cold formulas that do much of the nurturing. The matters of what we like and who we are were never straightforward or separable propositions. But algorithms can influence our tastes so thoroughly that, in a meaningful way, they are our tastes, collapsing desire and identity, the commercial and the existential, into ever more singular propositions. Chayka invokes Marshall McLuhan’s theories to explain some of that collapse. Platforms such as television and radio and newspapers are not neutral vessels of information, the 20th-century scholar argued; instead, they hold inescapable sway over the people who use them. Mediums, line by line and frame by frame, remake the world in their own image.

McLuhan’s theories were—and, to some extent, remain—radical in part because they run counter to technology’s conventional grammar. We watch TV; we play video games; we read newspapers. The syntax implies that we have control over those experiences. We don’t, though, not fully. And in Chayka’s rendering, algorithms are extreme manifestations of that power dynamic. Users talk about them, typically, as mere mathematical equations: blunt, objective, value-free. They seem to be straightforward. They seem to be innocent. They are neither. In the name of imposing order, they impose themselves on us. “The culture that thrives in Filterworld tends to be accessible, replicable, participatory, and ambient,” Chayka notes. “It can be shared across wide audiences and retain its meaning across different groups, who tweak it slightly to their own ends.” It works, in some ways, as memes do.

But although most memes double as cheeky testaments to human ingenuity, the culture that arises from algorithmic engagement is one of notably constrained creativity. Algorithm, like algebra, is derived from Arabic: It is named for the ninth-century Persian mathematician Muhammad ibn Musa al-Khwarizmi, whose texts, translated in the 12th century, introduced Europeans to the numeral system still in use today. The Arabic title of his book The Rules of Restoration and Reduction, a series of strategies for solving equations, was shortened by later scholars to Al-jabr, and then translated to “algeber”; al-Khwarizmi, through a similar process, became “algoritmi.”

Chayka reads that etymology, in part, as one more piece of evidence that “calculations are a product of human art and labor as much as repeatable scientific law.” Algorithms are equations, but they are more fundamentally acts of translation. They convert the assumptions made by their human creators—that users are data, perhaps, or that attention is currency, or that profit is everything—into the austere logic of mathematical discourse. As the internet expanded, and as the data it hosted proliferated, algorithms did much of their work by restoring scarcity to all of the abundance. The web, in some sense, became its own “rule of restoration and reduction,” an ongoing attempt to process the new inputs and churn out tidy solutions. “Filtering,” as Chayka puts it, “became the default online experience.”

Algorithms do that winnowing. More specifically, though, the companies that create the algorithms do it, imposing an environmental order that reflects their commercial interests. The result is a grim irony: Although users—people—generate content, it’s the corporations that function most meaningfully as the internet’s true authors. Users have limited agency in the end, Chayka argues, because they can’t alter the equation of the recommendation engine itself. And because the internet is dominated by a handful of massive companies, he writes, there are few alternatives to the algorithmic feeds. If algorithms are architectures, we are captives in their confines.

Though Chayka focuses on the effects algorithms have on culture, his book is perhaps most acute in its consideration of algorithms’ effects on individuals—namely, the way the internet is conditioning us to see the world itself, and the other people in it. To navigate Filterworld, Chayka argues, is also to live in a state of algorithmic anxiety: to reckon, always, with “the burgeoning awareness that we must constantly contend with automated technological processes beyond our understanding and control, whether in our Facebook feeds, Google Maps driving directions, or Amazon product promotions.” With that awareness, he adds, “we are forever anticipating and second-guessing the decisions that algorithms make.”

The term algorithmic anxiety was coined in 2018 by researchers at the Georgia Institute of Technology to describe the confusion they observed among people who listed properties on Airbnb: What did the platform’s algorithm, in presenting its listings to potential guests, prioritize—and what would improve their own listings’ chances of being promoted high in those feeds? They assumed that factors such as the quality and number of guest reviews would be important signals in the calculation, but what about details such as pricing, home amenities, and the like? And what about the signals they send as hosts? The participants, the then–doctoral student Shagun Jhaver and his colleagues reported, described “uncertainty about how Airbnb algorithms work and a perceived lack of control.” The equations, to them, were known unknowns, complicated formulas that directly affected their earnings but were cryptic in their workings. The result, for the hosts, was an internet-specific strain of unease.

Algorithmic anxiety will likely be familiar to anyone who has used TikTok or Facebook or X (formerly Twitter), as a consumer or creator of content. And it is also something of a metaphor for the broader implications of life lived in digital environments. Algorithms are not only enigmatic to their users; they are also highly personalized. “When feeds are algorithmic,” Chayka notes—as opposed to chronological—“they appear differently to different people.” As a result, he writes, “it’s impossible to know what someone else is seeing at a given time, and thus harder to feel a sense of community with others online, the sense of collectivity you might feel when watching a movie in a theater or sitting down for a prescheduled cable TV show.”

That foreclosure of communal experience may well prove to be one of the most insidious upshots of life under algorithms. And it is one of Filterworld’s most resonant observations. This is a book about technology and culture. But it is also, in the end—in its own inputs and outputs and signals—a book about politics. The algorithms flatten people into pieces of data. And they do the flattening so efficiently that they can isolate us too. They can make us strangers to one another. They can foment division and misunderstanding. Over time, they can make people assume that they have less in common with one another than they actually do. They can make commonality itself seem like an impossibility.

This is how the wonder of the web—all of that wisdom, all of that weirdness, all of that frenzied creativity—can give way to cynicism. A feature such as TikTok’s For You page is in one way a marvel, a feed of content that people often say knows them better than they know themselves. In another way, though, the page is one more of the internet’s known unknowns: We’re aware that what we’re seeing is all stridently personalized. We’re also aware that we’ll never know, exactly, what other people are seeing in their stridently personalized feeds. The awareness leaves us in a state of constant uncertainty—and constant instability. “In Filterworld,” Chayka writes, “it becomes increasingly difficult to trust yourself or know who ‘you’ are in the perceptions of algorithmic recommendations.” But it also becomes difficult to trust anything at all. For better and for worse, the algorithm works like magic.

The Uncanniest Influencers on the Internet - Megan Garber
17.01.2024

© The Atlantic