Is it still cool to memorize a lot of stuff? Is there even a reason to memorize anything? Having a lot of information in your head was maybe never cool in the sexy-cool sense, more in the geeky-cool or class-brainiac sense. But people respected the ability to rattle off the names of all the state capitals, or to recite the periodic table. It was like the ability to dunk, or to play the piano by ear—something the average person can’t do. It was a harmless show of superiority, and it gave people a kind of species pride.
There is still no artificial substitute for the ability to dunk. It remains a valued and nontransferrable aptitude. But today who needs to know the capital of South Dakota or the atomic number of hafnium (Pierre and 72)? Siri, or whatever chatbot you use, can get you that information in nanoseconds. Remember when, back in the B.D.E. (Before the Digital Era), you’d be sitting around with friends over a bottle of Puligny-Montrachet, and the conversation would turn on the question of when Hegel published “The Phenomenology of Spirit”? Unless you had an encyclopedia for grownups around the house, you’d either have to trek to your local library, whose only copy of the “Phenomenology” was likely to be checked out, or use a primitive version of the “lifeline”—i.e., telephone a Hegel expert. Now you ask your smartphone, which is probably already in your hand. (I just did: 1807. Took less than a second.)
And names and dates are the least of it. Suppose, for example, that you suspected that one of your friends was misusing Hegel’s term “the cunning of reason.” So annoying. But you don’t even have to be sober to straighten that person out. As you contemplate another glass, Siri places in your hand a list of sites where that concept is explained, also in under a second. And, should the conversation ever get serious, Hegel’s entire corpus is searchable online. Interestingly, when I ask Siri, “Is Dick Van Dyke still alive?,” Siri says, “I won’t respond to that.” It’s not clear if that’s because of the Dick or the Dyke. (He is, and he’s ninety-four.)
There is also, of course, tons of instant information that is actually useful, like instructions for grilling corn on the cob, or unclogging a bathtub drain. And it’s free. You do not have to pay a plumber.
Leaving the irrefutably dire and dystopian effects of the Web aside for a moment, this is an amazing accomplishment. In less than twenty years, a huge percentage of the world’s knowledge has become accessible to anyone with a device that has Wi-Fi. Search engines work faster than the mind, and they are way more accurate. There is plenty of misinformation on the Web, but there is plenty of misinformation in your head, too. I just told you what the atomic number of hafnium is. Do you remember it correctly?
The most radical change that instant information has made is the levelling of content. There is no longer a distinction between things that everyone knows, or could readily know, and things that only experts know. “The cunning of reason” is as accessible as the date Hegel’s book was published and the best method for grilling corn. There is no such thing as esoterica anymore. We are all pedants now. Is this a cause for concern? Has it changed the economic and social value of knowledge? Has it put scholars and plumbers out of business and made expertise obsolete?
In the early years of the Web, the hub around which such questions circled was Wikipedia. The site will be twenty years old on January 15th, and a collection of articles by scholars, called “Wikipedia @ 20: Stories of an Incomplete Revolution” (M.I.T.), is being published as a kind of birthday tribute. The authors survey many aspects of the Wiki world, not always uncritically, but the consensus is that Wikipedia is the major success story of the Internet era. A ridiculously simple principle—“Anyone can edit”—has produced a more or less responsibly curated, perpetually up-to-date, and infinitely expandable source of information, almost all of it hyperlinked to multiple additional sources. Andrew Lih’s history of the site, “The Wikipedia Revolution: How a Bunch of Nobodies Created the World’s Greatest Encyclopedia,” published in 2009, is similarly smitten.
Wikipedia took off like a shot. Within a month, it had a thousand articles, a number that would have been impossible using a traditional editorial chain of command. Within three years, it had two hundred thousand articles, and it soon left print encyclopedias in the dust. Today, Wikipedia (according to Wikipedia) has more than fifty-five million articles in three hundred and thirteen languages. It is one of the most visited sites on the Web in the United States, drawing 1.03 billion visits a month—over four hundred million more than Twitter gets. The Encyclopædia Britannica, first published in 1768 and for centuries the gold standard of the genre, had sixty-five thousand articles in its last print edition. Since 2012, new editions have been available only online, where the site currently draws about thirty-two million visits a month.
In the beginning, the notion that you could create a reliable encyclopedia article about Hegel that was not written by, or at least edited by, a credentialled Hegel expert was received, understandably, with skepticism. Teachers treated Wikipedia like the study guide SparkNotes—a shortcut for homework shirkers, and a hodgepodge compiled by autodidacts and trivia buffs. The turning point is customarily said to have been a study published in Nature, in 2005, in which academic scientists compared forty-two science articles in Wikipedia and the Encyclopædia Britannica. The experts determined that Wikipedia averaged four errors per article and Britannica averaged three. “Wikipedia comes close to Britannica in terms of the accuracy of its scientific entries” was the editors’ conclusion. By then, many teachers were consulting Wikipedia regularly themselves.
The reason most people today who work in and on digital media have such warm feelings about Wikipedia may be that it’s one of the few surviving sites that adhere to the spirit of the early Internet, to what was known affectionately as the “hacker ethos.” This is the ethos of open-source, free-access software development. Anyone can get in the game, and a person doesn’t need permission to make changes. The prototypical open-source case is the operating system Linux, released in 1991, and much early programming was done in this communal barn-raising spirit. The vision, which now seems distinctly prelapsarian, was of the Web as a bottom-up phenomenon, with no bosses, and no rewards other than the satisfaction of participating in successful innovation.
Even today, no one is paid by Wikipedia, and anyone can (at least in theory, since a kind of editorial pecking order has evolved) change anything, with very few restrictions. In programming shop talk, all work on Wikipedia is “copyleft,” meaning that it can be used, modified, and distributed without permission. No one can claim a proprietary interest. There are scarcely any hard-and-fast rules for writing or editing a Wikipedia article.
That seems to have been what got hacker types, people typically allergic to being told what to do, interested in developing the site. “If rules make you nervous and depressed,” Larry Sanger, the site’s co-founder, with Jimmy Wales, wrote in the early days, “then ignore them and go about your business.”
Wikipedia is also one of the few popular sites whose content is not monetized and whose pages are not personalized. Nothing is behind a paywall; you do not have to log in. There are occasional pop-ups soliciting contributions (in 2017-18, almost a hundred million dollars was donated to the nonprofit Wikimedia Foundation, founded by Wales), but no one is trying to sell you anything. Everyone who looks up Pierre, South Dakota, sees the same page. There is no age-and-gender-appropriate clickbait, no ads for drain de-cloggers and books by German philosophers.
Wikipedia has some principles, of course. Contributors are supposed to maintain a “neutral point of view”; everything must be verifiable and, preferably, given a citation; and—this is probably the key to the site’s success with scholars—there should be no original research. What this means is that Wikipedia is, in essence, an aggregator site. Already existing information is collected, usually from linkable sources, but it is not judged, interpreted, or, for the most part, contextualized. Unlike in scholarly writing, all sources tend to be treated equally. A peer-reviewed journal and a blog are cited without distinction. There is also a semi-official indifference to the quality of the writing. You do not read a Wikipedia article for the pleasures of its prose.
There are consequently very few restrictions on creating a page. The bar is set almost as low as it can be. You can’t post an article on your grandmother’s recipe for duck à l’orange. But there is an article on duck à l’orange. There are four hundred and seventy-two subway stations in New York City; each station has its own Wikipedia page. Many articles are basically vast dumping grounds of links, factoids, and data. Still, all this keeps the teachers and scholars in business, since knowledge isn’t the data. It’s what you do with the data. A quickie summary of “the cunning of reason” does not get you very far into Hegel.
But what about the folks who can recite the periodic table, or who know hundreds of lines of poetry “by heart,” or can tell you the capital of South Dakota right off the bat? Is long-term human memory obsolete? One indication of the answer might be that the highest-rated syndicated program on television for the first ten weeks of 2020 was “Jeopardy!” The ability to recall enormous numbers of facts is still obviously compelling. Geek-cool lives.
“Jeopardy!” is thirty-six years old under its host Alex Trebek, who died earlier this month, at the age of eighty. But the show is much older than that. It first went on the air in 1964, hosted by Art Fleming, and ran until 1975. And the “Jeopardy!” genre, the game show, is much older still. Like a lot of early television—such as soap operas, news broadcasts, and variety shows—game shows date from radio. The three national broadcast networks—CBS, NBC, and ABC—were originally radio networks, so those were genres that programmers already knew.
Shows like “Jeopardy!” were as popular in the early years of television as they are today. In the 1955-56 season, the highest-rated show was “The $64,000 Question,” in which contestants won money by answering questions in different categories. Soon afterward, however, a meteor struck the game-show planet when it was discovered that Charles Van Doren, a contestant on another quiz show, “Twenty-One,” who had built up a huge following and whose face had been on the cover of Time, had been given the answers in advance. It turned out that most television quiz shows were rigged. The news was received as a scandal; there were congressional hearings, and the Communications Act was amended to make “secret assistance” to game-show contestants a federal crime.
Whom did such “assistance” help? Mostly, the networks. When a player is on a streak, audience size increases, because more and more people tune in each week to see if the streak will last. In the nineteen-fifties, there were usually just three shows to choose from in a given time slot, so audiences were enormous. As many as fifty-five million people—a third of the population—tuned in to “The $64,000 Question.” It was the equivalent of broadcasting the Super Bowl every week. The financial upside of a Van Doren was huge.