In the Tradition of Liberty.

The World After Reading

We’ve all seen the headlines: Johnny can’t read anymore. Reading scores are plummeting. Forty-six percent of Americans didn’t read a single book in the past year. Young people no longer read for pleasure. The ability of most college students to comprehend even a paragraph of decent English prose is all but non-existent. We don’t even need to click the links or read the studies. We all have plenty of our own anecdotes to back them up at this point.

And we all know why: It’s the screens, dummy.

But what we’ve seen so far is just a preview of the post-literate society that awaits us. Four years ago, after all, most of us weren’t much good at reading, but we could hardly avoid it altogether. If you were a student, your professors still assigned at least substantial excerpts of books. If you wanted to cheat with SparkNotes, you still had to read the summary. Once in the workplace, there were few jobs that didn’t require reading some articles and training manuals. And then there were always the emails—what felt like dozens of them per day.

Now AI will do the reading for us. Open any PDF in Adobe Acrobat and it will bombard you with pop-ups all but demanding that you let AI read the article. Just last week, I opened Gmail to find a new “Summarize this email?” message at the top of every email and discovered there was no way to disable the feature without all but incapacitating my inbox. (Thankfully, Google seems to have bowed to public backlash, because the message has disappeared—for now.)

At the dawn of 2026, most of us still need enough reading ability to process AI-generated summaries. But pretty soon, as voice-interface AI becomes more common, we won’t even have to do that. We are regressing into an oral culture.

But is this really such a bad thing? The oral world of the Homeric bard harbors a kind of fascination for us. Is it a kind of Eurocentrism or colonialism or white supremacy to think literacy is better? For that matter, wasn’t Socrates worried about the shift to literacy, fearing that the rise of writing would lead to the loss of memory? (I have heard more than one AI booster dismiss worries about AI-driven cognitive decline with a vague wave in Socrates’s direction.)

But let’s pause and be serious. Who is prepared to argue that our regression to orality will bring back the achievements of memory and rhetoric that characterized the ancient world? Walter Ong observed more than four decades ago in Orality and Literacy that electronic media were even then initiating a return to an oral culture. Yet he stressed that this “secondary orality” could be no simple return to the innocence of “primary orality.” Orality in a world of mass media and the “global village” seems destined to be a far baser currency than that of the tribal campfire.

As for Socrates, his chief fear concerned the shift from a world in which people carried around all their knowledge and cognitive skills in their own heads, to a world in which they were able to “offload” some of them, storing them externally. Today with AI, it should be obvious that far from reversing this process, we are intensifying it. Few of those using AI to do their reading are using their newfound leisure to memorize epic poems or engage in Socratic dialogues. If we are headed for a new culture of orality, it seems destined to be an anti-culture of utterly banal, shallow, and ignorant discourse.

Besides, while Socrates was not wrong to recognize the downsides of reading and writing, the subsequent verdict of history seems pretty unequivocal that the tradeoffs were worth it. The great irony of Socrates’s warning is that we only know of it because his disciple Plato disagreed with him and wrote it down. In so doing, Plato inaugurated a tradition of philosophical reflection that—together with the products of the other great writing culture of the ancient world, the Jews—has become foundational to Western culture and thought. While it is perhaps possible to imagine oral transmission of the ideas in Plato’s dialogues, it is utterly impossible to imagine it for the writings of his student, Aristotle, who (in the words of Richard Hooker) “achieved more in almost every branch of natural knowledge than anyone else has ever done in any single branch.”

Socrates, perhaps, was unable to see that the technology of writing inaugurated a kind of virtuous feedback loop: by magnifying men’s capacities for sequential reasoning and logical argumentation, it facilitated the generation of more and better ideas, which were in turn also written down, reinforcing the intellectual development of subsequent generations. It is no coincidence that the period and locus of the greatest literacy—England beginning in the 1600s and France and Germany in the 1700s—gave birth to an explosion of human knowledge and a self-reinforcing engine of progress across many domains.

Several decades ago, this feedback loop began to crank into reverse and has been steadily accelerating since. It began with the mesmerizing flicker of the television screen, which, as Neil Postman lamented, was ever so much more entertaining than a book could be. We began to read less and watch more, but we consoled ourselves that this was no big deal. After all, we were still perfectly able to read if we wanted to. And for several decades after the advent of television we succeeded in boosting basic reading scores in schools.

It turns out, however, that screen time does not just impose an opportunity cost, by stealing time that might be spent reading. It rewires the brain to render it ever less able to genuinely read and comprehend. Nicholas Carr first sounded the alarm in 2008, warning that the consequences of the shift to the fragmented, rapid-refresh visual space of the internet would be far worse than anything Postman had warned about. At the same time, a new business model of “addiction by design” was emerging, which began to weaponize the brain’s primal impulses to co-opt attention and render disciplined focus impossible. Not only did we read less, then, but when we attempted to pull our eyes back to the page, we found ourselves less and less able to make sense of it.

As reading comprehension declined and reading became vastly more effortful, we gravitated still more to the comfort of the screen, further rewiring our neural circuits until, for most of us, reading a novel had been transformed from a delight to a chore to an impossibility. Having thus deprived ourselves of the capacity to read, our technologists eagerly hurried forward with a patent medicine to cure their own disease: AI. Don’t worry if you can’t read anymore! The bot will do that for you.

With this cycle, we find the ultimate expression of what Marshall McLuhan in his classic Understanding Media called the “Narcissus narcosis” inherent in technology. Every technology, he argues, represents a magnification of some part or capacity of the human body, an extension of our nervous system. In so doing, it magnifies our ability to act, but also magnifies the stimuli we receive, overwhelming our senses until they become numb in response. (One need think only of the internet’s vast expansion of our capacity to receive information, leading within less than two decades to an information overload that leaves us incapable of processing the inputs.) This “narcosis” or “auto-amputation,” McLuhan argued, extends our power to act in the world while weakening our native capacities, tending to make us ever more dependent upon the technology that was meant to make us more independent.

Profound as this insight is, McLuhan did not follow through fully on the metaphor. In the shift from analog to digital technology, optimized for engagement, the narcosis became a narcotic. Today, our most dominant technologies are, as Clare Morell likes to say, digital fentanyl, creating “an ever increasing craving for an ever diminishing pleasure,” in the words of the devil Screwtape in C.S. Lewis’s The Screwtape Letters.

To the extent that some educators and legislators push back on the AI revolution, they often do so on the shaky ground that “AI is an unreliable reader.” It’s the same reason that teachers advised students not to cite Wikipedia around 15 years ago. Since then, of course, Wikipedia has become extremely reliable on most topics, and there’s good reason to think that AI will as well, as developers succeed in cutting down on “hallucinations.” Betting against AI’s ability to supply reliable information is a bad wager.

Perhaps a more serious objection is that AI is not a reader at all. As Naomi Baron points out in Reader Bot, “Every AI reply is a hallucination. LLMs … are constructing their own predictive concatenations out of bits and pieces,” not offering distillations of rational comprehension. When AI “reads” and “summarizes” a text, it does something rather like what I do when I read the front and back cover of a book, recall everything I already know about the author, skim a few pages, and then BS my way to answering your question about the book’s argument. To the extent that we increasingly rely on such algorithmic “reading,” we are apt to imitate it ourselves, mistaking skill in generating plausible BS for rational deliberation. What matters will no longer be truth but “truthiness”—the extent to which an idea maps onto familiar patterns and offers comforting frameworks.

Such a post-literate world, as Mary Harrington argued last year in her much-discussed essay “The King and the Swarm,” will be one in which the preconditions for liberal democracy simply no longer obtain: “Every culture that has transitioned from print-first to digital-first ceased, in so doing, to form its population for democratic citizenship. They are, quite plainly, the wrong kind of subject.” Perhaps, as Harrington suggests, we need to reconcile ourselves to this trend and embrace monarchy as a better alternative than oligarchic despotism. If so, however, we had best take proper stock of all that we are giving up, rather than assuming our constitutional order will still be right there waiting for us every time we return from our excursions in the metaverse.

But how do we know we won’t gain more in the process? Maybe the new capabilities our civilization will gain from AI will compensate for those we will lose. To this, however, we must retort: Will these really be our capabilities? If we really outsource human functions to a machine, that will only be an augmentation of our powers if the machine is under our control. But what reason is there to think that it will remain so if we have so weakened ourselves?

I am not speaking here of sci-fi scenarios like those in the Terminator movies; let us assume that some humans remain in control of the AI. But those humans will not be most of us, and they certainly won’t be the college graduate who never read more than a page on his own. As C.S. Lewis warned in The Abolition of Man, “what we call Man’s power over Nature turns out to be a power exercised by some men over other men with Nature as its instrument.” Even if we avoid Skynet, the price of AI is a loss of freedom.

The literacy crisis, then, is not merely a problem for college professors or employers. It is not even merely a crisis for liberal democracy as we have become accustomed to it. It is a crisis in what it means to be human. Till now, technology has invited us to hand over the work of our lower faculties to machines, so that we might have more leisure to develop our higher faculties. But what happens when, in search of ever more output with ever less labor, we hand over our highest faculties, too? What then? “This time,” observes Lewis, “the being who stood to gain and the being who has been sacrificed are one and the same.”

There are no easy answers or solutions to this crisis. In the face of technological and social changes on such a large scale, the conservative is likely to be left not standing athwart, but running alongside the runaway train of history yelling “Stop!” If all the pope’s horses and all the pope’s men could not stop literacy from gaining steam in the sixteenth century, it seems unlikely that any policy levers we could pull today would do more than modestly slow the reverse trend.

That’s not to say they’re not worth pulling. The current interest in getting phones and screens out of the classroom and age-gating forms of social media and AI should slow the progress of illiteracy and at least make it easier for bibliophile students to swim against the stream. On a more modest but no less civilizationally critical scale, the current revival of classical education must be nurtured and strengthened by any means possible, offering some share of our citizens not merely the blessings of literacy, but the blessings of liberty from algorithmic domination. If Harrington proves right and the best we can hope for is a return to monarchy in some form, let us at least take measures to ensure it is the monarchy of an Alfred or Charlemagne, not an Attila.