
The latest wave of tech layoffs has revived an old storyline. Scott Galloway notes that U.S. employers have announced roughly 946,000 job cuts this year, numbers not seen since 2020. At the same time, internal memos from firms like Google report that AI has boosted engineer productivity by about ten percent. For many observers, stories like these appear to show that AI is displacing workers. But that leap from juxtaposition to explanation is a familiar temptation: it assumes that every labor market tremor occurring near a new technology must be read as evidence of AI's advance.
Public debates about artificial intelligence often take the form of dueling certainties. One side predicts a wave of technological unemployment; the other promises a renaissance of productivity and opportunity. The tone shifts from dread to optimism, but the underlying presumption is the same: that the trajectory of work can be mapped in advance, without knowing the facts on the ground as they will exist in the future. Whether AI is cast as a threat or as a means of economic and material liberation, each view, especially when it gets specific, rests on unwarranted confidence and a partial grasp of the future.
The temptation toward that inference is understandable, but it rests on a shallow picture of how labor markets evolve. Firms' decisions to restructure depend on many factors: interest rates, hiring cycles, investor expectations, regulatory uncertainty, internal organizational aims, and even tariff rates. These forces operate at different scales and timescales, so to read such decisions as a preview of a technological future is to compress a multidimensional phenomenon into a single storyline.
This kind of mistake is rooted in the structure of economic knowledge itself. In his 1937 essay "Economics and Knowledge," Friedrich Hayek argued that economic coordination depends on a vast field of local, contextual, and often tacit information. Prices help convey this information, but they do not reduce it to something an individual or group can know with certainty. The key facts of economic life exist only in the minds and contexts of the people who confront them at a particular time and place. As Hayek emphasized, the relevant knowledge "is never for long the same" because it is acquired, updated, and discarded by individuals who cannot see the whole system.
If economic order arises from the dispersed plans of individuals who learn and adapt over time, then the effects of a new technology cannot be forecast simply by designing the tool or cataloguing its capabilities. As Hayek emphasizes, markets are discovery procedures: they reveal facts that could not be known in advance, such as individual preferences, production possibilities, cost structures, entrepreneurial opportunities, and institutional constraints. That knowledge is created by local interactions as they unfold. By the same reasoning, labor market predictions attempt to describe a future landscape whose relevant knowledge does not yet exist; it will be generated by millions of small decisions still to come.
This is why so many technology predictions age poorly. Geoffrey Hinton predicted in 2016 that radiologists were on the brink of redundancy; nearly a decade later, radiology remains one of the most competitive specialties, and demand has increased. ATMs were supposed to reduce the number of bank tellers, but the number rose for decades because branches became cheaper to operate, which let banks open more of them; only later did teller jobs decline. The classic economic parable "I, Pencil" makes the same point by tracing the sprawling lines of production and expertise needed to make a simple graphite pencil.
So why do people keep making predictions that fail so routinely? Part of the answer is perverse incentives. Innovators want to appear transformative: the more disruptive the future is made to sound, the greater the potential for investment and attention. But incentives alone do not explain why the predictions come across as urgent and sincere. This is where self-deception enters. Politicians who defend tariffs sincerely persuade themselves that the policy exists to protect workers, even when it mainly rewards firms that lobby; the sincerity itself is produced by incentives, because a politician who believes the pitch sells it more convincingly. In a similar fashion, the belief in transformative change is often authentic, even when the evidence is weak. Claims of transformation, even when too specific and ill-founded, attract talent and investment and burnish the reputations of technology entrepreneurs as visionaries who know the road ahead.
Consider an illustrative example: asking founders to predict the effects of their technology on the labor market is like asking pencil manufacturers to predict what people will write with their pencils. They can describe the tool, but beyond the vague and generic, they simply cannot know in any detail what people will write. What people do with pencils emerges from the interplay of millions of individuals with different motivations, contexts, and constraints. There are obvious uses (a letter home to Mom), but there are many uses the manufacturers could not anticipate, because they lacked information about specific situations, opportunities, and incentives, much like jokes where you "had to be there" to get why they are funny.
In that sense, the future of work will not resemble the stories being told about it today: neither mass displacement nor a world in which automation frees everyone for higher pursuits. It will be something more textured, an economic landscape shaped by the incremental choices of people trying to improve their position under new constraints, choices hard to anticipate from the desk chair. The proper posture toward that landscape is epistemic and predictive modesty. Technological predictions say more about poor assumptions, perverse incentives, and self-deception than about how the uncertain future will actually play out. The future will be discovered and constructed, and so by its nature cannot be perfectly predicted.