October 1, 2024
Charles Fain Lehman
Although it would take some months more to bring to fruition, the public unveiling of the Great Society—which turned 60 this year—is usually taken to be Lyndon Johnson’s 1964 commencement address at the University of Michigan. Delivered just six months after Johnson was sworn in as JFK’s successor, and before he had won election in his own right, the speech was as much a campaign promise as a policy document. Nonetheless, its basic argument foreshadowed the coming revolution in domestic policy.
America, Johnson told the assembled class of 1964, was on the threshold of transformation. “For half a century we called upon unbounded invention and untiring industry to create an order of plenty for all of our people,” Johnson said. “The challenge of the next half century is whether we have the wisdom to use that wealth to enrich and elevate our national life, and to advance the quality of our American civilization.”
The work of using America’s great wealth to “advance the quality of our American civilization” very quickly became the work of bureaucracy. From Medicare and Medicaid to the Office of Economic Opportunity, new agencies sprang up, dedicated to directing American wealth towards the elevation of national life. The leaders of these agencies, drawn from the highest reaches of academia and business, were expected to bring about a profound improvement in the American situation.
Looking back at that American great leap forward, what is the legacy of the Great Society? It was an unprecedented experiment in the power of the American government to alter American society. In one basic regard—increasing the amount of money in poor people’s bank accounts—it succeeded. But it failed to address, and may even have exacerbated, the pathological behaviors which go with and cause poverty, and which caused great social upheaval over the course of the twentieth century. Indeed, six decades on, the Great Society is less a testament to the power of government than to the limits of what it can do.
The Moynihan Report
Among the men who went to work on the Great Society was Daniel Patrick Moynihan, then a little-known academic toiling in the Department of Labor. It was in this position that Moynihan first began to notice alarming trends concentrated among black Americans (the people the Great Society was most meant to help). Above all, Moynihan saw a dramatic increase in single-mother families, a then-unthinkable phenomenon nonetheless affecting a quarter of black children.
From single motherhood, Moynihan argued, came rising welfare dependency, crime and delinquency, and drug use, all of which combined to create a “tangle of pathology” which was gripping parts of black America like a vise. An effective welfare policy, he argued, would have to contend with these deeper pathologies. Just moving money around would do nothing to lessen, and might exacerbate, the problems.
Many, of course, disagreed. The so-called Moynihan Report, upon its release, prompted massive backlash—the phrase “blaming the victim” was popularized by critics who ignored Moynihan’s humane normative conclusion, and instead focused on his shocking factual claims. The idea that black America’s problems were so deeply rooted—that it would take more than a hand up from Washington to address them—was too disturbing for the technocratic utopians of the 1960s to contemplate.
Six decades later, it’s clear that Moynihan was both right and wrong. The major error of the Report was not its pessimism, but its attribution of the tangle of pathology to the unique situation of black America. A large fraction of white America now suffers from the same bundle of problems.
America today is much richer than it was in the 1960s. And, thanks in large part to the Great Society, it spends far more of its wealth on the needs of the neediest. But for all of that wealth, it has not yet figured out how to address the family collapse, crime, and other persistent challenges that Moynihan identified. If the Great Society asked “whether we have the wisdom to use [our] wealth to enrich and elevate our national life” the answer appears, more or less, to be no.
Which is not to say that the legacy of the Great Society is, in the popular mind, one of failure. Quite the opposite: politicians of both parties now run on defending, and expanding, its programs. They are, in many cases, infected by the same optimism about government that Johnson was sixty years ago. But as the intervening decades have shown, such optimism is unwarranted. And the public trusts only at its peril those who claim to offer big solutions to America’s tangle of pathology.
Visions of the Great Society
What was the Great Society? That is to say, when we refer to the Great Society, what do we actually mean? At the level of concrete policy, the term encompasses a swath of programs and laws passed under Johnson: the Office of Economic Opportunity, Head Start, Medicare and Medicaid, the Civil Rights Act, the Clean Air Act, the War on Poverty, the War on Crime, just to name a few. In theory, one could add to these the programs instituted under Kennedy and Nixon, like the Community Mental Health Act, the Peace Corps, or the Environmental Protection Agency.
Together, these programs represented a profound transformation of the American way of government. In 1960, total federal expenditures, in constant dollars, were just over $400 billion. In 1972, they exceeded $1 trillion for the first time. By 1975, the figure was $1.5 trillion, almost a quarter of GDP.
Great Society programs have been behind much of the growth in government since then, too. In 2022, Medicare and Medicaid alone accounted for $1.75 trillion in spending—more, by themselves, than the entire 1975 budget. As my Manhattan Institute colleague Brian Riedl has argued, Medicare is one of the two major drivers of our ever-growing public debt.
Beyond their scale, what unites these programs is a commitment to the idea that the living standard of Americans, particularly the poorest Americans, could be elevated by federal intervention. Black Americans in the segregated South were one group whose condition Johnson and his colleagues wished to improve. But so were poor whites, whose poverty was publicized by The Other America, Michael Harrington’s influential 1962 book. Johnson’s dirt-poor upbringing in the Hill Country of Texas and—per Johnson biographer Robert Caro—his early experience as a schoolteacher near the border were likely relevant as well.
Importantly, Johnson’s vision for the Great Society encompassed not merely the alleviation of material deprivation, but the elimination of poverty itself. As Robert Rector and Jennifer A. Marshall put it in a retrospective on the War on Poverty,
When Lyndon Johnson launched the War on Poverty in the mid-1960s, he intended it to strike "at the causes, not just the consequences of poverty." The aim of that effort, he explained, was "not only to relieve the symptom of poverty, but to cure it and, above all, to prevent it."
President Johnson's goal was not to create a massive system of ever-increasing welfare benefits for an ever-larger number of beneficiaries. Instead, he sought to increase self-sufficiency, enabling recipients to lift themselves up beyond the need for public assistance. "[M]aking taxpayers out of taxeaters" was Johnson's stated mission; "[w]e want to offer the forgotten fifth of our people opportunity and not doles," he declared.
That government had the capacity to address “the causes, not just the consequences of poverty,” was taken as not merely given but obvious. America was then still in the fading days of the post-war consensus—anti-communist yet not hostile to big government, pro-business but also pro-union, and optimistic, almost utopian, about the ability of the state to get things done. The great ideological disputes of the ‘30s and ‘40s had given way to a sense that dull technocracy would, slowly but steadily, address all the nation’s problems. Writing in the first issue of The Public Interest, Moynihan identified this trend as “the professionalization of reform,” asserting that “there is a good deal of evidence … that in the area of economic policy there has occurred a genuine discontinuity, a true break with the past: Men are learning how to make an industrial economy work.”
This technocratic liberal meliorism, more than anything else, was at the core of the Great Society. When Moynihan wrote his Report, it was not—as it would come to be seen—an argument against government dependence. Rather, it was an argument for aggressive government intervention to address a crisis. Implicit was the assumption that the state could reverse the growth of single-parent households and, more generally, the social dysfunction that encouraged them—that public policy was up to the task. But as Moynihan and many of his contemporaries would come to realize, that assumption was sorely mistaken.
The Great Society's Effects
Did the kind of interventions that Moynihan proposed work? Did the Great Society help unify “the other America” with the mainstream? To answer that question, we should take a brief detour to discuss poverty and how it’s measured.
Poverty is a bit like pornography—you know it when you see it. Yet questions bedevil attempts to quantify the share of Americans who live in poverty. How poor do you need to be to live in poverty? Should poverty be considered relative (for example, a certain fraction of median income) or absolute (the possession of, or money to acquire, certain goods)? Should government benefits count towards measures of a household’s income? If so, how should the value of non-cash benefits (health insurance, food stamps, etc.) be factored in? And so on.
The Census Bureau publishes two poverty measures. The first, the “official poverty measure” (OPM), was created in 1963–64 by Social Security Administration economist Mollie Orshansky. The OPM uses a relatively simple formula; to quote the University of Wisconsin Institute for Research on Poverty, it “compares pre-tax cash income against a threshold that is set at three times the cost of a minimum food diet in 1963 and adjusted for family size.”
This approach has obvious limitations. It pegs the poverty threshold to the consumption patterns of 1963, and it does not account for the effects of taxes and transfers. In response to long-standing concerns over these limitations, in 2010 the Census Bureau and the Bureau of Labor Statistics together created the “supplemental poverty measure” (SPM). The SPM uses a broader basket of goods to determine the poverty threshold. It also incorporates benefits, both cash and non-cash, as well as taxes and certain other expenses.
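To make the two accounting rules concrete, here is a minimal sketch in Python. The structure follows the definitions above, but every dollar figure and the inflation multiplier are illustrative placeholders, not the Census Bureau’s actual values.

```python
# Sketch of the two poverty-measure accounting rules. All numbers are
# hypothetical placeholders; only the structure mirrors the definitions.

# Hypothetical annual cost of a minimum food diet in 1963, by family size.
FOOD_PLAN_COST_1963 = {1: 500.0, 2: 650.0, 3: 800.0, 4: 1_000.0}

def opm_threshold(family_size: int, cpi_multiplier: float) -> float:
    """Orshansky's rule: three times the cost of a minimum food diet,
    varied by family size and updated for inflation via the CPI."""
    return 3 * FOOD_PLAN_COST_1963[family_size] * cpi_multiplier

def is_poor_opm(pretax_cash_income: float, family_size: int,
                cpi_multiplier: float) -> bool:
    # The OPM counts only pre-tax cash income: no taxes, no in-kind benefits.
    return pretax_cash_income < opm_threshold(family_size, cpi_multiplier)

def is_poor_spm(cash_income: float, noncash_benefits: float, taxes: float,
                necessary_expenses: float, spm_threshold: float) -> bool:
    # The SPM adds non-cash benefits and subtracts taxes and certain
    # expenses, then compares against a broader consumption-based threshold.
    resources = cash_income + noncash_benefits - taxes - necessary_expenses
    return resources < spm_threshold

# A family of four with $20,000 in pre-tax cash income, assuming prices
# have risen tenfold since 1963 (a made-up multiplier):
print(is_poor_opm(20_000, family_size=4, cpi_multiplier=10.0))  # True
```

The difference that matters for what follows is visible in the signatures: a food-stamp or tax-credit expansion changes nothing in the first calculation, and everything in the second.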
Plotted over time, the two measures tell very different stories. After falling slightly through the second half of the ’60s, the official poverty rate has barely budged since the mid-1970s. The most recent reading, 11.5%, suggests that we have done almost nothing to improve the condition of the poor, and that, relative to some years, we have even moved backwards.
The supplemental poverty rate, by contrast, starts higher. Yet it has also shown an uneven but general decline. The most recent reading, 12.4%, is almost half of what supplemental poverty was taken to be at the end of the 1960s. And because the SPM counts benefits as income, its decline is attributable primarily to the effect of transfers—meaning, in essence, that the Great Society and the welfare programs that followed it have cut the poverty rate in half.
One can go further in arguing for the success of the War on Poverty. In a 2019 report, President Trump’s Council of Economic Advisers declared the Johnson War on Poverty “largely over” and “a success based on 1963 standards of material hardship.” Using a poverty measure of their own devising, the CEA argued that just 2.3 percent of Americans were poor by 1963 standards, after transfers and taxes were taken into account and dollar amounts were properly adjusted for inflation.
How many are truly poor is, of course, a matter of ongoing dispute. It has become politically advantageous for the left to overestimate poverty and, perhaps, for the right to underestimate it. Nonetheless, it is hard to avoid two basic facts. One is that, by most reasonable standards, many Americans are less poor today than they were in 1965. The second is that this accomplishment has been driven largely by increases in transfers, both in-kind and cash. That is to say, those Americans are less poor today largely because the government has given them money—primarily, although not exclusively, through programs created by the Great Society.
The Great Society in the Rear View Mirror
Measured by the quantitative reduction of poverty, the Great Society should be judged a resounding success. Particularly relative to the standard of 1963, the poor in America are far better off than they were 60 years ago. And this improvement is largely the consequence of a dramatic increase in government transfers to this population.
But the Great Society was not, at least in theory, interested in addressing poverty qua absence of money. It was meant to make “taxpayers out of taxeaters,” to alter the condition of the poor such that they were self-sustaining. To do that, Moynihan argued, government would need to address the “tangle of pathology”—the set of dysfunctional behaviors and patterns which overlap with profound poverty, and which plausibly cause dependence.
The social indicators that track such pathologies paint a less favorable picture than the poverty rate does. If the Great Society was intended to promote personal and civic flourishing rather than simply reduce material deprivation, it has obviously failed.
Take family formation, the topic which so concerned Moynihan in 1965. At the time, he counted roughly 1 in 4 black children born out of wedlock, compared to roughly 3 percent of white children. From then on, rates of out-of-wedlock birth rose more or less continuously for decades. In 2022, the most recent year for which complete data are available, two-thirds of black women who gave birth reported being unmarried. Among white mothers, the rate was 30%—higher than the crisis rate Moynihan identified among black parents. In total, of about 3.7 million babies born, 1.3 million were born to an unmarried mother. Most of these children, moreover, will go on to live with just mom—80% of America’s 11 million single-parent households are led by women.
What about drug use? In the 1940s and 1950s drug use was rare, as the intoxicated decades of the late 19th and early 20th centuries gave way to aggressive legal prohibition combined with mounting social disapprobation. But the second half of the 20th century brought a new series of temptations. First there was heroin, which exploded nationwide in the 1960s. That was followed by a general surge of drug use in the 1970s, followed by the proliferation of cocaine and crack in the 1980s, then methamphetamine and opioids in the 1990s and 2000s, and most recently an unprecedented synthetic drug crisis. As one analysis notes, drug overdose deaths have almost perfectly followed an exponential growth curve since at least the late 1970s. As with the out-of-wedlock birth rate, that’s hardly an image of success.
Then there is crime. As James Q. Wilson observed in the first chapter of his 1975 book Thinking About Crime, the economic and political conditions of the 1960s were exactly what a liberal thinker would expect to reduce crime: high rates of government subsidy, rapidly growing GDP, and declining racial inequality. Instead, Wilson wrote, “crime soared. It did not just increase a little; it rose at a faster rate and to higher levels than at any time since the 1930s and, in some categories, to higher levels than any experienced in this century.” That increase would continue, largely unabated, for the next 20 years.
By the time crime peaked in the 1990s, America’s cities had been hollowed out. The great crime decline that followed was obtained largely through dramatic increases in incarceration and policing—and it yielded a violent crime rate still higher than in 1960. And, as recent years have demonstrated, even that peace is fragile. The surge in violence in 2020 and 2021 was, in large part, attributable to the temporary abandonment of exactly the tactics which brought crime under control in the 1990s.
Although Johnson is remembered for expanding welfare programs, his goal was to liberate Americans from the dole. As the American Enterprise Institute’s Nicholas Eberstadt has shown, however, much the opposite happened. While entitlement transfers accounted for just 30 percent of government outlays in the early 1960s, they now substantially exceed all other government spending, rising above 60 percent as of the 2000s. As of 2023, the most recent data indicate, 99 million Americans received some social safety net benefit, including 18 million adults who participated in two or more programs.
At the same time that they were moving onto the welfare rolls, many Americans were moving out of the labor force. As Eberstadt has documented, the share of working-age men working or looking for work has fallen steadily for decades. As of the most recent reading, 89.6% of men ages 25 to 54 were employed or seeking a job, a 7.1 percentage point drop from the same month in 1964. Though small in relative terms, that drop represents some 4.5 million men not in the labor force who would have been at the 1964 rate.
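The head count follows directly from the participation gap. A back-of-the-envelope version, assuming a prime-age male population of roughly 63 million (my round figure, not Eberstadt’s):

```python
# Back-of-the-envelope check on the labor force figures cited above.
# The population count is an assumed round number, for illustration only.

prime_age_men = 63_000_000            # assumed men ages 25-54
participation_now = 0.896             # cited current rate
participation_1964 = participation_now + 0.071  # implied by the 7.1-point drop

missing_men = prime_age_men * (participation_1964 - participation_now)
print(f"{missing_men / 1e6:.1f} million men out of the labor force "
      f"relative to the 1964 rate")   # ~4.5 million
```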
These pathologies—unwed childbearing, drug use, criminality, joblessness, and welfare dependency—tend to go together. By the 1980s, American public intellectuals were speaking of those who displayed them as an “underclass,” or as what sociologist William Julius Wilson labeled “the truly disadvantaged.” At the time, concentrated pathology was thought of primarily as a black problem. But as Charles Murray has documented, and Senator J.D. Vance has detailed from personal experience, such behavior is now part of the lifestyle of many white Americans as well. This phenomenon suggests that the problem Moynihan identified can’t be attributed, as he believed, primarily to the influence of slavery or segregation.
Yet Moynihan’s phrase—“the tangle of pathology”—remains apt, inasmuch as different dysfunctional behaviors tend to reinforce each other. Economist Alan Krueger, for example, documented a correlation between declining male labor force participation and rates of opioid prescription, with each plausibly causing the other. Children raised in single-parent households—almost always by mothers—tend to do worse on a variety of outcomes, a tendency so powerful that economist Melissa Kearney has labeled it “the two-parent privilege.” On the other hand, Eberstadt’s “men without work” don’t do more childcare than their employed counterparts, but instead spend much of their time socializing, watching television, and doing drugs.
In short: while the 60 years since the Great Society have seen a dramatic decline in poverty, that decline has come without the changes in the behavior of the disadvantaged that Johnson and his foot soldiers hoped to encourage. Rather than freeing Americans from dependence, public policy has become just another component of a dependent lifestyle.
Technocratic Optimism Confronts Political Realism
The obvious question, then, is whether the Great Society actually caused the rise in dysfunction. Did Lyndon Johnson, in attempting to help the poor, instead create the underclass? Or did other factors intervene, thwarting a noble effort?
For a time, the view that the Great Society undermined itself commanded substantial support. The canonical statement is 1984’s Losing Ground, Charles Murray’s first major foray into public policy. In the book, Murray argued that rising dysfunction and unchanged pre-transfer poverty (i.e., the OPM) since the late 1960s were the product of changes in law and more generous benefits. In essence, by trying to be kinder to the most dysfunctional, government had made dysfunction relatively more attractive, and thus shifted people at the margin into it.
At the time, this argument was enormously influential. President Bill Clinton cited Murray’s work as part of why he supported welfare reform. The 1996 Personal Responsibility and Work Opportunity Reconciliation Act, which “end[ed] welfare as we know it” by replacing the largest federal cash-benefit program with a smaller, time-limited one, bore Murray’s influence. And while liberals today dismiss “welfare queens” as a boogeyman, the reality is that a great number of women were dependent on welfare: over 5 million households were on the program’s rolls in 1994, falling to fewer than a million by 2022.
But while welfare reform sent millions of single mothers to work, it had no discernible effect on family formation. And the growth of dysfunction among men, discussed above, continued more or less unabated, affecting black and white Americans alike. Even Murray would eventually acknowledge that the proverbial toothpaste could not be put back in the tube. In 2006, he retracted his earlier calls for ending the entire welfare state, instead endorsing replacing it with an annual $10,000 cash grant to all Americans.
It is hard to say for certain whether, as some once believed, the Great Society was uniquely responsible for the vast dysfunction that followed its passage. Certainly, other factors played a role—as I have argued, the current drug crisis is as much about supply as demand. And, as both Moynihan and William Julius Wilson argued, the decline of the black family predated the Great Society, indicating that the latter could not be the exclusive cause of the former. It is hard to tell the story of the rise of the underclass, moreover, without reference to macroeconomic factors like skill-biased technological change, which limited economic opportunities for workers with less education.
But the evidence supports a still important, albeit weaker, claim. The Great Society’s designers set out with technocratic optimism about the ability of the state to ameliorate not merely poverty, but the conditions which cause poverty. They aimed to produce not merely fuller piggy banks, but more able citizens. And across the board, they appear to have failed in these broader goals.
Yes, Great Society programs and their counterparts in other administrations lifted Americans out of material poverty. But if they did not cause, they also did not forestall, the emergence of a persistent dysfunctional class that not only remains a profound social challenge today, but actually seems to be growing.
We can’t go back in time to meet those challenges. But the failure of the Great Society in its highest aspirations gives lessons for public policy today. Above all, we can learn that social dysfunction is not merely about deprivation. Just giving people money can do very little to change their behavior for the better.
Indeed, we now have fairly conclusive proof of this fact. Four years ago, a group of researchers launched the most ambitious experiment to date on the effects of unconditional cash transfers. They recruited 3,000 low-income individuals from the Dallas and Chicago areas. The 2,000 in the control group received $50 per month for the next three years, just enough to keep them in the trial. The other 1,000 participants each received $1,000 per month over the same period. That additional $12,000 per year represented a substantial increase against household incomes that averaged below $30,000.
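Given this simple two-arm design, the headline estimates amount to a comparison of group means between the treated and control arms. Here is a minimal sketch of that comparison on simulated data; the earnings distribution and the built-in 5 percent effect are placeholders, not the studies’ actual data or analysis, which involve richer specifications.

```python
import numpy as np

# Simulate a two-arm cash-transfer trial and estimate the average
# treatment effect on earnings as a difference in group means.
rng = np.random.default_rng(0)

# 2,000 controls and 1,000 treated, mirroring the trial's arm sizes;
# the treated arm is built to earn ~5% less, a placeholder effect size.
control = rng.normal(30_000, 8_000, size=2_000)
treated = rng.normal(30_000 * 0.95, 8_000, size=1_000)

ate = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / len(treated)
             + control.var(ddof=1) / len(control))

print(f"Estimated earnings effect: {ate:,.0f} +/- {1.96 * se:,.0f} (95% CI)")
```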
What did this extra cash do? Three recent working papers tell the story. An extra $12,000 a year induced recipients to work and earn 4 to 5 percent less on average, relative to the control group. One in every 50 participants stopped working entirely. The new free time went overwhelmingly towards leisure activities, with no effect on valuable if uncompensated uses of time like exercise, “self-improvement,” or childcare.
The money also had little effect on measures of participants’ wellbeing. Using biomarkers as well as self-reports, the researchers found that assignment to the treatment group had no effect on physical or mental health, physical activity, sleep, or several other measures of well-being. Those who received the cash did see improvements to their credit scores, but there was no effect on their credit limits, delinquencies, utilization, bankruptcies, or foreclosures. They remained roughly as vulnerable to personal financial turmoil as before the experiment began.
This finding parallels another recent “unconditional income” study, which gave $1,000 or $50 monthly stipends to homeless people in Denver. In that study, the $12,000 payment had no discernible effect on the probability that recipients would be housed by the end of the study—giving the homeless cash does not make it any easier for them to find homes.
This recent social science confirms the lesson of the Great Society. It is entirely possible for government to make people richer on paper—just give them money. And to the extent that money remediates specific material hardships like malnourishment, it has some second-order effects. What is much harder is to address the pathologies that come with poverty. Having more dollars does not make someone more likely to be married, or to work, or to avoid homelessness, indolence, and drug use. Bank accounts are almost arbitrarily malleable; people, it turns out, are far less so.
This is not what the architects of the Great Society expected. Sixty years ago, it was still possible to think that the next great challenge would be whether we would “have the wisdom to use that wealth to enrich and elevate our national life, and to advance the quality of our American civilization.” We had the wisdom to create a lot of transfer programs, but neither to elevate our national life nor to advance the quality of our civilization, at least not through government fiat.
Can the tangle of pathology be undone? The answer may still prove to be yes. But sixty years of hard lessons should make us wary of anyone who says they can do it—more likely than not, they just don’t know their history.
Charles Fain Lehman is a fellow at the Manhattan Institute, a contributing editor of City Journal, and a 2023-2024 Robert Novak fellow with the Fund for American Studies.