The Washington Post

What’s missing in the debate about inflation

What we think we know about stifling inflation could be wrong

A customer shops for groceries in San Francisco on Nov. 11. (David Paul Morris/Bloomberg)

There is a specter haunting the U.S. economy: the Great Inflation of the 1970s. During that decade, a simultaneous increase in prices and unemployment paralyzed the U.S. economy and its policymakers.

Since the Great Depression in the 1930s, the Federal Reserve had dealt with economic downturns by lowering interest rates and letting the price of goods rise. In principle, lower rates would stimulate industries in the postwar period to borrow more and invest in production, creating jobs and reinvigorating the market. But rapidly rising prices in the 1970s failed to bring down unemployment while also eroding the wealth of households across the country.

Nearly 50 years later, some market watchers say a similar phenomenon is taking shape in the wake of extensive emergency government spending, pandemic-induced shortages and growing consumer demand. Many wary commentators argue that the U.S. government should nip this anticipated crisis in the bud by pursuing the same policies that stabilized prices in the 1980s: steeply raising interest rates even at the cost of job creation.

But this underlying historical interpretation may be wrong. New research on the 1980s questions whether higher borrowing costs, with their extensive consequences for domestic employment and for financial stability abroad, were necessary to achieve a long-term public good.

The Great Inflation of the 1970s left a deep imprint on people’s memories because it was extraordinary. Aside from the world wars, the Great Depression and a few transitory moments, inflation in the United States during the 20th century had for the most part stayed below 5 percent — except during the 1970s. Year-to-year changes in prices rose from 2 percent in 1965 to 14 percent by 1980.

For people living on the margins, the impact of this instability was devastating. The National Advisory Council on Economic Opportunity reported in July 1979 that “the households in the lowest 10 percent of income distribution [were spending] more than 119 percent of after-tax income” on basic necessities. The price of milk had risen by 80 percent between 1965 and 1978. The price of sugar increased by 102.5 percent during the same period. Meanwhile, the average worker’s annual compensation grew by only 19.9 percent during those years.

These challenges were further aggravated by fuel shortages caused by the Organization of the Petroleum Exporting Countries’ oil embargo in response to the 1973 Yom Kippur War and the drop in Iran’s output during the country’s revolution in 1979. As a consequence, images of Americans queuing in long lines at the gas pump became conflated with the public memory of the broader market instability.

In response, Federal Reserve Chairman Paul Volcker decided in late 1980 to raise the rate at which banks lend to one another to double digits, holding it there for two years, with the explicit intention of raising borrowing costs. He believed that people would be discouraged from making investments unless there was greater predictability about where prices would be in the future. At the time, businesses were raising prices in anticipation of higher future costs. Volcker felt that only by establishing faith in the Fed’s commitment to stability through a long-term anti-inflationary posture could he curb this behavior.

The U.S. central bank’s decision to raise borrowing costs carried a steep price, both at home and abroad. The high interest rates pressured industries that relied heavily on borrowing, from traditional sectors like construction to frontier industries like semiconductor manufacturing. Many businesses could not afford to take out new loans to invest in equipment or conduct research, stifling job creation. Many were forced to downsize to lower their repayment obligations.

The resulting recession, which began in the third quarter of 1981, raised the unemployment rate from 7.4 percent to nearly 10 percent a year later, a dislocation unseen during the Great Inflation itself and not experienced again until the global financial crisis of 2008-2009.

Consequences abroad were possibly even more devastating. Emerging-market governments that had borrowed from U.S. banks to finance their domestic economic development suddenly found themselves unable to pay the high interest rates they owed. Latin America in particular was heavily affected — 16 governments in the region requested debt rescheduling. With their credit gutted by this crisis, these countries fell into economic stagnation, and the region’s per capita GDP fell from 112 percent of the world average to 98 percent over the course of the 1980s.

Many policymakers, however, saw these outcomes as necessary sacrifices. They believed that the high unemployment helped signal to the market that the government would no longer trade higher prices for more jobs. Moreover, the resulting recession was so deep and long that anxious job seekers were unlikely to demand high wages, further dampening the market expectations of consumers chasing goods. Adding to this dynamic, the declining influence of organized labor meant that workers had fewer avenues to collectively demand wage increases.

And indeed, year-to-year changes in prices fell below 5 percent by 1983 and unemployment also steadily dropped, reaching a low of 5 percent in 1989. Political scientists have since repeated the claim that the Federal Reserve had achieved price stability through what politicians like British Prime Minister Margaret Thatcher referred to as “bitter medicine.”

But new historical scholarship casts doubt on this narrative. Research by Itamar Drechsler, Alexi Savov, Philipp Schnabl, Karel Mertens and others suggests that rising prices in the 1970s might have been fueled by a 1965 Federal Reserve policy called Regulation Q, which set a ceiling on interest rates that banks were allowed to pay their depositors. With everyday people receiving below-market rates (and even negative real returns) for their deposits as inflation surpassed the ceiling set by the U.S. central bank, Americans were encouraged to spend rather than save. This unnatural uptick in spending created shortages, driving up prices on basic goods.

Adding credibility to this theory, the seemingly unstoppable inflationary spiral began to subside in the first quarter of 1980, after the Federal Reserve repealed Regulation Q in stages in 1978 and 1979. Crucially, this peak in inflation predates Volcker’s decision to raise interest rates at the end of 1980.

Drechsler and others do not exclude the possibility that Volcker’s policies might have contributed to ending the Great Inflation — but their research opens up important questions for policymakers as they diagnose challenges with price instability in the market.

This matters today as market observers look warily at reports from the Bureau of Labor Statistics that the consumer price index climbed 5.4 percent in September and 6.2 percent in October compared with the same periods in 2020. Even as many of these observers point out that the recent surge in prices may be the result of temporary supply blockages, the discussion pivots on a binary question: Should the Federal Reserve maintain its current accommodative stance or not?

But new interpretations of how the U.S. government ended the Great Inflation suggest that higher interest rates are not a panacea for price instability. Indeed, the stakes are similar to what the world faced in the 1980s. And if the convictions of leading policymakers were misplaced during those years, then there is ample reason for today’s leaders to move with caution when treading in their footsteps.