The tactics of Big Pharma are not new. In fact, they are time-tested classics: millions in payouts to legislators and a revolving door between industry and government that consistently undermines enforcement and regulation. What is truly alarming in the latest episode, covered by The Washington Post and “60 Minutes,” is that such raw corporate power could triumph at the height of a drug crisis that is killing more than 100 Americans every day.
This feat was possible only because the industry has successfully leveraged the historical logic and architecture of U.S. drug policy.
For more than a century, American drug policy has operated from the presumption that addiction is a problem native to, and emanating from, poor and nonwhite communities. That presumption helped sustain a mutually reinforcing cultural and regulatory split: on the one hand, a punitive “war” that dismissed addiction as a problem of crime, and on the other, a lightly regulated pharmaceutical industry that reaped billions on the sale of psychoactive drugs to white and respectable “patients.”
This binary architecture dividing “drugs” from “medicines” originated in the Progressive Era, when the demographics of drug use underwent a noticeable shift from white middle-class women to poor immigrant men. As addiction appeared to slide down the socioeconomic ladder, the social and political response hardened, transforming a medical phenomenon into a criminal one.
That shift was laid bare in the two marquee drug laws of the era. Based on a consumer protection model, the Pure Food and Drug Act of 1906 required honest labeling of drugs in the chaotic legal market and established the Food and Drug Administration (FDA) to enforce it. Eight years later, the Harrison Narcotics Tax Act restricted the use of narcotics to the medical realm but quickly evolved into a system of punitive prohibition, and in 1930 a dedicated police agency — the Federal Bureau of Narcotics (FBN) — was created to further stamp out drug use and “vice” among minorities and the criminal element.
Led by Commissioner Harry J. Anslinger, the FBN built up strong social taboos against illicit drug use, in part by sensationalizing the threat of nonwhite “pushers” preying upon white women. That cultural logic shaped how policymakers responded to new drug threats such as marijuana, which was associated with Mexican laborers and added to the Harrison Act as a “narcotic” (which it is not).
Meanwhile, with addiction ostensibly banished to the criminal realm, authorities were slow to recognize the dangers of new pharmaceuticals such as barbiturates (a class of sedatives introduced in 1903) and amphetamines (stimulants introduced in the 1930s), because they were marketed by respectable drug companies as medical treatments for the ills of the American middle classes.
Sales of uppers and downers boomed under the relatively gentle governance of the FDA, whose mission was to ensure the safety of the drug supply rather than to punitively restrict drug use. By the 1950s, these drugs could be found in up to a quarter of all prescriptions, and enough were sold to provide more than 50 doses per year to every man, woman and child in the United States.
This is not to say the pharmaceutical industry was entirely protected from scandal or federal crackdowns. In 1938, more than 100 people died from a toxic but properly labeled brand of the antibacterial sulfanilamide. The catastrophe helped produce the Food, Drug and Cosmetic Act, which gave the FDA the power to require that drugs be safe to use as labeled.
In 1951, Congress affirmed the FDA’s practice of classifying certain drugs — barbiturates and amphetamines included — as sufficiently dangerous to require a prescription for distribution and use. And in 1962, the thalidomide disaster helped produce the Kefauver-Harris amendments, which, for the first time, required that companies prove their drugs effective before marketing them (and, for the first time, required truth in advertising to physicians).
None of these reforms slowed the marketing or sale of barbiturates and amphetamines, however. And despite increasing evidence of abuse and addiction, the drug industry was able to fend off calls to add sedatives and stimulants to the Harrison Act — with the eager agreement of the drug warriors at the FBN, who feared that adding “medicines” to their purview might muddy the moral clarity of their crusade against the more visceral, criminal and racialized threat of heroin.
Real reform only came, ironically, in the context of Richard Nixon’s “war on drugs.” The 1970 Comprehensive Drug Abuse Prevention and Control Act wiped out all previous criminal drug statutes and transferred much of the responsibility for enforcement in the legal industry from the FDA to what would become the Drug Enforcement Administration.
Notably, these reforms didn’t spare Big Pharma. Abuse and overdose had become so widespread that groups like the National Council of Churches complained, “The biggest dope dealer in your community today may well be the good old family doctor, and the pusher supplying him is the tranquilizer manufacturer.” Nixon, too, thought the pharmaceutical industry bore responsibility for creating a wider “culture of drugs.”
But despite the sensationalized rhetoric, the overall approach to pharmaceuticals remained oriented toward consumer protection — making drug use safe — rather than criminalization. For a time it worked, and the 1970s witnessed a sharp decrease in the use of legal sedatives and stimulants, with a corresponding decline in addiction and overdose.
Yet this brief moment of effective pharmaceutical policy did not last.
Several trends converged in the 1980s and 1990s to transform “narcotics” into “opioids” and create another pharmaceutical boom market. A liberalizing medical profession began to embrace an imperative to treat pain at the same time that the Reagan administration was inaugurating a new era of deregulation. The new-look opioids developed by industry seemed to offer a perfect compromise between liberals and conservatives: The federal government would act to alleviate Americans’ suffering, as liberals wanted, but it would do so by unleashing the private market in the form of pharmaceutical opioids.
This marriage would have been impossible without one additional factor: the return of racialized anti-drug taboos during the “crack epidemic” of the 1980s. Crack represented the democratization of cocaine. It was inexpensive, fast-acting and perfectly tailored for America’s neglected and impoverished inner cities. Breathless media coverage of the “epidemic” resurrected hoary stereotypes of addiction as poor, black and criminal.
With addiction once more deemed alien to the white middle classes, the pharmaceutical industry was well positioned to market new opioids like OxyContin — whose narcotic ingredient, oxycodone, had been recognized as highly addictive since it was first marketed in the 1930s — as nonaddictive. Federal red tape shouldn’t impede access to important medicines, lobbyists argued. Physicians, already looking for insurance-reimbursable ways to treat pain, responded eagerly. The result was a fourfold increase in the volume of legal narcotics sold in America over the first decades of the 21st century.
Is it possible to build a strong regulatory regime without a punitive drug war? Yes, but Trump’s halting response to the opioid epidemic and Attorney General Jeff Sessions’s punitive instincts indicate that the administration would rather fight another drug war than face the realities of a bona fide public health crisis.
The solution to the opioid problem isn’t complicated: Keep a leash on industry, police the worst corporate offenders, and care for those harmed by drugs through medication-assisted treatment programs and other harm-reduction strategies.
These strategies will work, however, only if we use this current crisis to escape our historical trap of dividing the risks and rewards of drugs along socioeconomic lines rather than according to the dictates of public health.