Are private prisons better or worse than public prisons?

This is the second post in a series about my new article, Prison Accountability and Performance Measures, which is in the current issue of the Emory Law Journal. Yesterday, I introduced the issue and advocated greater use of performance measures, which I’ll come back to later this week. Today, I’ll discuss the sad state of the comparative empirical studies on public and private prisons. It turns out we don’t know much about comparative cost or quality, so there isn’t much basis for strong empirical statements about public or private prisons. (That doesn’t stop advocates on either side of the issue.) Why don’t we know these things? Comparative analysis is hard. This post illustrates why.

*     *     *

Somewhat surprisingly, for all the ink spilled on private prisons over the last thirty years, we have precious little good information on what are surely some of the most important questions: when it comes to cost or quality, are private prisons better or worse than public prisons?

It’s safe to say that, so far at least, the political process hasn’t encouraged rigorous comparative evaluations of public and private prisons. Some states allow privatization without requiring cost and quality evaluations at all. The nineteen states that don’t privatize might, for all I know, be right to abstain, but of course their stance doesn’t promote comparative evaluation either.

When studies are done, they’re usually so methodologically inadequate that we can’t reach any firm comparative conclusions. The first section below discusses the problems with cost comparison studies; the next two discuss the problems with quality and recidivism comparison studies. The final section takes a broader view and notes that even well-done comparative effectiveness studies don’t answer all our questions.

Which Sector Costs Less?

Difficulties in Calculating Costs

How do we determine whether the private sector costs more or less than the public sector? Ideally, we could work off of a large database of public and private prisons and run a regression in which we controlled for jurisdiction, demographic factors, size, and the like. In practice, this large database doesn’t exist, and so the typical study chooses a small set of public and private prisons that are supposedly comparable.

Unfortunately, this comparability tends to be elusive; the public and private facilities compared often “differ in ways that confound comparison of costs.” Sometimes no comparable facilities exist. Even where there are two prisons in the jurisdiction housing inmates of the same sex and security classification, they generally differ in size, age, level of crowding, inmate age mix, inmate health mix, and facility design. In particular, adjusting facilities to take into account different numbers of inmates is problematic, since facilities with more inmates, other things equal, benefit from economies of scale.

The GAO explained recently that “[i]t is not currently feasible to conduct a methodologically sound cost comparison of BOP [Bureau of Prisons] and private low and minimum security facilities because these facilities differ in several characteristics and BOP does not collect comparable data to determine the impact of these differences on cost.” The data problem mostly comes from the private side: information collected by the BOP from private facilities isn’t necessarily reported the same way that public data are reported, and the reliability of the data is uncertain. Moreover, “[w]hile private contractors . . . maintain some data for their records, these officials said that the data are not readily available or in a format that would enable a methodologically sound cost comparison at this time.”

Not only do federal regulations not require that these data be collected, but also, and more troublingly, at the time of the GAO study in 2007, the BOP didn’t believe there was value in developing the data collection methods that would make valid public-private cost comparison methods possible.

Probably more seriously, public and private prisons have accounting procedures that “make the very identification of comparable costs difficult.”

First, public systems, unlike private ones, don’t spread the costs of capital assets over the life of the assets, which overstates public costs when the assets are acquired and understates them in all other years.

Second, many public expenditures, including employee benefits, medical care, utilities, legal work, insurance, supplies and equipment, and various contracted services, are often borne by other government agencies, which might understate public costs by 30%–40%. One often-ignored cost in the public sector is the cost of borrowing capital. Conversely, governments bear some of the costs of private firms, for instance, in various cases, contract monitoring, inspection and licensing, personnel training, inmate transportation, case management, and maintaining emergency response teams.

And third, when public or private prisons incur overhead expenditures, there’s no obvious way of allocating overhead to particular facilities—Gerald Gaes gives a specific numerical example involving Oklahoma, a high-privatization state, where a difference in overhead accounting can alter the estimate of the cost of privatization by 7.4%.

As a bottom-line matter, McDonald says “the uncounted costs of public operation are probably larger than of private operation”; I tend to agree, but it’s hard to say for sure.

Competing Cost Estimates

The best way to see the importance of various assumptions is to look at a handful of cases where different people tried to estimate the same cost. Without committing myself to which way is correct, I’ll provide three examples: from Texas in 1987, from Florida in the late 1990s, and from the federal Taft facility in 1999–2002.

a. Texas

In Texas, private prisons were authorized in 1987 with the passage of Senate Bill 251, which required that private prisons show a 10% savings to the state compared to public prisons. Calculating the per-diem cost of public incarceration in Texas thus became important, since the maximum contract price for private providers would be 90% of that cost.

The Texas Department of Corrections came up with an estimate of $27.62 per prisoner per day. The Legislative Budget Board, however, proposed a number of additions to this cost, to better take into account the costs of complying with Ruiz v. Estelle (S.D. Tex. 1980), building costs, the state’s cost to provide additional programs that private firms would be required to provide, and the like. All these adjustments raised the estimated per-diem cost by about 50%—to $41.67. In the end, contracts were awarded within a range of $28.72 to $33.80—between the two estimates, though closer to the first one.
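Since the statutory cap is just arithmetic on the public baseline, it’s easy to see how much the choice of baseline mattered. Here’s a minimal sketch using the dollar figures quoted above (the function name and structure are mine, not anything from the Texas statute):

```python
# Under S.B. 251, the maximum private per-diem was 90% of the public cost,
# so the contract-price ceiling depends entirely on which estimate governs.

def max_contract_price(public_per_diem, required_savings=0.10):
    """Ceiling on the private per-diem given a required savings fraction."""
    return round(public_per_diem * (1 - required_savings), 2)

print(max_contract_price(27.62))  # TDC baseline -> 24.86
print(max_contract_price(41.67))  # LBB baseline -> 37.5
```

Note that the contracts actually awarded ($28.72 to $33.80) exceed the ceiling implied by the Department of Corrections’ figure but fit comfortably under the one implied by the Legislative Budget Board’s figure.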

b. Florida

In Florida, the Office of Program Policy Analysis and Government Accountability (OPPAGA) compared two private facilities, Bay Correctional Facility and Moore Haven Correctional Facility, with a public facility, Lawtey Correctional Institution. After various adjustments, OPPAGA calculated that the per-diem operating cost was $46.08 at Bay and $44.18 at Moore Haven, versus $45.98 at Lawtey; that is, Bay was 0.2% more expensive and Moore Haven 3.9% cheaper than the public facility.

The Florida Department of Corrections had come up with its own numbers: $45.04 at Bay and $46.32 at Moore Haven, versus $45.37 at Lawtey: Bay was 0.7% cheaper and Moore Haven 2.1% more expensive.

The Corrections Corporation of America (CCA), which operated Bay, submitted comments to the OPPAGA report, disputing its analysis. It disagreed that Lawtey was comparable, and suggested its own adjustments to OPPAGA’s numbers for all three facilities. Under CCA’s analysis, Bay cost $45.16 and Moore Haven cost $46.32, versus $49.30 for Lawtey, which comes out to cost savings of 8.4% for Bay and 6.0% for Moore Haven. (OPPAGA, understandably, disputed CCA’s modifications.)
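As a check on these dueling numbers, one can recompute the percentage differences directly from the quoted per-diems; the three sources agree on almost nothing except the arithmetic. (The helper function and data layout below are mine; the dollar figures are the ones quoted above.)

```python
# Recomputing the Florida comparisons from the three competing estimates.
# A positive result means the private facility is cheaper than Lawtey.

def pct_diff(public, private):
    """Percent by which the private facility is cheaper (+) or costlier (-)."""
    return round((public - private) / public * 100, 1)

estimates = {
    "OPPAGA":   {"Lawtey": 45.98, "Bay": 46.08, "Moore Haven": 44.18},
    "Fla. DOC": {"Lawtey": 45.37, "Bay": 45.04, "Moore Haven": 46.32},
    "CCA":      {"Lawtey": 49.30, "Bay": 45.16, "Moore Haven": 46.32},
}

for source, costs in estimates.items():
    for prison in ("Bay", "Moore Haven"):
        print(source, prison, pct_diff(costs["Lawtey"], costs[prison]))
```

The same two private facilities come out anywhere from 2.1% more expensive to 8.4% cheaper, depending entirely on whose adjustments one accepts.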

c. Taft

Perhaps the best example of competing, side-by-side cost studies comes from the evaluation of the federal facility in Taft, California, operated by The GEO Group.

A Bureau of Prisons cost study by Julianne Nelson compared the costs of Taft in fiscal years 1999 through 2002 to those of three federal public facilities: Elkton, Forrest City, and Yazoo City. The Taft costs ranged from $33.21 to $38.62; the costs of the three public facilities ranged from $34.84 to $40.71. Taft was cheaper than all comparison facilities in all years, by up to $2.42 (about 6.6%), except in fiscal year 2001, when the Taft facility was more expensive than the public Elkton facility by $0.25 (about 0.7%). Sloppily averaging over all years and all comparison institutions, the savings was about 2.8%.

A National Institute of Justice study by Douglas McDonald and Kenneth Carlson found much higher cost savings. They calculated Taft costs ranging from $33.25 to $38.37, and public facility costs ranging from $39.46 to $46.38. Private-sector savings ranged from 9.0% to 18.4%. Again averaging over all years and all comparison institutions, the savings was about 15.0%: the two cost studies differ in their estimates of private-sector savings by a factor of about five.

Why such a difference? First, the Nelson study (but not the McDonald and Carlson study) adjusted expenditures to iron out Taft’s economies of scale from handling about 300 more inmates each year than the public facilities. Second, the studies differed in what they included in overhead costs, with the Nelson study allocating a far higher overhead rate.
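To see how much the overhead assumption alone can matter, consider a toy sketch. All the numbers below are invented for illustration; they are not drawn from either the Nelson or the McDonald and Carlson study.

```python
# Hypothetical illustration: how the overhead rate allocated to a facility
# changes the bottom-line savings estimate.  All figures are invented.

def per_diem(direct_cost, overhead_rate):
    """Total per-diem: direct cost plus an allocated share of overhead."""
    return direct_cost * (1 + overhead_rate)

def savings_pct(public, private):
    """Private-sector savings as a percent of the public per-diem."""
    return round((public - private) / public * 100, 1)

public_direct, private_direct = 36.00, 34.00

# Equal overhead rates: the private facility looks about 5.6% cheaper.
equal = savings_pct(per_diem(public_direct, 0.10),
                    per_diem(private_direct, 0.10))

# Allocate a higher overhead rate to the private facility, and the
# apparent savings nearly vanish.
unequal = savings_pct(per_diem(public_direct, 0.10),
                      per_diem(private_direct, 0.16))

print(equal, unequal)  # 5.6 0.4
```

A six-point difference in an assumed overhead rate is enough to turn a healthy savings estimate into a rounding error, which is roughly the kind of sensitivity the Taft studies exhibit.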

These examples should be enough to give a sense of the complications in cost comparisons; given these difficulties, it’s not surprising that most studies have fallen short.

Which Sector Provides Higher Quality?

Difficulties in Figuring Out Quality

Moving on to quality comparisons, the picture is similarly grim. As with cost comparisons, sometimes no comparable facility exists in the same jurisdiction. Some studies solve that problem by looking at prisons in different jurisdictions, an approach that has its own problems. (If one had a large database with several prisons in each jurisdiction, one could control for the jurisdiction, but this approach is of course unavailable when comparing two prisons, each in its own jurisdiction.) Many studies just don’t control for clearly relevant variables in determining whether a facility is truly comparable.

Often, the comparability problem boils down to differences in inmate populations: one prison may have a more difficult population than the other, even if they have the same security level. Usually prisons have different populations because of the luck of the draw, but sometimes it’s by design, as happened in Arizona, when the Department of Corrections chose “to refrain from assigning prisoners to [a particular private prison] if they [had] serious or chronic medical problems, serious psychiatric problems, or [were] deemed to be unlikely to benefit from the substance abuse program that is provided at the facility.” It’s actually quite common not to send certain inmates to private prisons; the most common contractual restriction concerns inmates with special medical needs. Not that all prisons must have totally random assignment; it can be rational to tailor prisoner assignment to, say, the programming available at a prison. But such practices do have “the unintended effect of undermining cost comparisons.” Cost comparisons are likewise undermined by contractual terms limiting the private contractor’s medical costs, though nowadays it’s increasingly common for contracts to transfer all medical costs to the contractor.

Some performance studies rely on surveys administered to nonrandom samples of inmates, or on potentially biased staff surveys, or more generally on populations of inmates or staff that aren’t randomly assigned to public and private prisons. Survey data aren’t useless, but they’re rarely used with the appropriate sensitivity to their limitations. The higher-quality survey-based studies don’t give the edge to either sector.

Most damningly, many studies don’t rely on actual performance measures, relying instead on facility audits that are largely process-based. Some supposed performance measures don’t necessarily indicate good performance, especially when the prisons are compared based on a “laundry list” of available data items (for instance, staff satisfaction) whose relevance to good performance hasn’t been theoretically established.

Gerald Gaes and his coauthors conclude that most studies are “fundamentally flawed,” and agree with the GAO’s conclusion that there is “little information that is widely applicable to various correctional settings.”

I would add that accountability mechanisms vary widely—the standard U.S. model, the Florida model, and the U.K. model are different, and these in turn differ from the French model or the model proposed for prison privatization in Israel before the Israeli Supreme Court invalidated the experiment. When a prison study finds some result about comparative quality, that tells us something about comparative quality within that accountability structure; if a private prison performed inadequately under one accountability structure, it might do better under a better one.

As an example of the problems with current quality metrics, consider the performance evaluations of the private federal Taft facility. As with the cost studies discussed above, we have two competing studies, the National Institute of Justice one by McDonald and Carlson and a Bureau of Prisons study by Scott Camp and Dawn Daggett—the companion paper to Julianne Nelson’s cost paper.

The Bureau of Prisons has evaluated public prisons by the Key Indicators/Strategic Support System since 1989. Taft, alas, didn’t use that system, but instead used the system designed in the contract for awarding performance-related bonuses. Therefore, McDonald and Carlson could only compare Taft’s performance with that of the public comparison prisons on a limited number of dimensions, and many of these dimensions—like accreditation of the facility, staffing levels, or frequency of seeing a doctor—aren’t even outcomes. Taft had lower assault rates than the average of its comparison institutions, though they were within the range of observed assault rates. No inmates or staff were killed. There were two escapes, which was higher than at public prisons. Drug use was also higher at Taft, as was the frequency of submitting grievances. On this very limited analysis, Taft seems neither clearly better nor clearly worse than its public counterparts.

The Camp and Daggett study, on the other hand, created performance measures from inmate misconduct data, and concluded not only that Taft “had higher counts than expected for most forms of misconduct, including all types of misconduct considered together,” but also that Taft “had the largest deviation of observed from expected values for most of the time period examined.” Camp and Daggett’s performance assessment was thus more pessimistic than McDonald and Carlson’s.

According to Gerald Gaes, the strongest studies include one from Tennessee, which shows essentially no difference, one from Washington, which shows somewhat positive results, and three more recent studies of federal prisons by himself and coauthors, which found public prisons to be equivalent to private prisons on some measures, higher on others, and lower on yet others.

Which Sector Leads to Less Recidivism?

Recidivism reduction is really just one dimension of prison quality, though it’s a particularly relevant one that deserves its own section.

If we found that inmates at private prisons were less likely to reoffend than comparable inmates at public prisons, this would be an important factor in any comparison of public and private prisons. Unfortunately, recidivism comparisons haven’t been very good either.

A study from the late 1990s by Lonn Lanza-Kaduce and coauthors reported that inmates released from private prisons were less likely to reoffend than a matched sample of inmates released from public prisons, and that they committed less serious offenses if they did reoffend. But this study has been critiqued on various grounds. First, not all the recidivism measures are significant: while various reoffense-related rates were found to be significantly lower in the private sector, and while the seriousness of reoffending was found to be significantly lower in the private sector, a time-to-failure analysis found no significant difference in the “length of time that a releasee ‘survived’ without an arrest during the 12-month follow-up period.” Second, the public inmates don’t seem to have been genuinely well matched to the private inmates; they seemed comparable only when their descriptive variables were measured at a high level of generality (e.g., custody level rather than “the underlying continuous score measuring custody level,” whether inmates had two or more incarcerations rather than the actual number of incarcerations, etc.). Third, the authors made the questionable decision to assign an inmate to the sector he was released from, even if he had spent time in several sectors: thus, an inmate who spent years in public prison and was transferred to private prison shortly before his release was classified as a private prison releasee. Fourth, a private releasee who reoffended could take longer to be entered in the system than a public releasee, so the truly comparable number of private recidivists may well have been larger than reported.

A later study by David Farabee and Kevin Knight that “corrected for some of these deficiencies” found no comparative difference in the reoffense or reincarceration rates of males or juveniles over a three-year post-release period, though women had lower recidivism in the private sector. However, this study may still suffer from the problem of the attribution of inmates who spent some time in each sector, as well as possible selection bias to the extent that private prisons got a different type of inmate than public prisons did.

Another, even more rigorous study by William Bales and coauthors likewise found no statistically significant difference between public-inmate and private-inmate recidivism.

A more recent study, by Andrew Spivak and Susan Sharp, reported that private prisons were (statistically) significantly worse in six out of eight models tested. But the authors noted that some skepticism was in order before concluding that public prisons necessarily did better on recidivism. Populations aren’t randomly assigned to public and private prisons: that private prisons engage in “cream-skimming” is a persistent complaint. Recall the case in Arizona, where the Department of Corrections made “an effort to refrain from assigning prisoners to [the private Marana Community Correctional Facility] if they [had] serious or chronic medical problems, serious psychiatric problems, or [were] deemed to be unlikely to benefit from the substance abuse program that [was] provided at the facility.” But the phenomenon can also run the other way. One of the authors of the recidivism study, Andrew Spivak, writes that while he was “a case manager at a medium-security public prison in Oklahoma in 1998, he noted an inclination for case management staff (himself included) to use transfer requests to private prisons as a method for removing more troublesome inmates from case loads.”

Moreover, recidivism data is itself often flawed. Recidivism has to be not only proved (which requires good databases) but also defined. Recidivism isn’t self-defining—it could include arrest; reconviction; incarceration; or parole violation, suspension, or revocation; and it could give different weights to different offenses depending on their seriousness. Which definition one uses makes a difference in one’s conclusions about correctional effectiveness, as well as affecting the scope of innovation. The choice of how long to monitor obviously matters as well: “[m]ost severe offences occur in the second and third year after release.” Recidivism measures might also vary because of variations in, say, enforcement of parole conditions, independent of the true recidivism of the underlying population.
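To make concrete how the choice of definition drives the headline number, here’s a toy sketch. The cohort below is entirely invented; the point is only that one set of post-release records yields several different “recidivism rates.”

```python
# Hypothetical release cohort.  Each record flags whether the releasee was
# (arrested, reconvicted, reincarcerated, parole-revoked) in the follow-up
# period.  All data are invented for illustration.

releases = [
    (True,  True,  True,  False),
    (True,  False, False, False),
    (True,  True,  False, False),
    (False, False, False, True),
    (False, False, False, False),
]

def rate(index):
    """Fraction of the cohort counted as recidivists under one definition."""
    return sum(r[index] for r in releases) / len(releases)

print("rearrest:       ", rate(0))  # 0.6
print("reconviction:   ", rate(1))  # 0.4
print("reincarceration:", rate(2))  # 0.2
print("revocation:     ", rate(3))  # 0.2
```

The same five people produce a 60% recidivism rate under a rearrest definition and a 20% rate under a reincarceration definition, before one even gets to weighting by offense seriousness or varying the follow-up window.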

The study of the comparative recidivism of the public and private sector could thus use a lot of improvement.

The Limits of Comparative Effectiveness

After having read the foregoing, one should be fairly dismayed at the state of comparative public-private prison research. In fact, it gets worse. An overarching problem is that most studies don’t simultaneously compare both cost and quality. It is hard to draw strong conclusions from such studies, even if they are state-of-the-art at what they are examining.

If we find that a private prison costs less, how do we know that it did not achieve that result by cutting quality? (This is the standard critique of private prisons.) If we find that a private prison costs more, how do we know that it did not cost more because of the fancy and expensive educational or rehabilitative programs it implemented? (According to Douglas McDonald, this was exactly the problem with the cost comparison of the Silverdale Detention Center in Hamilton County, Tennessee.)

Our goal should be to determine the production function for public and private prisons; this is the only way we will find out whether privatization moves us to a higher production possibilities frontier or merely shifts us to a different cost-quality combination on the existing frontier. Realizing this allows us to throw out a lot of studies from the outset.

At least people are taking more seriously the need to develop valid comparisons. Governments need to mandate, by regulation or by contract, that the information necessary to do valid comparisons become available, even if collecting these extra data would add to private facilities’ cost. Until we get a better handle on what works, public and private prisons should be required to live up to the same standards to facilitate comparisons. Private prisons should get the same types of inmates as public prisons—neither better nor worse—and they should be restricted in whom they can transfer out.

Having spent so long bemoaning the paucity of good comparative effectiveness studies, I should note that there’s more to life than comparative effectiveness. Even ignoring any differences between the public and private sectors, privatization can have systemic effects, altering how the public sector works.

For one thing, privatization can, for better or worse, change the public sector as well. Suppose private prisons are better than public prisons but competitive pressures lead public prisons to improve as well. A comparative study may not be able to find any difference between the two sectors, and yet one can still say that privatization was a success. (Indeed, one study does suggest that for prisons, privatization might drive public agencies to be more efficient, though the statistical significance of this effect seems highly sensitive to the precise specification, and selection bias is a confounding issue.) Similarly, if private prisons really do cost less, and therefore allow for greater increases in capacity, thus relieving overcrowding across the board, that effect will not show up in a comparative study. Likewise if best practices migrate from one sector to another through a process of cross-fertilization: Richard Harding calls this “the paradox of successful cross-fertilization—that regimes progressively become more similar than dissimilar to each other.”

Alternatively, what if privatization leads to a race to the bottom? If private prison cost-cutting is harmful, and if public prisons have to cut costs to stay competitive, we may have lower quality, including higher recidivism, across the board.

In either of these two cases, good empirical evaluations are necessary, though detecting such dynamic, systemwide effects will require before-and-after studies, not comparative snapshots.

Finally, to step back a bit from the privatization debate, regardless of what comparative effectiveness analysis shows, both sectors may fall short of the ideal, so this exercise should not blind us to the continuing need to reform the whole system. I will add that, even if the public and private sectors are equivalent, one can argue against privatization on the grounds that—assuming it costs less—it enables greater expansion of the prison system and therefore may increase incarceration and hinder the search for alternative penal policies.

In tomorrow’s post, I’ll discuss why it would be a good idea to use performance measures.

Sasha Volokh lives in Atlanta with his wife and three kids, and is an associate professor at Emory Law School. He has written numerous articles and commentaries on law and economics, privatization, antitrust, prisons, constitutional law, regulation, torts, and legal history.
