This month, more than 20 million households in the United Kingdom will open their mailboxes to a rather unusual letter: an individualized, item-by-item tally of how the U.K. government spent their tax money in the past year. For example, the U.K. government currently spends around 20 percent of its budget on health. So a household that paid £1,000 in taxes will see that roughly £200 went to health programs.

Mailed each of the past two years by Her Majesty’s Revenue and Customs (HMRC), Britain’s version of the IRS, these “taxpayer receipts” (or tax summaries, as they’re called in the U.K.) are meant to increase government transparency and, in turn, create a more informed electorate. As one MP argued, such transparency should make government more accountable.

And to the delight of some conservatives, the receipts also highlight the costs of government. As British Chancellor George Osborne put it, the receipts show “how hardworking taxpayers have to pay for what governments spend.”

The view on this side of the Pond is quite different. In 2011, the White House put a similar version of the receipt on its Web site. As in Britain, the primary goal was informing the electorate. “The more people know, the more effectively we can govern,” President Obama said. But the political calculus was entirely reversed. Taxpayer receipts, the president argued, turn “out to be good politics, too, because we think we got the facts on our side in a lot of these debates.” Like conservatives in the U.K., liberals in the U.S. believed that the receipt would benefit their side.

For either side to be correct, however, taxpayers actually had to read—and remember—these summaries. There was ample reason to doubt that this would be the case.

Will citizens learn new political information?

Even though civil society groups and governments champion the kind of transparency policies that the summaries represent, social scientists have long been pessimistic about the public’s political knowledge, and the possibility of increasing it. The seminal work on the subject describes political knowledge levels as “astonishingly low,” as citizens have trouble remembering basic political facts and prove stubbornly resistant to new ones.

The mood about public knowledge has been especially dark lately, as evidence has emerged that, when presented with new factual information, people are apt to double down on their prior misbeliefs. To top it off, a 2012 YouGov survey experiment on the taxpayer receipt by The Monkey Cage’s own John Sides found no discernible effects on attitudes toward government spending.

To understand whether receipts could shift political attitudes and political knowledge, we studied last year’s distribution in the U.K. While the U.S. initiative was limited in scope—taxpayers first had to know about the Web site and then had to access it directly—the U.K. initiative was an unprecedented effort at government transparency. The receipts were sent directly to households, and no one could opt out.

According to our analysis, the receipts did indeed increase knowledge about government. To be sure, the baseline level of political knowledge was low. But the increase was real—we estimate that the receipts increased knowledge by roughly 10 percent.

Our evidence comes from a nationally representative panel that, with the assistance of the tax authorities, we surveyed before and after the receipts were mailed. (The research was funded by Omidyar Network.) Following a standard approach in survey research, we randomly encouraged half of the individuals in our panel to be on the lookout for their tax receipts and provided extra incentives to pay attention to the details. The other half received no such reminders. We then asked participants a series of questions about the U.K. budget and compared the responses of those encouraged to read the receipt with the responses of those who received no encouragement. This approach was designed with an eye toward isolating the receipt’s effects on knowledge from any unrelated changes.

To see whether the receipts changed knowledge, we asked our panel how much the government spent in four categories: overseas aid, national defense, health, and welfare and pensions. We asked for precise estimates, not ballpark figures, and then marked responses as accurate if they fell within 10 points of the true number.
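The scoring rule described above can be sketched in a few lines of code. To be clear, the budget shares and the respondent’s answers below are hypothetical numbers chosen for illustration, not figures from the survey or the U.K. budget:

```python
# Sketch of the accuracy-scoring rule: an estimate counts as accurate
# if it falls within 10 points of the true figure.
# All numbers below are hypothetical, for illustration only.

TRUE_SHARES = {  # hypothetical "true" budget shares, in percent
    "overseas aid": 1,
    "defense": 5,
    "health": 20,
    "welfare and pensions": 34,
}

def score_response(estimates, true_shares=TRUE_SHARES, tolerance=10):
    """Return the number of categories the respondent estimated accurately."""
    return sum(
        1
        for category, true_value in true_shares.items()
        if abs(estimates.get(category, float("inf")) - true_value) <= tolerance
    )

# A hypothetical respondent who badly overestimates overseas aid
# (a common pattern in our data) but is close on the other categories:
respondent = {
    "overseas aid": 15,
    "defense": 8,
    "health": 25,
    "welfare and pensions": 30,
}
print(score_response(respondent))  # 3 of the 4 estimates fall within tolerance
```

Under this rule, a respondent’s score simply counts how many of the four category estimates land inside the tolerance band.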

Overall, participants gave accurate estimates in only a little over one of the four categories on average, and were often off by an order of magnitude, especially on overseas aid. Those encouraged to read the taxpayer receipt nevertheless answered more questions correctly than the control group (1.2 vs. 1.1 correct answers, on average). The clearest improvement came on overseas aid, where those who received the random encouragement gave significantly more accurate estimates of government spending than those who did not.
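The comparison behind those figures is a simple difference in group means, computed over each respondent’s count of correct answers. The per-respondent scores below are fabricated to illustrate the calculation; they are not our survey data:

```python
# Sketch of the treatment-control comparison: average correct answers
# among the encouraged group vs. the control group.
# The scores below are fabricated for illustration, not our survey data.
import statistics

encouraged = [1, 2, 1, 0, 2, 1, 2, 1]  # hypothetical correct-answer counts (0-4)
control = [1, 1, 0, 2, 1, 1, 1, 1]

diff = statistics.mean(encouraged) - statistics.mean(control)
print(round(diff, 2))  # prints 0.25: the encouraged group's advantage
```

In the actual study, this raw difference is then subjected to standard statistical tests to check that it is not an artifact of sampling variation.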

The figure above shows both how political knowledge changed over time and how much the receipts themselves changed it. As you can see, political knowledge increased among all our subjects. Yet among those who were randomly encouraged to read their receipts, the increase was greater. While we can’t be sure what caused the increase in the control group, we can be confident that the larger increase in the treatment group was owed exclusively to the encouragement to read the receipt. Subsequent analyses confirmed that these differences were not due to chance or other statistical quirks.

What about Osborne’s expectation that the receipts would lead to more conservative attitudes? Or Obama’s conflicting expectation that transparency would win the public over to the progressive side? We find no evidence of ideological movement. Of course, different taxpayers will respond differently to reading tax receipts. But it’s not surprising that small changes in political knowledge don’t lead to big changes in public opinion.

The public’s level of political knowledge may indeed be “astonishingly low.” And individual transparency policies, such as the receipts, will not radically transform the public’s level of political knowledge. Yet, if our results are any indication, there is some hope.

Not all efforts to increase political knowledge are fated to be futile.

Lucy Barnes is a lecturer in quantitative politics at the University of Kent (UK) and associate member of Nuffield College, Oxford. Avi Feller is assistant professor at the Goldman School of Public Policy at UC Berkeley. Jake Haselswerdt is a Robert Wood Johnson Scholar in health policy research at the University of Michigan and assistant professor in the Department of Political Science and the Truman School of Public Affairs at the University of Missouri. Ethan Porter is a Ph.D. candidate at the University of Chicago.