
The government publishes between 2,500 and 4,500 new regulations every year, ranging from routine rule changes to policies with profound effects on the American economy. But according to a new analysis, few are ever tracked to see how well they're working.

Despite multiple executive orders from the Obama administration to review regulations that are outmoded, ineffective or simply a burden on taxpayers, federal agencies seem to have no plans or mechanisms in place to get this done, the study concludes.

“While agencies often provide a wealth of information on the anticipated effects of their rules, they seldom return to a rule to evaluate whether the benefits and costs they anticipated actually materialized,” the study, released this week by the Regulatory Studies Center at George Washington University, found.

Agencies are supposed to explain, when they issue what are called final rules, how they will measure whether a new regulation actually works. In practice, the study said, this almost never happens.

Many regulations have big economic effects, and at a time when taxpayers are demanding results and accountability from public programs, tracking their effectiveness is crucial, it found.

In 2011, Obama started issuing a series of executive orders to get agencies to do what are called “retrospective analyses” of rules “that may be outmoded, ineffective, insufficient, or excessively burdensome, and to modify, streamline, expand, or repeal them in accordance with what has been learned.”

The orders were followed by guidance from administration officials to make regular assessments of important regulations a way of doing business.

But the GWU study, which looked at 22 “economically significant” rules proposed last year (meaning that they had a positive or adverse effect of $100 million or more on productivity, jobs, competition, the environment, public health and safety or local governments), found very few plans to measure their effectiveness.

The regulations ranged from enhanced tank car standards for high hazard flammable trains to minimum wages for federal contractors. They included the Environmental Protection Agency’s widely publicized reductions to carbon pollution emissions for power plants.

Just two-thirds of the regulations stated the problem the new rules were intended to address. A little more than a third included any metrics to evaluate the rules’ success, and even fewer explained how they planned to collect information to track success. None included either a time frame for review or a discussion of how the rule would lead to a particular outcome.

“Of the 22 rules we examined, not a single one included a plan for review,” the study by senior policy analyst Sofie E. Miller said.

“While many agencies successfully identified a problem that their regulation was intended to address,” she wrote, “in many cases the problem identified was not related to the rules the agency proposed.”

For example, some of the Energy Department’s proposed standards for energy-efficient products identified “inadequate or asymmetric information” about potential energy savings as the problem to be addressed. But the study points out that what the rules really do is ban certain products from the marketplace.

“In such cases, either DOE has identified the wrong problem, or DOE’s problem is not addressed by its standards,” the study says, concluding that this makes it harder to figure out if the rules are working.

Many agencies tally a rule’s benefits and costs over horizons stretching as far as 30 years out.

Not surprisingly, the study recommends that agencies “always” identify measurable goals of their rules and figure out how to measure whether they are being met.