The way Metro measures the on-time performance of its trains has long seemed dubious to a lot of riders. Data published by the transit agency sometimes show that the subway ran efficiently in a given quarter — and commuters will scratch their heads, recalling how many times they were late for work in those months.
As Metro’s chief performance officer, Andrea Burnside, put it in a report to be delivered Thursday to the agency’s board, “Customers have expressed disbelief in the measure because the results often do not match their experiences.”
Maybe the polite ones say it like that.
Now, however, Metro is using a new method to gauge efficiency, or lack thereof, becoming the nation’s first transit system to measure subway performance by passenger “travel time,” Burnside said in an interview.
Rather than simply monitoring the flow of trains — which are supposed to arrive in stations at specified time intervals — Metro this month began using data from hundreds of thousands of SmarTrip cards to record how long it takes for riders to travel between stations, compared with how long the trips ideally should take.
“The movement of the trains and the movement of the people are really two different things,” Burnside said. “We may say, for example, that the trains are 80-something percent on time. Then our customers, when we ask them how often they think they were on time, they’ll tell us it was 60-something percent.”
In the spring, when Metro publishes its next quarterly performance report, covering the first three months of 2016, she said, commuters will see “a more transparent” accounting of how well or poorly the subway operated. And customers with registered SmarTrip cards will find a wealth of new online data measuring their personal experiences.
For beleaguered Metro, plagued by safety and financial woes, especially in the past year, it’s not always easy to convince riders that efficiency statistics issued by the agency are legitimate. Metro officials, for instance, might report that on-time performance was 85-plus percent. And commuters sneer incredulously.
Metro’s longtime method of calculating on-time performance “is consistent with similar measures at other transit properties and the data used for the calculations is accurate,” Burnside wrote in her report. However, “customers have repeatedly commented that they are more concerned about the overall amount of time it takes to complete a trip and about the reliability and predictability of their travel time.”
The old method was based on what are called “headways.”
During rush hours at some stations in downtown Washington, for example, a train is scheduled to arrive every three minutes. Elsewhere in the system, at other hours, headways are scheduled for six minutes or 12 minutes.
Whenever a train arrives within its designated headway — plus or minus a few minutes, depending on the station and the time of day — it is considered “on time.” But that can have little to do with whether riders get where they’re going in a reasonable time.
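The headway rule described above can be sketched in a few lines of code. This is a hypothetical illustration, not Metro's actual software: the two-minute tolerance is an assumed stand-in for the "plus or minus a few minutes" allowance, which in reality varies by station and time of day.

```python
from datetime import datetime, timedelta

def on_time_share(arrivals, headway_min, tolerance_min=2.0):
    """Fraction of arrival gaps that fall within the scheduled headway.

    arrivals: time-sorted list of datetimes for trains at one station.
    headway_min: scheduled minutes between trains (e.g., 3 at rush hour).
    tolerance_min: assumed allowance; the real figure varies by station and hour.
    """
    allowed = timedelta(minutes=headway_min + tolerance_min)
    # Gaps between each consecutive pair of trains.
    gaps = [later - earlier for earlier, later in zip(arrivals, arrivals[1:])]
    if not gaps:
        return None
    on_time = sum(1 for gap in gaps if gap <= allowed)
    return on_time / len(gaps)

# Example: trains at 8:00, 8:03, 8:06, 8:14 and 8:17 on a 3-minute headway.
base = datetime(2016, 1, 4, 8, 0)
arrivals = [base + timedelta(minutes=m) for m in (0, 3, 6, 14, 17)]
print(on_time_share(arrivals, headway_min=3))  # 0.75 -- three of four gaps qualify
```

Note what the sketch captures: the 8-minute gap before the 8:14 train counts as one late event, and the measure then resets. Nothing in the calculation reflects how long any individual rider waited or whether they could board at all.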
Because of a slowdown in the subway, resulting from an infrastructure or mechanical problem somewhere in the system, a train packed with passengers might hold at a platform for several minutes before it is cleared to move. As long as the next train pulls into the station within its headway, the sequence is recorded as on time.
Another familiar scenario: A train arrives in a station so jammed with passengers that some customers on the platform can’t squeeze aboard. So they wait for the next train, which is also stuffed. So they wait again. And again. If the crowded trains keep showing up within their headways, they are deemed on time. Even by that loose measure, Metro has struggled.
“On-time performance fell below 80 percent this quarter,” Burnside’s office reported in the fall, referring to July, August and September. That was the worst three-month performance since Metro began publishing such statistics five years ago.
Under the new method, when a rider uses a SmarTrip card to enter a station, Metro’s software records the time. Then it records the time when the same card is used to exit a station. And it adds a few minutes of “buffer time,” as Burnside put it, meaning the estimated time that the rider spends “navigating” in and out of the station on foot. These buffer times vary depending on the station, she said.
The rider’s total travel time is measured against the ideal travel time between the two stations. She said hundreds of thousands of calculations are being made daily, accounting for all trips in the nation’s second-busiest subway.
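The travel-time comparison can be sketched as follows. The station names, buffer minutes, and ideal trip times below are invented for illustration; the article reports only that Metro maintains per-station buffer estimates, not what those values are.

```python
from datetime import datetime

# Assumed per-station walk-in/walk-out "buffer" minutes (illustrative values).
BUFFER_MIN = {"Bethesda": 3.0, "Metro Center": 2.0}
# Assumed ideal ride time in minutes between station pairs (illustrative value).
IDEAL_RIDE_MIN = {("Bethesda", "Metro Center"): 15.0}

def trip_delay_minutes(entry_station, entry_time, exit_station, exit_time):
    """Minutes a tap-in/tap-out trip took beyond its ideal travel time.

    The ideal travel time is the ideal ride plus the buffer minutes for
    navigating both stations on foot, mirroring the method described above.
    """
    elapsed = (exit_time - entry_time).total_seconds() / 60.0
    ideal_total = (IDEAL_RIDE_MIN[(entry_station, exit_station)]
                   + BUFFER_MIN[entry_station] + BUFFER_MIN[exit_station])
    return elapsed - ideal_total

# Example: tap in at 8:00, tap out at 8:24 on a trip whose ideal total is 20 min.
delay = trip_delay_minutes("Bethesda", datetime(2016, 1, 4, 8, 0),
                           "Metro Center", datetime(2016, 1, 4, 8, 24))
print(delay)  # 4.0 -- four minutes slower than the ideal trip
```

Run across every tap-in/tap-out pair in a day, a calculation like this yields a distribution of delays per rider rather than a single pass/fail verdict per train, which is what distinguishes the new measure from the headway method.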
“Metrorail will become the first U.S. system to have such a measure,” Burnside wrote in her report. “In a research effort, staff has learned of only one other subway system (London Underground) that measures their customer experience in such a way.”
This spring, she said, commuters who have registered their SmarTrip cards on Metro’s website will be able to go online to review their “individual travel summaries” for the two stations they travel between most often. The summaries will note their slowest trips in terms of time, their fastest trips and their average travel times, with information about why a trip on a particular day might have taken longer than normal.
“We haven’t really had the tools before to do this,” Burnside said. “Now, with business intelligence tools and computer capabilities, we’re able to take a much better look at performance from a customer point of view.”