Stan Humphries is chief economist at Seattle-based Zillow and architect of the Zestimate. This commentary is in response to criticism raised in an opinion piece by David Howell, executive vice president and chief information officer at McEnearney Associates. To read Howell’s criticism, click here.
When we started Zillow, our goal was to give people free, easy access to previously hard-to-find real estate information so they could make smarter home-related decisions. The starting point for many is the Zestimate — a free, unbiased, estimated value of a home, both currently and over time, based on millions of publicly available and user-submitted data points.
Today, we produce Zestimates on 100 million U.S. homes. Our median error rate, nationwide, is currently 6.9 percent, which means half of all estimated values are within 6.9 percent of the final sale price. By comparison, when we launched in 2006, we offered a Zestimate on 50 million U.S. homes and had a median error rate of 13.6 percent. In eight years, we have doubled our footprint and cut our error rate in half. And we continue to improve.
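To make the metric concrete, here is a minimal sketch of how a "median error rate" like the 6.9 percent figure is computed from pairs of estimates and final sale prices. The price pairs are illustrative inventions, not Zillow data, and the function name is ours.

```python
# Illustrative sketch: computing a "median error rate".
# The estimate/sale-price pairs below are made up, not Zillow data.

def median_error_rate(estimates, sale_prices):
    """Median of the absolute percentage errors between estimates and sale prices."""
    errors = sorted(abs(est - sale) / sale
                    for est, sale in zip(estimates, sale_prices))
    n = len(errors)
    mid = n // 2
    # Odd count: middle value; even count: average of the two middle values.
    return errors[mid] if n % 2 else (errors[mid - 1] + errors[mid]) / 2

estimates   = [310_000, 195_000, 520_000, 455_000, 260_000]
sale_prices = [300_000, 210_000, 500_000, 450_000, 285_000]
print(f"{median_error_rate(estimates, sale_prices):.1%}")  # → 4.0%
```

By this definition, half of all homes sell within the stated percentage of their estimate, and half sell further from it — which is why a single headline number understates how spread out individual errors can be.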
We appreciate McEnearney Associates’ comparison, and we agree it’s important to look at accuracy — which is why our local and national accuracy is published prominently on our website and updated every quarter. It’s important to note that Zillow’s accuracy statistics are based on all closed sales in a market, not just a small sample of homes in a multiple listing service.
What is missing in McEnearney’s analysis, however, is context. In its report, McEnearney points to the fact that fewer than half of Zestimates come within 5 percent of sales prices and disparages them as “wildly inaccurate and inconsistent,” without much context as to how that level of accuracy compares to other opinions of value.
One way of providing context is to compare Zestimate accuracy to the accuracy of initial listing prices set by real estate agents themselves. We have conducted such a study in the past for all homes nationwide, and recently conducted a similar analysis for the Washington, D.C., metro, based on 38,438 closed sales over the past year. We found that initial Zestimates — the Zestimate at the time the property was first listed for sale — were within 5 percent of the ultimate sale price 46 percent of the time in the D.C. area. Initial list prices set by agents in and around D.C. came within 5 percent of the final sale price 76 percent of the time. The median error rates for initial list prices and initial Zestimates were 2.2 percent and 5.5 percent, respectively, in the D.C. metro. The margin narrows to 3.5 percent and 5.5 percent, respectively, when you remove listings that may have underpriced the home.
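The "within 5 percent" share cited above can be sketched the same way: count the estimates whose absolute percentage error falls inside a tolerance band. The price pairs and function name below are hypothetical, not drawn from the D.C. sales data.

```python
# Illustrative sketch of a "within 5 percent of sale price" accuracy share.
# The price pairs are hypothetical, not the D.C. closed-sales data.

def share_within(estimates, sale_prices, tolerance=0.05):
    """Fraction of estimates whose absolute error is at most `tolerance` of the sale price."""
    hits = sum(abs(est - sale) / sale <= tolerance
               for est, sale in zip(estimates, sale_prices))
    return hits / len(estimates)

estimates   = [310_000, 195_000, 520_000, 455_000, 260_000]
sale_prices = [300_000, 210_000, 500_000, 450_000, 285_000]
print(f"{share_within(estimates, sale_prices):.0%}")  # → 60%
```

Note that the two metrics answer different questions: the median error describes the typical miss, while the tolerance share describes how often an estimate lands close enough to be useful.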
What do all these numbers tell us? Mostly, they tell us that predicting with 100 percent certainty what a home will sell for on the open market is almost impossible, for people and computers alike. Is a well-informed human better at pricing an individual home than a computer? The answer is yes, of course. But it’s closer than you might think.
On its own, the Zestimate allows users to quickly answer a host of important questions, for free: Am I likely underwater on my mortgage? How much value has a homeowner gained or lost since buying? What is the value of this home, relative to a similar home in a different part of town?
The Zestimate is also designed to be used in conjunction with many other pieces of information, because we know there are decisions for which you need more than a Zestimate. In these cases, we always recommend supplementing the Zestimate with professional advice. Great agents provide services far beyond just pricing a home: assistance in negotiation, help with marketing and priceless peace of mind. A computer will never replace that.
The reality is we don’t live in a black-and-white world in which agents’ price opinions are infallible and computer-generated data is inherently untrustworthy. There is ample room for both, and both help the consumer in search of an honest deal.