A helicopter arrives to take people away from the Mount Everest base camp, post avalanche. (6summitschallenge.com)

It hasn’t even been a week since Nepal’s massive earthquake killed thousands and destroyed businesses, homes, roads and hospitals across the country. But already, the United Nations has called for $415 million in aid; more than $50 million has been pledged by 53 countries and foundations for immediate relief. Private donors, foundations and businesses will likely promise millions more.

Outsiders were similarly generous after the earthquake in Haiti, the Indonesian tsunami and Hurricanes Katrina and Sandy. This money is important — it enables emergency response teams like the ones I’ve been on to restore essential services and provide water, shelter and food.

But are these teams spending this money effectively? Are we doing the best we can to reach the most people as quickly as possible? Nobody knows.

In medicine, we collect data about what works and what doesn’t. We run experiments; we use scientific evidence to continuously improve our practice and treatments. But in a disaster response, we are so focused on delivering services that there’s hardly any time to collect data. Did we meet the needs of the affected population? Did the way we delivered food in Nepal work better than the way we did it in Haiti? Have we figured out how to shelter people more quickly, or ensure access to clean water? We can’t say. And as a result, we throw money at a country, hoping to alleviate suffering but unsure of whether it really will.


A few organizations, like Active Learning Network for Accountability and Performance and the Johns Hopkins Center for Refugee and Disaster Response (which I direct), are beginning to collect data and establish best practice protocols.  But the field is in its infancy. Eventually, we hope to “grade” the response to each event so that we can finally compare the management of responses and focus on what works.

Take Haiti as an example. It has been five years since the earthquake there devastated Port-au-Prince and killed hundreds of thousands. I spent weeks there in the ensuing chaos, providing emergency services with limited supplies and equipment to people sleeping outdoors with little food, sanitation or clean water. We went back a year later to see whether people had received what they needed from the vast outpouring of money.

They had not. Though countries and aid groups spent $2 billion on Haiti during the first year, our studies showed that 75 percent of the families had not recovered to their prior standard of living. More than 1 million people were still living in camps and tents. Even now, the country is still struggling to resurrect businesses and even basic services.

Another example is the massive flooding in Pakistan in 2010, which affected more than 20 million people. Six months after the floods, we measured not just the impact on people and how well they had recovered, but also whether they felt their lives were the same as or better than before the floods. More than half reported that they still had unmet basic needs, and only 20 percent were satisfied with the relief they received.

According to the 2014 Global Humanitarian Assistance Report, we spent $22 billion on relief efforts in 2013. The American public and companies donated billions of dollars after Hurricane Katrina and the Haitian earthquake. But we have no way of knowing whether that money was squandered or efficiently used. And we are not devoting money to the research and data collection that would help us find out.

Disaster response management lacks some of the basic information used to run any business, such as standard quality-assurance methods to improve products and services. I’ve seen this problem again and again, and I’ve seen the benefits that objective data can bring in helping us find the greatest needs and distribute money and services the right way after a disaster. Often a large “emergency needs assessment” is conducted to set the general direction for the response (unfortunately, as in Haiti, this can take months to complete), but there are usually no follow-up assessments to check progress.

Ideally, management data would be collected systematically and repeatedly throughout the response and used in real time to improve ongoing activities. Doing this will require that people dedicated solely to collecting, analyzing and sharing data be part of every disaster response team. Their task would be to share the management data with the leaders of the response and, ultimately, to use that information to measure its successes and failures — answering simple questions such as “Did we provide the right thing, at the right time, to the right people?” and “What percentage of the people who needed food actually received rations?”
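That last kind of question — what share of people in need were actually reached — is simple enough to compute if the data exist. A minimal sketch of such a coverage calculation, with entirely hypothetical assessment numbers, might look like this:

```python
# Illustrative only: a toy "coverage" metric of the kind a follow-up
# assessment could report. All figures below are hypothetical.

def coverage(received: int, in_need: int) -> float:
    """Percent of people who needed an item and actually received it."""
    if in_need == 0:
        return 100.0  # nobody needed it, so the need was fully met
    return 100.0 * received / in_need

# Hypothetical follow-up survey: (households reached, households in need)
assessments = {
    "food rations": (1500, 2000),
    "clean water": (900, 1800),
    "shelter kits": (400, 1600),
}

for item, (received, in_need) in assessments.items():
    print(f"{item}: {coverage(received, in_need):.0f}% of those in need reached")
```

The point is not the arithmetic, which is trivial, but that the inputs — who needed what, and who received it — are rarely collected at all.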

As a physician, I know how the tools of “evidence-based medicine” improve my practice and help me provide the best possible care. During those first difficult weeks in Haiti, I wished we had had data available to help us identify needs and apply best practices. Every disaster is different, but so is every patient, and the response to both should be based on information, not best guesses. Today, the term “evidence-based medicine” is understood and practiced globally; what we need is a similar understanding of the importance of “evidence-based disaster response.”

Only by using objective data can we bring the right help to the right people, right away.