How to fix the flaws in the Global Terrorism Database and why it matters

August 11

START, a U.S. Department of Homeland Security Center of Excellence located at the University of Maryland, recently responded to our critique that its Global Terrorism Database (GTD) inaccurately represents trends in suicide terrorism. As we learned on 9/11, data on suicide terrorism matters. In the 1990s, no government thought to track the incidence of suicide attacks around the world, making it impossible to see that this deadly form of terrorism was rising rapidly. Today, both the public and policy makers base their sense of security, and national security policy itself, in part on the trends in terrorism provided by major government databases. It is therefore crucial that the main DHS database accurately portray the trajectory of terrorism.

This is why it is especially disappointing that START has given no indication of any effort to fix the major flaw that we identified. To be clear, according to the GTD data today, there were over 70 percent more suicide attacks in 2013 (619) than at the previous peak in 2007 (359), during the Iraq war.

However, as a comparison with the suicide attack database maintained by the University of Chicago Project on Security and Terrorism (CPOST) reveals, this apparent dramatic rise is mainly a function of a methodological change in how GTD collects data, not an accurate portrayal of reality. The CPOST database, which has maintained a consistent collection methodology over time, records 521 suicide attacks in 2007 and 423 in 2013, not the spectacular rise the GTD reports.
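The divergence can be checked arithmetically from the figures above: GTD shows a rise of more than 70 percent from its 2007 peak to 2013, while CPOST's consistently collected data show a decline over the same period. A minimal sketch:

```python
def pct_change(old, new):
    """Percentage change from an old count to a new count."""
    return (new - old) / old * 100

# Figures cited in the text: GTD records 359 suicide attacks in 2007
# and 619 in 2013; CPOST records 521 in 2007 and 423 in 2013.
gtd_change = pct_change(359, 619)    # roughly +72 percent
cpost_change = pct_change(521, 423)  # roughly -19 percent

print(round(gtd_change, 1), round(cpost_change, 1))  # 72.4 -18.8
```

Two databases describing the same phenomenon thus point in opposite directions, which is the core of the complaint about GTD's methodological discontinuity.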

Media outlets are widely reporting that terrorism is now at an unprecedented high, relying heavily on GTD data to support this picture. START has actively encouraged this perception among policy makers. In February, START's director used the flawed data to describe 2012 as "the most active year of terrorism on record" in testimony to the House Armed Services Committee. The danger is that START's record of global terrorism becomes the basis for policy decisions, even though by START's own admission that record is flawed.

Rather than addressing the issue, START's response evades the key methodological flaw, focusing instead on similarities between GTD and CPOST, defending its commitment to rigor and transparency, and slightly modifying its recent data. START's reply says:

“In fact, although different organizations have been responsible for GTD primary data collection, the quality control function of GTD data collection has been continuous.   GTD researchers, past and present, have ensured that the entire database uses the same standards for inclusion and is as comprehensive as possible.”

At the same time, however, START laments that consumers of GTD data do not "heed our warnings" about temporal comparisons, as if it bears no responsibility for how others connect the trajectory of data over periods with incommensurate data collection methods. In other words, START takes the odd position of lamenting that its warnings about temporal comparisons go unheeded while in effect suggesting that such warnings are unnecessary.

No wonder consumers are confused. START is shifting the burden of figuring out the methodological messiness in the data onto the consumers of its data, issuing warnings that are ambiguous, conflicting and confusing, especially when START itself does not heed its own warnings.

Under these circumstances, warnings alone are not sufficient. The GTD is presented as a seamless record of terrorism spanning many decades: its very selling point is the ability to see and study trends in terrorism over time. Everything about the way the database is presented, from the interface for accessing the data online to GTD's own infographics to the very name of the database, reflects a tacit (if not explicit) endorsement of the data as an accurate record of global terrorism. It is therefore no surprise that GTD's warnings against drawing inferences from its data on trends in global terrorism are ignored. How serious could the problem be if, in spite of those warnings, the GTD uses its data to make claims about the trajectory of violence in the world and makes analyses based on its data available to the media and policy makers? If GTD truly believes that using its data to analyze and study trends in global terrorism is flawed enough to warrant a warning, then it should do more to prevent its data from being used to represent such trends until the problem has been fixed.

In sum, what START has not done is present a plan or even a commitment to resolve the underlying incongruity in collection methods across time that is painting a misleading picture of terrorism trends to the world.

So, what should START do to fix the problem? In the short term, START should undertake three steps:

1. Break the GTD data into four distinct datasets, each based on a consistent collection method, requiring users to download a separate file for each dataset.

2. Quantify the impact that changes in GTD data collection methodologies have on the number of events in the database, which would be accomplished by collecting data using both the new and old methods in a given year.

3. Alter all trend graphics to display and delineate the time periods of the specific collection methods that significantly influence the number of attacks collected, so that all users would see the discontinuity across time.

Below is an example of a trend graphic that would display the different collection methodologies using their most recent data.
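The first and third steps above amount to partitioning the event record by collection era and reporting counts era by era. The sketch below is a minimal illustration of that idea; the era labels and year boundaries are illustrative assumptions only, and the real cut points would have to come from GTD's own documentation of its collection phases.

```python
from collections import Counter

# Hypothetical collection-era boundaries (start_year, end_year, label).
# These cut points are assumptions for illustration, not GTD's actual ones.
ERAS = [
    (1970, 1997, "era1"),
    (1998, 2007, "era2"),
    (2008, 2011, "era3"),
    (2012, 9999, "era4"),
]

def era_for_year(year):
    """Return the collection-era label covering a given incident year."""
    for start, end, label in ERAS:
        if start <= year <= end:
            return label
    raise ValueError(f"year {year} falls outside all defined eras")

def split_by_era(incidents):
    """Step 1: partition incidents into one dataset per collection era,
    so each downloadable file reflects a single consistent methodology."""
    datasets = {label: [] for _, _, label in ERAS}
    for inc in incidents:
        datasets[era_for_year(inc["year"])].append(inc)
    return datasets

def counts_per_year(incidents):
    """Step 3 helper: yearly attack counts, to be plotted with the era
    boundaries drawn on the chart so the discontinuities are visible."""
    return Counter(inc["year"] for inc in incidents)

# Toy records standing in for GTD rows (only the field needed here).
sample = [{"year": y} for y in (1995, 1995, 2005, 2010, 2013, 2013, 2013)]
by_era = split_by_era(sample)
print({label: len(rows) for label, rows in by_era.items()})
# -> {'era1': 2, 'era2': 1, 'era3': 1, 'era4': 3}
```

Step 2, quantifying the methodological impact, would then reduce to comparing per-year counts collected under the old and new methods for the same year, much like the percentage-change comparison between GTD and CPOST discussed earlier.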

Ultimately, however, the U.S. government should fund a true correction of the flaw. The eventual solution is for all the data to be collected using the same methodology. Undoubtedly, this would be expensive. But American policy makers and the public deserve the best data available on terrorism, one of the most important national security issues of our time.

Robert A. Pape is a professor at the University of Chicago where he heads the Chicago Project on Security and Terrorism.  Keven Ruby, Vincent Bauer and Gentry Jenkins are Research Director and Research Analysts at CPOST, respectively.  
