Amidst all the debate about campus sexual assault — with everyone from President Obama to freshmen talking about the issue — there are still questions about the scope of the problem. Two graduate students at Stanford suggest a way to better determine its prevalence.
By Emma Pierson and Shengwu Li

Emma Pierson is a Rhodes Scholar and a PhD student in computer science at Stanford University who writes about statistics at Obsession with Regression. Shengwu Li is a PhD student at Stanford who studies economic theory and behavioral economics.
How common is sexual assault on college campuses? A recent massive survey by the Association of American Universities (AAU) attempted to answer this question definitively, and universities including MIT and Brandeis have also conducted independent surveys.
All of these surveys have reported that at least 15 percent of female undergraduates experience some form of sexual assault by the time they graduate.
And all of these surveys have met with the same criticism: The percentage of students responding to the survey is very low.
The AAU survey had only a 19 percent response rate; the MIT and Brandeis surveys, 35 percent.
These low response rates are a big problem because the people who choose to respond may well be a biased population, which will produce incorrect estimates of the rate of assault. As the AAU survey noted, “Certain types of estimates may be too high because non-victims may have been less likely to participate.”
Sexual assault on campus is a serious problem, but it is difficult to study accurately.
Many current studies, such as the AAU survey, use the following technique: They invite the entire student body to participate in the survey, but offer little incentive to participate, so many students do not bother to respond.
We have a better solution: Invite only a smaller sample of students, selected at random from the student body. Use the cost savings to offer students a lot of money for participating, so that almost every invited student responds.
As every statistician knows, you do not need to survey the entire population to accurately estimate the rate of assault. A smaller sample will do — provided that the sample is not biased but selected at random from the population.
Paying a huge number of students a tiny amount of money yields a biased sample, because few students respond; paying a smaller number of students a large amount yields an accurate answer, because almost all of them do.
Many universities in the AAU survey offered 6,000 students $5 each to fill out the survey, yielding a total potential cost of $30,000. But if you were willing to pay $30,000, it would be far better to pay 600 randomly selected students $50 — a $200 hourly rate for a 15-minute survey, giving students a much larger incentive to respond.
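The budget arithmetic is simple enough to check directly. A minimal sketch, using the illustrative figures from the example above:

```python
# Two ways to spend the same $30,000 survey budget
# (figures from the example above; purely illustrative).
broad_cost = 6_000 * 5    # 6,000 invited students at $5 each
targeted_cost = 600 * 50  # 600 invited students at $50 each
assert broad_cost == targeted_cost == 30_000

# $50 for a 15-minute survey works out to an hourly rate of:
hourly_rate = 50 / (15 / 60)
print(hourly_rate)  # 200.0
```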
It is important to survey a reasonably large number of students so that your results do not vary too much due to random chance. Six hundred students are enough to yield reliable results: If 20 percent of students have been assaulted, your survey has a 95 percent chance of yielding a result between 17 percent and 23 percent.
If you combine results across the 27 campuses the AAU studied, your survey has a 95 percent chance of yielding a result between 19.4 percent and 20.6 percent. These uncertainties are small, and more importantly, they are far easier to quantify than the uncertainty caused by a low response rate, which leaves you blind about the students who do not respond.
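Both intervals follow from the standard normal approximation to the sampling distribution of a proportion. A minimal sketch, assuming simple random sampling and a true assault rate of 20 percent (the function name is ours, for illustration):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for a
    proportion p estimated from a simple random sample of size n
    (normal approximation to the binomial)."""
    return z * math.sqrt(p * (1 - p) / n)

# One campus: 600 respondents, true rate 20 percent.
m = margin_of_error(0.20, 600)
print(f"{100 * (0.20 - m):.1f}% to {100 * (0.20 + m):.1f}%")  # 16.8% to 23.2%

# Pooled across 27 campuses: 27 * 600 = 16,200 respondents.
m = margin_of_error(0.20, 27 * 600)
print(f"{100 * (0.20 - m):.1f}% to {100 * (0.20 + m):.1f}%")  # 19.4% to 20.6%
```

Note that the margin of error shrinks with the square root of the sample size, which is why pooling 27 campuses narrows the interval so much.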
There are a few potential objections to our suggestion.
First, why do we think it is possible to increase the survey response rate? Several studies of sexual assault have reported much higher response rates — for example, a University of Michigan survey had a 67 percent response rate, a Stanford University survey had a 59 percent response rate, and a Department of Justice survey had an 86 percent response rate.
(Importantly, these surveys with high response rates also imply that sexual assault is a serious problem. The Michigan study found that 12 percent of undergraduate women had experienced non-consensual penetration within the past year; the Stanford survey found that 12 percent had experienced attempted or completed non-consensual penetration since arriving at Stanford; the Department of Justice found that 3 percent had experienced rape or attempted rape within the last 7 months.)
If the low response rate is mostly due to apathetic students who have not experienced sexual assault, then offering $50 for 15 minutes of their time should get their attention.
Universities could further boost response rates by ensuring that everyone on campus was aware of the survey and that invited students actually saw it (rather than ignoring a survey e-mail); for example, universities could require invited students to decide whether or not to take the survey before enrolling in courses.
It may not be possible to get 100 percent of invited students to respond. For example, some students who have been assaulted may not want to talk about it regardless of how much money is offered (and universities should of course make it clear that they can opt out).
But even an imperfect response rate allows us to place a lower bound on how many students have been assaulted. For example, if 90 percent of students respond to the survey, and 20 percent of them say they have been assaulted, we can infer that at least 90 percent × 20 percent = 18 percent of students have been assaulted, even if none of the students who failed to respond were.
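This worst-case bound is just the product of the two rates. A one-line sketch (the helper name is ours, not from any survey toolkit):

```python
def assault_rate_lower_bound(response_rate, reported_rate):
    """Worst-case lower bound on the true rate: assume every
    student who did not respond was NOT assaulted."""
    return response_rate * reported_rate

# 90 percent respond, 20 percent of respondents report assault.
print(round(assault_rate_lower_bound(0.90, 0.20), 3))  # 0.18
```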
You might also ask whether it is possible to correct for a low response rate using statistical techniques.
While there are methods for doing so, their validity can be debated and they do not always yield consistent conclusions. Furthermore, there is at this point so much skepticism about “campus sexual assault hysteria” that studies are subject to heavy criticism unless they are rock-solid.
In this politically charged climate, a survey with a high response rate is easier to understand and more likely to persuade people than complex statistical corrections for bad data.
Even if our method failed to achieve a perfect response rate, it would demonstrate to skeptics that a perfect response rate is in fact impossible to achieve, and that an imperfect one is not a sign that a survey has a political agenda.
If you wanted to look at subgroups of students — men and women, undergraduates and graduates — you might have to increase sample size, increasing expense.
But $30,000 is a bargain price for a university to pay for more reliable insight into a massive problem — less than a year’s tuition for a single student at a private college, less than a fifth of the estimated victimization cost of a single rape, and literally less than a millionth of Harvard’s endowment.
Universities that survey their entire student body but make no effort to achieve a high response rate misunderstand the basic statistical goal: not to get as many students as possible to respond, but to get an unbiased sample of reasonable size.
If universities care about sexual assault, they should invest the resources to study it as accurately as possible.