
Academia loves surveys—but it needs to get more choosy


Addiction to scattershot polling yields worse information for more bureaucracy, says Marcus Munafò

It sometimes feels like barely a week goes by without an invitation to complete a survey of some kind, whether institutional, disciplinary or sectoral. Institutional staff surveys are a prime example, typically taking 15 or 20 minutes to complete, but funders, publishers and other organisations also regularly canvass opinion on a range of topics.

Policymaking tools

Surveys have become a central tool of management and policymaking in academia, intended to capture information that can tell us where we are, or what we need to do, in areas from staff wellbeing through to the broader direction of travel of the sector. But do they achieve that aim? And could they be more efficient?

As any epidemiologist or pollster will tell you, the representativeness of your sample doesn’t always matter. Smoking, for example, will cause lung cancer in any population you care to look at. 

But sometimes it matters a lot. For estimating prevalence (the proportion of people who think or act a certain way), it matters a great deal. If a poll on whether people smoke asked only academics, the results would not reflect wider society. Opt-in polls, where a survey is sent out into the world to see who responds, are by definition not representative. They collect the views of those with the time, the resources and the level of interest (or obsession) to engage.

Even if an opinion poll were simply sent out to everyone in the country in the hope of achieving a high response rate, the results would almost certainly be wildly inaccurate. Those who took the time to complete it would not reflect the average person, and the poll would be likely to miss some of the most valuable and informative responses, such as those from people struggling with day-to-day life to a degree that precludes completing a survey.

Some opt-in polls offer rewards in an effort to boost response rates and broaden the sample, but these can backfire. In March, the US-based Pew Research Center reported that this method is vulnerable to a high rate of “bogus respondents”, who complete the survey insincerely and with minimal effort in order to obtain the reward. Bogus respondents, Pew found, also tend to report more extreme views.

For these reasons and others, polling companies work hard to ensure that their samples are representative, adjusting their sampling methods to account for factors such as differing response rates among age groups. The result is that, after some high-profile failures (such as predictions that the 2015 general election would be much tighter than it turned out to be), polls of voting intention now predict election outcomes pretty reliably.

More carefully designed and targeted polling has another benefit that speaks to the concerns around bureaucracy and workload in academia. The samples are tiny relative to the population they are intended to capture information on—a couple of thousand people, at most, can give an idea of how the whole country is thinking.

Representative sampling

So representative sampling gives us better information more efficiently. What’s not to like? And yet I have never seen an approach like this taken for any of the surveys I have been asked to complete. They are simply sent to everybody in the hope of achieving a high, and therefore representative, response rate, which they almost never do. So we end up with more bureaucracy, more workload and worse information.

Of course, there are challenges to using more sophisticated polling in higher education. Using a single standing panel—a representative group willing to be polled time and again—would probably not work given the ever-changing nature of the academic workforce, the wildly differing types of survey and the demands on panel members.

Things to consider 

But there are other approaches worth considering, such as randomly selecting individuals across career stages, departments, institutions and so on, and incentivising them to complete the survey (although this needs to be done carefully, given the Pew findings).
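To make that concrete, here is a minimal sketch of what such stratified random selection could look like, written in Python with entirely hypothetical staff records and strata. It illustrates the general idea of inviting a small sample in proportion to each group's share of the workforce, rather than mailing everyone; it is not a description of any existing institutional system.

```python
import random
from collections import defaultdict

def stratified_sample(staff, strata_keys, total_n, seed=0):
    """Draw a random sample spread proportionally across strata.

    staff: list of dicts, e.g. {"email": ..., "career_stage": ..., "department": ...}
    strata_keys: fields that define a stratum, e.g. ("career_stage", "department")
    total_n: overall number of people to invite
    """
    rng = random.Random(seed)

    # Group staff into strata (career stage x department, etc.)
    strata = defaultdict(list)
    for person in staff:
        key = tuple(person[k] for k in strata_keys)
        strata[key].append(person)

    population = len(staff)
    sample = []
    for members in strata.values():
        # Allocate invitations in proportion to the stratum's share of the
        # workforce, always inviting at least one person from each stratum.
        n = max(1, round(total_n * len(members) / population))
        sample.extend(rng.sample(members, min(n, len(members))))
    return sample

# Hypothetical usage: invite roughly 200 people rather than all 8,000.
# invited = stratified_sample(all_staff, ("career_stage", "department"), total_n=200)
```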

At the very least, it would be useful to run an experiment to see what difference collecting a representative sample makes. When an institution is ready to conduct another survey, it could combine the conventional opt-in approach with a more targeted approach aimed at collecting data on a much smaller but more representative sample, and see how the results compared—including whether the representative sample is indeed more representative.

I suspect the results will be striking. 

Marcus Munafò is professor of biological psychology and associate pro vice-chancellor, research culture, at the University of Bristol

This article also appeared in Research Fortnight and a version appeared in Research Europe