How to help surveys have more impact

Features

  • Author: Catherine Saunders
  • Date: 16 Oct 2015
  • Copyright: Image appears courtesy of Getty Images

Surveys are used in evaluation and research across exceptionally diverse contexts. In health services research, the reason that doctors most frequently give for not acting on the findings of patient experience surveys—questionnaires that ask people about their healthcare—is that they simply don’t believe the findings are correct or valid. In international evaluation contexts, the quality of survey data is often cited as a challenge to the validity of the work and a reason not to act on the findings. In epidemiological research, such concerns have been raised about surveys with sample sizes ranging from under 10 respondents to over a million.


Why do surveys provoke so much cynicism?

A dose of skepticism is healthy when surveys have high-stakes uses, such as when findings are linked to financial incentives or to major programme changes and reviews. In these contexts, caution is appropriate, and with good reason.

Skepticism is also appropriate because survey methods can be flawed. Surveys do have limitations, and researchers, like pollsters, can get things wrong. The recent UK general election is a case in point. Pre-election polls consistently got the results wrong, and post-election reviews suggested that perhaps polling respondents weren’t representative of voters as a whole, or that the surveys didn’t necessarily ask the right questions. Once lost, trust can be hard to regain.

However, policymakers, stakeholders, and other users of research may also challenge survey findings for reasons that are not purely methodological, and emotion can sometimes play a part. When a survey is difficult to understand, for instance, one reflexive reaction is to challenge the survey itself. Challenge can be appropriate when findings run counter to received wisdom and current practice, but the fact that findings are unexpected is no reason to dismiss them altogether. Nor is disliking them: objective research can reveal inconvenient truths. Not knowing what to do about survey findings can also explain why they are challenged—the practical response required may feel impossible or overwhelming.

How can researchers and the users of survey research move from a knee-jerk dismissal of survey findings to a situation where the majority of stakeholders are comfortable trusting and acting on the findings of the research? Likely ways forward include emphasizing the methodology, triangulating the evidence, acknowledging the limitations, and developing good communication and trust between parties.

Get the survey methods right
Researchers developing and running surveys need to ensure good practice, that the survey is well-designed, and that the data collection is well-implemented in the field. There is an abundance of resources out there, across research areas. The STROBE statement, which stands for Strengthening the Reporting of Observational Studies in Epidemiology, has been developed by an international collaborative of researchers; it provides a good starting point for researchers planning and designing a survey, and for commissioners, practitioners and end-users of the research who are reviewing the work. Its guidelines were developed to improve the quality of the reporting of epidemiological and observational studies, and the criteria laid out identify the key methodological challenges that should be assessed. These include assessment of study design, sampling, missing data, measurement and selection bias, confounding, and generalizability.
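To make the idea of a methodological review concrete, here is a minimal sketch in Python of how such a checklist might be applied to a draft survey report. The items below paraphrase the methodological areas named above; they are illustrative only and are not the official STROBE checklist, which is more detailed.

```python
# Illustrative review checklist loosely based on the methodological areas
# discussed above. These items are paraphrased, not the official STROBE list.
CHECKLIST = [
    "study design described",
    "sampling strategy justified",
    "missing data handled and reported",
    "measurement and selection bias assessed",
    "confounding addressed",
    "generalizability discussed",
]

def review(survey_report):
    """Return the checklist items a report (a dict of item -> bool flags)
    has not yet addressed."""
    return [item for item in CHECKLIST if not survey_report.get(item, False)]

# A hypothetical draft report that has only covered the first two items.
draft = {"study design described": True, "sampling strategy justified": True}
print(review(draft))
# -> ['missing data handled and reported', 'measurement and selection bias assessed',
#     'confounding addressed', 'generalizability discussed']
```

A structured pass like this makes gaps explicit before commissioners or end-users review the work, rather than leaving them to surface as challenges after publication.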

Context-specific issues may also need to be considered. In international and development contexts, for example, survey designers need to engage, before the survey is carried out, with research commissioners and local staff who understand the particular research setting. Doing so can help identify site-specific issues—which can center on ethnicity, cultural taboos and beliefs, religious practice, language, and power structures—all of which have the potential to create bias and should be taken into account at the design stage.

Look for inconsistencies, and triangulate the findings with other sources
Ask, and try to answer, “Does this result make sense?” and “Is this finding consistent with other evidence?” Exploring the internal consistency of survey findings, and the external validation of findings against other sources when available, can both help provide evidence of a more complete picture to inform policy and practice.
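One simple form of internal-consistency check can be automated: flagging respondents whose answers contradict each other. The sketch below uses hypothetical field names for a patient-experience survey; the records and logic are illustrative, not from any real dataset.

```python
# Hypothetical respondent records; field names are illustrative only.
records = [
    {"id": 1, "visited_gp_last_year": False, "rated_gp_visit": None},
    {"id": 2, "visited_gp_last_year": True,  "rated_gp_visit": 4},
    {"id": 3, "visited_gp_last_year": False, "rated_gp_visit": 2},  # contradiction
]

def find_inconsistent(records):
    """Flag respondents who rated a GP visit despite reporting no visit."""
    return [r["id"] for r in records
            if not r["visited_gp_last_year"] and r["rated_gp_visit"] is not None]

print(find_inconsistent(records))  # -> [3]
```

Flagged cases are not necessarily errors—they may reflect a misread question—but a high rate of such contradictions is itself evidence worth reporting alongside the headline findings.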

Clearly acknowledge that pragmatic choices are sometimes made

Researchers and stakeholders may have to acknowledge that pragmatic choices are sometimes made. For example, a snowball-sampling scheme (when people are asked to recommend other eligible respondents) is very good for a survey of hard-to-reach groups, but the results will be more susceptible to bias. Resource-poor settings or areas where data collection may be too difficult or dangerous can also present specific challenges; for example, ensuring the safety of local data collection staff can limit areas where research is carried out. Survey research needs to be ethical in every context, and this may lead to methodological limitations in the survey design that also need to be clearly accepted and acknowledged. Acknowledging these tradeoffs and clearly describing the reasons, strengths and limitations of the approaches taken can help increase confidence in the overall methodology of the work.
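The bias risk in snowball sampling can be illustrated with a toy simulation: because well-connected people are more likely to be named by someone already in the sample, referral chains tend to over-represent them. The network below is randomly generated, purely for illustration.

```python
import random

random.seed(0)

# Toy contact network: person -> list of acquaintances (randomly generated).
network = {i: [] for i in range(100)}
for _ in range(300):
    a, b = random.sample(range(100), 2)
    network[a].append(b)
    network[b].append(a)

def snowball_sample(network, seeds, waves=2, referrals=3):
    """Starting from seed respondents, ask each new respondent to refer
    up to `referrals` acquaintances, for a fixed number of waves."""
    sampled, frontier = set(seeds), list(seeds)
    for _ in range(waves):
        next_frontier = []
        for person in frontier:
            for ref in network[person][:referrals]:
                if ref not in sampled:
                    sampled.add(ref)
                    next_frontier.append(ref)
        frontier = next_frontier
    return sampled

sample = snowball_sample(network, seeds=[0, 1])
mean_degree = lambda group: sum(len(network[p]) for p in group) / len(group)
# People with more contacts are more likely to be referred, so the sampled
# mean degree tends to exceed the population mean degree.
print(mean_degree(sample), mean_degree(network))
```

The method reaches people a random sample would miss, which is exactly why it is used for hard-to-reach groups—but the same mechanism that makes it effective is what skews it, and that tradeoff should be stated openly.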

Communicate the findings clearly
In addition to conducting the survey properly, researchers need to communicate the findings well, describe the context of the survey, acknowledge and explore the survey’s limitations, and clearly explain how those limitations may affect the findings. Simply explaining the methods clearly may be enough to help non-statisticians engage with the results.

Next steps
Another approach may be needed if stakeholders, policymakers or other survey end-users are still struggling to engage with findings, even after methodological review, clear communication, and appropriate triangulation and caveating of results. In the healthcare field, doctors respond to hard evidence, and academic researchers have built up a steady body of work on patient-experience surveys over the past decade that takes each methodological challenge to survey findings in turn and tests whether it holds up. Stakeholders might also want to ask themselves what their specific concerns are, and why those concerns loom so large, when questioning survey methods.

A need for teamwork and trust

Survey researchers, stakeholders and research commissioners are fundamentally on the same side and have the same motivations across all contexts: figuring out the best outcomes for vulnerable people across the world, improving the service delivered, or understanding how care or service quality can be improved. The research community also wants to enhance and maintain its professional reputation. Surveys often involve a partnership, and both researchers and commissioners of research need to take shared responsibility for the work that has been developed and implemented—and work together so that concerns about survey methods are not the barrier to impact.

Catherine Saunders is an analyst and statistician at RAND Europe.
