Sensitivity analyses: shedding unexpected light on results

Features

  • Author: Katie Saunders
  • Date: 01 May 2015
  • Copyright: Image appears courtesy of Getty Images

As an applied statistician, I find sensitivity analyses among the less glamorous parts of my job. A researcher asks “could you just check that…” and, umpteen figures and tables later, we have the (usually unsurprising) answer that no, this or that particular issue doesn’t change the findings of the work. But despite the lack of glamour, sensitivity analyses are an important part of research. This blog post does three things. First, it examines how sensitivity analyses are used to explore assumptions made during statistical analyses. Second, it looks in more depth at one approach that can sometimes be helpful. Finally, it provides an example of a sensitivity analysis leading to additional, and surprising, insights in a piece of research.


First, assume nothing

A sensitivity analysis allows you to test and explore empirically some of the assumptions that underlie the findings in a piece of work. These might be assumptions about a relationship, about data that have been excluded, or about which part of an analysis is most important. The basic idea goes like this: change the analysis approach to check that a different method gives the same findings; change the data to see whether the findings hold if some feature were different; try something else out; and make sure you aren’t surprised by what you see.

By the time you get to the end, you may become more confident that the findings you are reporting are not just a quirk of some analysis decision made halfway through a project.

Missing data are an example of a flaw that may be exposed through sensitivity analyses. Missing data – due to incomplete survey responses, missing electronic records, lack of baseline information, or some other cause – are most usually dealt with in an analysis by only using the cases for which there is complete information. Under certain assumptions about why data are missing, multiple imputation is one approach to assessing the biases this may introduce.
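To make the idea concrete, here is a deliberately crude sketch of the multiple-imputation logic for a single numeric variable: each gap is filled by sampling from the observed values, an estimate is computed on each completed dataset, and the estimates are then pooled. This is an illustration only (real multiple imputation uses a model for the missing values and fuller pooling rules, e.g. Rubin’s rules); the data and function name are hypothetical.

```python
import random
import statistics

def multiple_imputation_mean(values, n_imputations=20, seed=0):
    """Crude multiple-imputation sketch: fill each missing entry (None)
    by sampling from the observed values, estimate the mean in each
    completed dataset, then pool the estimates by averaging."""
    rng = random.Random(seed)
    observed = [v for v in values if v is not None]
    estimates = []
    for _ in range(n_imputations):
        # One "imputed" dataset: gaps replaced by random observed values.
        completed = [v if v is not None else rng.choice(observed)
                     for v in values]
        estimates.append(statistics.mean(completed))
    # Pooled estimate across the imputed datasets.
    return statistics.mean(estimates)

# Hypothetical data with two missing entries.
print(multiple_imputation_mean([1.0, 2.0, None, 3.0, None]))
```

Comparing this pooled estimate with the complete-case mean gives a first feel for how sensitive a result is to the missing observations.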

Second, think about best and worst case scenarios

One important approach to sensitivity analysis, possible only when data are binary (i.e. either yes or no, present or absent), is to recode all the data, or at least all the missing observations, first to “yes” and then to “no”, and see how far the research findings change in the most extreme alternative cases possible.
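The best/worst-case idea can be sketched in a few lines. The function below bounds a proportion of “yes” responses by counting every missing observation first as “no” (worst case) and then as “yes” (best case); the data and names are illustrative, not from any particular study.

```python
def bounds_for_proportion(responses):
    """Best- and worst-case proportion of 'yes' when some binary
    responses are missing (None).

    Worst case: every missing value is counted as 'no'.
    Best case:  every missing value is counted as 'yes'.
    """
    n = len(responses)
    yes = sum(1 for r in responses if r == "yes")
    missing = sum(1 for r in responses if r is None)
    worst = yes / n              # missing all treated as "no"
    best = (yes + missing) / n   # missing all treated as "yes"
    return worst, best

# Hypothetical data: 6 "yes", 2 "no", 2 missing out of 10.
data = ["yes"] * 6 + ["no"] * 2 + [None] * 2
print(bounds_for_proportion(data))  # (0.6, 0.8)
```

If the substantive conclusion holds at both ends of the interval, no pattern of missingness in that variable could overturn it.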

This is what we did to explore the importance of missing data in some research on variation in the length of time a patient takes to see a doctor (the patient-interval) when they first have symptoms that suggest cancer. As well as sensitivity analyses exploring the cut points for how a “prompt” patient-interval was defined (i.e. waiting up to two weeks, or up to four weeks, before going to see the doctor), we also explored best- and worst-case scenarios for missing data. For the three of the 18 cancer diagnoses with the most missing data (leukaemia, prostate cancer and melanoma), we found that the findings of the research might change if all patients with missing information on how long they waited before going to see their doctor had delayed (worst-case) or prompt (best-case) intervals. Broadly, however, even in these most extreme scenarios, our findings were consistent with the main results of the work.

Finally, results are sometimes surprising

Occasionally, the last sensitivity analysis in a long series that shows no changes to the overall conclusions of a piece of work can suddenly throw new light on an old analysis. Some of our recent work provides an example of this.

Using data from the General Practice Patient Survey, a national survey of more than a million patients from all general practices in England, our research found that people with multiple long-term health conditions reported poorer experiences in primary care than people with a single long-term condition, or people without health problems.

With this result, we then explored the assumptions underlying it. First, we explored whether the type and combinations of long-term health conditions that people reported changed these findings. In this set of sensitivity analyses we found that no, people with more long-term health conditions report poorer experiences in primary care than people with fewer or no conditions, regardless of the combinations of conditions that they report. The strength of the relationship varies a little across conditions and age groups, and when particular conditions were excluded (for example dementia, where carers rather than patients themselves may have completed the survey), but the main findings themselves are consistent when all the different options have been taken into account.

We then explored whether missing data could have influenced our findings, and again we found that no, the sensitivity analysis results were consistent with the main findings.

In the last sensitivity analysis we tested our assumptions about the definition of a “positive” patient experience. Instead of using “good” or “very good” responses to define a “positive” experience of care, we used “very good” responses only. And here we found that yes, our findings changed; people with long-term conditions are in fact more likely to report “very good” experiences, while at the same time being more likely to report negative care experiences.
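This kind of definitional sensitivity check is simple to express in code: compute the share of “positive” responses under each candidate definition and compare. The sketch below uses hypothetical response data; the five-point scale and function name are illustrative, not the survey’s actual coding.

```python
from collections import Counter

def positive_share(responses, positive_levels):
    """Share of responses counted as 'positive' under a chosen
    definition, e.g. {'good', 'very good'} vs {'very good'} only."""
    counts = Counter(responses)
    return sum(counts[level] for level in positive_levels) / len(responses)

# Hypothetical survey responses.
resp = ["very good"] * 3 + ["good"] * 4 + ["fair"] * 2 + ["poor"]
print(positive_share(resp, {"good", "very good"}))  # 0.7
print(positive_share(resp, {"very good"}))          # 0.3
```

When the two definitions rank groups differently, as happened in our work, the discrepancy itself is informative rather than a nuisance.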

So what does it mean when a sensitivity analysis throws up unexpected results? In this work, comparing the sensitivity analysis with our main findings, it appears that people with multiple long-term conditions may be more polarised in reporting their experiences (i.e. both very good and poor).

It may be that people with multiple long-term health conditions have more interactions with primary care than those with a single or no long term conditions, and that this higher frequency of interactions leads to more variation in reported experiences (both very good, and poor). Or alternatively, it could be that the complex needs of those with long-term conditions lead to an increase in both the best and worst experiences. Even sensitivity analyses that do not give the expected result can shed unexpected additional light on what is actually going on.

Conclusion: always read to the very end

The long list of sensitivity analyses, or the online supplemental appendices for a piece of research, are always worth a read. Why? For one thing, they tend to undergo less stringent copy-editing than the main body of a paper, and unexpected insights into what the researchers really think can sometimes sneak into the public domain. But mostly, it is because the detail of all that assumption checking, including the umpteenth sensitivity analysis, occasionally does reveal something that wasn’t known before.


Katie Saunders is an analyst at RAND Europe.

