Number Needed to Treat for Recurrent Events – Event- or Patient-Based?

Features

  • Author: Dr Jennifer Rogers
  • Date: 06 Jul 2015

The number needed to treat (NNT) has become an increasingly adopted tool for assessing the benefit of a new treatment [1]. There are obvious limitations associated with presenting only relative measures of treatment effect: they do not reflect the control event rate and so give no tangible idea of the actual reduction in the number of events, making clinical judgement difficult. The absolute risk difference gives the difference in the risk of an event under the two treatment strategies, and its inverse gives the number of patients who would need to be treated with the experimental treatment rather than the control in order to prevent, on average, one event. It should be noted that NNT analyses are certainly not without their critics [2], but nevertheless their popularity has not waned.
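
As a point of reference before moving to recurrent events, here is a minimal sketch in Python of the classical calculation for a single binary outcome with equal follow-up; the event proportions are hypothetical numbers chosen purely for illustration.

```python
# Classical NNT for a single binary outcome: inverse of the absolute risk difference.
# p0 and p1 are hypothetical event proportions, not taken from any trial.
p0 = 0.20            # proportion of control patients with an event
p1 = 0.15            # proportion of treated patients with an event
arr = p0 - p1        # absolute risk reduction
nnt = 1.0 / arr      # patients needed to treat to prevent one event, on average
print(f"Absolute risk reduction = {arr:.2f}, NNT = {nnt:.0f}")
```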

As an expert in the statistical analysis of recurrent events, I was recently asked to undertake an NNT analysis of a heart failure clinical trial in which recurrent heart failure hospitalisations were the outcome of interest. Computation and interpretation of an NNT where there is a single binary outcome over fixed, equal follow-up for all patients is straightforward and well documented [3]. Trials of this kind, however, are rare. In practice, trials suffer from unequal follow-up and/or multiple events per patient. When presented with a trial that exhibits both of these features, there are actually a number of different routes that can be taken, each raising interesting questions about validity and interpretation [4].

Broadly speaking, there are two options available for calculating NNT-type quantities for recurrent events: event-based (or person-time-based) and patient-based. When a study involves multiple events within a patient, the frequency of the event can be presented as an incidence rate, and there are a number of routes available for its calculation. A Poisson model is commonly used to assess whether event rates in two groups differ, with the rate in each treatment group simply calculated as the number of events in that group divided by the total amount of follow-up in that group [5]. The problem with the Poisson distribution, however, is that it assumes that all events are independent. Alternatively, the negative binomial distribution naturally accommodates the fact that events within an individual may be related to each other [6]. The negative binomial can therefore be used to obtain estimated incidence rates in the treatment and control groups (IR1 and IR0 respectively), and the inverse of the quantity IR0 - IR1 then represents the NNT.

At this point it may seem natural to interpret this NNT as the number of patients needed to be treated for a given period of time to prevent one event in that period. This interpretation, however, is wrong: this NNT does not represent patients treated for a time period, but rather person-moments of treatment, and it should instead be interpreted as the number of person-moments needed to be treated to prevent one event. The distinction is important because 1 person followed for 12 months on treatment to prevent an event represents 12 months of person-time, which could equally be expressed as 12 patients followed for 1 month on treatment to prevent one event.
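
As a rough sketch of the event-based calculation, the following Python code (using numpy and statsmodels, with simulated data and an assumed dispersion parameter, none of which come from a real trial) estimates arm-specific incidence rates from a negative binomial regression with log follow-up as an offset, and then inverts their difference:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-patient data: treatment arm, follow-up time and event counts.
rng = np.random.default_rng(1)
n = 500
treat = np.repeat([0, 1], n)
followup = rng.uniform(1.0, 3.0, size=2 * n)            # years of follow-up
frailty = rng.gamma(shape=2.0, scale=0.5, size=2 * n)   # patient-level heterogeneity
rate = np.where(treat == 1, 0.35, 0.50) * frailty       # assumed true rates per year
events = rng.poisson(rate * followup)                   # over-dispersed event counts

# Negative binomial regression with log follow-up as an offset; the dispersion
# parameter alpha is fixed here for simplicity (it would be estimated in practice).
X = sm.add_constant(treat)
fit = sm.GLM(events, X,
             family=sm.families.NegativeBinomial(alpha=1.0),
             offset=np.log(followup)).fit()
ir0 = np.exp(fit.params[0])                  # control incidence rate (events per person-year)
ir1 = np.exp(fit.params[0] + fit.params[1])  # treatment incidence rate

# Event-based NNT: person-time of treatment needed to prevent one event.
nnt_event = 1.0 / (ir0 - ir1)
print(f"IR0 = {ir0:.3f}, IR1 = {ir1:.3f} events per person-year")
print(f"About {nnt_event:.1f} person-years of treatment to prevent one event")
```

With these hypothetical rates the answer is of the order of 1/(0.50 - 0.35), roughly 7 person-years of treatment per event prevented, which could equally be read as seven patients treated for one year or one patient treated for seven years.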

This person-time-based interpretation has been criticised on the grounds that it deviates from the original meaning of the NNT, namely the number needed to treat to prevent one patient from having an event, and instead considers the number needed to treat to prevent one event [7]. If one would like instead to consider patient-based NNTs, it is straightforward to calculate the necessary cumulative incidences (i.e. the proportion of patients with at least one event), either directly or from the incidence rates in the following way:

CI = 1 – exp(-IR×t).

Here CI is the cumulative incidence of the event up to time t and IR is the incidence rate of the event, measured in the same time units as t. This formula, however, requires the somewhat strong assumption that events are independent, and so direct computation of the cumulative incidence of the first event to occur during follow-up may be more suitable. However the cumulative incidences in the treatment and control groups (CI1 and CI0 respectively) are calculated, the NNT is formulated as before: it is simply the inverse of CI0 - CI1. The resulting NNT can then be interpreted as the number of patients needed to be treated for a given period of time to prevent one patient having an event in that period.
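
Continuing the sketch above, a patient-based NNT over a one-year horizon could be computed as follows; the horizon and the incidence rates are again purely illustrative assumptions, and in practice the cumulative incidences would preferably be estimated directly from the time to first event.

```python
import numpy as np

t = 1.0                    # follow-up horizon in years (an arbitrary illustrative choice)
ir0, ir1 = 0.50, 0.35      # hypothetical control / treatment incidence rates per person-year

# Cumulative incidence of at least one event by time t, using CI = 1 - exp(-IR * t);
# note this leans on the independence assumption discussed above.
ci0 = 1.0 - np.exp(-ir0 * t)
ci1 = 1.0 - np.exp(-ir1 * t)

nnt_patient = 1.0 / (ci0 - ci1)
print(f"CI0 = {ci0:.3f}, CI1 = {ci1:.3f}")
print(f"About {nnt_patient:.1f} patients treated for {t:g} year to prevent "
      f"one patient having an event")
```

With these hypothetical rates the patient-based NNT comes out at roughly 10 patients treated for one year, noticeably different from the event-based figure above, which is exactly the point made next.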

These two methods can give fairly different answers, so which is correct? Neither is right or wrong; rather, they answer different questions. If events are of primary interest, as will often be the case, then interest will most commonly lie in the reduction of events regardless of whether it is a first event in a new individual or a subsequent event in an individual who has already presented with a first. When this is the case, event-based NNTs are clearly more suitable, albeit with proper interpretation.

To conclude, let me mention one further complication in the analysis of NNTs for recurrent events. In the case of heart failure, an increase in hospitalisations for worsening heart failure is associated with an increased risk of death. Methods exist for the analysis of recurrent events in the presence of such dependent censoring [6], and it may then be tempting to treat the resulting incidence rates in the same way as described here and calculate the associated NNTs using the inverse of the difference in rates. This is too simplistic, however, and care must be taken: one way in which the incidence of recurrent events can be reduced is through an increase in mortality. It is therefore important that the consequences of treatment for both the recurrent event and mortality rates are considered jointly [1].
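
As a final, purely illustrative sketch (it is no substitute for the joint modelling referred to above), reporting the event-based NNT for hospitalisations alongside a patient-based NNT for death over the same horizon at least keeps both consequences of treatment in view; every number below is hypothetical.

```python
# Hypothetical one-year summary of both outcomes; none of these figures come from a trial.
hosp_ir0, hosp_ir1 = 0.50, 0.35     # hospitalisations per person-year, control vs treatment
death_ci0, death_ci1 = 0.12, 0.10   # one-year mortality risks, control vs treatment

nnt_hosp = 1.0 / (hosp_ir0 - hosp_ir1)     # person-years of treatment per hospitalisation prevented
nnt_death = 1.0 / (death_ci0 - death_ci1)  # patients treated for one year per death prevented

print(f"{nnt_hosp:.1f} person-years of treatment to prevent one hospitalisation")
print(f"{nnt_death:.0f} patients treated for one year to prevent one death")
```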

References

[1] Cook RJ. Number needed to treat for recurrent events. Biometrics and Biostatistics 2013;4(3):167.
[2] Hutton JL. Number needed to treat and number needed to harm are not the best way to report and assess the results of randomised clinical trials. British Journal of Haematology 2009;146:27-30.
[3] Laupacis A, Sackett DL and Roberts RS. An assessment of clinically useful measures of the consequences of treatment. New England Journal of Medicine 1988;318:1728-1733.
[4] Suissa D, Brassard P, Smiechowski B and Suissa S. Number needed to treat is incorrect without proper time-related considerations. Journal of Clinical Epidemiology 2012;65:42-46.
[5] Glynn RJ, Buring JE. Ways of measuring rates of recurrent events. BMJ 1996; 312:364-367.
[6] Rogers JK, Pocock SJ, McMurray JJV, et al. Analysing recurrent hospitalisations in heart failure: a review of statistical methodology, with application to CHARM-Preserved. European Journal of Heart Failure 2014; 16:33-40.
[7] Aaron SD and Fergusson DA. Exaggeration of treatment benefits using the “event-based” number needed to treat. Canadian Medical Association Journal 2008;179(7):669-671.
