Recurrent events analysis, not so straightforward!

Features

  • Author: Dr Jennifer Rogers
  • Date: 28 Jan 2014
  • Copyright: Image appears courtesy of iStock Photo

In randomised controlled trials for heart failure, composite endpoints have long been the preferred choice for a primary outcome, and there are two main reasons for their popularity [1]. Firstly, sometimes investigators have no obvious choice for a single primary endpoint and may wish to assess the effect of treatment on many outcomes. Composite endpoints combine multiple outcomes into one, analysing whichever of the outcomes occurs first (typically as a time-to-event outcome), which avoids the multiplicity issues surrounding multiple testing. Secondly, combining multiple outcomes into one increases the event rate, thus requiring smaller sample sizes or shorter follow-up, or both. Composite endpoints, however, are not without their disadvantages, which are well documented [2].
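As a small illustration of how such a composite is typically constructed from its component outcomes (the data and column names below are hypothetical, purely for illustration), the first observed component time becomes the composite time:

```python
import numpy as np
import pandas as pd

# Hypothetical per-patient outcomes: time to first heart failure hospitalisation
# and time to cardiovascular death (in years), each with a 0/1 event indicator
# (0 = censored at that time).
df = pd.DataFrame({
    "time_hosp":   [3.2, 5.0, 1.1, 3.7],
    "event_hosp":  [1,   0,   1,   0],
    "time_death":  [4.6, 5.0, 2.5, 3.7],
    "event_death": [1,   0,   1,   1],
})

# Composite endpoint: whichever outcome is observed first.
df["time_composite"] = df[["time_hosp", "time_death"]].min(axis=1)

# The composite is an event if either component was an event at that earliest time.
df["event_composite"] = (
    ((df["event_hosp"] == 1) & (df["time_hosp"] == df["time_composite"]))
    | ((df["event_death"] == 1) & (df["time_death"] == df["time_composite"]))
).astype(int)

# df[["time_composite", "event_composite"]] would then feed a standard
# time-to-first-event analysis, for example a Cox proportional hazards model.
```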

The task of interpreting results is tricky, as an estimated treatment effect can be mistakenly attributed to an outcome that exhibits no effect, or conversely, a treatment effect can be diluted by being combined with an outcome that shows no evidence of benefit. Additionally, each of the contributing outcomes is assumed to be of equal importance, with only the first occurring event included in the analysis. A further consequence of adopting composite endpoints as a primary outcome in clinical trials is that repeat, non-fatal events within individuals are ignored in analyses. It is this thinking that led me to delve into the world of recurrent events analysis!

Heart failure is a chronic disease that is characterised by recurrent hospitalisations. These hospitalisations are distressing for patients and caregivers and are a major driver of the enormous cost of heart failure to health care systems, and yet analyses have typically considered a composite of first heart failure hospitalisation or cardiovascular death as the primary outcome [3]. This approach ignores repeat heart failure hospitalisations within individuals, so analyses that consider all events are desirable. Standard methods for analysing recurrent events are well developed [4]. A simple measure of the number of admissions to hospital for worsening heart failure is the event rate. The Poisson distribution is commonly used to test for differences in event rates between two treatment groups, but this method assumes that all hospitalisations are independent and ignores heterogeneity amongst patients within the same treatment group. The Andersen-Gill model analyses inter-event times and is a generalisation of the Cox proportional hazards model whereby each gap time independently contributes to the partial likelihood (as opposed to each individual); robust standard errors can be used to accommodate heterogeneity. Alternatively, the negative binomial distribution naturally accommodates differing frailties amongst individuals by assuming that each individual has their own Poisson event rate, with these event rates varying according to a gamma distribution. Applications to data from large-scale clinical trials in heart failure have shown that these methods typically result in larger estimated treatment effects than the conventional time to first event analysis, with associated clear gains in statistical power [4,5].
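To give a rough feel for the rate-based approaches, the sketch below simulates count data with patient-level heterogeneity and compares a Poisson fit with a negative binomial fit; the parameter values, variable names and the use of Python's statsmodels package are my own choices for illustration rather than anything taken from the trials cited.

```python
import numpy as np
import statsmodels.api as sm

# Simulate a hypothetical two-arm trial with heterogeneous hospitalisation rates.
rng = np.random.default_rng(1)
n = 500
treat = rng.integers(0, 2, size=n)                 # 0/1 treatment indicator
followup = rng.uniform(1.0, 3.0, size=n)           # years of follow-up
frailty = rng.gamma(shape=2.0, scale=0.5, size=n)  # unobserved patient heterogeneity
rate = frailty * np.exp(np.log(0.8) * treat)       # true rate ratio of 0.8
hosps = rng.poisson(rate * followup)               # hospitalisation counts

X = sm.add_constant(treat)

# Poisson model: treats all hospitalisations as independent, ignoring heterogeneity.
poisson_fit = sm.Poisson(hosps, X, exposure=followup).fit(disp=False)

# Negative binomial model: gamma-distributed patient-level event rates, so the
# dispersion parameter absorbs the heterogeneity and standard errors widen accordingly.
negbin_fit = sm.NegativeBinomial(hosps, X, exposure=followup).fit(disp=False)

print("Poisson rate ratio:      ", np.exp(poisson_fit.params[1]))
print("Negative binomial ratio: ", np.exp(negbin_fit.params[1]))
```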

So is that it, problem solved? Well, not quite. Heart failure hospitalisations are associated with an increased risk of cardiovascular death, so if an individual dies during follow-up, this isn't necessarily independent of the event process of interest. Dependent censoring needs to be accounted for in any analysis that is carried out, and this renders standard methods unsuitable. One simple, 'quick fix' strategy for incorporating mortality into the outcome is to consider the composite of recurrent heart failure hospitalisations and cardiovascular death, where mortality is treated as another event in the event process. This outcome, however, suffers from many of the same problems as our original composite endpoints, so we turn to joint modelling to try to solve the problem.

Joint modelling techniques are well established for trials with longitudinal repeated measures taken at pre-specified times alongside time-to-event data [6]. These strategies include random effects models that induce an association between the repeated measures and the time to event within individuals via an unobserved latent variable. Joint models analyse recurrent heart failure hospitalisations whilst accounting for their associated mortality risk. The model specifies distributions for the recurrent events and the time to death conditional on a random effects term, so that the two processes are conditionally independent. We denote the heart failure hospitalisation count for individual \( i \) by \( X_{i} \) and let \( T_{i} \) be their associated, possibly censored, time to cardiovascular death. The individual-specific frailty term, \( v_{i} \), represents the effect of unobserved factors on both hospitalisation and death. The joint distribution of \( X_{i} \) and \( T_{i} \) then takes the form: \[ f_{X,T}(x_{i}, t_{i}) = \int_{v} f_{X|v}(x_{i} \mid v_{i})\, f_{T|v}(t_{i} \mid v_{i})\, f_{v}(v_{i})\, dv_{i}. \]
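To make the shared-frailty structure concrete, the short simulation below generates data from a model of this form; the rates, frailty variance and follow-up length are arbitrary values of my own choosing. Conditional on \( v_{i} \) the two processes are generated independently, yet marginally they are clearly associated.

```python
import numpy as np

rng = np.random.default_rng(2024)
n = 10_000
theta = 0.5                                           # assumed frailty variance
v = rng.gamma(shape=1 / theta, scale=theta, size=n)   # mean-one gamma frailties

# Conditional on v_i, hospitalisations and death are independent processes.
hosp_rate, death_hazard = 1.0, 0.1                    # illustrative per-year rates
t_death = rng.exponential(1.0 / (death_hazard * v))   # exponential time to CV death
followup = np.minimum(t_death, 3.0)                   # administrative censoring at 3 years
x_hosp = rng.poisson(hosp_rate * v * followup)        # hospitalisation counts

died = t_death < 3.0
# Marginally the processes are linked through v_i: patients who die during
# follow-up have noticeably higher observed hospitalisation rates.
print(np.mean(x_hosp[died] / followup[died]),
      np.mean(x_hosp[~died] / followup[~died]))
```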

A convenient parameterisation of this model is to assume Poisson and exponential distributions for the heart failure hospitalisations and time to cardiovascular death respectively, conditional on the frailty terms, with individual frailties assumed to follow a gamma distribution. The heart failure hospitalisation counts then marginally follow a negative binomial distribution and times to cardiovascular death follow a Lomax distribution. Models of this kind are intuitively appealing as they give a tangible interpretation: an individual's frailty term measures their unobserved, underlying severity of illness, which proportionately affects both their heart failure hospitalisation rate and their hazard of cardiovascular death. Additionally, these models allow distinct treatment effects to be estimated for the two processes, whilst taking into account the association between them.
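To see where these marginal distributions come from, suppose (one convenient parameterisation, assumed here for concreteness) that \( v_{i} \) follows a Gamma\( (\alpha, \alpha) \) distribution with mean one, that \( X_{i} \mid v_{i} \) is Poisson with mean \( \mu v_{i} \) and that \( T_{i} \mid v_{i} \) is exponential with rate \( \lambda v_{i} \). Integrating out the frailty gives \[ \begin{aligned} \Pr(X_{i} = x) &= \int_{0}^{\infty} \frac{e^{-\mu v}(\mu v)^{x}}{x!}\, \frac{\alpha^{\alpha} v^{\alpha-1} e^{-\alpha v}}{\Gamma(\alpha)}\, dv = \frac{\Gamma(x+\alpha)}{x!\,\Gamma(\alpha)} \left(\frac{\alpha}{\alpha+\mu}\right)^{\alpha} \left(\frac{\mu}{\alpha+\mu}\right)^{x}, \\ f_{T}(t) &= \int_{0}^{\infty} \lambda v\, e^{-\lambda v t}\, \frac{\alpha^{\alpha} v^{\alpha-1} e^{-\alpha v}}{\Gamma(\alpha)}\, dv = \frac{\lambda\, \alpha^{\alpha+1}}{(\alpha + \lambda t)^{\alpha+1}}, \end{aligned} \] that is, a negative binomial mass function for the hospitalisation counts and a Lomax (Pareto type II) density for time to cardiovascular death, with the single parameter \( \alpha \) governing both the overdispersion of the counts and the heaviness of the survival tail.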

Great! Now have we solved all our problems? Unfortunately not. One major limitation of the joint model in its current form is that the individual-specific frailties are assumed to have the same proportional effect on the rate of heart failure hospitalisations and on the hazard rate for time to cardiovascular death. This is a strong assumption that may not always hold, but it is one that can easily be relaxed so that each individual has two random effect terms (one for each event process) that are correlated in some way, as sketched below. Additionally, when a patient is hospitalised for worsening heart failure, their prognosis worsens and they suddenly have a much higher risk of repeat heart failure hospitalisations. All the methods considered here assume that, conditional on individual-specific frailties, heart failure hospitalisations within an individual are independent, so any clustering of events is ignored. This is a tricky issue and how to solve it is not obvious. What is obvious is that recurrent events analysis is not straightforward and there is much to think about when considering recurrent events as outcomes. The methods presented here address some of those issues, but there are many other problems still to be tackled!
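One way of relaxing the shared-frailty assumption, sketched below with made-up parameter values, is to give each patient a pair of correlated random effects on the log scale, one multiplying the hospitalisation rate and one multiplying the cardiovascular death hazard, instead of a single gamma frailty acting on both.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Correlated patient-level random effects on the log scale (assumed values):
# sd_h and sd_d control the heterogeneity in each process, rho their correlation.
sd_h, sd_d, rho = 0.6, 0.4, 0.7
cov = [[sd_h ** 2, rho * sd_h * sd_d],
       [rho * sd_h * sd_d, sd_d ** 2]]
b = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)

frailty_hosp = np.exp(b[:, 0])    # multiplies the hospitalisation rate
frailty_death = np.exp(b[:, 1])   # multiplies the cardiovascular death hazard

# With rho = 1 and sd_h = sd_d this collapses back to a (log-normal analogue of
# the) shared-frailty model; rho < 1 lets the two processes partially decouple.
```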

References

[1] Skali H, Pfeffer MA, Lubsen J, Solomon SD. Variable impact of combining fatal and nonfatal end points in heart failure trials. Circulation. 2006;114:2298-2303.
[2] Neaton JD, Gray G, Zuckerman BD, Konstam MA. Key issues in end point selection for heart failure trials: composite end points. Journal of Cardiac Failure. 2005;11(8):567-575.
[3] Conard MW, Heidenreich P, Rumsfeld JS, Weintraub WS, Spertus J; Cardiovascular Outcomes Research Consortium. Patient-reported economic burden and the health status of heart failure patients. Journal of Cardiac Failure. 2006;12:369-374.
[4] Rogers JK, Pocock SJ, McMurray JJV, Granger CB, Michelson EL, Östergren J, Pfeffer MA, Solomon S, Swedberg K, Yusuf S. Analysing recurrent hospitalizations in heart failure: a review of statistical methodology, with application to CHARM-Preserved. European Journal of Heart Failure. 2014;16:33-40.
[5] Rogers JK, McMurray JJV, Pocock SJ, Zannad F, Krum H, van Veldhuisen DJ, Swedberg K, Shi H, Vincent J, Pitt B. Eplerenone in patients with systolic heart failure and mild symptoms: analysis of repeat hospitalizations. Circulation. 2012;126(19):2317-2323.
[6] Diggle PJ, Sousa I, Chetwynd AG. Joint modelling of repeated measurements and time-to-event outcomes: the fourth Armitage lecture. Statistics in Medicine. 2008;27:2981-2998.
