Biometrical Journal Special Issue on Novel Aspects in Biostatistics

Biometrical Journal has just published a special issue on Novel Aspects in Biostatistics.

This special issue is based on peer-reviewed manuscripts that were presented at the 40th Annual Conference of the International Society for Clinical Biostatistics (ISCB), held in Leuven (Belgium) on 14–18 July 2019 and chaired by Tomasz Burzykowski (U Hasselt, Belgium). The conference theme was Novel Aspects in Biostatistics. The conference attracted 775 participants from 44 countries, making it one of the best-attended ISCB conferences ever. The scientific program, put together by the Scientific Program Committee chaired by Emmanuel Lesaffre (KU Leuven and U Hasselt, Belgium), consisted of eight preconference courses, two keynote presentations, eight invited sessions, 224 contributed talks, and 274 posters. A call was organized for high-quality manuscripts for a special issue of this journal; submissions on any subject relevant to biostatistics were welcome, but the focus was on the following five topics: causal inference and mediation analysis, surrogate marker research, new developments in Bayesian clinical trial methodology, high-dimensional biostatistical data, and recent developments in survival models.

There were 22 submissions, evaluated by six guest editors (in alphabetical order): Ariel Alonso Abad (KU Leuven, Belgium), Hélène Jacqmin-Gadda (Université de Bordeaux, France), Theis Lange (University of Copenhagen, Denmark), Emmanuel Lesaffre (KU Leuven and U Hasselt, Belgium), Gary Rosner (Johns Hopkins University, USA), and Roula Tsonaka (Leiden University Medical Center, The Netherlands). Thirteen manuscripts were accepted after peer review, with first authors from nine different countries.

A tribute to Doug Altman: An enthusiastic visionary biostatistician and a warm personality

At the meeting, a special session was organized honoring Doug Altman, who sadly passed away on 3 June 2018. Doug was a regular attendee of ISCB meetings and inspired many fellow biostatisticians as well as clinical researchers. Colleagues and friends of Doug were invited to write a tribute to him. Willi Sauerbrei and six long-standing friends and colleagues of Doug Altman summarize parts of Doug's contributions to regression modeling, reporting, and prognosis research, as well as some more general issues. Of course, it is impossible to cover in one paper the whole spectrum of the methodological output of this visionary leader, who drove critical appraisal and improvements in the quality of methodological and medical research over the last 40 years.

The remaining contributions to this special issue deal with popular topics presented at ISCB meetings over the last decade: the design and analysis of clinical trials, survival models with a focus on joint modeling, the handling of missing data, and meta-analyses.

Novel Bayesian developments in clinical trials

Yan et al. discuss the practical issue of designing a pilot study to aid the development of a full-scale sequential multiple assignment randomized trial (SMART). The authors take the precision of the estimated effect of a dynamic treatment regime as the design objective. They consider different outcome types and use the half-width of a confidence interval, rather than traditional power, as their measure of precision for the purpose of study design. They provide formulas for computing sample sizes in a two-stage SMART and demonstrate their performance through simulations.
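As a minimal illustration of designing for precision rather than power (and not the SMART-specific formulas of Yan et al.), the Python sketch below computes the per-arm sample size needed so that the confidence interval for a two-arm difference in means reaches a prespecified half-width; the standard deviation and target half-width are hypothetical inputs.

```python
import math
from scipy.stats import norm

def n_per_arm_for_halfwidth(sigma, halfwidth, conf=0.95):
    # Half-width of the CI for a two-arm mean difference with common SD sigma:
    #   h = z * sigma * sqrt(2 / n)   =>   n = 2 * (z * sigma / h)**2
    z = norm.ppf(1 - (1 - conf) / 2)
    return math.ceil(2 * (z * sigma / halfwidth) ** 2)

# Example: SD of 10, and we want the 95% CI for the mean difference
# to have a half-width of 2.5 units.
print(n_per_arm_for_halfwidth(sigma=10, halfwidth=2.5))  # -> 123 per arm
```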

Cantagallo and colleagues propose a new measure of treatment effect for clinical trials involving competing risks. It is well known that competing risks complicate the quantification of differences between treatment groups, because efficient test methods building on cause-specific hazards do not directly provide an easily interpretable effect measure. The novel methods proposed by Cantagallo et al. fill this methodological gap. The methods and their performance are explored using simulations, and the paper includes an illustration based on oncology studies.

In precision medicine, a common problem is predicting drug sensitivity from cancer tissue cell lines. This entails modeling multivariate drug responses on high-dimensional molecular feature sets in typically more than 1,000 cell lines. Munch et al. propose to model the drug responses through a linear regression with shrinkage enforced by a normal inverse Gaussian prior that incorporates external information. Model parameters are estimated using an empirical-variational Bayes framework. Their approach is applied to publicly available Genomics of Drug Sensitivity in Cancer data.
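For readers less familiar with this prediction setup, the sketch below uses simulated data of roughly the dimensions described above, with ordinary ridge shrinkage as a crude stand-in for the normal inverse Gaussian prior and the empirical-variational Bayes estimation of Munch et al.; the data, penalty value, and performance figure are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in for the setting: ~1000 cell lines, a few thousand molecular
# features, a handful of drug-response columns (simulated data only).
n_cells, n_feats, n_drugs = 1000, 2000, 5
X = rng.standard_normal((n_cells, n_feats))
beta = rng.standard_normal((n_feats, n_drugs)) * (rng.random((n_feats, n_drugs)) < 0.01)
Y = X @ beta + rng.standard_normal((n_cells, n_drugs))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

# Ridge penalisation as a rough substitute for the shrinkage prior; sklearn's
# Ridge fits all drug-response columns (multi-output) in one call.
model = Ridge(alpha=100.0).fit(X_tr, Y_tr)
print(f"held-out R^2 (averaged over drugs): {model.score(X_te, Y_te):.2f}")
```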

Recent developments in survival analysis

A popular topic in survival analysis is the joint analysis of survival times and longitudinal data. Joint modeling of the two sources of information can better deal with time-varying covariates and with missing-not-at-random mechanisms. Spreafico and Ieva describe a joint modeling approach that exploits time-varying covariates for dynamic monitoring of the effect of adherence to medication on survival in heart failure patients. Their approach to studying treatment adherence differs from the classical approach, which treats adherence as a time-fixed variable. The novelty is that it allows real-time monitoring of patient adherence and individual prediction of health outcomes.

The second contribution to joint modeling is by Böhnstedt and colleagues, who model interval counts of recurrent events and death. Their joint frailty model is useful for accounting for possible dependent censoring when the terminal event and the recurrent-event process are associated, and for investigating the relationship between the two processes (a generic form of such a model is sketched below). With a piecewise constant baseline risk, they estimate regression coefficients and model parameters by marginal likelihood and provide a score test to evaluate the association between the two processes.

The third contribution in survival analysis is by Syriopoulou, Rutherford, and Lambert. The authors propose a generalization of classical mediation analysis to the relative survival framework, with a special focus on cancer research. The proposed method requires no additional causal or structural assumptions beyond those of a traditional mediation analysis, yet permits more clinically relevant interpretations.
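For orientation, one common generic form of a joint frailty model for recurrent and terminal events (not necessarily the exact specification used by Böhnstedt et al.) links the two hazards through a shared subject-level frailty:

```latex
\begin{align*}
r_i(t \mid u_i) &= u_i \, r_0(t) \, \exp(\mathbf{x}_i^{\top}\boldsymbol{\beta})
  && \text{(recurrent events)}\\
h_i(t \mid u_i) &= u_i^{\gamma} \, h_0(t) \, \exp(\mathbf{x}_i^{\top}\boldsymbol{\alpha})
  && \text{(terminal event)}\\
u_i &\sim \operatorname{Gamma}(1/\theta,\, 1/\theta), \qquad
  \mathrm{E}(u_i) = 1, \ \operatorname{Var}(u_i) = \theta
\end{align*}
```

With piecewise constant baselines $r_0(t)$ and $h_0(t)$, the frailty can be integrated out to obtain a marginal likelihood, and $\gamma = 0$ corresponds to no association between the recurrent-event and terminal-event processes, which is the kind of hypothesis a score test can address.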

Statistical analysis in the presence of missing data

For many decades, the analysis of incomplete data has intrigued statisticians, and ISCB meetings are no exception. De Silva et al. compare multiple imputation methods implemented in Stata for handling missing values in longitudinal studies with sampling weights. The authors focus on how to incorporate sampling weights or design variables and compare the imputation methods through simulations. They recommend multivariate normal imputation with the design stratum as a covariate in the imputation model (a minimal sketch of including the design stratum in an imputation model is given below). Faucheux et al. present methodology for clustering data with both missing and left-censored values when the number of clusters is unknown. The methodology is evaluated by means of simulations that consider various missing-data handling methods, including multiple imputation, single imputation, and complete-case analysis.
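As a small, hedged illustration of carrying design information into the imputation model, the Python sketch below uses chained-equations multiple imputation from statsmodels (not the Stata multivariate normal imputation compared by De Silva et al.) on simulated data with a hypothetical design stratum; keeping the stratum column in the data frame makes it a covariate in each imputation model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

rng = np.random.default_rng(1)

# Hypothetical survey-style data: outcome y, covariate x, design stratum,
# with y missing for roughly 30% of records.
n = 500
stratum = rng.integers(0, 3, n)
x = rng.standard_normal(n)
y = 1.0 + 0.5 * x + 0.8 * stratum + rng.standard_normal(n)
y[rng.random(n) < 0.3] = np.nan
df = pd.DataFrame({"y": y, "x": x, "stratum": stratum})

# MICEData regresses each incomplete variable on all other columns, so keeping
# 'stratum' in the data frame feeds the design information into the imputation
# model, in the spirit of the recommendation above.
imp = mice.MICEData(df)
fit = mice.MICE("y ~ x + stratum", sm.OLS, imp).fit(n_burnin=10, n_imputations=20)
print(fit.summary())
```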

Meta‐analyses and network meta‐analyses

The special issue ends with four contributions on meta-analyses. Hamaguchi et al. look at the frequentist performance of Bayesian prediction intervals for random-effects meta-analysis. The authors consider 11 noninformative prior distributions for the between-study variance in their simulation study and in their analyses of eight published meta-analyses. They conclude that frequentist coverage is not well maintained by the prediction intervals when there are fewer than 10 studies in the meta-analysis (the standard frequentist prediction interval is sketched at the end of this section for reference).

Verde suggests a new Bayesian hierarchical model, called the bias-corrected meta-analysis model, to combine different study types in a meta-analysis. The model is based on a mixture of two random-effects distributions, where the first component corresponds to the model of interest and the second to the hidden bias structure. His approach addresses the hurdle that, when combining disparate evidence in a meta-analysis, one combines not only the results of interest but also multiple biases. The novel model is illustrated with a meta-analysis assessing the effectiveness of vaccination in preventing invasive pneumococcal disease and another on the effectiveness of stem cell treatment in heart disease patients. His results show that ignoring internal validity bias in a meta-analysis may lead to misleading conclusions.

Sofeu, Emura, and Rondeau propose a method for the meta-analytic validation of failure-time surrogate endpoints in clinical trials. The meta-analytic approach to the evaluation of surrogate endpoints requires fitting complex hierarchical models. These models are often fitted in two steps, which leads to estimation issues. When both the surrogate and the true endpoint are failure times, the presence of censoring adds to the complexity of the problem. The authors suggest a one-step method based on a joint frailty-copula model to overcome the issues encountered with previous approaches. Their model includes two correlated random effects for the treatment-by-trial interaction and a shared random effect associated with the baseline risks. At the individual level, the joint survivor functions of the failure-time endpoints are linked using copula functions. Estimation is based on a semiparametric penalized likelihood approach.

Finally, the contribution of Rücker, Schmitz, and Schwarzer deals with network meta-analysis, which usually requires a connected network. In the case of a disconnected network, one may add evidence from nonrandomized comparisons, using propensity score methods or matching-adjusted indirect comparisons. However, such nonrandomized comparisons may carry an unclear risk of bias. Rücker, Schmitz, and Schwarzer present a reanalysis of a network meta-analysis performed by Schmitz et al. on treatments for multiple myeloma, in which single-arm observational studies were used to bridge the gap between two disconnected subnetworks. Here, a component network meta-analysis (CNMA) based entirely on randomized controlled trials (RCTs) is proposed. This approach exploits the fact that many of the treatments consist of common components occurring in both subnetworks. The authors argue that researchers encountering a disconnected network in which treatments in different subnets share common components should consider a CNMA model.
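For reference, the sketch below computes the standard frequentist prediction interval for a random-effects meta-analysis (DerSimonian-Laird between-study variance with the Higgins-Thompson-Spiegelhalter interval); this is the classical counterpart of the Bayesian intervals that Hamaguchi et al. evaluate, and the effect estimates and variances shown are hypothetical.

```python
import numpy as np
from scipy import stats

def dl_prediction_interval(yi, vi, level=0.95):
    """Random-effects meta-analysis with the DerSimonian-Laird tau^2 estimate
    and the Higgins-Thompson-Spiegelhalter prediction interval for the effect
    in a new study. yi: study effect estimates, vi: within-study variances."""
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    k = len(yi)
    w = 1.0 / vi
    mu_fixed = np.sum(w * yi) / np.sum(w)
    Q = np.sum(w * (yi - mu_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)            # DerSimonian-Laird estimate
    w_star = 1.0 / (vi + tau2)                    # random-effects weights
    mu = np.sum(w_star * yi) / np.sum(w_star)
    se_mu = np.sqrt(1.0 / np.sum(w_star))
    t = stats.t.ppf(1 - (1 - level) / 2, df=k - 2)
    half = t * np.sqrt(tau2 + se_mu ** 2)
    return mu - half, mu + half

# Hypothetical log odds ratios and within-study variances from 6 studies
yi = [-0.4, -0.2, -0.6, 0.1, -0.3, -0.5]
vi = [0.04, 0.06, 0.05, 0.08, 0.03, 0.07]
print(dl_prediction_interval(yi, vi))
```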