WIREs Computational Statistics

Model exploration using conditional visualization

Early View

Abstract Ideally, statistical parametric model fitting is followed by summary tables which show predictor contributions, visualizations which assess model assumptions and goodness of fit, and test statistics which compare models. In contrast, modern machine-learning fits are usually black box in nature, offering high-performing predictions but suffering from an interpretability deficit. We examine how the paradigm of conditional visualization can be used to address this, specifically to explain predictor contributions, assess goodness of fit, and compare multiple, competing fits. We compare visualizations from techniques including trellis, condvis, visreg, lime, partial dependence, and ice plots. Our examples use random forest fits, but all techniques presented are model agnostic.

This article is categorized under:
Statistical and Graphical Methods of Data Analysis > Statistical Graphics and Visualization
Statistical Learning and Exploratory Methods of the Data Sciences > Exploratory Data Analysis
Statistical Learning and Exploratory Methods of the Data Sciences > Modeling Methods
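As a minimal sketch of two of the techniques the abstract names, the following computes individual conditional expectation (ICE) curves and their average, the partial dependence curve, for one predictor of a random forest fit. The dataset, feature choice, and grid size are illustrative assumptions, not taken from the article; the point is only that each observation is conditioned on a fixed value of the chosen predictor and the model is re-queried, which works for any black-box fit.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Hypothetical data and fit (not from the article), purely for illustration.
X, y = make_regression(n_samples=200, n_features=4, random_state=0)
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

feature = 0  # predictor whose contribution we visualize
grid = np.linspace(X[:, feature].min(), X[:, feature].max(), 20)

# ICE: one curve per observation, obtained by fixing x_j at each grid
# value while keeping the other predictors at their observed values.
ice = np.empty((X.shape[0], grid.size))
for k, v in enumerate(grid):
    Xv = X.copy()
    Xv[:, feature] = v          # condition the whole sample on x_j = v
    ice[:, k] = rf.predict(Xv)  # model predictions at the conditioned points

# Partial dependence is the pointwise average of the ICE curves.
pd_curve = ice.mean(axis=0)
```

Plotting `grid` against each row of `ice` (thin lines) and against `pd_curve` (thick line) gives the usual ICE/partial-dependence display; because only `rf.predict` is called, the same loop applies to any fitted model.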
