## On central matrix based methods in dimension reduction

### Journal Article

#### Abstract

Dimension reduction for regression analysis has been one of the most popular topics in the past two decades. It has seen much progress with the introduction of inverse regression, centred around two key methods: sliced inverse regression (SIR) and sliced average variance estimation (SAVE). It is well known that SIR works poorly when the inverse conditional expectation $E\left(X|Y\right)$ is close to being nonrandom. SAVE and its many generalizations, which do not suffer from this drawback, lag behind SIR in many other circumstances. Usually a certain weighted hybrid of SIR and SAVE is necessary to improve overall performance. However, it is difficult to find the optimal mixture weights in a hybrid, and most such hybrid methods, as well as SAVE itself, require the restrictive constant (conditional) variance condition. We propose a much weaker condition and a new accompanying algorithm. This enables us to create several new central matrices that compare very favourably with existing central matrix based methods without resorting to hybrids. The Canadian Journal of Statistics 41: 421–438; 2013 © 2013 Statistical Society of Canada
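For readers unfamiliar with the baseline method the abstract builds on, the standard SIR procedure can be sketched as follows: standardize the predictors, slice the response into ordered bins, and eigen-decompose the weighted outer products of the within-slice predictor means (the SIR central matrix). This is a minimal textbook sketch of classical SIR, not the new central matrices proposed in the paper; the slice count and toy model below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir(X, y, n_slices=10, d=1):
    """Classical sliced inverse regression: estimate d central-subspace directions."""
    n, p = X.shape
    # standardize the predictors
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    w, V = np.linalg.eigh(Sigma)
    Sigma_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T  # inverse square root of Sigma
    Z = (X - mu) @ Sigma_inv_sqrt
    # slice the observations on the order statistics of y
    slices = np.array_split(np.argsort(y), n_slices)
    # SIR central matrix: weighted outer products of within-slice means of Z
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # leading eigenvectors of M, mapped back to the original X scale
    _, evecs = np.linalg.eigh(M)
    B = Sigma_inv_sqrt @ evecs[:, ::-1][:, :d]
    return B / np.linalg.norm(B, axis=0)  # unit-norm columns

# toy model in which y depends on X only through the first coordinate,
# so E(X|Y) is far from nonrandom and SIR succeeds
n, p = 2000, 5
X = rng.standard_normal((n, p))
y = X[:, 0] + 0.2 * rng.standard_normal(n)
b_hat = sir(X, y)[:, 0]
```

Note that in a purely symmetric model such as $y = (b^\top X)^2 + \varepsilon$ the slice means of $Z$ vanish, which is exactly the degenerate $E(X|Y)$ situation the abstract describes as SIR's weakness.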

