## On central matrix based methods in dimension reduction

### Journal Article

#### Abstract

Dimension reduction for regression analysis has been one of the most popular topics in the past two decades. It has seen much progress since the introduction of inverse regression, centered around two key methods: sliced inverse regression (SIR) and sliced average variance estimation (SAVE). It is well known that SIR works poorly when the inverse conditional expectation $E\left(X|Y\right)$ is close to being nonrandom. SAVE and its many generalizations, which do not suffer from this drawback, lag behind SIR in many other circumstances. Usually a certain weighted hybrid of SIR and SAVE is necessary to improve overall performance. However, it is difficult to find the optimal mixture weights in a hybrid, and most such hybrid methods, as well as SAVE, require the restrictive constant (conditional) variance condition. We propose a much weaker condition and a new accompanying algorithm. This enables us to create several new central matrices that compare very favourably with existing central matrix based methods without resorting to hybrids. The Canadian Journal of Statistics 41: 421–438; 2013 © 2013 Statistical Society of Canada
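To make the abstract's starting point concrete, the following is a minimal sketch of the classical SIR estimator that the paper builds on: slice the response, form the covariance of the slice means of the standardized predictors (an estimate of $\mathrm{Cov}\,E(Z|Y)$), and take its leading eigenvectors. The function name `sir_directions` and all parameter choices are illustrative, not taken from the paper.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2):
    """Sketch of sliced inverse regression (SIR).

    Estimates directions of the central subspace from the
    covariance of slice means of the standardized predictors.
    """
    n, p = X.shape
    # Standardize the predictors: Z = (X - mean) Sigma^{-1/2}
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)          # symmetric inverse square root
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Z = (X - mu) @ inv_sqrt
    # Partition the sample into slices by the order statistics of y
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    # Weighted covariance of slice means: estimates Cov(E[Z|Y]),
    # the "central matrix" underlying SIR
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original X scale
    w, v = np.linalg.eigh(M)
    return inv_sqrt @ v[:, ::-1][:, :n_dirs]
```

Note that when $E(X|Y)$ is nearly constant (e.g. a symmetric link such as $Y = (\beta^{\top}X)^2 + \varepsilon$), the slice means are all close to zero, $M$ is close to the zero matrix, and SIR fails; this is the degeneracy that motivates SAVE-type central matrices and the hybrids discussed in the abstract.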
