# Testing the heteroscedastic error structure in quantile varying coefficient models

## News

• Authors: Irène Gijbels, Mohammed A. Ibrahim and Anneleen Verhasselt
• Date: 04 January 2019

In mean regression the characteristic of interest is the conditional mean of the response given the covariates. In quantile regression the aim is to estimate any quantile of the conditional distribution function. For given covariates, the conditional quantile function fully characterizes the entire conditional distribution, in contrast to the mean, which is just one of its characteristic quantities. Regression quantiles substantially outperform the least-squares estimator for a wide class of non-Gaussian error distributions. In an article written for The Canadian Journal of Statistics, we consider quantile varying coefficient models (VCMs), which extend classical quantile linear regression models by allowing the coefficients to depend on other variables.

Exploring relationships in data, and in particular answering questions about which factors (variables) influence a variable of interest (the response), is among the key tasks in statistics. A linear relationship is often too simplistic to describe the real influence of the predictive factors on the response variable(s). Varying coefficient models step away from this overly simple relationship by allowing the regression coefficients (say, the intercept and slope) to vary over time, with another variable, or with several predictive variables. One thus enters a more flexible yet feasible modeling framework, better able to describe adequately the complexities in the data.

Many studies only look into the mean influence of the predictive factors on the variable of interest. These factors, however, can have an impact on the whole distribution and not only on its mean. Since the quantile function entirely determines the distribution, the study of quantile regression in the context of varying coefficient models is of importance.

In applications the question arises whether the variability in the data differs across values of the predictive factors. If this variability is constant, no matter what the predictive factor values are, one speaks of homoscedasticity. Data, however, often exhibit heteroscedasticity, and one needs to take this into account when analyzing them. In this paper some plausible heteroscedasticity structures are discussed within the context of varying coefficient models. The next question is then which of these structures is most appropriate for the data at hand. The authors develop tests for assessing whether a specific heteroscedastic structure is appropriate or not.
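To make the homoscedasticity-versus-heteroscedasticity distinction concrete, here is a minimal simulation sketch (not the authors' test procedure; the data-generating model and bin widths are illustrative assumptions). Under homoscedasticity, the spread between conditional quantiles is roughly constant across covariate values; under heteroscedasticity, it changes with the covariate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(0.0, 1.0, n)
# Heteroscedastic model (illustrative): the error scale grows with x
y = 1.0 + 2.0 * x + (0.2 + 1.0 * x) * rng.normal(size=n)

# Empirical conditional interquantile range (75% minus 25% quantile)
# computed within bins of x
bins = np.linspace(0.0, 1.0, 11)
iqr = []
for lo, hi in zip(bins[:-1], bins[1:]):
    yb = y[(x >= lo) & (x < hi)]
    q25, q75 = np.quantile(yb, [0.25, 0.75])
    iqr.append(q75 - q25)

# Under homoscedasticity this sequence would be roughly flat;
# here the interquantile range grows markedly with x
print([round(v, 2) for v in iqr])
```

Plotting or tabulating such binned interquantile ranges is a quick informal diagnostic; the paper's contribution is to replace this eyeballing with formal tests within the quantile VCM framework.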
