# Layman’s abstract for Canadian Journal of Statistics paper on minimum Lq‐distance estimators for non‐normalized parametric models

Each week, we will be publishing layman’s abstracts of new articles from our prestigious portfolio of journals in statistics. The aim is to highlight the latest research to a broader audience in an accessible format.

The article featured today is from the Canadian Journal of Statistics, with the full article now available to read in Early View here.

Betsch, S., Ebner, B. and Klar, B. (2020), Minimum Lq‐distance estimators for non‐normalized parametric models. Can J Statistics. https://doi.org/10.1002/cjs.11574

One of the classical problems in statistics is the estimation of the parameter vector of a parametrized family of probability distributions. In practical applications, these parametric models strike a reasonable compromise between flexibility in the shape of the statistical model and meaningfulness of the inference drawn from it.

In statistical mechanics, image analysis, machine learning, and similarly advanced domains, the appearance of ‘unnormalized’ models poses a problem, as such models evade classical estimation methods like maximum likelihood. A statistical model is called unnormalized if the constants that scale the functions underlying the modelling distributions (for instance, the density function) to a total probability of 1 cannot be calculated.

In this new research contribution, the authors provide a method for estimating the parameters of unnormalized models that describe some non-negative quantity (such as wind speeds, sales, or household income). The approach is a minimum distance method: a function related to the parametric model at hand is compared with a purely non-parametric quantity computed from the given data, and their difference, which theoretical considerations guarantee ought to be small, is minimized. These theoretical arguments rest on Stein’s characterizations, which have featured prominently in the probability theory of recent decades, and which guarantee that the minimum distance statistic does not depend on the normalization constant of the model, thus rendering the method applicable to unnormalized distributions.

The authors investigate the theoretical properties of their new method and conduct several simulation studies to gain insight into its actual behaviour in practice. This novel approach to estimation is compared with several other methods introduced in the machine learning community in recent years.
Even though the publication grew out of probabilistic considerations in the realm of Stein’s method, connections to machine learning research are drawn, and the method is shown to be highly competitive within its range of applicability.
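To give a flavour of the minimum distance principle described above, here is a minimal, purely illustrative sketch. It is *not* the authors’ Stein-based Lq statistic: instead it fits the rate of an exponential distribution by minimizing a squared distance between the parametric model’s cumulative distribution function and the empirical one computed from the data, which captures the same idea of comparing a model-based function with a non-parametric counterpart.

```python
import bisect
import math
import random

# Illustrative minimum distance estimation (NOT the paper's Lq statistic):
# estimate the rate of an exponential distribution by minimizing the sum of
# squared differences between the model CDF and the empirical CDF on a grid.

random.seed(0)
n = 500
true_rate = 0.5
data = sorted(random.expovariate(true_rate) for _ in range(n))

# Evaluation grid covering the bulk of the sample.
tmax = data[int(0.99 * n)]
grid = [tmax * i / 200 for i in range(1, 201)]

# Empirical CDF at each grid point (binary search on the sorted sample).
ecdf_vals = [bisect.bisect_right(data, t) / n for t in grid]

def l2_distance(rate):
    # Discrepancy between the parametric CDF (1 - exp(-rate * t))
    # and the empirical CDF, summed over the grid.
    return sum((1.0 - math.exp(-rate * t) - e) ** 2
               for t, e in zip(grid, ecdf_vals))

# Crude grid search over candidate rates 0.05, 0.06, ..., 4.99.
best = min((l2_distance(r / 100), r / 100) for r in range(5, 500))[1]
print(f"estimated rate: {best:.2f}")  # should land close to the true rate 0.5
```

The same template (swap in a different model function and a different discrepancy) underlies minimum distance estimation in general; the paper’s contribution is choosing a Stein-characterization-based function whose discrepancy does not involve the unknown normalization constant.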