Every few days, we will be publishing layman’s abstracts of new articles from our prestigious portfolio of journals in statistics. The aim is to highlight the latest research to a broader audience in an accessible format.
Contreras‐Cristán, A., Lockhart, R.A., Stephens, M.A. and Sun, S.Z. (2019), On the use of priors in goodness‐of‐fit tests. Can J Statistics, 47: 560-579. doi:10.1002/cjs.11512
Many statistical procedures rely on assumptions about the statistical behaviour of the data. In particular, assumptions are sometimes needed concerning the precise mathematical distribution of some variable: a normal, uniform, or other distribution may be assumed. Goodness-of-fit tests are methods for checking the appropriateness of such assumptions, and they are thus used to verify that other statistical methods can safely be applied to a given data set. Over the years many such tests have been developed, but the process of developing them has generally been quite ad hoc.

The authors introduce a principled method for choosing an appropriate goodness-of-fit test. To do so, the two main philosophical approaches to statistics, Bayesian methods and Neyman-Pearson frequentist methods, are combined: a Bayesian prior is introduced, but used in such a way that the long-run average behaviour of the resulting test can be computed and controlled. The bulk of the paper consists of careful mathematical analysis of approximations which permit these ideas to be implemented in a useful way.

The paper also identifies a particular class of potential tests, those based on objects called U-statistics, which arise very naturally as a source of what the authors call ‘optimal’ tests. Here optimal means as sensitive as possible to departures from the assumed distribution, while controlling the false-positive rate of the test (the Type I error rate) at whatever level is desired by the data analyst.
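To make the idea of a goodness-of-fit test with a controlled Type I error rate concrete, here is a minimal sketch, not of the authors' Bayesian-frequentist construction, but of the classical one-sample Kolmogorov-Smirnov test against the Uniform(0,1) distribution, together with a Monte Carlo check that the test rejects a true null hypothesis at roughly the nominal 5% rate. The sample size, repetition count, and the asymptotic critical value 1.358/√n are illustrative choices, not quantities from the paper.

```python
import math
import random


def ks_statistic(sample):
    """One-sample Kolmogorov-Smirnov statistic against the Uniform(0,1) CDF.

    Measures the largest gap between the empirical distribution of the
    data and the hypothesized distribution.
    """
    x = sorted(sample)
    n = len(x)
    d = 0.0
    for i, xi in enumerate(x, start=1):
        # For Uniform(0,1), the hypothesized CDF at xi is just xi.
        d = max(d, i / n - xi, xi - (i - 1) / n)
    return d


def monte_carlo_type1(n=50, reps=2000, seed=1):
    """Estimate the Type I error rate: the long-run fraction of the time
    the test rejects when the null (data truly Uniform(0,1)) is correct."""
    rng = random.Random(seed)
    crit = 1.358 / math.sqrt(n)  # asymptotic 5% critical value
    rejections = sum(
        ks_statistic([rng.random() for _ in range(n)]) > crit
        for _ in range(reps)
    )
    return rejections / reps
```

Running `monte_carlo_type1()` should give a rejection rate close to the nominal 0.05, which is exactly the kind of long-run frequency property the authors insist on controlling even when a Bayesian prior is used to design the test.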