# International Statistical Review

## A Tutorial on the Practical Use and Implication of Complete Sufficient Statistics

### Summary

Completeness means that any measurable function of a sufficient statistic that has zero expectation for every value of the parameter indexing the parametric model class is the zero function almost everywhere. The property holds in many simple situations for parameters of direct scientific interest, such as regression models fitted to data from a random sample of fixed size. A random sample does not always have a fixed, a priori determined size; examples include sequential sampling and stopping rules, missing data, and clusters of random size. In such settings there is often no complete sufficient statistic. A simple characterisation of incompleteness is given for the exponential family in terms of the mapping between the sufficient statistic and the parameter, based on the implicit function theorem. Essentially, it compares the dimension of the sufficient statistic with the length of the parameter vector. This yields an easily verifiable criterion for incompleteness that is clear and simple to use, even in complex settings, as is shown for missing data and clusters of random size.
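The dimension comparison above can be illustrated with a standard textbook example not taken from this paper: the curved exponential family $N(\theta, \theta^2)$. For $X_1,\dots,X_n$ i.i.d. $N(\theta,\theta^2)$, the sufficient statistic $T=(\sum X_i, \sum X_i^2)$ has dimension 2 while the parameter has length 1, signalling incompleteness. A well-known witness is $g(T) = 2(\sum X_i)^2 - (n+1)\sum X_i^2$, which has zero expectation for every $\theta$ yet is not the zero function. The sketch below checks this by Monte Carlo; all names are illustrative.

```python
import numpy as np

# Monte Carlo sketch (illustrative, not the paper's own example):
# for X_1,...,X_n iid N(theta, theta^2), the statistic
# g(T) = 2*(sum X_i)^2 - (n+1)*sum X_i^2 satisfies E[g(T)] = 0
# for every theta, even though g is clearly not identically zero.
# This witnesses incompleteness of T = (sum X_i, sum X_i^2).

def witness(samples, n):
    s1 = samples.sum(axis=1)           # sum of X_i per replicate
    s2 = (samples ** 2).sum(axis=1)    # sum of X_i^2 per replicate
    return 2 * s1 ** 2 - (n + 1) * s2

rng = np.random.default_rng(0)
n, reps = 5, 200_000
for theta in (0.5, 2.0, -3.0):
    x = rng.normal(theta, abs(theta), size=(reps, n))
    g = witness(x, n)
    sem = g.std() / np.sqrt(reps)      # standard error of the mean of g
    print(f"theta={theta:+.1f}: mean g = {g.mean():+.3f} (SE {sem:.3f})")
```

For each value of $\theta$, the sample mean of $g$ stays within sampling error of zero, while $g$ itself varies widely across replicates.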

This tutorial exemplifies the (in)completeness property of a sufficient statistic, thereby illustrating our proposed characterisation. The examples are organised from classical, simple settings to gradually more advanced ones.
