Lay abstract for Stat article: Design criteria for model discrimination in factorial experiments with potential failing runs

Each week, we publish lay abstracts of new articles from our prestigious portfolio of journals in statistics. The aim is to highlight the latest research to a broader audience in an accessible format.
 
The article featured today is from Stat, with the full article now available to read here.
 
Motavaze, M., & Talebi, H. (2023). Design criteria for model discrimination in factorial experiments with potential failing runs. Stat, 12(1), e536. https://doi.org/10.1002/sta4.536

The primary aim of model-based experimentation is to detect the active factors and the interactions between them. Screening out inactive effects with a fraction of a factorial design, typically a main-effect plan, can lead to biased detection; this forces the active interactions into the underlying model, even though they are not known a priori. Considering all possible sets of interactions alongside the main effects is a model discrimination problem, for which a distance-based quantity is used to measure the discrepancy between rival models. Several criteria in the literature, most of them design-dependent, can be used to identify a superior design; however, even a good criterion applied to a poorly designed experiment may perform weakly in identifying the correct model. The present paper tackles this issue through a design strategy for factorial experiments based on a specific economical fractional factorial design, the so-called main effect plus k plan (MEP.k). The rival models to be discriminated are non-nested: they share the main effects, but each contains a different set of k interactions drawn from a whole set of interaction effects of interest, e.g. two-factor interactions. Meanwhile, the occurrence of missing observations can compound the difficulties in model identification and parameter estimation, so the superiority of a design may not be preserved when an observation goes missing. Consequently, the existing criteria are inefficient at determining a superior design that is robust to failing runs.
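To make the discrimination idea concrete, here is a minimal Python sketch that is not taken from the paper: it assumes two-level factors in ±1 coding, rival models that share the main effects and each add k = 1 two-factor interaction, and it uses the smallest eigenvalue of the rival interaction columns after projecting out the competing model as an illustrative distance-type discrepancy. The `model_matrix` and `discrepancy` helpers and the toy designs are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's exact criterion): measure how far apart
# two rival models are on a given design, where both models contain all main
# effects and each adds one different two-factor interaction.
from itertools import combinations
import numpy as np

def model_matrix(design, interactions):
    """Columns: intercept, main effects, and the requested 2-factor interactions."""
    cols = [np.ones(len(design))] + [design[:, j] for j in range(design.shape[1])]
    cols += [design[:, a] * design[:, b] for (a, b) in interactions]
    return np.column_stack(cols)

def discrepancy(design, model_a, model_b):
    """Distance of model_b's extra interaction columns from model_a's column space."""
    Xa = model_matrix(design, model_a)
    Xb_extra = np.column_stack([design[:, a] * design[:, b]
                                for (a, b) in model_b if (a, b) not in model_a])
    P = Xa @ np.linalg.pinv(Xa)               # projection onto model_a's column space
    R = Xb_extra - P @ Xb_extra               # part of model_b not explained by model_a
    return np.linalg.eigvalsh(R.T @ R).min()  # 0 means the models cannot be told apart

# Toy example: a 2^3 full factorial versus a 2^(3-1) half fraction with C = AB.
full = np.array([[i, j, l] for i in (-1, 1) for j in (-1, 1) for l in (-1, 1)])
half = full[full[:, 2] == full[:, 0] * full[:, 1]]
rivals = [[pair] for pair in combinations(range(3), 2)]   # each rival adds k = 1 interaction
for name, d in [("full", full), ("half", half)]:
    vals = [discrepancy(d, a, b) for a, b in combinations(rivals, 2)]
    print(name, [round(v, 2) for v in vals])
```

On the full factorial every pair of rival models is well separated, whereas on the half fraction the distances collapse to zero: the same kind of criterion, evaluated on a poorly chosen design, cannot tell the rival models apart.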

To this end, the paper takes the missing probabilities of the observations into account at the planning stage and seeks powerful designs with desirable properties against potential failing runs before the experiment is carried out. Observations are assumed to go missing at random; since it is not known in advance which runs will fail, a failing probability is attributed to each run, and this probability depends on the factors through the combination of their levels.
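The sketch below, again illustrative rather than the paper's criterion, shows one way such probabilities could enter the planning stage: each run receives a hypothetical failing probability that grows with the number of factors set to their high level, and a standard criterion, here the determinant of the information matrix of a main-effects model, is averaged over every possible pattern of surviving runs, weighted by how likely that pattern is. The `run_failure_prob` rule and the two toy designs are assumptions for illustration.

```python
# Illustrative sketch: expected value of a design criterion when each run may
# fail with a probability that depends on its factor-level combination.
from itertools import product
import numpy as np

def run_failure_prob(run, base=0.05, penalty=0.15):
    """Hypothetical rule: runs with more factors at the high (+1) level fail more often."""
    return min(1.0, base + penalty * np.sum(run == 1))

def expected_d_criterion(design):
    X = np.column_stack([np.ones(len(design)), design])        # intercept + main effects
    p_fail = np.array([run_failure_prob(r) for r in design])
    expected = 0.0
    for keep in product([False, True], repeat=len(design)):    # every failure pattern
        keep = np.array(keep)
        prob = np.prod(np.where(keep, 1 - p_fail, p_fail))     # probability of this pattern
        info = X[keep].T @ X[keep]                              # information from survivors
        expected += prob * np.linalg.det(info)                  # det = 0 if the pattern is fatal
    return expected

# Toy comparison of two 4-run designs for two two-level factors.
full = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
lopsided = np.array([[-1, -1], [1, 1], [1, 1], [1, -1]])
print("full factorial:", round(expected_d_criterion(full), 2))
print("lopsided design:", round(expected_d_criterion(lopsided), 2))
```

A design whose expected criterion stays high under this averaging is the more robust choice against failing runs.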

In doing so, the paper connects classical ideas from optimal design theory with missing-data models, leading to a design-dependent criterion that incorporates the missing probabilities. This criterion allows the robustness of rival designs to failing runs to be assessed and the superior design to be determined in this respect.
