Each week, we select a recently published Open Access article to feature. This week’s article comes from the Journal of the Royal Statistical Society Series A (Statistics in Society) and examines the use of inter-rater reliability to assess the peer review process.
The article’s abstract is given below, with the full article available to read here.
Erosheva, E.A., Martinková, P. and Lee, C.J. (2021), When zero may not be zero: A cautionary note on the use of inter‐rater reliability in evaluating grant peer review. J R Stat Soc Series A. https://doi.org/10.1111/rssa.12681
Considerable attention has focused on studying reviewer agreement via inter‐rater reliability (IRR) as a way to assess the quality of the peer review process. Inspired by a recent study that reported an IRR of zero in the mock peer review of top‐quality grant proposals, we use real data from a complete range of submissions to the National Institutes of Health and to the American Institute of Biological Sciences to bring awareness to two important issues with using IRR for assessing peer review quality. First, we demonstrate that estimating local IRR from subsets of restricted‐quality proposals will likely result in zero estimates under many scenarios. In both data sets, we find that zero local IRR estimates are more likely when subsets of top‐quality proposals rather than bottom‐quality proposals are considered. However, zero estimates from range‐restricted data should not be interpreted as indicating arbitrariness in peer review. On the contrary, despite different scoring scales used by the two agencies, when complete ranges of proposals are considered, IRR estimates are above 0.6, which indicates good reviewer agreement. Furthermore, we demonstrate that, with a small number of reviewers per proposal, zero estimates of IRR are possible even when the true value is not zero.
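To see why range restriction can push IRR estimates to zero, consider a minimal simulation sketch (not taken from the paper; the model, parameter values, and function names below are all illustrative assumptions). It generates reviewer scores from a one-way random-effects model with a true intraclass correlation of 0.6, then compares the ICC estimate on the full range of proposals with the estimate on a top-quality subset:

```python
# Illustrative sketch (not from the paper): a one-way random-effects
# simulation showing how restricting to top-quality proposals can shrink
# IRR estimates toward zero even when the true value is moderate.
import numpy as np

def icc1(scores):
    """One-way random-effects ICC(1) estimate, truncated at zero.

    scores: (n_proposals, n_reviewers) array of ratings.
    """
    n, k = scores.shape
    row_means = scores.mean(axis=1)
    grand = scores.mean()
    # Between-proposal and within-proposal mean squares from one-way ANOVA
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)
    msw = ((scores - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return max(0.0, (msb - msw) / (msb + (k - 1) * msw))

rng = np.random.default_rng(0)
n, k = 500, 2                       # 500 proposals, 2 reviewers each (assumed values)
quality = rng.normal(0.0, 1.0, n)   # true proposal quality, variance 1
# Reviewer error variance 2/3 gives true ICC = 1 / (1 + 2/3) = 0.6
noise = rng.normal(0.0, np.sqrt(2 / 3), (n, k))
scores = quality[:, None] + noise

icc_full = icc1(scores)

# "Local" IRR on a range-restricted subset: top 10% of proposals by mean score
cutoff = np.quantile(scores.mean(axis=1), 0.9)
icc_top = icc1(scores[scores.mean(axis=1) >= cutoff])

print(f"full-range ICC estimate: {icc_full:.2f}")
print(f"top-decile ICC estimate: {icc_top:.2f}")
```

On the full range the estimate sits near the true value of 0.6, while the top-decile subset yields a much smaller (often zero) estimate, mirroring the paper's point that zero local IRR in a restricted-quality subset need not indicate arbitrary reviewing.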