Abstract: The reliability of manuscript peer review (MPR) has long been a concern for the scientific community. In this paper, two widely used inter-rater reliability indicators, the intra-class correlation coefficient (ICC) and Cohen’s kappa coefficient (κ), were selected to measure the reliability of MPR. A meta-analysis was conducted to quantitatively summarize 49 empirical studies on inter-rater reliability in MPR published between 1974 and 2022, in order to deepen understanding of the quality of MPR. A series of subgroup analyses was then carried out to investigate the effects of ten factors in two categories, situational and procedural, on the reliability of MPR, as well as how these effects differ between the two indicators. The results show that, on the whole, the reliability of MPR is far from satisfactory (ICC = 0.361, κ = 0.195). More importantly, the reliability of MPR is significantly affected by evaluation subjects, academic disciplines, and acceptance rates (three situational factors), as well as by blinding policies (one procedural factor). In addition, when different indicators are used to measure the reliability of MPR, the effect pattern exhibited by the same factor can differ considerably.
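For readers unfamiliar with the two indicators, the following is a minimal illustrative sketch of how they are computed from raters' scores, not the authors' code. It assumes two raters for κ and, for the ICC, the simple one-way random-effects form ICC(1,1); the primary studies in the meta-analysis may have reported other ICC variants.

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa: chance-corrected agreement between two raters'
    categorical decisions (e.g. accept/reject recommendations)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    categories = np.union1d(r1, r2)
    p_obs = np.mean(r1 == r2)                      # observed agreement
    p_exp = sum(np.mean(r1 == c) * np.mean(r2 == c)
                for c in categories)               # agreement expected by chance
    return (p_obs - p_exp) / (1.0 - p_exp)

def icc_oneway(scores):
    """One-way random-effects ICC(1,1).
    `scores`: 2-D array, rows = manuscripts, columns = raters."""
    X = np.asarray(scores, dtype=float)
    n, k = X.shape
    grand_mean = X.mean()
    # Between-manuscript and within-manuscript mean squares from one-way ANOVA
    ms_between = k * ((X.mean(axis=1) - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((X - X.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Both functions return 1 under perfect agreement and values near 0 when agreement is no better than chance, which is why pooled estimates such as ICC = 0.361 and κ = 0.195 indicate low reliability.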