Quality

Author: V.A. Traag
Affiliation: Leiden University

Version   Revision date   Revision      Author
1.0       2024-12-06      First draft   V.A. Traag

Description

Quality is a very complicated concept, and in the context of academic work it is very challenging to measure. To start with, it should be clarified what object is being considered, which could range from data to peer review itself. In most such cases, quality cannot be defined on the basis of easily measurable data and instead requires some form of manual assessment.

For the most traditional academic output, the scholarly publication, such manual assessment is typically provided through peer review (Bornmann 2011). Peer review is much discussed in science studies: there are debates about its reliability (Cole, Cole, and Simon 1981) and its biases (Lee et al. 2013), but also about its positive effects (Goodman et al. 1994) and about the complementarity of different reviewers (Goyal et al. 2024).

Quality is typically considered a multidimensional concept (Aksnes, Langfeldt, and Wouters 2019), composed of various other concepts. For instance, in peer review of manuscripts submitted to journals, it is common to assess both the novelty and the rigour of a manuscript. Yet even when quality is conceived of as multidimensional, in practice it is sometimes still treated as unidimensional. For example, in the UK Research Excellence Framework (REF), research articles are assigned a number of stars, ranging from “recognised nationally” (1 star) to “world-leading” (4 stars).

In the context of exercises such as the REF, there have also been discussions about the possibility of using citations as a proxy for quality. Indeed, there are substantial correlations between peer review results and citations, but these depend on the level of aggregation: at the level of individual papers the correlation is typically low, whereas at higher levels of aggregation, such as the institutional level, the correlations are substantially higher (Traag, Malgarini, and Sarlo 2023). Overall, as summarised in the “Metric Tide” report (Wilsdon et al. 2015, viii), “Metrics should support, not supplant, expert judgement”, and this is particularly relevant at the level of individual papers.
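Why aggregation raises the correlation can be illustrated with a small simulation. The Python sketch below is purely illustrative and rests on assumed parameters: peer review scores and citations are modelled as independent noisy signals of the same latent quality, and institutions are assumed to differ in mean quality. Averaging over an institution's papers then cancels much of the paper-level noise, while the differences between institutions persist.

import numpy as np

rng = np.random.default_rng(42)

n_inst = 100    # number of institutions (assumed)
n_papers = 50   # papers per institution (assumed)

# Institutions differ in mean quality; each paper's quality
# deviates from its institution's mean.
inst_quality = rng.normal(0, 1, size=(n_inst, 1))
paper_quality = inst_quality + rng.normal(0, 1, size=(n_inst, n_papers))

# Peer review scores and citations as independent noisy signals
# of the same underlying paper quality.
review = paper_quality + rng.normal(0, 2, size=(n_inst, n_papers))
citations = paper_quality + rng.normal(0, 2, size=(n_inst, n_papers))

# Correlation across individual papers.
paper_r = np.corrcoef(review.ravel(), citations.ravel())[0, 1]

# Correlation across institutions, after averaging per institution.
inst_r = np.corrcoef(review.mean(axis=1), citations.mean(axis=1))[0, 1]

print(f"paper-level correlation:       {paper_r:.2f}")  # around 0.3
print(f"institution-level correlation: {inst_r:.2f}")   # around 0.9

Under these arbitrary parameter choices the paper-level correlation is around 0.3 while the institution-level correlation exceeds 0.9, qualitatively mirroring the pattern reported by Traag, Malgarini, and Sarlo (2023).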

References

Aksnes, Dag W., Liv Langfeldt, and Paul Wouters. 2019. “Citations, Citation Indicators, and Research Quality: An Overview of Basic Concepts and Theories.” SAGE Open 9 (1): 215824401982957. https://doi.org/10.1177/2158244019829575.
Bornmann, Lutz. 2011. “Scientific Peer Review.” Annual Review of Information Science and Technology 45 (1): 197–245. https://doi.org/10.1002/aris.2011.1440450112.
Cole, Stephen, J. R. Cole, and G. A. Simon. 1981. “Chance and Consensus in Peer Review.” Science 214 (4523): 881–86. https://doi.org/10.1126/science.7302566.
Goodman, Steven N., Jesse Berlin, Suzanne W. Fletcher, and Robert H. Fletcher. 1994. “Manuscript Quality Before and After Peer Review and Editing at Annals of Internal Medicine.” Annals of Internal Medicine 121 (1): 11. https://doi.org/10.7326/0003-4819-121-1-199407010-00003.
Goyal, Navita, Ivan Stelmakh, Nihar Shah, and Hal Daumé III. 2024. “Causal Effect of Group Diversity on Redundancy and Coverage in Peer-Reviewing.” arXiv. https://doi.org/10.48550/arXiv.2411.11437.
Lee, Carole J., Cassidy R. Sugimoto, Guo Zhang, and Blaise Cronin. 2013. “Bias in Peer Review.” Journal of the American Society for Information Science and Technology 64 (1): 2–17. https://doi.org/10.1002/asi.22784.
Traag, V. A., M. Malgarini, and S. Sarlo. 2023. “Metrics and Peer Review Agreement at the Institutional Level.” arXiv. https://doi.org/10.48550/arXiv.2006.14830.
Wilsdon, James, Liz Allen, Eleonora Belfiore, Philip Campbell, Stephen Curry, Steven Hill, Richard Jones, et al. 2015. “The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management.” Higher Education Funding Council for England. https://doi.org/10.13140/RG.2.1.4929.1363.

Reuse

Open Science Impact Indicator Handbook © 2024 by PathOS is licensed under CC BY 4.0

Citation

BibTeX citation:
@online{apartis2024,
  author = {Apartis, S. and Catalano, G. and Consiglio, G. and Costas,
    R. and Delugas, E. and Dulong de Rosnay, M. and Grypari, I. and
    Karasz, I. and Klebel, Thomas and Kormann, E. and Manola, N. and
    Papageorgiou, H. and Seminaroti, E. and Stavropoulos, P. and Stoy,
    L. and Traag, V.A. and van Leeuwen, T. and Venturini, T. and
    Vignetti, S. and Waltman, L. and Willemse, T.},
  title = {Open {Science} {Impact} {Indicator} {Handbook}},
  date = {2024},
  url = {https://handbook.pathos-project.eu/sections/2_academic_impact/quality.html},
  doi = {10.5281/zenodo.14538442},
  langid = {en}
}
For attribution, please cite this work as:
Apartis, S., G. Catalano, G. Consiglio, R. Costas, E. Delugas, M. Dulong de Rosnay, I. Grypari, et al. 2024. “Open Science Impact Indicator Handbook.” Zenodo. https://doi.org/10.5281/zenodo.14538442.