Citation Impact

Author
Affiliation

V.A. Traag

Leiden University

Version Revision date Revision Author
1.0 2023-11-23 First draft V.A. Traag

Description

The citation impact of publications reflects the degree to which they have been taken up by other researchers in their publications. There are long-standing discussions about the interpretation of citations, where two theories can be discerned (Bellis 2009): a normative theory, proposing citations reflect acknowledgements of previous work (Merton 1973); and a constructivist theory, proposing citations are used as tools for argumentation (Latour 1988). Overall, citation impact seems to be most closely related to the relevance of the work for the academic community and should be distinguished from other considerations of scientific quality, where the relationship is less clear (Aksnes, Langfeldt, and Wouters 2019).

Although it is out of scope to provide suggestions for causal inference of all possible Open Science aspects on citation, we discuss one case on the effect of Open Data on citations.

Metrics

Citations are affected by two major factors that we expect to be irrelevant for considerations of impact: the field of research and the year of publication1. That is, some fields, such as Cell Biology, are much more citation intensive than other fields, such as Mathematics. Additionally, publications that were published in 2010 have had more time to accumulate citations than publications published in 2020. Controlling for these factors2 results in what are often called “normalised” citation indicators (Waltman and Eck 2019). Although such normalised citation indicators are more comparable across time and field, they are sometimes also more opaque. For that reason, we explain both normalised metrics and “raw”, non-normalised, citation metrics.

In addition, we can distinguish between two approaches to calculating metrics based on citations. We can count the citations in some way and provide a metric based on those citation counts. Alternatively, we can use citations to identify which publications are highly cited (Waltman and Schreiber 2013). The reason for doing this is that citation counts themselves are typically very skewed (Radicchi, Fortunato, and Castellano 2008), and statistics based on them might be less robust. For example, the average might be affected by a single publication that is very highly cited.

When calculating the citation impact of a set of publications, it might be necessary to consider that many publications involve collaboration. When a paper is a collaboration between multiple authors, institutes or countries, the citation impact for these authors, institutes or countries might be affected by this collaboration. For instance, a publication that is a collaboration between institute A and institute B will be counted both for institute A and for institute B. In particular, since collaborative publications tend to be more highly cited, this leads to an artificial inflation of citations, sometimes called the “full counting bonus”. This is especially relevant with normalised citation indicators. For these reasons, it is sometimes advisable to use what is called “fractional counting”. This means that we consider fractions, or weights, for all publications, based on the “fraction” of their authorship. For instance, if a publication has three authors, each has a fraction of 1/3. If two of the authors are affiliated with a single institution, say institution A, that institution will have a weight of 2/3. If, in addition, the third author has two affiliations, one with the aforementioned institution A and one with institution B, we could count that author as belonging to institution A for ½, bringing the total for institution A to 5/6.
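To make this concrete, the following minimal sketch computes institutional weights for the three-author example above; the data structure and names are illustrative assumptions, not a prescribed format.

from collections import defaultdict

# Hypothetical authorship data: each author with their affiliations (illustrative only)
authors = [
    {"name": "Author 1", "affiliations": ["Institution A"]},
    {"name": "Author 2", "affiliations": ["Institution A"]},
    {"name": "Author 3", "affiliations": ["Institution A", "Institution B"]},
]

weights = defaultdict(float)
for author in authors:
    author_fraction = 1 / len(authors)  # each author counts for 1/n
    for affiliation in author["affiliations"]:
        # split an author's fraction equally over their affiliations
        weights[affiliation] += author_fraction / len(author["affiliations"])

print(dict(weights))  # Institution A: 5/6 ≈ 0.83, Institution B: 1/6 ≈ 0.17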

Avg. / Total Citations (MCS / TCS)

When counting the total number of citations of publications, we can aggregate this citation count in different ways. We call the count of citations for the individual papers the citation score. Whatever the set of publications is, we can calculate the mean of the individual citation scores, which we call the mean citation score (MCS), or we can calculate the sum of the individual citation scores, which we call the total citation score (TCS).

This metric provides a reasonable indicator for citation impact. The benefit of this metric is that it is very simple and relatively easy to understand. Moreover, people who are acquainted with a particular research field and its citation practices might have a reasonable understanding of how to interpret this metric.

However, since there are differences across fields and publication year, this metric cannot be compared across fields and years. If comparison across fields and years is important, or if they are relevant in other ways, such as when they may be confounding factors, it might be preferred to use normalised citation indicators (see section …).

Measurement

The basis of this metric is counting citations. Counting citations implies that references of publications need to be linked to the publications whose citations we want to count. There are some challenges and limitations involved with counting citations, described in more detail in the data source section. Here, we assume that we have somehow obtained citation counts for all papers of interest.

In more formal notation, let \(c_i\) be the citation score of a paper \(i\), and let \(S\) be the set of papers of interest with \(n = |S|\) papers in total. Then the total citation score (TCS) can be defined as

\[\text{TCS} = \sum_{i \in S} c_i\]

and the mean citation score (MCS) can be defined as

\[\text{MCS} = \frac{1}{n} \sum_{i \in S} c_i\]

If we use fractionalisation then each paper \(i\) is assigned a weight \(w_i\). We can then define the TCS as the weighted sum

\[\text{TCS} = \sum_{i \in S} w_i c_i\]

and similarly define MCS as the weighted average

\[\text{MCS} = \frac{1}{w} \sum_{i \in S} w_i c_i,\]

where \(w = \sum_{i \in S} w_i\).

Indeed, if we set \(w_i = 1\), this definition is consistent with the non-fractionalised (i.e. full counting) definition. The MCS is simply the TCS divided by the total weight \(\text{MCS} = \frac{1}{w} \text{TCS}\).
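As a minimal sketch, the formulas above translate directly into code; the function names are ours, and we assume citation scores and weights are given as plain Python lists.

def total_citation_score(citations, weights=None):
    """TCS: (weighted) sum of citation scores."""
    if weights is None:
        weights = [1] * len(citations)  # full counting
    return sum(w * c for w, c in zip(weights, citations))

def mean_citation_score(citations, weights=None):
    """MCS: (weighted) mean of citation scores."""
    if weights is None:
        weights = [1] * len(citations)
    return total_citation_score(citations, weights) / sum(weights)

# Example: three papers, the last counted fractionally
citations = [10, 4, 7]
weights = [1, 1, 0.5]
print(total_citation_score(citations, weights))  # 17.5
print(mean_citation_score(citations, weights))   # 7.0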

Number / % of Highly Cited Publications

Although the actual number of citations to a publication might be of interest, it might also be of interest whether a publication is “highly cited”. To implement this metric, a threshold needs to be specified in terms of the number of citations above which a publication is classified as highly cited. There is no universal definition of such a threshold, and any threshold is somewhat arbitrary; one could set it, for example, to 10, 20, 50 or 100 citations, depending on how stringent the metric should be.

This metric provides a reasonable indicator of citation impact. The number or percentage of highly cited publications provides a simplified view of the number of citations. It might sometimes be easier to interpret than the actual number of citations. Moreover, citations can sometimes be very skewed, and a single very highly cited publication may greatly affect the average. The number or percentage of highly cited publications is more robust to such outliers.

The simplification of citations to a binary distinction between highly cited or not also entails some limitations. It may limit the extent to which different publications can be compared. This is especially relevant for very small sets of publications, for which this metric might be too coarse grained. Another limitation is that the threshold may not be equally meaningful across fields and years. The same threshold might be relatively difficult to reach in some fields, or it may take many years before the threshold is reached. Hence, this metric cannot be compared across fields and years. If comparison across fields and years is important, or if they are relevant in other ways, such as when they may be confounding factors, it might be preferred to use normalised highly cited publications (see section …).

Measurement

The basis of this metric is counting citations. Counting citations implies that references of publications need to be linked to the publications whose citations we want to count. There are some challenges and limitations involved with counting citations, described in more detail in the data source section. Here, we assume that we have somehow obtained citation counts for all papers of interest.

In more formal notation, let \(c_i\) be the citation score of a paper \(i\), and let \(S\) be the set of papers of interest with \(n = |S|\) papers in total. Let \(c^*\) be the threshold set for identifying a highly cited publication. Then we can define whether a publication is highly cited or not as follows

\[h_i = \begin{cases} 1 & \text{if~} c_i > c^* \\ 0 & \text{otherwise} \\ \end{cases} \]

The number of highly cited publications can then be defined as

\[\text{NHCP} = \sum_{i \in S} h_i\]

and the proportion can be defined as

\[\text{PHCP} = \frac{1}{n} \sum_{i \in S} h_i\]

If we use fractionalisation then each paper \(i\) is assigned a weight \(w_i\). We can then define the number/percentage of highly cited publications as

\[\text{NHCP} = \sum_{i \in S} w_i h_i\]

and similarly

\[\text{PHCP} = \frac{1}{w} \sum_{i \in S} w_i h_i,\]

where \(w = \sum_{i \in S} w_i\).

Indeed, if we set \(w_i = 1\), this definition is consistent with the non-fractionalised (i.e. full counting) definition. The percentage of highly cited publications is simply the number of highly cited publications divided by the total weight \(\text{PHCP} = \frac{1}{w} \text{NHCP}\).
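A minimal sketch of these definitions, with an illustrative function name and threshold:

def highly_cited_counts(citations, threshold, weights=None):
    """Return (NHCP, PHCP) for a given citation threshold."""
    if weights is None:
        weights = [1] * len(citations)  # full counting
    h = [1 if c > threshold else 0 for c in citations]
    nhcp = sum(w * hi for w, hi in zip(weights, h))
    phcp = nhcp / sum(weights)
    return nhcp, phcp

# Example: threshold of 50 citations
print(highly_cited_counts([120, 3, 55, 40], threshold=50))  # (2, 0.5)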

Avg. / Total Normalised Citations (MNCS / TNCS)

One limitation of citation counts is that they are not comparable across fields and publication years. This is, in part, because different fields have different citation practices: for instance, publications in Cell Biology can easily have 60 references, while publications in Mathematics typically only have about 20 references. In addition, publications that were published longer ago have had more time to accumulate citations. Neither of these two factors is considered relevant to citation impact. Removing these uninformative differences may make citation counts more informative about citation impact and may render them more comparable across fields and publication years.

We normalise citations by dividing the actual number of citations received by the expected number of citations (Waltman and Eck 2019). The expected number of citations can be calculated in various ways. Typically, it is based on the publication year and the field of publication. Sometimes other factors, such as document type, are also included in the basis for the expected number of citations.

As with raw citation counts, we can aggregate normalised citation counts in different ways. We call the normalised citation count of an individual paper its normalised citation score (NCS). An NCS above 1.0 is above average, and an NCS below 1.0 is below average.

For a given set of publications, we can calculate the mean of the NCS of all publications, which we call the mean normalised citation score (MNCS). An MNCS above 1.0 is above average, and an MNCS below 1.0 is below average. We can also calculate the sum of the NCS, which we call the total normalised citation score (TNCS).

This metric provides a good indicator for citation impact. The benefit of this metric is that it is comparable across fields and publication years. When working with large sets of publications that include publications from heterogeneous fields and publication years, normalisation is especially relevant. In these cases, differences in citation counts might arise because of differences in composition of research fields and publication years. For that reason, it is usually recommended to use normalised citation counts to correct for differences in fields and publication years when working with large heterogeneous sets of publications.

The disadvantage of normalised citation counts is that they are more opaque. People might be less familiar with these types of statistics and may have more difficulty interpreting them. For that reason, it is advisable to work with “raw”, unnormalised, citation counts when sets of publications are more homogeneous or relatively small. At the individual paper level, a normalised citation count is most likely less informative to most people than the “raw” citation count (see section …).

Measurement

The basis of this metric is counting citations. Counting citations implies that references of publications need to be linked to the publications whose citations we want to count. There are some challenges and limitations involved with counting citations, described in more detail in the data source section. Here, we assume that we have somehow obtained citation counts for all papers of interest.

In addition, this metric requires calculating expected citation counts. Calculating expected citation counts can be challenging and requires access to many other publications, typically all publications in a database. For this reason, calculating normalised indicators can present considerable challenges. We will here explain the calculation of the expected citation counts for publications in the same field and in the same year, where we assume publications are assigned to exactly one field only3. Note that the normalisation hence depends on the field classification used. See our indicator of fields for more details.

In more formal notation, let \(c_i\) be the citation score of a paper \(i\), let \(f_i\) be the field of paper \(i\) and let \(y_i\) be the year of publication of paper \(i\). The expected number of citations \(e_{fy}\) for a publication in field \(f\) and year \(y\) is then calculated as the average number of citations for publications in the same field and the same year. Let \(S_{fy} = \{i \mid f_i = f \text{~and~} y_i = y\}\) be the set of publications in the same field and year. Let \(n_{fy} = |S_{fy}|\) be the number of such publications. Then, the expected number of citations in the same field and year can be defined as

\[e_{fy} = \frac{1}{n_{fy}} \sum_{i \in S_{fy}} c_i.\]

The normalised citation score \(c'_i\) can then be defined as

\[c'_i = \frac{c_i}{e_{f_i,y_i}}.\]
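For example, a publication with \(c_i = 12\) citations, published in a field and year for which the expected number of citations is \(e_{f_i,y_i} = 4\), has a normalised citation score of \(c'_i = 12 / 4 = 3\): it is cited three times more often than expected.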

Let \(S\) denote the set of papers of interest with \(n = |S|\) papers in total. Then the total normalised citation score (TNCS) can be defined as

\[\text{TNCS} = \sum_{i \in S} c'_i\]

and the mean normalised citation score (MNCS) can be defined as

\[\text{MNCS} = \frac{1}{n} \sum_{i \in S} c'_i\]

If we use fractionalisation then each paper \(i\) is assigned a weight \(w_i\). We can then define the TNCS as the weighted sum

\[\text{TNCS} = \sum_{i \in S} w_i c'_i\]

and similarly define MNCS as the weighted average

\[\text{MNCS} = \frac{1}{w} \sum_{i \in S} w_i c'_i,\]

where \(w = \sum_{i \in S} w_i\).

Indeed, if we set \(w_i = 1\), this definition is consistent with the non-fractionalised (i.e. full counting) definition. The MNCS is simply the TNCS divided by the total weight \(\text{MNCS} = \frac{1}{w} \text{TNCS}\).
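Putting these steps together, the following minimal sketch computes the MNCS from a list of papers with a field, year and citation count; the data layout and function name are our own assumptions, and in practice the expected values should be computed over all publications in the database rather than over the selected set only.

from collections import defaultdict

def mean_normalised_citation_score(papers, selection):
    """Compute the MNCS (full counting) for the papers whose ids are in `selection`.

    `papers` is a list of dicts with keys 'id', 'field', 'year' and 'citations',
    ideally covering the whole database so that the expected values
    are not based on the selection alone."""
    # Expected citations: average over all papers in the same field and year
    totals, counts = defaultdict(float), defaultdict(int)
    for p in papers:
        key = (p["field"], p["year"])
        totals[key] += p["citations"]
        counts[key] += 1
    expected = {key: totals[key] / counts[key] for key in totals}

    # Normalised citation scores of the selected papers
    ncs = [p["citations"] / expected[(p["field"], p["year"])]
           for p in papers if p["id"] in selection]
    return sum(ncs) / len(ncs)

# Example with four hypothetical papers
papers = [
    {"id": "p1", "field": "Mathematics", "year": 2020, "citations": 2},
    {"id": "p2", "field": "Mathematics", "year": 2020, "citations": 6},
    {"id": "p3", "field": "Cell Biology", "year": 2020, "citations": 40},
    {"id": "p4", "field": "Cell Biology", "year": 2020, "citations": 20},
]
print(mean_normalised_citation_score(papers, {"p2", "p3"}))  # (1.5 + 1.33...) / 2 ≈ 1.42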

The MNCS indicator is also implemented in several data sources under different names. In InCites, based on Web of Science, this indicator is known as the Category Normalised Citation Impact (CNCI). In SciVal and Scopus, this indicator is known as Field-Weighted Citation Impact (FWCI). In Dimensions, this indicator is known as the Field Citation Ratio (FCR). In OpenAlex, this indicator is not (yet) available.

Number / % of Normalised Highly Cited Publications

One limitation of the highly cited publications metric is that it is not comparable across fields and years. A single threshold is used, while different fields and publication years might require different thresholds. In order to normalise this indicator, one could therefore choose different thresholds for different fields and publication years. To have comparable thresholds across contexts, the thresholds are usually defined in terms of a percentile of interest. That is, “highly cited” is then operationalised as belonging to the top x% of publications in the same field and publication year in terms of citations. Different thresholds of the top x% can be set, with common choices being the top 50%, 10%, 5% or 1%. For this reason, this metric is sometimes also known as the “Top x%” indicator.

This metric provides a good indicator of citation impact. The number or percentage of normalised highly cited publications provides a simplified view of citation impact. It might sometimes be easier to interpret than the normalised citation score directly. Moreover, citations can sometimes be very skewed, and a single very highly cited publication may greatly affect the average. The number or percentage of highly cited publications is more robust to such outliers. In addition, this metric is comparable across fields and publication years. When working with large sets of publications that include publications from heterogeneous fields and publication years, normalisation is especially relevant. In these cases, differences might arise because of differences in composition of research fields and publication years. For that reason, it is usually recommended to use normalisation to correct for differences in fields and publication years when working with large heterogeneous sets of publications.

The simplification of citations to a binary distinction between highly cited or not also entails some limitations. It may limit the extent to which different publications can be compared. This is especially relevant for very small sets of publications, for which this metric might perhaps be too coarse grained.

Measurement

The basis of this metric is counting citations. Counting citations implies that references of publications need to be linked to the publications whose citations we want to count. There are some challenges and limitations involved with counting citations, described in more detail in the data source section. Here, we assume that we have somehow obtained citation counts for all papers of interest.

In more formal notation, let \(c_i\) be the citation score of a paper \(i\), and let \(S\) be the set of papers of interest with \(n = |S|\) papers in total. Let \(c^*\) be the threshold set for identifying a highly cited publication; for this normalised metric, the threshold is determined separately for each field and publication year, based on the chosen percentile (see the section on ties below). Then we can define whether a publication is highly cited or not as follows

\[h_i = \begin{cases} 1 & \text{if~} c_i > c^* \\ 0 & \text{otherwise} \\ \end{cases} \]

The number of highly cited publications can then be defined as

\[\text{NHCP} = \sum_{i \in S} h_i\]

and the proportion can be defined as

\[\text{PHCP} = \frac{1}{n} \sum_{i \in S} h_i\]

If we use fractionalisation then each paper \(i\) is assigned a weight \(w_i\). We can then define the number/percentage of highly cited publications as

\[\text{NHCP} = \sum_{i \in S} w_i h_i\]

and similarly

\[\text{PHCP} = \frac{1}{w} \sum_{i \in S} w_i h_i,\]

where \(w = \sum_{i \in S} w_i\).

Indeed, if we set \(w_i = 1\), this definition is consistent with the non-fractionalised (i.e. full counting) definition. The percentage of highly cited publications is simply the number of highly cited publications divided by the total weight \(\text{PHCP} = \frac{1}{w} \text{NHCP}\).

Ties

When counting citations there are often multiple publications that have exactly the same number of citations. Defining whether a publication belongs to the top x% can then be challenging, if we want to ensure that overall, exactly x% of the publications are in the top x%. Consider for example the following publications and citations:

Publication Citations
A 0
B 1
C 2
D 2
F 3

In total, there are five publications, and if we want to determine the top 50%, we would have to assign 2.5 publications to the top 50%. Clearly, this is impossible if we only assign publications to the top 50% in their entirety. The solution again lies in fractionalisation. Let us illustrate this with the example. Publication F should clearly be in the top 50%. Publications C and D cannot both be fully in the top 50%, because then the top 50% would cover 60% of the publications. If neither belongs to the top 50%, the top 50% would only cover 20% of the publications. The solution therefore is that publications C and D together “fill up” the remaining 30%, or 1.5 (=30% of 5) publications. Hence, if we assign publications C and D each for a fraction of 0.75 to the top 50%, the (weighted) sum of the publications in the top 50% is 2.5, totalling 50% of the publications.

More formally, let \(c^*\) be the threshold such that fewer than \(x\%\) of publications have citations strictly higher than \(c^*\), while publications with citations higher than or equal to \(c^*\) make up at least \(x\%\). Denote by \(n_>\) the number of publications with citations strictly higher than \(c^*\), by \(n_=\) the number of publications with citations equal to \(c^*\), and by \(n\) the total number of publications. The threshold \(c^*\) should then be defined such that \(n_> < x n\) while \(n_> + n_= \geq x n\), with \(x\) denoting the top x% as a fraction. The fraction with which to count publications with citations equal to the threshold can then be calculated as

\[w = \frac{x n - n_{>}}{n_=}\]

Note that indeed \(0 < w \leq 1\), so that we always assign a valid fraction of each publication to the top x%. For more details, please refer to Waltman and Schreiber (2013).
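A minimal sketch of this procedure, following the description above (the function name and data layout are ours):

def top_x_weights(citations, x):
    """Assign each paper a weight in [0, 1] for membership of the top x%
    (x as a fraction, e.g. 0.5 for the top 50%), handling ties."""
    n = len(citations)
    # Find the threshold c*: the highest citation value at which the top x% is filled
    for c_star in sorted(set(citations), reverse=True):
        n_above = sum(1 for c in citations if c > c_star)
        n_equal = sum(1 for c in citations if c == c_star)
        if n_above < x * n <= n_above + n_equal:
            break
    w_equal = (x * n - n_above) / n_equal  # fraction for papers tied at the threshold
    return [1 if c > c_star else (w_equal if c == c_star else 0)
            for c in citations]

# Example from the table above: top 50% of five publications
print(top_x_weights([0, 1, 2, 2, 3], 0.5))  # [0, 0, 0.75, 0.75, 1], summing to 2.5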

Datasources

OpenAlex

OpenAlex covers citations based on previously gathered data from Microsoft Academic Graph, but mostly relies on Crossref to index new citations. OpenAlex offers a user interface that is at the moment still under active development, an open API, and the possibility to download the entire data snapshot. The API is rate-limited, but a premium account is available. Documentation for the API is available at https://docs.openalex.org/.

It is possible to retrieve the number of citations for a particular publication in OpenAlex, for example by using a third-party package for Python called pyalex.

import pyalex as alx
alx.config.email = "mail@example.com"  # identify ourselves to OpenAlex (polite pool)

# Retrieve a single work by its OpenAlex identifier
w = alx.Works()["W3128349626"]

# Number of citations this work has received
c = w["cited_by_count"]

Doing this for multiple publications makes it possible to calculate citation indicators for larger sets of publications, as explained in the previous sections. When large amounts of data need to be processed, it is recommended to download the full data snapshot and work with it directly.
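For a small set of works, a minimal sketch using the API might look as follows; the second identifier is only illustrative.

import pyalex as alx

alx.config.email = "mail@example.com"

# Illustrative OpenAlex work identifiers
work_ids = ["W3128349626", "W2741809807"]

# Retrieve the citation count of each work
citations = [alx.Works()[wid]["cited_by_count"] for wid in work_ids]

# Mean citation score (MCS) of this small set
mcs = sum(citations) / len(citations)
print(mcs)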

At the moment, OpenAlex doesn’t make normalised citation counts or highly cited indicators available. Although these can be constructed based on the entire database, this requires more effort, and cannot directly be done by a single API call.

Dimensions

Dimensions is a citation database that takes a comprehensive approach to indexing publications. It offers limited free access through its user interface. Paid access is available via its API and via its database on Google BigQuery. It also offers the possibility to apply for access to the API and/or Google BigQuery for research purposes. The API is documented at https://docs.dimensions.ai/dsl. The (mean) normalised citation score (NCS) is called the Field Citation Ratio in Dimensions.

The database is closed access, and we therefore do not provide more details about API usage.

Scopus

Scopus is a citation database with a relatively broad coverage. Its data is closed, and citation scores are generally available only through a paid subscription. It does offer the possibility to apply for access for research purposes through the ICSR Lab. Some additional documentation of their metrics is available at https://www.elsevier.com/products/scopus/metrics, in particular in the Research Metrics Guidebook; documentation for the dataset accessible through the ICSR Lab is available separately. The (mean) normalised citation score (NCS) is called the Field-Weighted Citation Impact (FWCI) in Scopus.

The database is closed access, and we therefore do not provide more details about API usage.

Web of Science

Web of Science is a citation database that takes a more selective approach to indexing publications. Its data is closed and citation scores are available only through a paid subscription. InCites, an analytical tool based on Web of Science, also offers normalised citation scores and the ability to identify highly cited publications. Its normalised citation score is termed the Category Normalised Citation Impact (CNCI).

The database is closed access, and we therefore do not provide more details about API usage.

Known correlates

As already clarified, citations are affected in general by field and publication year, and these are quite clearly causal effects. There are many other factors that correlate with citations (Onodera and Yoshikane 2015), for most of which it is unclear whether the effect is causal. One factor that is consistently associated with more citations is collaboration (Larivière et al. 2015), which is potentially driven by network effects (Schulz et al. 2018). In addition, there is evidence for a clear causal effect on citations of the journal in which a publication appears (Traag 2021).

References

Aksnes, Dag W., Liv Langfeldt, and Paul Wouters. 2019. “Citations, Citation Indicators, and Research Quality: An Overview of Basic Concepts and Theories.” SAGE Open 9 (1): 215824401982957. https://doi.org/10.1177/2158244019829575.
Bellis, Nicola De. 2009. Bibliometrics and Citation Analysis: From the Science Citation Index to Cybermetrics. Scarecrow Press.
Larivière, Vincent, Yves Gingras, Cassidy R. Sugimoto, and Andrew Tsou. 2015. “Team Size Matters: Collaboration and Scientific Impact Since 1900.” Journal of the Association for Information Science and Technology 66 (7): 1323–32. https://doi.org/10.1002/asi.23266.
Latour, Bruno. 1988. Science in Action: How to Follow Scientists and Engineers Through Society. Cambridge, MA: Harvard University Press.
Merton, Robert K. 1973. The Sociology of Science: Theoretical and Empirical Investigations. University of Chicago Press.
Onodera, Natsuo, and Fuyuki Yoshikane. 2015. “Factors Affecting Citation Rates of Research Articles.” Journal of the Association for Information Science and Technology 66 (4): 739–64. https://doi.org/10.1002/asi.23209.
Radicchi, Filippo, Santo Fortunato, and Claudio Castellano. 2008. “Universality of Citation Distributions: Toward an Objective Measure of Scientific Impact.” Proceedings of the National Academy of Sciences of the United States of America 105 (45): 17268–72. http://www.pnas.org/cgi/content/abstract/105/45/17268.
Schulz, Christian, Brian Uzzi, Dirk Helbing, and Olivia Woolley-Meza. 2018. “A Network-Based Citation Indicator of Scientific Performance.” https://doi.org/10.48550/arXiv.1807.04712.
Traag, V. A. 2021. “Inferring the Causal Effect of Journals on Citations.” Quantitative Science Studies 2 (2): 496–504. https://doi.org/10.1162/qss_a_00128.
Waltman, Ludo, and Nees Jan van Eck. 2019. “Field Normalization of Scientometric Indicators.” Springer Handbook of Science and Technology Indicators, 281–300. https://doi.org/10.1007/978-3-030-02511-3_11.
Waltman, Ludo, Nees Jan van Eck, Thed N van Leeuwen, Martijn S Visser, and Anthony F J van Raan. 2011. “Towards a New Crown Indicator: An Empirical Analysis.” Scientometrics 87 (3): 467–81. https://doi.org/10.1007/s11192-011-0354-5.
Waltman, Ludo, and Michael Schreiber. 2013. “On the Calculation of Percentile-Based Bibliometric Indicators.” J. Am. Soc. Inf. Sci. Technol. 64 (2): 372–79. https://doi.org/10.1002/asi.22775.

Footnotes

  1. The publication year can still be relevant, even when we take similar so-called citation windows. For example, we can count citations for only 10 years after the year of publication. However, even then, publications published in 1990 (counting citations until 2000) will show a different average number of citations from publications published in 2000 (counting citations until 2010). This is because there has been growth in the number of publications each year, and additionally an increase in the number of references per publication.↩︎

  2. Sometimes, normalisation also considers the “document type” of publications, differentiating for example between editorial letters, research articles or reviews. This would be reasonable if we expect the document type to be unrelated to the impact, as we expect for field of research and year of publication. Whether this is the case can be debated.↩︎

  3. That is, we assume that the field classifications used do not overlap. Some field classifications do overlap, in which case the normalisation becomes more complicated. One approach is to fractionalise publications per field, perform the normalisation within each field separately, and then average across fields (Waltman et al. 2011).↩︎

Reuse

Open Science Impact Indicator Handbook © 2024 by PathOS is licensed under CC BY 4.0

Citation

BibTeX citation:
@online{apartis2024,
  author = {Apartis, S. and Catalano, G. and Consiglio, G. and Costas,
    R. and Delugas, E. and Dulong de Rosnay, M. and Grypari, I. and
    Karasz, I. and Klebel, Thomas and Kormann, E. and Manola, N. and
    Papageorgiou, H. and Seminaroti, E. and Stavropoulos, P. and Stoy,
    L. and Traag, V.A. and van Leeuwen, T. and Venturini, T. and
    Vignetti, S. and Waltman, L. and Willemse, T.},
  title = {Open {Science} {Impact} {Indicator} {Handbook}},
  date = {2024},
  url = {https://handbook.pathos-project.eu/sections/2_academic_impact/citation_impact.html},
  doi = {10.5281/zenodo.14538442},
  langid = {en}
}
For attribution, please cite this work as:
Apartis, S., G. Catalano, G. Consiglio, R. Costas, E. Delugas, M. Dulong de Rosnay, I. Grypari, et al. 2024. “Open Science Impact Indicator Handbook.” Zenodo. 2024. https://doi.org/10.5281/zenodo.14538442.