
Journal-level metrics

This page explains the most common metrics used within scholarly publishing to measure impact at a journal level.

As a signatory of the San Francisco Declaration on Research Assessment (DORA), we believe it is important that researchers and research outputs are evaluated fairly and on their own merits, rather than on the basis of the journal in which the research is published. To help provide authors and readers with a richer, more nuanced understanding of journal performance, Cambridge University Press promotes a range of metrics on our website and in individual journals’ promotional materials.

We emphasise that none of these metrics is regarded as a perfect measure of a journal’s impact, nor should any of them be used to judge the impact of an individual article or author. We believe that more accurate and appropriate evaluation of research outputs will help to maximise the impact of research and benefit our authors and communities as a whole. All of the metrics discussed below apply at the journal level, not at the article or individual-author level, and should not be used as a proxy for the ‘prestige’ of individual authors or their articles.

Metrics are produced by a number of external companies, some of which are owned by publishers themselves, and should be considered in the context of how they are calculated or produced. Some are valid for comparisons between similar titles, but others are specific to a single title’s data and should be compared only longitudinally for that journal, not across titles.

The information below is just a starting point. Bibliometrics is an academic field in its own right, with its own dedicated journals. To explore the rich literature examining the relevance and limitations of the various metrics available, we recommend the Metrics Toolkit, HEFCE's Independent Review of Metrics and the University Ranking Watch blog.


Metrics provided by Clarivate Analytics

Clarivate publish a range of metrics in their annual Journal Citation Reports, typically released in the middle of the calendar year. Clarivate take their data from the Web of Science.

1. Journal Impact Factor (JIF)

The JIF purports to provide an ‘average citation score per article’ for a journal, dividing the number of citations (to all material published by the journal) by the number of ‘citable’ articles (which, for this purpose, exclude book reviews and editorial material). Two versions of the JIF are published:

a. Two-year JIF. The number of citations garnered by the journal in year Y to all articles published in years Y-1 and Y-2, divided by the number of citable articles published in years Y-1 and Y-2.

\[
\mathrm{JIF}_{2\text{-}\mathrm{year}}(Y) = \frac{\text{citations in year } Y \text{ to items published in years } Y{-}1 \text{ and } Y{-}2}{\text{citable articles published in years } Y{-}1 \text{ and } Y{-}2}
\]

b. Five-year JIF. The number of citations garnered by the journal in year Y to all articles published in years Y-1 to Y-5, divided by the number of citable articles published in years Y-1 to Y-5.

\[
\mathrm{JIF}_{5\text{-}\mathrm{year}}(Y) = \frac{\text{citations in year } Y \text{ to items published in years } Y{-}1 \text{ to } Y{-}5}{\text{citable articles published in years } Y{-}1 \text{ to } Y{-}5}
\]
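As an illustration, the following minimal Python sketch computes both versions from hypothetical citation and article counts for a single journal (all figures invented):

    # Hypothetical data for one journal: citations received in year Y to items
    # published k years earlier, and citable items published in each of those years.
    citations_to = {1: 120, 2: 150, 3: 90, 4: 60, 5: 40}   # key k = years before Y
    citable_items = {1: 80, 2: 75, 3: 70, 4: 85, 5: 90}

    def jif(window):
        """Average citations per citable item over the given window of years."""
        cites = sum(citations_to[k] for k in range(1, window + 1))
        items = sum(citable_items[k] for k in range(1, window + 1))
        return cites / items

    print(f"2-year JIF: {jif(2):.3f}")   # (120 + 150) / (80 + 75) = 1.742
    print(f"5-year JIF: {jif(5):.3f}")   # 460 / 400 = 1.150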

Clarivate rank journals in a number of subject categories on the basis of the JIF in each release of the Journal Citation Reports. However, there are significant limitations to the usefulness of comparing journals based on JIFs.

  • Only journals included in Clarivate’s Science Citation Index Expanded (SCIE) and Social Sciences Citation Index (SSCI) are given a JIF. This excludes thousands of journals for comparison purposes; there are many truly excellent journals that do not have a JIF.
  • Every journal publishes different numbers of articles in each year, within a differing range of article types, even if their aims and scopes are similar.
  • Journals publishing relatively small numbers of articles in a year may see significant year-on-year swings in the resulting JIF, especially the 2-year JIF.

A more useful application of the JIF is to track a single journal’s ‘performance’ longitudinally, over successive years.

The advantages and limitations of Impact Factor as a measure of research impact have been widely discussed. For more information and for an introduction to this controversial topic, see Impact Factors: Use and Abuse by Mayur Amin, and The Agony and the Ecstasy – The History and Meaning of the Journal Impact Factor by Michael Mabe and Eugene Garfield.

2. Journal Citation Indicator (JCI)

In 2021 Clarivate released a new metric – the Journal Citation Indicator (JCI). More information on this metric can be found on Clarivate’s blog.

The JCI is a field-normalised metric, representing the average citation impact of papers published in the prior three-year period. A number of factors are taken into account in the normalisation process, including subject/field, publication type and year of publication, which are intended to make it more reasonable to use the JCI to compare journals across disciplines.

A JCI value of 1.0 means that, across the journal, published papers received a number of citations equal to the average citation count in that journal’s category. However, because citation counts are not evenly distributed (most papers receive few citations, while a small number receive far more than the average), most journals will not have a JCI value above 1.0. A value of 1.5 means that the journal’s papers have received 50% more citations than the category average.

JCI calculations include a wider span of material and citation years than the JIF. For example, the 2020 JCI includes citations made in 2017-2020 to articles published in 2017-2019. Clarivate also gives a JCI score to a broader range of journals than those receiving a JIF: all journals in Clarivate’s Emerging Sources Citation Index (ESCI) and Arts & Humanities Citation Index (AHCI), along with its SCIE and SSCI indexes, receive a score.

Unlike the JIF calculation (which counts all citations), the JCI counts only citations to research and review material; citations to editorial material are not taken into account.
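Clarivate’s exact baselines are proprietary, but the underlying idea of field normalisation can be sketched in a few lines of Python. The ‘expected’ citation counts below are hypothetical stand-ins for the category, document-type and year averages used in the real calculation:

    from statistics import mean

    def normalised_impact(papers):
        """papers: list of (actual_citations, expected_citations) pairs, where
        'expected' is the average for the same category, document type and year."""
        return mean(actual / expected for actual, expected in papers)

    # Three hypothetical papers from one journal:
    papers = [(12, 8.0), (3, 6.0), (9, 6.0)]
    print(f"{normalised_impact(papers):.2f}")   # (1.5 + 0.5 + 1.5) / 3 = 1.17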

3. Eigenfactor

The Eigenfactor Score counts the citations a journal receives in one year to the articles it published in the previous five years, similar to the 5-year JIF. Using citation data from a five-year window smooths out sudden changes to some extent. Like the JIF, Eigenfactor Scores are only published for journals in Clarivate’s SCIE and SSCI indexes.

The major differences between the Eigenfactor Score and the JIF are that citations are weighted according to the relative influence of the citing journal (similar to Google’s PageRank), with highly-cited journals assigned a higher weight than poorly-cited journals, and that self-citations (citations to the journal by the journal itself) are omitted.

An alternative way of considering what the Eigenfactor Score represents is to view it as measuring the ‘popularity’ of a journal within all of the citations counted in the Web of Science within a given year. All other things being equal, if one journal publishes twice as many articles as another, that journal will have an Eigenfactor Score twice as big as that of the smaller journal.

Another major difference from the JIF is that the scores are normalised so that the Scores of all journals in the database sum to 100. As a result, as Clarivate adds more journals to the database, the Eigenfactor Scores of individual journals would, all else being equal, decrease over time.
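A rough sketch of the PageRank-style iteration behind the Eigenfactor Score is shown below. The citation matrix is invented, and the real algorithm (run over the full Web of Science journal network) includes refinements, such as a teleportation term weighted by article counts, that are omitted here:

    import numpy as np

    # cites[i][j] = citations from journal i to journal j, with self-citations
    # already removed (hence the zero diagonal). Figures are hypothetical.
    cites = np.array([
        [0, 30, 10],
        [20, 0, 5],
        [5, 15, 0],
    ], dtype=float)

    # Row-normalise so each journal's outgoing citations sum to 1, giving a
    # PageRank-style transition matrix.
    transition = cites / cites.sum(axis=1, keepdims=True)

    # Power iteration: influence flows repeatedly through the network, so a
    # citation from an influential journal ends up counting for more.
    influence = np.full(3, 1 / 3)
    for _ in range(100):
        influence = influence @ transition

    # Normalise so the scores across the whole database sum to 100.
    print(100 * influence / influence.sum())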

Some FAQs about the Eigenfactor Score can be found here.

4. Article Influence Score

The Article Influence Score is linked to the Eigenfactor Score, and also only published for journals in Clarivate’s SCIE and SSCI indexes. From the description provided by Clarivate: the Article Influence Score determines the average influence of a journal's articles over the first five years after publication.

It is calculated by dividing a journal’s Eigenfactor Score by the number of articles in the journal, normalized as a fraction of all articles in all publications. This measure is roughly analogous to the 5-year JIF in that it is a ratio of a journal’s citation influence to the size of the journal’s article contribution over a period of five years. The mean Article Influence Score is 1.00. A score greater than 1.00 indicates that articles in the journal tend to have above-average influence. A score less than 1.00 indicates that articles in the journal tend to have below-average influence.
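As documented by Clarivate, the calculation takes roughly the following form, where X_j is journal j’s share of all articles published over the five-year window:

\[
\mathrm{AI}_j = \frac{0.01 \times \mathrm{EF}_j}{X_j}, \qquad X_j = \frac{\text{articles published by journal } j \text{ over five years}}{\text{articles published by all journals over five years}}
\]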

As with the Eigenfactor Score, the Article Influence Score uses all of the articles and journals in the Web of Science database for comparative analysis.

Metrics provided by Scopus

Scopus typically releases a new annual dataset in the middle of the calendar year, including a variety of metrics. Monthly cumulative CiteScores are also published for each journal, tracking how citations are accumulated during the year.

1. CiteScore

CiteScore is very similar to the JIF, in that it is calculated as an average number of citations per publication. CiteScore counts the citations received over a four-year period by articles, reviews, conference papers, book chapters and data papers published in that same four-year period, and divides this by the number of publications during that time. The key differences from the JIF are as follows:

  • The calculation covers a 4-year period, rather than 2 or 5.
  • Citations made in any year of the four-year window are counted, not just those made in the most recent year.
  • All content types are considered citable items.
  • Source material is not limited to journal articles.

The full calculation can be found here, and the full dataset is publicly available here. Scopus includes a larger number of journals in its CiteScore dataset than Clarivate.
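Schematically, the CiteScore for a given year Y can be written as:

\[
\mathrm{CiteScore}(Y) = \frac{\text{citations received in years } Y{-}3 \text{ to } Y \text{ by eligible documents published in years } Y{-}3 \text{ to } Y}{\text{eligible documents published in years } Y{-}3 \text{ to } Y}
\]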

2. SCImago Journal Rank (SJR)

The SJR expresses the average number of weighted citations received in the selected year by the documents published in the selected journal in the three previous years. A weighted citation score means that citations from a prestigious journal are scored more highly than those from a title that has a smaller citation network. More information can be found here.

3. Source Normalised Impact per Paper (SNIP)

This metric is calculated using a method developed by the Centre for Science and Technology Studies (CWTS) in Leiden. The calculation is complex but can be summarised as ‘the ratio of a source's average citation count per paper and the citation potential of its subject field’. SNIP measures a source’s contextual citation impact by weighting citations based on the total number of citations in a subject field: it measures the average number of weighted citations to papers published in the previous three years. The weighting is based on the number of citations within a field; if there are fewer total citations in a research field, then each citation in that field is worth more. SNIP can therefore be used to compare journals across different subject areas.
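Schematically, and glossing over the detail of how each term is defined by the CWTS:

\[
\mathrm{SNIP} = \frac{\text{raw impact per paper (the source's average citations per paper)}}{\text{citation potential of the source's subject field}}
\]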

Metrics provided by Google Scholar

Google Scholar uses several metrics, updated annually, to accompany its indexing of journal articles and scholarly publications. Find out more about Google Scholar Metrics here.

1. h-index

The h-index of a publication is the largest number h, such that at least h articles in that publication were cited at least h times each. For example, a publication with five articles cited by 17, 9, 6, 2 and 1 other articles has an h-index of three. This is because three of the articles published – the ones cited 17, 9 and 6 times – were cited at least three times. This is a metric that is difficult to compare across subject fields because citation levels differ so much from field to field.

2. h-core

The h-core of a publication is the set of its top h most-cited articles, i.e. the articles on which the h-index is based. For example, the publication above has an h-core of three articles: those cited 17, 9 and 6 times.

3. h-median

The h-median of a publication is the median of the citation counts in its h-core. For example, the h-median of the publication above is nine. The h-median is a measure of the distribution of citations to the h-core articles.
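These three definitions translate directly into code. The short Python sketch below reproduces the worked example used above:

    from statistics import median

    def h_metrics(citations):
        """Return the h-index, h-core and h-median for a list of citation counts."""
        counts = sorted(citations, reverse=True)
        # h-index: the largest h such that at least h articles have >= h citations.
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        h_core = counts[:h]                        # the h most-cited articles
        h_median = median(h_core) if h_core else 0
        return h, h_core, h_median

    # Five articles cited 17, 9, 6, 2 and 1 times, as in the example above:
    print(h_metrics([17, 9, 6, 2, 1]))             # -> (3, [17, 9, 6], 9)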

4. h5-index, h5-core, and h5-median

Finally, the h5-index, h5-core and h5-median of a publication are, respectively, the h-index, h-core and h-median of the set of articles published by that publication in the last five complete calendar years.

Metrics provided by Altmetric

Altmetric evaluates impact by counting mentions on social media, in blogs and on mainstream news sites. Altmetric essentially provides an article-level metric, but it is worth listing here because publishers increasingly report on how many mentions a journal has attracted overall on Altmetric, and compare those year on year.

1. Altmetric Attention Score (AAS)

The Altmetric Attention Score (AAS) assigns a score for individual articles based on the number of mentions that article receives from a variety of online sources. These include news articles, blogs, Twitter, Facebook, Sina Weibo, Wikipedia, policy documents (per source), Q&A, F1000, Publons, Pubpeer, YouTube, Reddit, Pinterest, LinkedIn, Open Syllabus, and Google+.

The AAS also takes into consideration the ‘quality’ of the source (for example, a news article receives a higher score than a Facebook post) and who mentions the paper (the weighting takes into account whether the author of a mention regularly posts about scholarly articles). It should be noted that the AAS is largely limited to outputs published from 2011 onwards, and that it does not differentiate between positive and negative attention.
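Altmetric’s source list and weightings are proprietary and change over time, so the Python sketch below is purely illustrative: the weights are hypothetical stand-ins, chosen only to show the shape of the calculation (higher-quality sources contribute more per mention):

    # Hypothetical per-mention weights; the real AAS weightings are Altmetric's own.
    ILLUSTRATIVE_WEIGHTS = {
        "news": 8.0,
        "blog": 5.0,
        "wikipedia": 3.0,
        "twitter": 1.0,
        "facebook": 0.25,
    }

    def attention_score(mentions):
        """mentions: dict mapping source type to number of mentions of the article."""
        return sum(ILLUSTRATIVE_WEIGHTS.get(source, 0) * n
                   for source, n in mentions.items())

    print(attention_score({"news": 2, "twitter": 30, "blog": 1}))   # 2*8 + 30*1 + 1*5 = 51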

Altmetric donut

The Altmetric donut displays the numerical AAS in the centre, surrounded by a ring of colours reflecting the mix of sources mentioning the article: blues for Twitter, Facebook and other social media; yellow for blogs; red for mainstream media sources; and so on.