
Measuring impact


Journal and article bibliometrics

At Cambridge University Press, we understand that measuring the impact of published research is important to our authors (who want to see their work read, discussed and applied). Impact is equally important to institutions, libraries and publishers. While we recognise that 'impact' cannot simply be measured quantitatively, we nevertheless make every effort to extract as much data as possible about each article we publish in order to enrich our understanding of its reach and audience.

Bibliometrics is now considered an academic field in its own right, with its own dedicated journals. We have assembled a short guide below to introduce the various metrics available for measuring impact at article and journal level. It is important to remember that none of these metrics is regarded as a perfect measure of journal or article impact. To explore the rich literature that examines the relevance and limitations of the various metrics available, see HEFCE's Independent Review of Metrics and the University Ranking Watch blog as starting points.


Impact Factor

The Impact Factor is probably the most widely recognised measure of journal impact globally. It was established as the key measure for comparing journals in Clarivate Analytics' Journal Citation Reports® (JCR®), which was first published in 1975 as part of Clarivate Analytics' Science Citation Index® (SCI®) and Social Sciences Citation Index® (SSCI®).

The Impact Factor evaluates 'impact' by measuring the frequency with which the 'average article' in a journal is cited in a particular year. It does this by dividing the number of current year citations to articles published in a journal in the previous two years by the total number of items published in the journal in those two years.

How a journal impact factor is calculated
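
As a simple illustration of the arithmetic described above, the short Python sketch below calculates a two-year Impact Factor; the journal and all of the counts are hypothetical, invented purely for demonstration.

    # Hypothetical counts for an imaginary journal, used only to illustrate the calculation.
    citations_in_2023_to_2021_2022_items = 450   # citations received in the JCR year (2023)
    citable_items_2021 = 120                     # items published in the journal in 2021
    citable_items_2022 = 130                     # items published in the journal in 2022

    impact_factor_2023 = citations_in_2023_to_2021_2022_items / (citable_items_2021 + citable_items_2022)
    print(round(impact_factor_2023, 2))          # 1.8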


The JCR® publishes annual lists of journals divided by subject discipline and ranked by Impact Factor. Alongside the Impact Factor, the JCR® includes a growing number of other metrics that can be used for comparing journals; these include total annual citation numbers, cited half-life, immediacy index, a five-year Impact Factor and Eigenfactor®.

Although the Impact Factor provides a useful way of comparing journals, a key limitation is that the JCR® does not include or give an Impact Factor to every journal that exists. Clarivate Analytics is highly selective in its process for including additional journals on its lists and adds only a very small number each year. Authors and users should be aware that there could therefore be a truly excellent journal that does not yet have an Impact Factor. The citations that count towards a journal's Impact Factor are also taken from a limited pool of titles and an even more limited pool of books.

Another important limitation is that the Impact Factor does not take into account a range of factors that might influence an article's impact. These factors include: subject area (whether a field is pure or applied, general or specialist, its typical citation habits and citation density, and its tendency towards single- or multiple-authored papers all affect the Impact Factor); type of journal (short papers and reviews tend to be cited faster); size of journal (a single citation can make a big difference for journals that publish fewer articles); speed to publication (which alters the citation window); and amount of self-citation (which counts towards the Impact Factor).

We suggest the Impact Factor is best considered in the context of all the factors mentioned above, across a range of years and only ever in relation to other journals within the same subject discipline. Impact Factors should also be looked at in conjunction with a range of other metrics.

The advantages and limitations of Impact Factor as a measure of research impact have been widely discussed. For more information and for an introduction to this controversial topic, see Impact Factors: Use and Abuse by Mayur Amin and Michael Mabe and Eugene Garfield's The Agony and the Ecstasy – The History and Meaning of the Journal Impact Factor.

Cambridge Core is proud to publish a large number of high Impact Factor journals.


Altmetric

Launched in 2011, Altmetric is an alternative metric that tracks the attention a specific journal article receives on social media, blogs and mainstream news sites, and assigns a score based on that attention. The score reflects the number of mentions the paper receives, the 'quality' of the source of each mention, and who is mentioning the paper. This means that news articles generally receive a higher score than Facebook posts, and Twitter mentions from a journal publisher score lower than mentions from an independent academic in the field.

Altmetric is integrated with most major publishers, including Cambridge University Press, which displays an Altmetric score alongside most Cambridge-published journal articles.

Altmetric uses several icons to represent the Altmetric score, including the one seen most frequently on the Cambridge site: the Altmetric donut.

Altmetric donut

This shorthand visualisation displays the numerical score in the centre of the donut, with colours surrounding the donut that reflect the mix of sources mentioning the article – blues for Twitter, Facebook and other social media; yellow for blogs; red for mainstream media sources, and so on.

It is important to remember that Altmetric is an article-level metric, so the score represents only an individual article and does not reflect the other content in the journal.


Other important indices

As well as the Impact Factor and Altmetric, a myriad of other indices have appeared in recent years that seek to provide an alternative view of a journal's or a single article's 'impact'. Google Scholar and Scopus are probably the most widely known of these, and we include a brief overview of each below.

For information on additional indices and metrics and further reading on this topic, the following article examines the wide proliferation of bibliometric indicators: Wildgaard et al.'s 'A review of the characteristics of 108 author-level bibliometric indicators', published in Scientometrics (2014) 101: 125-158.

Google Scholar Metrics

Google Scholar uses several metrics that accompany its indexing of journal articles and scholarly publication. Find out more about Google Scholar Metrics here.

h-index

The h-index of a publication is the largest number h, such that at least h articles in that publication were cited at least h times each. For example, a publication with five articles cited by 17, 9, 6, 2 and 1 other articles has an h-index of three. This is because three of the articles published – the ones cited 17, 9 and 6 times – were cited at least three times.
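
To make the definition concrete, here is a minimal Python sketch of the h-index calculation; the function name is our own and the citation counts are those from the example above.

    def h_index(citation_counts):
        """Largest h such that at least h items have at least h citations each."""
        ranked = sorted(citation_counts, reverse=True)
        h = 0
        for rank, citations in enumerate(ranked, start=1):
            if citations >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([17, 9, 6, 2, 1]))  # 3, matching the example above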

h-core

The h-core of a publication is the set of top-cited h articles from the publication. These are the articles that the h-index is based on. For example, the publication above has an h-core of three articles – those cited 17, 9 and 6 times.

h-median

The h-median of a publication is the median of the citation counts in its h-core. For example, the h-median of the publication above is nine. The h-median is a measure of the distribution of citations to the h-core articles.
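
Continuing the same illustrative sketch, the h-core and h-median follow directly from the h-index; the helpers below assume the h_index function defined earlier.

    from statistics import median

    def h_core(citation_counts):
        """The top-cited h articles that the h-index is based on."""
        h = h_index(citation_counts)
        return sorted(citation_counts, reverse=True)[:h]

    def h_median(citation_counts):
        """Median citation count among the h-core articles."""
        return median(h_core(citation_counts))

    print(h_core([17, 9, 6, 2, 1]))    # [17, 9, 6]
    print(h_median([17, 9, 6, 2, 1]))  # 9, matching the example above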

h5-index, h5-core, and h5-median

Finally, the h5-index, h5-core and h5-median of a publication are, respectively, the h-index, h-core and h-median of only those of its articles that were published in the last five complete calendar years.
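
As a final step in the same sketch, the five-year variants simply restrict the input to articles published in the last five complete calendar years before applying the calculations above; the (year, citations) record format and the reference year are assumptions made for illustration.

    def h5_index(articles, reference_year=2024):
        """h-index over articles published in the five complete calendar years before reference_year.
        Each article is assumed to be a (publication_year, citation_count) pair."""
        window = range(reference_year - 5, reference_year)
        recent = [citations for year, citations in articles if year in window]
        return h_index(recent)

    # For reference_year=2024, only articles from 2019-2023 are counted.
    print(h5_index([(2018, 40), (2020, 17), (2021, 9), (2022, 6), (2023, 2)]))  # 3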

Scopus Journal Analyzer

Scopus® is a large abstract and citation database of peer-reviewed literature – indexing scientific journals, books and conference proceedings from more than 5,000 publishers. The metrics SJR and SNIP (see below) are calculated using the data gathered from Scopus's database. You can access Scopus Journal Analyzer here.

SCImago Journal Rank (SJR)

The SCImago Journal Rank (SJR) indicator expresses the average number of weighted citations received in the selected year by the documents published in the selected journal in the three previous years – i.e. weighted citations received in year X to documents published in the journal in years X-1, X-2 and X-3.


The SJR uses a weighted citation score, meaning that citations from a prestigious journal are scored more highly than those from a title with a smaller citation network. SCImago publishes journal ranking lists based on the SJR, split into various subject categories. These ranking lists are increasingly viewed as an alternative to the Journal Citation Reports®.
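
The full SJR calculation is iterative – journal prestige feeds back into the citation weights – but the basic shape of the metric can be sketched as follows; the weights and counts below are hypothetical and the prestige weighting is deliberately simplified.

    # Deliberately simplified sketch: the real SJR derives citation weights iteratively
    # from journal prestige. All figures below are hypothetical.
    weighted_citations = (
        300 * 1.5    # citations from a prestigious, well-connected journal (weight > 1)
        + 100 * 0.5  # citations from a title with a smaller citation network (weight < 1)
    )
    documents_published_in_previous_three_years = 400

    sjr_like_score = weighted_citations / documents_published_in_previous_three_years
    print(round(sjr_like_score, 2))  # 1.25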

Find out more about SCImago Journal Rank (SJR) here.

Source-Normalized Impact per Paper (SNIP)

SNIP measures the average number of citations in year X to papers published in the previous three years, and weights each citation according to the total number of citations in its subject field. If there are fewer total citations in a research field, then each citation in that field is worth more.
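
As with SJR, the published SNIP methodology involves more detail, but the normalisation idea can be sketched as follows; the figures and the 'citation potential' value are hypothetical.

    # Hypothetical illustration of source normalisation: the same raw citation rate
    # counts for more in a field where citations are scarce.
    citations_per_paper = 2.0          # average citations in year X to papers from the previous three years
    field_citation_potential = 4.0     # how heavily papers in this field are typically cited
    snip_like_score = citations_per_paper / field_citation_potential
    print(snip_like_score)             # 0.5; a lower citation potential would raise the score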

Find out more about Source-Normalized Impact per Paper here.