Three numbers live on every journal profile. Most researchers glance at one, assume it reflects quality, and move on. That's a mistake — because the three major journal metrics measure different things, count different citations, reward different behaviours, and tell different stories about the same publication.
This guide explains CiteScore vs impact factor vs SJR in plain English: what each one is, how it's calculated, and when to trust which. By the end, you'll know which metric your institution genuinely cares about and how to read them without being misled.
CiteScore is Elsevier's metric, calculated from Scopus data. It measures citations received in a four-year window by documents published in that same window, divided by the number of those documents. Longer window, broader document set, more stable result.
Because CiteScore uses a four-year window, it tends to be more stable than the two-year Impact Factor. A single blockbuster paper won't inflate it as dramatically. That's a strength — and a weakness, depending on what you're measuring.
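The arithmetic is simple enough to sketch. The figures below are hypothetical, but the formula is the published CiteScore definition: citations in the window divided by documents in the window.

```python
def citescore(citations_in_window: int, documents_in_window: int) -> float:
    """CiteScore for year Y: citations received in years Y-3..Y by
    documents published in Y-3..Y, divided by the document count."""
    return citations_in_window / documents_in_window

# Hypothetical journal: 1,200 documents published across the four-year
# window attracted 5,400 citations in that same window.
print(round(citescore(5400, 1200), 1))  # 4.5
```

Because the denominator includes the journal's entire document output, one blockbuster paper moves the ratio far less than it would under a two-year window.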
Journal Impact Factor (JIF) is the oldest metric of the three. It's maintained by Clarivate, published in the Journal Citation Reports each June, and calculated only for journals indexed in SCIE, SSCI, and (from the 2023 release) AHCI and ESCI.
Two things make JIF distinctive. First, the two-year window makes it more volatile — a single highly cited paper can bump a journal's JIF noticeably. Second, the denominator counts only "citable items" (articles and reviews), excluding editorials and commentaries, while the numerator still includes citations to those excluded items. Journals that publish lots of front matter therefore get inflated JIFs.
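That numerator/denominator asymmetry is easiest to see with numbers. The figures here are invented, but the mechanism is the standard JIF formula: citations in year Y to items published in Y-1 and Y-2, divided by citable items from those two years.

```python
def jif(citations_to_prior_two_years: int, citable_items: int) -> float:
    """JIF for year Y: citations in Y to items published in Y-1 and Y-2,
    divided by 'citable items' (articles + reviews) from Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items

# Hypothetical journal: 300 citable items drew 900 citations,
# plus 150 citations to editorials and commentaries.
research_citations, citable = 900, 300
editorial_citations = 150

# Editorial citations enter the numerator but their source items
# never enter the denominator, so they inflate the ratio:
print(round(jif(research_citations, citable), 2))                        # 3.0
print(round(jif(research_citations + editorial_citations, citable), 2))  # 3.5
```

Same research output, half a point of JIF difference — purely from front matter.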
SJR, or SCImago Journal Rank, is conceptually different. Instead of treating every citation as equal, it weights each citation by the prestige of the citing journal. A citation from Nature is worth more than a citation from a Q4 regional journal.
The algorithm borrows from Google's PageRank — citations from highly-ranked sources carry more weight, which in turn makes the cited journal's SJR higher. It's recursive prestige, tamed with maths.
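A toy version of that recursion can be sketched in a few lines. This is not the real SJR algorithm — SCImago adds a three-year window, size normalisation, and self-citation caps — but it shows the PageRank-style core: each citation transfers a share of the citing journal's prestige, and the citation matrix below is invented for illustration.

```python
def prestige_scores(cites, iterations=50, damping=0.85):
    """Iterative prestige in the spirit of PageRank/SJR.
    cites[i][j] = citations from journal i to journal j."""
    n = len(cites)
    scores = [1.0 / n] * n
    for _ in range(iterations):
        new = [(1 - damping) / n] * n
        for i in range(n):
            out = sum(cites[i])
            if out == 0:
                continue
            for j in range(n):
                # Journal i passes prestige in proportion to where it cites.
                new[j] += damping * scores[i] * cites[i][j] / out
        scores = new
    return scores

# Three hypothetical journals: A is cited heavily by both B and C,
# B is cited mostly by C, and C receives no citations at all.
cites = [
    [0, 1, 0],  # A cites B once
    [5, 0, 0],  # B cites only A
    [8, 1, 0],  # C cites A heavily, B once
]
a, b, c = prestige_scores(cites)
print(a > b > c)  # True: prestige flows toward the well-cited journal
```

The recursion is the point: A ranks highest not just because it receives the most citations, but because those citations come from journals that themselves carry prestige.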
"CiteScore counts citations. JIF counts recent citations. SJR counts who's citing you — and that's often the most honest signal of the three."
The practical effect: a journal with a modest CiteScore but a high SJR is cited selectively by prestigious outlets. A journal with a high CiteScore but a low SJR is cited often, but by less-regarded journals. For publication strategy, SJR often tells you more than raw citation count.
| Feature | CiteScore | Impact Factor | SJR |
|---|---|---|---|
| Database | Scopus | Web of Science | Scopus |
| Citation window | 4 years | 2 years | 3 years |
| Stability | High | Lower (more volatile) | High |
| Weights citations? | No | No | Yes — by prestige |
| Typical range | 0 – 80+ | 0 – 200+ | 0.1 – 20+ |
| Free to access? | Yes | Subscription | Yes |
No single metric is "best". The right one depends on what you're trying to decide and what your institution accepts.
- **Use CiteScore** when you need a stable, free-to-access metric across a broad set of journals, or your institution asks for a Scopus-indexed publication without specifying a tier. It's the most widely available starting point.
- **Use Impact Factor** when your institution, grant body, or tenure committee explicitly asks for JIF. Many science and engineering frameworks still anchor on it, and it's commonly required for promotions tied to SCIE/SSCI indexing in China, India, and many European universities.
- **Use SJR** when you want to judge genuine prestige rather than raw citation volume, or you're comparing journals in citation-light fields like niche humanities or regional sciences. SJR reflects the quality of citations, not just their count.
Many researchers track all three, but the one that counts for your paperwork is whichever your institution specifies. Check the written policy before you start metric-shopping.
A CiteScore of 4.0 is strong in sociology and modest in molecular biology. Always compare within the same subject category — the quartile already does this for you, which is why quartile often beats raw metric numbers.
All three metrics update annually. A journal's numbers this year are not a forecast for next year. Always check the most recent release before submitting.
A high-JIF journal whose scope doesn't match your paper is still going to reject you. Metrics filter; scope decides. For the full framework, read our complete guide to choosing a Scopus journal.
CiteScore, Impact Factor, and SJR are not interchangeable. CiteScore gives you stable Scopus data over a wide window. Impact Factor gives you the Web of Science standard your institution might require. SJR gives you a prestige-weighted signal that raw citation counts can't.
Use whichever your institution mandates — and cross-reference the other two to sanity-check the story. And remember: every metric ranks journals within a subject category, so the quartile is almost always more useful than the raw number.
Our AI Journal Finder shows CiteScore, Impact Factor, SJR, and quartile for every match — alongside APC and timeline data. Paste your abstract and compare on your own terms.
Try the Journal Finder →

Or get a personalised recommendation → Message a PhD editor