The impact factor and other measures of journal prestige
Academics worldwide face pressure to publish in prestigious English-language journals, and the journal impact factor (IF) is the most widely recognized indicator of journal prestige and influence. Accordingly, many researchers choose which journals to publish in based largely on the IF.[1]
Calculation of IF
The IF is basically a ratio. For example, the 2010 IF of Journal X is calculated as follows:[2]

2010 IF of Journal X = (All citations in 2010 to articles published in Journal X in 2009 and 2008) / (All citable articles published in Journal X in 2009 and 2008)
As you might have guessed, IFs for 2010 become available only in 2011 and so on. Journal IFs are calculated yearly and disclosed in the Journal Citation Reports (JCR) published by
Thomson Reuters.
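To make the arithmetic concrete, here is a minimal sketch using invented citation counts for a hypothetical Journal X:

```python
# Illustrative only: the counts below are invented for a hypothetical Journal X.
citations_in_2010_to_2008_2009_papers = 1200   # citations received in 2010
citable_articles_2008_2009 = 400               # articles published in 2008 and 2009

impact_factor_2010 = citations_in_2010_to_2008_2009_papers / citable_articles_2008_2009
print(f"2010 IF of Journal X: {impact_factor_2010:.1f}")   # prints 3.0
```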
Use and misuse of the IF
1. As an objective measure of journal prestige. There are a vast number of journals to choose from, and a journal's IF provides an objective measure of the overall quality of the work published in it. As a general rule, the higher the IF of a journal, the more prestigious it is considered to be.
2. To select journals for libraries. There are tens of thousands of journals in existence. The IF provides library administrators with a tool to decide which journals to retain in their collections and which new ones to acquire.
3. Academic evaluation. The IF is often used in academic evaluations of researchers for tenure, grants, funding, etc. However, this use is incorrect because the IF is meant to indicate the quality of an entire journal, not the quality of individual articles published in it.[3]
Be careful when using the IF
When using the IF to compare or assess journals, be on the lookout for the following:[4,5]
- The absolute value of a journal's IF is meaningless on its own. For example, a journal with an impact factor of 2 would not be very impressive in a field like microbiology, but it would be in oceanography. Specialty journals, such as disease-specific journals or journals focusing on disaster management, tend to have low IFs because their articles are read and cited by a small, specialized audience.[6]
- Citation practices differ across disciplines. Citation habits vary across research areas, so IFs should not be used to compare journals across disciplines. For example, citation frequency is much higher in medicine than in mathematics or engineering; therefore, medical journals have higher IFs than mathematics and engineering journals.[7]
- IFs are not very relevant in certain fields. For example, in computer science, conference proceedings, which are not assigned journal IFs, are considered the principal form of scientific publication.
- Not having an IF doesn't make a journal unworthy. Thomson Reuters calculates IFs based on its citation database, which indexes roughly half of the approximately 25,000 peer-reviewed journals[8] believed to be published. The coverage of the database is also uneven, with some subject areas indexed better than others. In addition, although the database includes journals from 60 countries, it carries few publications from under-developed countries and only a small number of journals that publish in languages other than English.
Not so fun fact
The emphasis on citations is one of the reasons that case reports, which are rarely cited, are difficult to publish. Indeed, some journals have stopped publishing case reports altogether, even though they can be highly useful to readers.
Changes in journal practice because of the IF
The IF is as important for journal editors as it is for researchers, if not more so. The IF is used to measure journal performance, and many journal editors are under pressure to increase their journal's IF.[9] Further, IFs can be manipulated.[3,10-12] For example, review articles have been found to attract the most citations, so journals may publish more review articles to increase their IF. Journal editors may select articles on the basis of how likely they are to be cited. Journals may also ask authors to cite other papers from the same journal (called "self-citations").
Alternatives to the IF
The IF ruled the roost for several decades, but alternative indicators of journal prestige have been developed in recent years. These indicators have been found to correlate closely with one another; in other words, journal rankings based on them tend to be similar, though the absolute rankings may differ somewhat. Researchers should therefore feel free to use any of the indicators below, and not limit themselves to the IF, when selecting journals to follow or publish in.[13-15]
SCImago Journal Rank (SJR)
Data source: Scopus
Can be found at: http://www.scimagojr.com/ (free)
How it's calculated: Citations from prestigious journals are given more weight than citations from lower-tier journals (similar to Google's PageRank algorithm; a toy sketch follows this entry). The SJR for 2010 is calculated by counting 2010 citations to papers published in 2007, 2008, and 2009 (a three-year window).
Why it's useful: SJR indicates which journals are more likely to have their articles cited by prestigious journals, not simply which journals are cited the most.
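The prestige-weighting idea behind the SJR can be illustrated with a toy PageRank-style iteration. This is only a conceptual sketch, not SCImago's actual algorithm; the journal names and citation counts are invented:

```python
# Toy illustration of prestige-weighted citation counting (PageRank-style).
# NOT the actual SJR algorithm; journals and citation counts are invented.

# citations[a][b] = number of citations journal a gives to journal b
citations = {
    "A": {"B": 30, "C": 10},
    "B": {"A": 5, "C": 5},
    "C": {"A": 1, "B": 1},
}
journals = list(citations)
prestige = {j: 1.0 / len(journals) for j in journals}  # start with equal prestige

for _ in range(50):  # iterate until the scores stabilize
    new_prestige = {j: 0.0 for j in journals}
    for citing, targets in citations.items():
        outgoing = sum(targets.values())
        for cited, n in targets.items():
            # A citation passes on prestige in proportion to the citing journal's own prestige.
            new_prestige[cited] += prestige[citing] * n / outgoing
    prestige = new_prestige

print(prestige)  # journals cited by prestigious journals end up with higher scores
```

In this toy network, a single citation from a high-prestige journal can count for more than several citations from low-prestige ones, which is the intuition behind the SJR.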
Journal Impact Factor (JIF)
Data source: ISI Web of Science
Can be found in: Journal Citation Reports (subscription required)
How it*s calculated: All citations are given equal weight. The IF is calculated over a two-year period.
Why it*s useful: It is the traditional and most widely accepted measure of journal prestige. Most people in the academic world know about and use the JIF.
Source Normalized Impact per Paper (SNIP)
Data source: Scopus
Can be found at: http://www.journalindicators.com/ (free)
How it's calculated: SNIP normalizes citations by field, which removes the variation seen in the JIF, whereby IFs are high in some fields and low in others (a toy sketch follows this entry). The same source also reports several related metrics, such as the citation potential in a journal's subject field.
Why it's useful: SNIP is a much more reliable indicator than the JIF for comparing journals across disciplines. It is also less open to manipulation by journals.[16]
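To make the field-normalization idea concrete, here is a toy sketch. The journals, numbers, and the simple divide-by-field-average normalization are invented for illustration; the actual SNIP calculation, which is based on a field's citation potential, is more elaborate:

```python
# Toy illustration of field-normalized citation impact.
# Invented numbers; the real SNIP calculation is more sophisticated.

raw_impact = {"MedJournal": 6.0, "MathJournal": 1.5}      # average citations per paper
field_average = {"MedJournal": 4.0, "MathJournal": 1.0}   # typical citations per paper in each field

normalized = {j: raw_impact[j] / field_average[j] for j in raw_impact}
print(normalized)  # {'MedJournal': 1.5, 'MathJournal': 1.5} -> comparable across fields
```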
Eigenfactor score (ES) and Article Influence Score (AIS)
Data source: ISI Web of Science
Can be found at: http://www.eigenfactor.org/ (free)
How it's calculated: The ES is similar to the SJR in that it gives greater weight to citations from prestigious journals, but it is calculated over a five-year period. Like SNIP, it also normalizes citations by field, and it attempts to mathematically model the time that a researcher spends with each journal. The AIS is analogous to the IF, except that it is derived from the ES, making it a more robust measure than the IF (a toy sketch of the per-article idea follows this entry).
Why it's useful: The evidence indicates that the ES and AIS are more robust indicators of journal prestige and influence than the IF.[14]
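The relationship between a whole-journal score (like the ES) and a per-article score (like the AIS) can be illustrated with a toy sketch; the journal names and numbers are invented, and the official AIS calculation differs in detail:

```python
# Toy illustration of converting a whole-journal influence score into a per-article score.
# Invented numbers; the official Article Influence Score calculation differs in detail.

journal_influence = {"BigJournal": 0.080, "SmallJournal": 0.008}   # Eigenfactor-like scores
article_share = {"BigJournal": 0.040, "SmallJournal": 0.002}       # share of all indexed articles

influence_per_article = {j: journal_influence[j] / article_share[j] for j in journal_influence}
print(influence_per_article)  # per article, SmallJournal's papers are more influential here
```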
Conclusion
The journal impact factor is a very useful tool for evaluating journals, but it must be used wisely. The decision on which journal to send your manuscript to should not rest solely on the IF. It is especially important to remember that a journal with a narrow focus (e.g., Diagnostic Molecular Pathology) may have a lower IF than a more broad-based journal (e.g., Journal of Pathology). Finally, researchers should consult other indicators of journal quality, such as the SNIP and the Eigenfactor score, to get a better idea of journal prestige and influence.
References
1. Brischoux F & Cook TR (2009). Juniors seek an end to the impact factor race. BioScience, 59(8): 638-9. doi: 10.1525/bio.2009.59.8.2.
2. Garfield E (1994). The Thomson Reuters impact factor. Last accessed: August 30, 2011. Available at: http://thomsonreuters.com/products_services/science/free/essays/impact_factor/
3. For example, see Seglen PO (1997). Why the impact factor of journals should not be used for evaluating research. BMJ, 314: 497-502.
4. Neuberger J & Counsell C (2002). Impact factors: Uses and abuses. European Journal of Gastroenterology & Hepatology, 14(3): 209-11.
5. Adler R, Ewing J, Taylor P (2008). Citation statistics: A report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS). Joint Committee on Quantitative Assessment of Research. Available at: http://www.mathunion.org/fileadmin/IMU/Report/CitationStatistics.pdf
6. Sloan P & Needleman I (2000). Impact factor. British Dental Journal, 189: 1. doi: 10.1038/sj.bdj.4800583.
7. Podlubny I (2005). Comparison of scientific impact expressed by the number of citations in different fields of science. Scientometrics, 64(1): 95-99. doi: 10.1007/s11192-005-0240-0.
8. House of Commons Science and Technology Committee (2011). Peer review in scientific publications, Vol. 1. House of Commons: London, UK.
9. Smith R (2006). Commentary: The power of the unrelenting impact factor? Is it a force for good or harm? International Journal of Epidemiology, 35: 1129-30. doi: 10.1093/ije/dyl191.
10. Smith R (1997). Journal accused of manipulating impact factor. BMJ, 314: 461.
11. Sevinc A (2004). Manipulating impact factor: An unethical issue or an editor's choice? Swiss Medical Weekly, 134: 410.
12. Falagas ME & Alexiou VG (2008). The top-ten in journal impact factor manipulation. Archivum Immunologiae et Therapiae Experimentalis, 56(4): 223-6. doi: 10.1007/s00005-008-0024-5.
13. Falagas ME, Kouranos VD, Arencibia-Jorge R, Karageorgopoulos DE (2008). Comparison of SCImago journal rank indicator with journal impact factor. The FASEB Journal, 22(8): 2623-8. doi: 10.1096/fj.08-107938.
14. Rizkallah J & Sin DD (2010). Integrative approach to quality assessment of medical journals using impact factor, eigenfactor, and article influence scores. PLoS One, 5(4): e10204. doi: 10.1371/journal.pone.0010204.
15. Rousseau R & the STIMULATE 8 Group (2009). On the relation between the WoS impact factor, the Eigenfactor, the SCImago Journal Rank, the Article Influence Score and the journal h-index. Available from E-LIS archive, ID: 16448; http://eprints.rclis.org/16448/.
16. Moed HF (2011). The source-normalized impact per paper is a valid and sophisticated indicator of journal citation impact. Journal of the American Society for Information Science and Technology, 62(1): 211-3. doi: 10.1002/asi.21424.
Contributors