
“Impact” is defined by the Oxford English Dictionary as “a marked effect or influence”, and most scientists would hope that their work will have a marked effect or influence on their field. In science, having “impact” has become quantifiable through the “impact factor”. The system for determining the impact of a journal was created in 1961 by Eugene Garfield, founder of the Institute for Scientific Information (ISI). The ISI is now incorporated into Thomson Reuters’ Web of Knowledge, which includes the hugely important Science Citation Index (SCI). The impact factor that Thomson Reuters calculates has become ubiquitous.
The formula for calculating a journal’s impact factor is simple:

\[
\text{impact factor for year } Y = \frac{\text{citations received in year } Y \text{ by items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]
The reasoning behind this formula is straightforward: a journal publishes articles that are cited by others; therefore, the journal has a measurable impact on the field (Garfield, 2006). The higher the impact factor, the greater the impact. Impact factors are recalculated every year.
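As a simple illustration (using invented numbers, not data from any actual journal): suppose a journal published 200 citable articles in 2010 and 2011 combined, and those articles were cited 700 times during 2012. Its 2012 impact factor would be:

\[
\frac{700 \text{ citations}}{200 \text{ citable articles}} = 3.5
\]

In other words, each article the journal published in the preceding two years was cited, on average, 3.5 times in 2012.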
Criticisms of the impact factor are numerous (see Box 2.1). Fundamentally, some argue that “citations are a shallow measure of research quality or impact” (Lillis & Curry, 2010).
Box 2.1 Criticisms of the Impact Factor
• A recent paper argues that the current metrics “undermine, rather than foster and reward, scholarship that matters” (Adler & Harzing, 2009, p. 73).
• The two-year window for counting citations disadvantages journals and disciplines with longer publishing timelines, where the citations an article will ultimately accrue are not yet visible within two years.
• Journals that publish many more articles, and disciplines that have many more journals, can obtain higher impact factors. Consequently, big journals in big disciplines are advantaged over smaller and highly specialized journals, and this may not be a true reflection of the importance of a journal.
• Young researchers who are trying to build a tenure and promotion file may avoid the more specialized journals with lower impact factors even though the work “would be better appreciated, published more quickly, and perhaps have more impact if they were published in specialised journals….This [practice] ultimately slows the diffusion of ideas into the research literature and stifles academic dialogue” (Segalla, 2008, cited in Adler & Harzing, 2009, p. 75).
• Journal policies sometimes encourage authors to cite other articles published by that same journal, a practice that can distort any objective indication of impact.
• Impact factors are used for purposes that were not intended. For example, impact factors are sometimes used for evaluating individuals (for the purpose of hiring, tenure and grant entitlement) and academic departments and institutions.
Nonetheless, the impact factor is now well entrenched in the world of scientific publishing. It is stated on the individual journal webpages of the world’s four major journal publishers (Elsevier, Springer, Taylor & Francis and Wiley-Blackwell, each with well over 1,000 journals) (Ware & Mabe, 2009). The impact factor is calculated for the 16,000+ journals included in the Web of Knowledge, comprising the Science Citation Index Expanded, Social Sciences Citation Index, Conference Proceedings Citation Index, and Arts & Humanities Citation Index (Thomson Reuters, 2011). The field of bibliometrics, which has grown up around the measuring of impact, is central to the ranking of journals. To achieve maximum impact in the bibliometric system that prevails today, scientists do best to publish in journals indexed in Thomson Reuters’ Web of Knowledge.
The weight accorded to a journal’s impact factor reflects the prominence of citation indexes in today’s measurement-oriented, globalized world. The Web of Knowledge indexes 16,183 journals, a large number. Yet those journals constitute only 24% of the “academic/scholarly” journals published in all languages that are included in Ulrich’s Periodicals Directory (Lillis & Curry, 2010, p. 17). Thus three-quarters of the scholarly journals published around the world are not counted in the key international rankings of institutions and nations. Moreover, the Web of Knowledge indexes are “heavily biased” toward journals published in English from English-speaking countries (Lillis & Curry, 2010, p. 18). Only 11.6% of the journals included publish in a language other than English (Brunner-Ried & Salazar-Muñiz, 2012). More minor U.S. journals are included than minor European ones, and relatively few non-English journals appear at all. Organizations that rank research publication output, such as the Organisation for Economic Co-operation and Development (OECD) and the World Bank, rely almost exclusively on the journal, research article and citation data produced by Thomson Reuters.
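A rough back-of-the-envelope check, assuming the 24% share is computed against Ulrich’s full count of “academic/scholarly” journals, conveys the scale of the exclusion:

\[
\frac{16{,}183}{0.24} \approx 67{,}400 \text{ journals in Ulrich’s}, \qquad 67{,}400 - 16{,}183 \approx 51{,}200 \text{ journals not indexed}
\]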
The statistics of production and the attendant rankings reveal trends in world science, particularly when we take into account the limited number of journals counted in the Web of Knowledge. As shown in Table 2.1, the United States has now been surpassed by the European Union in its share of world articles produced. However, papers published by American scientists continue to be cited in greater numbers than those by Europeans. Although China still ranks below Japan and the Asia-8, which includes India, its production of papers has grown four-fold.
The research captured in the international indexes is also used to rank universities worldwide. Highly visible listings created by Times Higher Education and Shanghai Jiao Tong University rank universities by a mixture of criteria, including the amount of research funds obtained, professor-student ratios, and the number of degrees conferred. Significant weight is placed on “quality of staff” (i.e., the number of highly cited researchers in a discipline) and on research output as counted in publications and citations.
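Such composite rankings can be understood, in sketch form, as a weighted sum of normalized indicators; the indicator names and weights below are illustrative assumptions, since each ranking body chooses (and periodically revises) its own:

\[
\text{score} = w_{1}\,(\text{highly cited staff}) + w_{2}\,(\text{publications and citations}) + w_{3}\,(\text{research funds}) + w_{4}\,(\text{degrees conferred}) + \cdots
\]

Consistent with the criteria above, the largest weights attach to the first two terms.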
Citation counts are valued for determining the importance of published work; they are used as a measure of the “quality” of the work. However, much as the Web of Science disproportionately favors journals published in English, so too do the citation counts it generates. In contrast, Google Scholar is used as a search engine in many parts of the world, since access is free to anyone with an internet connection. Scholars’ impact can look very different when Google Scholar is consulted. For example, the impact of 36 well-established Latin American scholars, each of whom has been publishing for more than 30 years, was compared across the two databases, Google Scholar and Web of Science (Table 2.2) (Brunner-Ried & Salazar-Muñiz, 2012). The much larger citation counts in Google Scholar suggest that the impact of Latin American academics throughout the region is significantly more substantial than the Web of Science metrics indicate.
The emphasis on publication metrics has created new demands and incentives for scientists in many parts of the world (Qiu, 2010; Englander & Uzuner-Smith, in press). In China, scientists are awarded cash prizes, housing benefits or other perks for publications in high-profile journals. Practicing doctors at a major surgical hospital in China are now required to publish at least one research paper per year in order to maintain their medical privileges (Yongyan Li, personal communication, March 2013). The pressure to “rack up publications” seems to encourage dubious research practices such as plagiarism, fabrication and falsification of data (Qiu, 2010, p. 142). There has been a concomitant rise in retractions of published papers, all of which had passed the peer review process. One biochemist expressed concern that “counting the number of publications, rather than assessing the quality of research, becomes the norm of evaluation” (Qiu, 2010, p. 143). Thus individuals, institutions and nations all emphasize producing a high number of publications, even if some of those papers are later retracted.
In sum, metrics have become central in determining value within science today. The emphasis on counting articles and the citations they accrue is highly visible in institutional, national and international rankings. The metrics that underlie the rankings are calculated using indexes such as those of the Web of Knowledge. Journals included in those indexes gain visibility, because they are the journals the databases search when a scientist looks for papers. Their articles are therefore more likely to be consulted and, subsequently, more likely to be cited, raising the journal’s likelihood of obtaining or maintaining a high impact factor. The metrics heavily favor publications in English, and publishing in English is more likely to produce citations in subsequent English-language articles. The desire of nations, institutions and scientists to rank highly in measures of research output and impact gives scientists ample reason to be cognizant of the impact factors of the journals in which they seek to publish (Englander & Uzuner-Smith, in press).