Journal Impact Factor and Lecturer Research Evaluation: Marrying Incompatibles in Zimbabwe


The purpose of this study is to clarify the confusion among some academics over the journal impact factor and its uses. It was motivated by the observation that, while the impact factor was designed for use by librarians, it is being misused to assess the quality of lecturers' research for tenure and promotion in some universities in Zimbabwe. The study was guided by qualitative research methodologies: data were collected through documentary analysis of internet materials on the history, purpose, uses and abuses of the impact factor.

The study revealed that the words "impact" and "quality" are not synonyms; hence journal impact factors cannot be used to assess the quality of research. The journal impact factor originated with two librarians, Gross and Gross (1927), who needed to identify the scientific periodicals they should buy for their college libraries. Garfield (1960) refined its application, and the Institute for Scientific Information commercialized it. The impact factor of a journal is a numerical measure reflecting the average number of citations to articles published in the journal within a period of two years. It is a useful measure of journal visibility within the literature of a particular discipline: it reflects a journal's influence in that field, not the quality of its research articles (Baum, 2011).

Critics of the journal impact factor point out that the mean is an inappropriate statistic for citation counts, whose distribution is skewed rather than linear. The impact factor can also be manipulated by journal editors and is therefore unreliable. Today the journal impact factor is being misused for ranking journals and for evaluating research articles and lecturers' research for tenure and promotion. Garfield (1998) denounced the use of the impact factor to evaluate the quality of journal articles, or of researchers who publish in journals with low impact factors. The European Association of Science Editors issued a statement against the evaluation of research using the impact factor in 2007.
The Joint Committee on Quantitative Assessment of Research disapproved of it in 2008. The Higher Education Funding Council pointed out that using the impact factor of journals as a surrogate for the impact of the articles published in them is assessing science in a fundamentally unscientific way. Researchers describe the use of the journal impact factor to evaluate the quality of research as "foolhardy" (Seglen, 1997), "dubious" (Amin and Mabe, 2000), and "bad scientific practice" (Brembs, Button and Munafo, 2013). Stephen (2012) concluded that those using the journal impact factor to rate the quality of research papers are "statistically illiterate".

This study concluded that research assessors who call for journal impact factors to evaluate the quality of research articles and researchers are marrying incompatible partners by using the wrong tool. It recommends that the journal impact factor remain a tool for librarians, confined to its initial purpose of identifying journals with high readership, and that those intending to use it to evaluate research be educated through published research papers clarifying the issue. In fact, by using the journal impact factor for research quality evaluation in 2014, sixteen years after Garfield denounced the practice in 1998, Zimbabwean academics at university level are using an expired drug.
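As an illustration only, the two-year impact factor calculation described above, and the critics' objection that a mean misrepresents a skewed citation distribution, can be sketched as follows. All figures here are hypothetical, not drawn from any real journal:

```python
from statistics import mean, median

def impact_factor(citations_in_year, citable_items_prev_two_years):
    """Two-year journal impact factor: citations received in year Y
    to items published in years Y-1 and Y-2, divided by the number
    of citable items published in those two years."""
    return citations_in_year / citable_items_prev_two_years

# Hypothetical journal: 240 citations in 2014 to the 120 articles
# it published in 2012-2013.
print(impact_factor(240, 120))  # 2.0

# The critics' point: per-article citation counts are highly skewed,
# so the mean (which the impact factor is) does not describe the
# typical article. Hypothetical citation counts for ten articles:
article_citations = [0, 0, 1, 1, 2, 2, 3, 4, 7, 220]
print(mean(article_citations))    # 24.0 -- inflated by one outlier
print(median(article_citations))  # 2.0  -- the typical article
```

The single heavily cited article drags the mean an order of magnitude above the median, which is why a journal-level average says little about any individual paper published in it.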

Keywords: Impact Factor, Incompatibles, Lecturer Research, Quality

Unique Article ID: BJE-151



Creative Commons Licence
This work by European American Journals is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License
