
Platinum Priority – Editorial and Reply from Authors
Referring to the article published on pp. 435–439 of this issue

Impact Factor for Ranking Academic Urologic Institutions

By: Michael Froehner

European Urology, Volume 61, Issue 3, March 2012, Pages 440-441

Published online: 01 March 2012


Refers to article:

Academic Ranking Score: A Publication-Based Reproducible Metric of Thought Leadership in Urology

Alexander Kutikov, Boris Rozenfeld, Brian L. Egleston, Mohit Sirohi, Raymond W. Hwang and Robert G. Uzzo

Accepted 14 October 2011

March 2012 (Vol. 61, Issue 3, pages 435-439)

Although serious criticism surrounds the measurement of the scientific activity of individuals or institutions by counting impact factors, the practice continues to enjoy great popularity in the scientific community and is difficult to resist [1] and [2]. In this issue, Kutikov and coworkers present an academic ranking system intended to quantify and compare the scientific output of leading institutions in the field of urology [3]. Rankings of the reputation of academic institutions or individuals are always interesting to read, particularly when one's own institution, or one personally, plays a prominent role. Providing objective and reproducible measures for such rankings could be an improvement. But do Kutikov et al. provide such a measure?

Large and well-funded institutions obviously produce more research papers than smaller institutions with less support. The time available for individual scientific activities may also differ considerably between institutions. In German academic urology, for instance, the number of patients treated per year per physician, as recorded in the publicly accessible quality reports, differs by a factor of >2. Urologists with less patient contact may have more time available for research activities. The same applies to full-time scientists, who are not equally distributed across academic institutions. Information regarding the financial and personnel resources available to academic institutions may be collected either by surveying scientific authorities or by counting publications and adding impact factors. It is questionable, however, whether the latter method is more useful to the general public, because it is one-dimensional and neglects other relevant indicators such as patient volume, clinical spectrum, working conditions, available equipment, and the reputation of the institution.

Kutikov et al. do not really discuss the limitations of the impact factor in detail, and they do not refer to important parts of the related literature [1], [2], [4], [5], and [6]. The impact factor is valid for assessing the quality of scientific journals but seems unsuitable for assessing the quality of individual papers or scientists [1], [5], and [6]. It measures the citation rate of the journal as a whole, not that of individual published articles. The citation rates of articles in the same journal may differ substantially, and the impact factor of the journal correlates poorly with the actual citation rate of individual articles [4] and [5].

Editors of scientific journals may try to manipulate impact factors in several ways: by publishing frequently cited review articles instead of original research papers, by publishing comments that may be cited but are not counted in the denominator when the impact factor is calculated, or by asking authors to cite more articles from their own journal [1], [2], and [6]. To increase their personal impact factor, scientists may prefer writing invited reviews or comments, which have a higher chance of being accepted in high-impact-factor journals than more laborious primary research articles.
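For context, the standard two-year journal impact factor is computed roughly as follows (a simplified sketch; the exact rules for what counts as a citable item are set by the database provider):

\[
\mathrm{IF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ by items published in years } Y-1 \text{ and } Y-2}{\text{citable items (articles and reviews) published in years } Y-1 \text{ and } Y-2}
\]

Because editorials and comments are generally not counted as citable items, any citations they attract enter the numerator without enlarging the denominator, which is why publishing them can inflate the ratio.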

The list of European institutions in the article by Kutikov et al. is problematic. It remains unclear whether these institutions are really the best scientific institutions in Europe or whether they were selected more or less arbitrarily. The authors also made no effort to investigate whether their method is valid for comparing leading European institutions quantitatively with North American ones. What does it mean when a North American institution has a score that is 5-10 times higher than that of a comparable institution in Europe? The research conditions may be better on the other side of the Atlantic, but by that much? The extreme size of these differences needs to be explained. They possibly reflect structural and volume differences, the known English-language bias of the impact factor, or the dominance of American publications in the underlying database [4], but it is hardly possible to estimate to what degree.

Kutikov et al. are convinced that their approach is more relevant than a subjective reputation ranking by a panel of individuals, but they do not provide convincing data to support their view. A somewhat more tentative interpretation of the data than that given in the take-home message would have been desirable.

Conflicts of interest

The author has nothing to disclose.

References

  • [1] K. Simons. The misused impact factor. Science. 2008;322:165.
  • [2] The PLoS Medicine editors. The impact factor game. PLoS Med. 2006;3:e291.
  • [3] A. Kutikov, B. Rozenfeld, B.L. Egleston, M. Sirohi, R.W. Hwang, R.G. Uzzo. Academic ranking score: a publication-based reproducible metric of thought leadership in urology. Eur Urol. 2012;61:435-439.
  • [4] P.O. Seglen. Why the impact factor of journals should not be used for evaluating research. BMJ. 1997;314:498-502.
  • [5] T. Opthof. Sense and nonsense about the impact factor. Cardiovasc Res. 1997;33:1-7.
  • [6] P. Dong, M. Loh, A. Mondry. The “impact factor” revisited. Biomed Digit Libr. 2005;2:7. http://www.bio-diglib.com/content/2/1/7.

Footnotes

Department of Urology, University Hospital “Carl Gustav Carus”, Technical University of Dresden, Fetscherstrasse 74, D-01307 Dresden, Germany

* Corresponding author. Tel. +49 351 4582447; Fax: +49 351 4584333.
