
Editorials

Unscientific practice flourishes in science

BMJ 1998;316:1036 (Published 04 April 1998) doi: https://doi.org/10.1136/bmj.316.7137.1036

Impact factors of journals should not be used in research assessment

Richard Smith, Editor, BMJ

    Education and debate p 1079

    All around the world, the scientific performance of individuals and research groups is being assessed using the impact factors of the journals in which they publish.1 Unfortunately, the indisputable evidence that this method is scientifically meaningless is being ignored. Those who assess the performance of researchers seem to be bewitched by the spurious precision of a number that is available to several decimal places.

    Most researchers accept that research funds should be concentrated on those who perform well. Performance must therefore be assessed—which is not easy. Britain has developed a system that Gareth Williams, a professor of medicine, describes as gathering misleading data and assessing them unscientifically and unaccountably using an inefficient, expensive, and wasteful procedure (p 1079).2 The result is that limited resources may be misapplied and research distorted by researchers playing games to score highly in the assessment exercise.

    One part of the assessment is to score researchers' performance by the impact factors of the journals in which they publish. The impact factor of a journal is in essence the number of citations in a given year to the articles it published in the previous two years, divided by the number of citable articles it published in those two years.1 3 Impact factors are calculated annually by the Institute for Scientific Information in Philadelphia and published in the Science Citation Index. They are an imperfect measure even of the quality of a journal because they are biased towards American journals, strongly distorted by specialty, and vulnerable to technical problems.1 Moreover, and crucially, impact factors are meaningless as a measure of the performance of individual scientists or research groups, for the simple reason that there is little correlation between the citation counts of individual articles and the impact factor of the journal that published them.1 Citation counts are highly skewed: a journal's impact factor is driven by a small minority of heavily cited articles, so it says little about the typical article the journal publishes.
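    For concreteness, the standard definition can be written out, together with a worked example of the skew just described. The numbers below are purely hypothetical and illustrative, not drawn from any real journal:

\[
\mathrm{IF}_{y} \;=\; \frac{\text{citations received in year } y \text{ by articles published in years } y-1 \text{ and } y-2}{\text{number of citable articles published in years } y-1 \text{ and } y-2}
\]

    Suppose a journal published 100 citable articles in 1995 and 1996, and those articles received 345 citations in 1997: its 1997 impact factor is 345/100 = 3.45. If five of those articles received 50 citations each (250 in all) while the other 95 received one citation each, the typical article is cited once, not 3.45 times, so the journal's number reveals almost nothing about any single article or its authors.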

    Eugene Garfield, the inventor of impact factors, has for many years warned those who want to assess the research performance of individuals and groups not to use impact factors. For example, he wrote in the BMJ in 1996: “Using the journal's average citation impact instead of the actual article impact … while expedient … is dangerous.”3 Per Seglen, a Norwegian professor, comprehensively demolished the use of impact factors in research assessment in the BMJ last year.1 Yet still the practice continues. It must stop.

    Acknowledgments

    Conflict of interest—The BMJ has an impact factor lower than that of the other big general medical journals but higher than that of most specialist journals.

    References

    1. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ 1997;314:498-502.
    2. Williams G. Misleading, unscientific, and unjust: the United Kingdom's research assessment exercise. BMJ 1998;316:1079-82.
    3. Garfield E. How can impact factors be improved? BMJ 1996;313:411-3.