Honorary authorship epidemic in scholarly publications? How the current use of citation-based evaluative metrics makes (pseudo)honorary authors of honest contributors to every multi-author article?
  1. Jozsef Kovacs
  1. Correspondence to Professor Jozsef Kovacs, Department of Bioethics, Institute of Behavioural Sciences, Semmelweis University, VIII. Nagyvarad ter 4, Budapest 1089, Hungary; kovjozs{at}

The current use of citation-based metrics to evaluate the research output of individual researchers is highly discriminatory, because they are applied uniformly to authors of single-author articles and to contributors of multi-author papers. In the latter case, these quantitative measures are counted as if each contributor were the single author of the full article: each and every contributor is assigned the full impact-factor score and all the citations that the article has received. This has a multiplication effect on contributors' citation-based evaluative metrics, because the more contributors an article has, the more undeserved credit is assigned to each of them. In this paper, I argue that this unfair system could be made fairer by requiring the contributors of multi-author articles to describe the nature of their contribution and to assign a numerical value to their degree of relative contribution. In this way, we could create a contribution-specific index of each contributor for each citation metric. This would be a strong disincentive against honorary authorship and publication cartels, because it would transform the current win-win strategy of accepting honorary authors in the byline into a zero-sum game for each contributor.

  • Scientific Research
  • Research Ethics
  • Professional-Professional Relationship
  • Professional Misconduct
  • Public Policy


In today's highly competitive scientific climate, quantitative metrics are widely used to measure the academic performance of individual researchers. The number of published papers, the cumulative impact factor (IF) of all the journals in which a researcher has published, and the number of citations of her/his papers are among the most frequently used measures of an individual scholar's research output. In this paper, I would like to draw attention to the unfairness and discriminatory nature of the current use of citation-based metrics, because they are applied in the same way to authors of single-author articles and to contributors of multi-author papers. I do not wish to endorse the use of these metrics. My premise is that if they are used at all, they should be used fairly, because performance appraisals based on them may affect researchers' promotion, hiring, tenure, salary and the allocation of research resources.1

Some well-known misuses of evaluative metrics

One of the most frequent misappropriations is to use the journal IF, which is a measure of the influence of a particular journal, to assess the worth of an individual researcher's paper published in that journal.2,3 This is doubly wrong. First, it mistakes the influence of a journal for the influence of a particular paper. Even influential journals may publish papers with no influence, so much so that Garfield mentions the 80/20 rule: even in reputable journals, 80% of citations refer to only 20% of the articles.4 A further problem is that individual researchers, as well as institutions and entire disciplines, tend to be evaluated by the cumulative IF score they produce, although it is well known that comparisons across different disciplines are not valid.5

Apart from these well-known misuses of metrics, however, there is one which is perhaps the most unjust, and is still widespread. This relates to the calculation of evaluative metrics of multi-author articles.

The concept of authorship

According to the International Committee of Medical Journal Editors (ICMJE) (Vancouver group) definition of authorship, authors should meet all of the following conditions6:

  1. substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data;

  2. drafting the article or revising it critically for important intellectual content; and

  3. final approval of the version to be published.

An honorary (guest) author is a person who does not fulfil the criteria of authorship and yet is listed as an author in the byline; a ghost author, by contrast, is one who fulfils the criteria of authorship but is not listed in the byline.

The absence of the concept of contributorship from contemporary citation metrics

An important turning point in terms of authorship was marked by the article by Rennie et al,7 who called attention to the fact that in our era of multi-author articles, the concept of authorship should be replaced by that of contributorship. If only contributors exist in a multi-authored paper, they argued, then it should be made clear who contributed what to the article. This proposal received quick and wide acceptance, and a number of reputed journals soon began to request that contributors publicly declare the nature of their contribution in the byline.8 The problem, however, is that this ‘movement’ of requesting a description of each contributor's specific contribution has not gone far enough. If only contributors exist in a multi-authored paper, then contributors are authors of the article only to the degree of their contributions. This principle, however, is not realised when the various citation-based metrics of contributors are calculated. These quantitative measures are counted as if each contributor were the single author of the full article: each and every contributor is assigned the full IF score and all the citations that the article has received. This gives a highly distorted picture of the scientific performance of contributors. The more contributors an article has, the more undeserved credit is assigned to each of them. Conversely, the fewer contributors an article has, the greater the discrimination against each of them in the current system: they each do relatively more of the work, yet they receive the same citation-based metrics as contributors to an article with a much greater number of contributors. Single authors are in the worst position of all. No wonder that single authorship is becoming an almost extinct genre today.

A fierce competition exists among researchers, and their performance is measured mostly by bibliometric indices. If one can multiply one's citation-based performance indicators by writing multi-author articles, one simply cannot afford to write single-authored papers. This becomes a sort of game-theoretic problem,9 like, say, the arms race, in which single authorship, like unilateral disarmament, would be a losing strategy. This is one explanation for the current trend towards increasing numbers of contributors, causing ‘polyauthoritis’, as Borry et al called it.10

The multiplication effect of multi-author articles on individual researchers’ citation-based evaluative metrics

In a hypothetical case in 2007, author ‘A’ publishes a single article in a scientific journal with an IF of 2.

In the same year, author ‘B’ contributes to six articles, each of which has six contributors, and each journal likewise has an IF of 2. Let us further suppose, for the sake of simplicity, that author ‘B’ wrote 1/6 of each article.

Using the current system of counting, the cumulative IF of author ‘A’ in 2007 will be 2, while that of author ‘B’ will be 12.

Thus, today's widely used citation metrics would show the performance of author ‘B’ to be six times greater than that of author ‘A’, when in fact they had practically the same performance in the given year. The multiplication effect of multiple authorship on each and every quantitative measure is one cause of the multiplication of authors. One could argue that writing one full article individually and writing 1/6 of six articles are not exactly the same, either in effort or in performance. I readily accept this claim, but the difference cannot be as great as the sixfold difference expressed by the citation-based metrics in the given example. Counting these metrics in the manner described above gives a highly distorted picture of individual performance.
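The arithmetic of this example can be sketched in a few lines. The following is a minimal illustration using the hypothetical numbers above; the function names are my own, and the ‘proportional’ variant anticipates the contribution-weighted counting proposed later in the paper:

```python
# Each paper is a (journal_IF, fractional_contribution) pair.
# The numbers reproduce the hypothetical example: author 'A' writes one
# single-author article, author 'B' writes 1/6 of six articles, and every
# journal has an IF of 2.

def cumulative_if_current(papers):
    """Current counting: each contributor receives the journal's full IF."""
    return sum(journal_if for journal_if, _fraction in papers)

def cumulative_if_proportional(papers):
    """Contribution-weighted counting: IF scaled by the fractional share."""
    return sum(journal_if * fraction for journal_if, fraction in papers)

author_a = [(2.0, 1.0)]        # one full single-author article
author_b = [(2.0, 1 / 6)] * 6  # six articles, a 1/6 share in each

print(cumulative_if_current(author_a))       # 2.0
print(cumulative_if_current(author_b))       # 12.0 -- a sixfold inflation
print(cumulative_if_proportional(author_b))  # ~2.0 -- the same real output as 'A'
```

Under the current counting, ‘B’ appears six times as productive as ‘A’; weighted by fractional contribution, their cumulative IF scores coincide.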

The concept of pseudo-honorary authorship

This method of counting unwittingly makes (pseudo)honorary authors of every contributor to every multi-author article. Because each author is treated as if (s)he were the single author of the whole article, full credit is assigned to every one of them. This assignment of undeserved credit makes a (pseudo)honorary author of each honest contributor.

This phenomenon of pseudo-honorary authorship, however, should be distinguished from real honorary authorship, because nobody is dishonest here. Only those who substantially contributed to the article are listed in the byline. It is the system that makes these contributors pseudo-honorary authors.

Voluntary honorary authorship as a consequence of the current system

Up to now, we have started from the premise that each contributor is honest and is listed in the byline only if (s)he fulfils all the criteria of authorship. Unfortunately, this is not always the case, and it is easy to see how this system entices authors to form ‘publication cartels’.11 If there are, say, 10 researchers working in the same field, each can decide to write, for example, one full article every year and list the other nine researchers in the byline of her/his paper, with the implicit agreement that the others will do the same. In this manner, each researcher can multiply her/his scientific performance, as measured by contemporary citation metrics, by a factor of 10.

Although a publication cartel in the described form is clearly unethical, it could easily be made ethically acceptable, at least in the current system. One would not have to write one full article every year and be added to the other nine articles as an honorary author. It would be enough to write 1/10 of each of the 10 articles and to make sure that one otherwise fulfils the ICMJE criteria of authorship. This would transform the publication cartel into an ethically correct form of collaboration, and would still assign the same credit to the individual researcher. The fact that this simple manoeuvre can transform an ethically unacceptable practice into a seemingly acceptable form of collaboration is proof of the highly flawed nature of the current use of bibliometric indices.

Coerced honorary authorship

Publication cartels are voluntary in the sense that one is free to join or abstain from them. Because of the power imbalance, however, senior researchers can coerce their junior colleagues to list them as co-authors in the byline even if they do not meet the criteria of authorship.12 Kwok called this abusive co-authorship ‘publication parasitism’.13

In the present system, there is no incentive on the part of the junior researcher to resist this (often implicit) request, because compliance will not reduce her/his citation-based metrics, and the goodwill of her/his supervisor can be bought in this way. What is more, the often well-known name of the senior author in the byline may increase the chances of the article being published, and later cited.

Seemingly, this is a win-win situation in which every participant benefits. That is one reason why it is so widespread. Analysing original articles published in the BMJ between 1975 and 1995, Drenth documented that the number of authors increased disproportionately over these two decades. The number of senior researchers in the byline grew more than the number of all other authors. This trend might partially be attributed to the widespread practice of gift authorship: senior researchers sign for authorship simply because they supervised the junior researchers who publish an article. Some departments even make it an official requirement for junior researchers to list their supervisors among the authors.14 It is easy to see that junior researchers are rarely in a position to resist these implicit or explicit demands.

Earlier studies demonstrated that about 20–25% of authors in peer-reviewed medical journals were honorary authors.15

A more recent study analysed three reputed medical journals and found that the proportion of articles with honorary authors ranged from 4% to 60%, depending on the journal, and that the proportion of honorary authors was between 0.5% and 21.5%. Honorary authors were never first authors, and were usually placed towards the end of the byline.16 This seems to endorse the view of the cynics who suspect that the final author, who is generally a senior researcher, is often an honorary author.17

The inherently discriminatory nature of the current use of citation-based metrics

Although many reputed journals today request a description of the nature of each contributor's contribution, this cannot prevent false statements to that effect. The contributors have no incentive to prevent honorary authors from being listed in the byline. On the contrary, forming publication cartels is a win-win strategy among the members of the cartel. It is highly discriminatory, however, against those outside the cartel, who are thereby motivated to form similar cartels as a countermeasure. The whole phenomenon, as we have seen, becomes similar to the arms race. If someone stays away from such cartels, (s)he does so at her/his own peril, and will surely lose in the competition, the result of which is measured by evaluative metrics.

How to make the system fairer?

There is a relatively easy way, however, to get out of this system that rewards deception.

Quite simply, we have to draw the logical conclusions from the concept of contributorship. If there are only contributors in multi-author articles, then contributors' performances are proportionate to their degrees of contribution. This should be expressed numerically as well, when calculating bibliometric indices.

My proposal is that, in the case of multi-author articles, authors should be required to assign a numerical value to their degree of contribution, which should be given in the byline after their name. This should be expressed in decimal fractions, the sum of which must not be greater than 1.0. In this way, we could create a contribution-specific index of each contributor for each citation metric.

A byline realising this principle would look like this:

Author A (0.4), Author B (0.3), Author C (0.2), Author D (0.1)

The above numbers indicate the percentage contribution of each contributor to the article (40%, 30%, 20% and 10%), which sum to 1 (or 100%).

To assess each contributor's relative contribution, and to express them in percentages, will not be an easy task. Various scoring systems already exist,18 and new ones can be devised. It will be up to the contributors to decide their fractional credit, but no one else can do it better than they themselves.

Having numerically expressed the relative contribution of each contributor to the article, it would be easy to create a contribution-specific index of each contributor for each citation metric. For example:

contribution-specific IF = journal IF × fractional contribution

contribution-specific citation count = number of citations of the article × fractional contribution

For example, if author ‘A’ has a 0.2 (=20%) contribution to an article published in a journal with an IF of 5, then the contribution-specific IF of ‘A’ will be: 5 × 0.2 = 1.

If no numerical value has been assigned to individual contributors, the presumption should be that each contributed equally to the article. In this case, the contribution-specific bibliometric indices would be the following:

contribution-specific IF = journal IF / number of contributors

contribution-specific citation count = number of citations of the article / number of contributors
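The two rules above can be sketched in one small function. This is a minimal illustration of the proposal; the function name and the rounding tolerance are my own assumptions, not part of any existing standard:

```python
def contribution_specific_metric(metric_value, fractions=None, n_authors=None):
    """Scale a whole-article metric (IF or citation count) per contributor.

    If no fractions were declared, presume equal contributions.
    """
    if fractions is None:
        fractions = [1.0 / n_authors] * n_authors
    if sum(fractions) > 1.0 + 1e-9:  # declared shares must not exceed 1.0
        raise ValueError("declared contributions must not sum to more than 1.0")
    return [metric_value * f for f in fractions]

# Declared shares, journal IF 5: the 0.2 contributor receives 5 x 0.2 = 1.0.
print(contribution_specific_metric(5.0, fractions=[0.4, 0.3, 0.2, 0.1]))
# -> [2.0, 1.5, 1.0, 0.5]

# No declaration, four contributors: each is presumed to have contributed 1/4.
print(contribution_specific_metric(5.0, n_authors=4))
# -> [1.25, 1.25, 1.25, 1.25]
```

The same scaling would apply unchanged to citation counts or any other whole-article metric.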

The advantage of this new system would be the elimination of discrimination against single-author papers and against contributors to articles with a small number of contributors. A further advantage is that this method of counting would build a disincentive against honorary authorship into the system11 by changing the current ‘win-win’ strategy into a zero-sum game. If contributors were to accept an honorary author, their contribution-specific IF and citation indices would decrease. This would not totally eliminate the well-known practice of honorary authorship, but it would serve as a strong disincentive against it. I see no other way to prevent honorary authorship, because the current system rewards it, and there is very little chance of a publication cartel being denounced: no one within the cartel is interested in exposing it, and those outside the cartel are rarely in a position to prove that it exists at all.
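The shift from win-win to zero-sum can be shown numerically. In the following sketch (hypothetical shares; the function names are mine), adding a guest author under the current counting costs the genuine contributors nothing, whereas under the proposal the guest's share is taken directly out of theirs:

```python
def current_credit(journal_if, n_authors):
    """Current counting: every name in the byline receives the full IF."""
    return [journal_if] * n_authors

def proposed_credit(journal_if, shares):
    """Proposed counting: shares sum to 1.0, so credit is zero-sum."""
    return [journal_if * s for s in shares]

journal_if = 4.0

# Two genuine contributors, then the same article with a guest author
# who is granted a 0.2 share.
print(current_credit(journal_if, 2), current_credit(journal_if, 3))
# -> [4.0, 4.0] [4.0, 4.0, 4.0]  (the guest costs the others nothing)

print(proposed_credit(journal_if, [0.5, 0.5]),
      proposed_credit(journal_if, [0.4, 0.4, 0.2]))
# -> [2.0, 2.0] [1.6, 1.6, 0.8]  (each genuine contributor loses credit)
```

This is exactly the disincentive described above: every point of contribution-specific IF granted to a guest author is subtracted from the genuine contributors.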

A further, minor advantage of the system would be to solve the much-debated problem of the order of authorship. A sort of chaos prevails here, with each group of collaborators and each journal following their own rules, and no universal rule exists from which the reader could know whether the order reflects the relative contributions of the co-authors. My proposal would automatically solve this problem: by assigning a percentage value to each contributor, the order of authors would become irrelevant.

Although the requirement to express each contributor's relative contribution could introduce tension among contributors, this tension already exists in the present requirement to determine the order of authors, which is sometimes presumed to reflect the order of relative contribution. In any case, such a requirement would make contributors more aware of the importance of this task, and could create more consciousness of one's overall contribution to the full article.


One could argue that calculating the cumulative IF score of individual articles or authors is wrong from the very beginning; it is a mortal sin, according to bibliometricians.2 If it should not be used at all to measure individual research output, why should it be used in a modified form? My answer is that all citation-based evaluative metrics suffer from the same bias that I have described in this paper, not just the cumulative IF. Despite its inadequacy, the IF score is still so widely used to measure individual scholarly performance that a less discriminatory way of using it would already be an advance from the point of view of fairness.

One could further argue that when a particular researcher is being evaluated, the evaluators could already take into account the number of co-authors of her/his articles, and divide all of her/his citation-based metrics by these numbers. This system is already used in many places. This approach, however, is not totally fair, because contributors usually do not contribute to articles equally. The order of authors in the byline mostly mirrors this fact, on the presumption that the first author contributed the most. There is no consensus, however, as we have seen, on how this order should reflect the degree of contribution. Thus, it would be much fairer to express the fractional contribution of each contributor directly, and to correct their citation-based metrics accordingly when their research output is evaluated.

Another argument against my proposal is that its adoption could create a new problem. Instead of the present tendency to admit honorary authors to the byline, there would be a novel trend to exclude genuine contributors from the list of authors altogether, or to allocate unrealistically high relative contributions to powerful researchers in the byline. Mostly senior authors would be in a position to force their own view of their relative contribution onto their co-authors. Thus, the present problem of honorary authorship would be replaced by the new problem of ghost authorship, in which the genuine authors of an article are not listed in the byline, or are listed with a smaller contribution than their real one.

An important difference exists, however, between the current tendency towards honorary authorship and the possible tendency towards ghost authorship. Admitting honorary authors, as we have seen, is a win-win strategy in the present system for everybody listed in the byline, irrespective of whether they are real contributors or only honorary authors. This system is disadvantageous only for their competitors, the other (outsider) authors. Thus, those who know who the honorary authors are have no interest in disclosing this, while those who would be interested in revealing possible honorary authors in another team are outsiders who rarely know the details of the publication process well enough to do anything.

The expression of the relative contribution of each contributor would change this win-win strategy into a zero-sum game for all contributors in the byline. Genuine authors would have a strong interest in resisting the admission of honorary authors. If a genuine contributor were omitted from the list of authors, or were allocated significantly less credit than deserved, the injustice would be great enough to motivate her/him to seek redress. Additional safeguards could be implemented to protect would-be whistle-blowers from possible revenge by powerful senior colleagues.

The history of such proposals

The idea of an authorship index that would express the degree of contribution by assigning a numerical value to each contributor of a multi-author article was already proposed in 1969,19 and Rennie et al7 explicitly mentioned it. Furthermore, they themselves considered the idea of working out the relative contribution of each contributor and expressing it numerically, as a fraction of the total work done, so as to determine the order of collaborators. They did not, however, find it necessary to publish these numbers. Since then, there have been further proposals to alter currently used quantitative measures on the basis of partial authorship,20 while others have called attention to the absurd consequences of the usual counting of bibliometric indices and proposed solutions to eliminate the present unfair system.21 Things, however, did not change.

How has the use of this flawed system survived?

One reason may be the current practice of using bibliometric indices as almost the sole performance indicators in many places. Normally, senior researchers play a significant role in making research possible at all: by acquiring grants, by recruiting junior researchers, and by enlisting their cooperation. This is more an organisational than a scholarly activity, but it is still indispensable for conducting research. In a healthier system of measuring performance, this support could simply be recognised in the acknowledgement section of the paper resulting from the research that the senior researcher's organisational activity made possible. Such an acknowledgement in itself, however, is worthless in a world where scientific performance is expressed mostly in bibliometric indices. That may be one reason why, instead of a worthless acknowledgement, senior researchers are often granted honorary authorship in recognition of their important organisational support.

The problem with this solution is that the real contributors rarely accept an honorary author voluntarily, and this solution, as we have seen, creates more injustice than it intends to remedy.

To demand the display of the fractional contribution of each contributor to every multi-author article would certainly make the system fairer. It is time scientific journals demanded the numerical expression of the relative contribution of each contributor to multi-author articles. This process has already begun with the request to describe the type of contribution each contributor has made. It has to be completed.


I would like to thank my three anonymous reviewers for their many useful comments. I used, and tried to reply to, many of their counterarguments, which (I hope) made my paper better.



  • Contributors I am the only author of the paper, and nobody else contributed to its preparation. The contents of this manuscript are my original work and have not been published in whole or in part before.

  • Competing interests None.

  • Provenance and peer review Not commissioned; externally peer reviewed.
