Background: Breaches of publication ethics such as plagiarism, data fabrication and redundant publication are recognised as forms of research misconduct that can undermine the scientific literature. We surveyed journal editors to determine their views about a range of publication ethics issues.
Methods: Questionnaire sent to 524 editors-in-chief of Wiley-Blackwell science journals asking about the severity and frequency of 16 ethical issues at their journals, their confidence in handling such issues, and their awareness and use of guidelines.
Results: Responses were obtained from 231 editors (44%), of whom 48% edited healthcare journals. The general level of concern about the 16 issues was low, with mean severity scores of <1 (on a scale of 0–3) for all but one. The issue of greatest concern (mean score 1.19) was redundant publication. Most editors felt confident in handling the issues, with <15% feeling “not at all confident” for all but one of the issues (gift authorship, 22% not confident). Most editors believed such problems occurred less than once a year and >20% of the editors stated that 12 of the 16 items never occurred at their journal. However, 13%–47% did not know the frequency of the problems. Awareness and use of guidelines was generally low. Most editors were unaware of all except other journals’ instructions.
Conclusions: Most editors of science journals seem not very concerned about publication ethics and believe that misconduct occurs only rarely in their journals. Many editors are unfamiliar with available guidelines but would welcome more guidance or training.
Publication misconduct, such as plagiarism, data falsification or fabrication, is recognised as a form of unethical research behaviour sufficiently serious to warrant disciplinary action or sanctions from academic institutions and journals.1 2 Unethical practices such as redundant publication have also been shown to distort the scientific literature.3 Even forms of poor practice that might be regarded as lesser misdemeanours, such as non-legitimate authorship, have been described as “moral pollutants at a time when honest scientists and editors are trying to clean up the temple of science”.4 The Committee on Publication Ethics (COPE) states that “editors have a prime duty to maintain the integrity of the scientific record” and that journal editors therefore have a “duty to do their utmost to identify publication misconduct in submitted or published articles”.5
However, editing a scholarly journal also involves a wide range of other responsibilities; most academic editors fit their editing duties around other, often full-time, research or clinical commitments, and few editors receive specific training in publication ethics.
The Blackwell Best Practice Guidelines on Publication Ethics were published in late 2006. They were made available on the company’s website, published in a peer-reviewed journal6 and promoted at meetings for editors in 2007. They have since been promoted among all Wiley-Blackwell journals. Following feedback obtained while the guidelines were being developed and after they were published, we decided to survey editors of peer-reviewed academic journals to discover their levels of concern about publication ethics issues, their confidence in handling ethical issues and what resources they were using, or would like to have, to help them tackle ethical issues. The survey was designed to examine journal editors’ perceptions about a range of publication ethics issues rather than to monitor their journals’ responses to these. It was not intended as an audit of the journals’ policies or practices.
Surveys were sent (by post) to the 612 editors-in-chief of all medical, healthcare, life science and social science journals published by Wiley-Blackwell in June 2007. The questionnaire could be completed online or on paper and could be returned anonymously. One reminder was sent (by email).
Results were analysed in October 2007 using the Statistical Package for the Social Sciences (SPSS) version 12.0.
The survey gathered information about the journal (eg, location of the editorial office, topic and ownership) and how long the editor had been in post. The main part of the questionnaire asked about 16 publication ethics issues, such as plagiarism, fabricated data and authorship problems, which have been identified as areas of concern by organisations such as COPE, the Council of Science Editors (CSE), the World Association of Medical Editors (WAME) and the International Committee of Medical Journal Editors (ICMJE).5 6–9
Editors were asked about the severity of the problems for their journal (ranked from 0, “not a problem”, to 3, “a very serious problem”); their confidence in handling the problem if it occurred (from 0, “not at all confident”, to 3, “highly confident”); the frequency with which the problem occurred at their journal (from 0, “never”, to 3, “very often (at least once a month)”); and whether this frequency was changing (from 1, “decreasing a lot”, to 5, “increasing a lot”). For all questions, a response of “I do not know” was also possible.
The editors were also asked about what resources they were aware of, and had used, to provide guidance about ethical issues.
Of the 612 questionnaires sent out, 88 were returned as undeliverable and 231 responses were returned, giving an effective response rate of 44% (231/524). Of the 231 responses, 138 (60%) were submitted via the website and 93 (40%) were returned by post; 58 of the questionnaires (25%) were signed, while 173 respondents (75%) chose to remain anonymous. The respondents edited journals on medicine/healthcare (111, 48%), non-medical sciences (112, 49%), social sciences (7, 3%) or other topics (1). Their editorial offices were in Europe (115, 50%), North America (83, 36%), Australasia (14, 6%), Japan (11, 5%) or other regions (6, 3%). The respondents were generally representative of the total population in terms of location and journal topic, although the proportion of responders was lower for healthcare than for other science journals.
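The effective response rate reported above excludes undeliverable questionnaires from the denominator before dividing. As a minimal sketch of that arithmetic, using the figures given in the text:

```python
# Effective response rate: undeliverable questionnaires are excluded
# from the denominator before dividing (figures taken from the survey text).
sent = 612            # questionnaires mailed
undeliverable = 88    # returned as undeliverable
responses = 231       # completed questionnaires received

effective_denominator = sent - undeliverable       # 524 delivered
response_rate = responses / effective_denominator  # 231 / 524

print(f"{response_rate:.0%}")  # → 44%
```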
Almost half the journals (105, 46%) were published on behalf of an academic society; most of the others (101, 44%) were owned by the publisher. Most of the journals (155, 67%) carried display advertising and 41% (95) published sponsored supplements.
The editors had been in post from 8 months to 37 years, with a median of 5 years; 34% of the respondents had edited a journal for 1–3 years, 24% for 4–6 years, 18% for 7–9 years, and 24% for at least 10 years. For some analyses, the editors were divided into those with less (⩽5 years, n = 117) and more (>5 years, n = 105) experience.
Level of concern/perceptions of severity of problems
The general level of concern about the 16 ethical issues was low. The mean severity scores for all but one of the issues were <1 (where 0 = “not a problem” and 1 = “a minor problem”) (table 1). Only “redundant publication” (ie, overlapping, or “salami”, publications) gave a mean score above 1 (1.19), reflecting the 115 editors (51%) who considered it a “minor problem”, the 59 (28%) who considered it a “significant problem” and the 6 (3%) who considered it a “very serious problem”. The next most serious problems were undisclosed author conflicts of interest (which 24 editors (13%) considered “significant” and two (1%) considered “very serious”) and plagiarism (22 (11%) “significant” and 3 (2%) “very serious”). For all the other topics, <10% of the respondents considered the issue to be of more than minor severity.
The issue of least concern was editorial interference from the journal owner, which 91% of the respondents (200) identified as “not a problem”.
The response from editors was consistent across disciplines. Mean severity scores given by the healthcare and other editors were similar (eg, the mean score for redundant publication, the highest-scoring issue, was 1.19 for all editors and 1.20 for healthcare editors). The healthcare editors’ ordering of issues in terms of severity was broadly similar to that for the whole group, with redundant publication being the top-rated issue in both cases. However, the healthcare editors ranked unethical research design and undisclosed commercial involvement as being more serious problems than the total group (the total rankings for these issues were 10th and 11th, respectively, and healthcare rankings were 6th and 5th).
In addition to the questions about severity, the editors were asked to list the three issues causing them most concern and 159 responded to this question (69% of respondents). The issues most commonly listed among the “top three” were redundant publication (69), plagiarism (47), authorship problems (32), unethical research and problems with research ethics approval (25), data falsification or fabrication (25), reviewer conflicts of interest or bias (24), duplicate submission (21) and author conflicts of interest (19). Ethical issues mentioned by the editors as causing them most concern that had not been included in the specific survey questions included the welfare of experimental animals (11), biased analyses (10), papers from China (unspecified issues and problems with research ethics approval) (8) and selective reporting (7).
Level of knowledge and completeness of response about severity
Most respondents completed all questions about the severity of ethical problems, with only 4–10 (2–4%) missing data. All the questions included the option “don’t know”. The proportion of editors who did not answer the question or selected “don’t know” for the severity questions ranged from 13 (6%) for editorial interference by journal owners to 71 (31%) for “ghost” authorship (omitted authors). Other issues for which a considerable proportion of editors stated that they did not know the severity were “gift” authorship (undeserved authorship) (25%), undisclosed author conflicts of interest (20%), undisclosed reviewer conflicts of interest (20%) and data fabrication (19%).
Level of confidence in tackling issues
The majority of editors felt “quite confident”, “confident” or “highly confident” in handling the specified ethical issues. The proportion who reported being “not at all confident” was less than 10% for nine of the 16 issues, 10–15% for six issues and >15% for one issue. Editors felt least confident in handling gift authorship (22% “not at all confident”), followed by ghost authorship (16% “not at all confident”), and felt most confident in handling editorial interference from the journal owner (50% “highly confident”). For the other issues, the proportion of editors considering themselves “highly confident” ranged from 17% to 36%.
Frequency of problems
Most respondents considered that ethical problems either did not occur (which gave a score of 0) or occurred only rarely (ie, less than once a year, giving a score of 1) in their journal. Only three issues (redundant publication, duplicate submission and gift authorship) received mean scores of >1 (table 1). In the case of redundant publication, about one-third of respondents (34%) considered that this occurred “sometimes” (more than once a year but less than once a month) and the same proportion (34%) reported that it occurred “rarely” (less than once a year), while 17% considered that it never occurred in their journal. The issues most often reported as never occurring at the editors’ journals were ethical concerns about advertising (62%), problems with sponsored supplements (59%), editorial interference from journal owners (58%), inappropriate image manipulation (47%) and data fabrication (35%). However, 36% of the journals did not publish advertisements and 73% did not publish sponsored supplements, so this could have affected the number of editors who reported that these “never” caused a problem for them. Yet 76% of the editors of journals that carried advertising and 48% of those that published sponsored supplements stated that these never caused concern. Even for the other issues, a notable proportion of editors believed that these never occurred in their journal: the proportion of editors reporting that these problems never occurred was 32% for reviewer misconduct, 30% each for ghost and gift authorship, 28% for authorship disputes, 22% for undisclosed commercial involvement, 22% each for undisclosed reviewer and author conflicts of interest, 19% for plagiarism, 18% for unethical research, 17% for redundant publication and 9% for duplicate submission.
Level of knowledge and completeness of response about frequency
Although most respondents completed all the questions about the frequency of ethical problems, the level of missing data was higher than for the severity questions (7–32%, mean 14% missing data). The proportion of editors who did not answer or selected the “don’t know” option for frequency questions ranged from 13% (2 “don’t know”, 25 missing) for authorship disputes to 47% (63 “don’t know”, 34 missing) for ghost authorship. For 11 of the 16 issues, >10% of editors stated that they did not know their frequency. The issues with the greatest uncertainty about frequency were ghost authorship (30%), gift authorship (23%), undeclared author conflicts of interest (19%), undeclared reviewer conflicts of interest (19%) and data fabrication or falsification (18%).
Editors who stated that a problem occurred at their journal were asked whether they thought the problem was increasing, decreasing or occurring to the same extent as before. For all issues except ghost writing, the majority of editors (54–84% of those responding to this question and excluding those who selected “don’t know”) considered that the frequency was not changing. The next most commonly selected category was “increasing slightly”, except for editorial interference, which three editors (16% of those expressing a view about frequency trends) considered to be “decreasing slightly”. The problems most often identified as increasing were plagiarism (25/57, 44%), redundant publication (40/94, 44%) and ghost writing (15/35, 43%). However, responses were restricted to editors who had stated that the problems occurred at their journals, and, as with the other questions, also included a “don’t know” option, so the actual number of editors expressing a view about frequency trends was low (n = 19–93, ie, less than half the total number of responses).
The more experienced editors (defined as those with >5 years’ experience) were significantly more likely to consider that undisclosed author conflicts of interest, undisclosed commercial involvement and ghost authorship were increasing (table 2). There were no important differences between the views of the healthcare journal editors and those of the other editors.
Revising journal instructions to contributors
Editors were asked how recently they had revised their guidelines to authors or electronic submission forms in relation to publication ethics issues. Of the 212 who responded to this question, 30% (64) had revised these within the last 6 months, 26% (54) in the last 6–12 months, 21% (45) 1–2 years ago and 23% (49) more than 2 years ago.
Registering clinical trials
Of the 121 journals that published clinical trials, 32 (26%) required that all trials must be registered, 48 (40%) strongly encouraged registration although it was not mandatory and 41 (34%) did not require or encourage registration.
Awareness and usefulness of guidelines and resources
Awareness and use of guidelines and other resources on publication ethics was generally low (tables 3 and 4). Even the most familiar resource (other journals’ instructions) had been used by only 44% of the editors. Awareness was highest for other journals’ instructions and the Blackwell Best Practice Guidelines, but the large majority of respondents (64–87%) stated that they were unaware of the other resources listed in the questionnaire. Since several of the listed guidelines and organisations related particularly to medical journals, we analysed responses from the healthcare editors alone. Awareness of the guidelines was generally slightly higher among this group, but, even so, the majority of the medical editors were unaware of all except the Blackwell Best Practice document and guidelines published by journals other than their own.
Editors who had used the listed resources were asked to rate their usefulness (table 4). However, given the low levels of use, the number of responses to many of these questions was extremely small, so the ratings must be interpreted with caution. The mean scores for seven of the resources were at least 2, suggesting that most editors who had used them found them useful; the other three resources (including the most widely used, other journals’ instructions) had mean scores of just under 2. There were only five instances of an editor rating any resource as “not useful” (one each for COPE, CSE, EMWA, GPP and WAME). The resources with the highest proportion of ratings of “very useful” (>30%) were WAME, the Blackwell helpdesk, ICMJE, COPE and the Blackwell Best Practice Guidelines. Usefulness scores from the healthcare editors were similar to those for the whole group. There were no differences between more and less experienced editors in their ratings of the resources.
Desire for other resources
Editors were asked how useful various new resources would be in helping to increase their confidence in handling publication ethics issues (on a scale from 0, “not useful”, to 3, “very useful”). The highest scores were given to a practical manual (mean 2.16), more published guidelines (1.90), a newsletter (1.79), networking opportunities (1.79) and case studies (1.77). Less experienced editors generally rated all the options more highly than more experienced editors, and the differences between these subgroups for more published guidelines, networking opportunities and a listserv or blog reached statistical significance (p = 0.01 in each case). Healthcare editors also generally rated the suggested resources as more helpful than did the group as a whole (giving mean scores of 2.27 and 2.09 for a practical manual and more published guidelines, respectively).
The questionnaire included space for comments. Three of the editors commented that they would appreciate training in publication ethics, and two commented that publishers should make relevant guidelines available to editors. One wrote, “I was not aware … that so many ethics resources existed”. Three respondents noted that plagiarism detection software would be helpful. Two editors described how changes to their journals’ submission systems had reduced ethical problems. One noted, “Our declaration of interests have been transformed since adjusting our statements and introducing mandatory ethical questions” on the electronic submission form. Another wrote, “It’s my firm opinion that the reason our journal has fewer problems to iron out regarding authorship and originality is that we require all authors to sign an adapted copyright assignment form at submission rather than later in the process”, and suggested that this had a “valuable preventative function”.
Responses to this international survey suggest that science journal editors are not particularly concerned about publication ethics and generally do not consider problems such as plagiarism, redundant publication and data fabrication to be more than a minor problem for their journals. Editors generally feel quite confident in handling such issues, should they arise. However, most editors recognise that they probably do not know how often many forms of misconduct (such as inappropriate authorship, plagiarism and undeclared competing interests) occur, and yet a significant proportion of editors believe that such problems never occur at their journals. Respondents to this survey were generally poorly informed about guidelines and organisations that might provide assistance in handling ethical issues. Even among the editors of healthcare journals (for whom many of the guidelines were developed), only a minority had consulted resources such as the statements of COPE and the ICMJE. Nevertheless, about 70% of the respondents reported having revised their journal’s instructions to contributors or electronic submission system within the last 2 years in response to ethical concerns.
A survey such as this cannot avoid certain limitations. Although the sample of 231 editors was a reasonable size, it represented only 44% of those surveyed. The respondents were generally representative of the total population in terms of their location, although the proportion of responses from healthcare editors was lower than that for other science editors. It was not possible to know whether the respondents were representative of the total in terms of their experience (since data on the whole population were not available), but respondents included editors with a broad range of experience, and editors with less (⩽5 years) and more (>5 years) experience were equally represented among the respondents.
Those editors most interested in publication ethics may have responded disproportionately, so levels of knowledge and confidence in the total population may be even lower than those among respondents. Most respondents had been editors for at least 4 years, and seven had been editors for >20 years. The more experienced editors tended to be more pessimistic about trends in publication misconduct, with a higher proportion considering that problems were increasing than among the less experienced editors. However, only a small proportion of the respondents expressed an opinion about frequency trends, so these findings must not be overinterpreted, as the sample sizes for some responses were very small.
Another problem in designing such a survey, and administering it via the publisher, is that recipients might not wish to admit their ignorance, or might give “socially acceptable” responses and be unwilling to identify problems at their journals. In particular, the editors of journals owned directly by Wiley-Blackwell may have been reluctant to answer the question about editorial interference from the journal owner. Similarly, respondents may have felt constrained in criticising the Blackwell guidelines (or in admitting that they were unaware of them) in a survey organised by Wiley-Blackwell. One strength of the survey design was that the questionnaire could be returned anonymously, by mail or electronically; respondents did not have to include their name, although the option to sign was offered so that reminders would not be sent to those who had already responded.
However, if the respondents overstated their level of knowledge and use of available resources, then our findings are even more striking, given that the majority of the editors claimed they were unaware of all except other journals’ guidelines and the Blackwell Best Practice Guidelines and fewer than one in five (20%) had used these resources. Even among the medical journal editors, awareness of guidelines and organisations specifically aimed at such journals was remarkably low, with 55% of the healthcare editors reporting being unaware of the ethical guidance produced by the ICMJE.
Few surveys of journal editors’ views about publication ethics have been published. Borkowski and Welsh surveyed accountancy journal editors and authors and found significant differences between these groups with regard to their views about the frequency of author, editor and reviewer misconduct.10 11 They also concluded that, while most authors felt that codes of publishing ethics were needed, the editors of accountancy journals did not. Yank and Barnes surveyed medical journal editors and authors to discover their views about redundant publication of clinical research.12 They found that while editors and authors agreed about the probable causes of redundant publication, they disagreed about the definitions of acceptable overlap and about how editors should respond to redundant publications. Interestingly, redundant publication was identified in our survey as the issue causing science editors the greatest concern and also as the most frequent ethical problem faced by journals.
Davis and Müllner surveyed editors of medical journals owned by professional associations about their perceived editorial independence.13 Of the 33 respondents, 70% (23) reported having complete editorial independence, and yet 42% reported having experienced at least some pressure from their association over editorial content in recent years. The authors concluded that strong safeguards are needed “because editors may have less freedom than they believe”.
One of the key findings of our survey is editors’ admission that they do not know how often many types of publication misconduct occur and, by implication, they may therefore be unaware of them. Responses to the “free text” questions and additional comments suggested that some editors are concerned about how to detect plagiarism or multiple submissions and would be willing to take a more proactive role—for example, using antiplagiarism software to screen submissions. Several editors commented on the difficulty of detecting inappropriate authorship (both gift authors and ghost authors), but one editor commented that authorship problems could be reduced by requiring detailed information about individuals’ contributions to a paper at the submission stage.
We conclude that most editors of science journals appear not to be very concerned about publication ethics issues and believe that misconduct occurs only rarely in their journals. A small but notable proportion of editors consider that issues such as plagiarism, redundant publication, inappropriate authorship and fabricated data never occur in their journals. A considerable proportion of editors, however, state that they do not know the severity and frequency of many issues.
Most journal editors feel reasonably confident that they can handle publication ethics issues. However, while some editors would appreciate more resources or training in publication ethics, most are unfamiliar with available guidelines and professional organisations for editors.
While it is difficult to estimate the true frequency of publication misconduct, most studies that have attempted to do this have revealed that it is disturbingly frequent. For example, Gardner and colleagues surveyed 350 authors of published clinical trials and found that almost 5% were aware of fabrication or misrepresentation in a study they had participated in over the last 10 years and 17% were aware of other cases of fabrication or misrepresentation that had not been investigated or corrected.14 Similarly, Geggie surveyed newly appointed UK consultant doctors. Of the 194 respondents, 56% reported observing some form of research misconduct and 6% admitted to having committed misconduct themselves.15 A recent analysis of articles published in Korean medical journals found that 6% constituted duplicate publication.16 An analysis of over 60 000 Medline abstracts using software to detect text similarity found a 1.35% rate of duplication and a 0.04% rate of suspected plagiarism.17 Journals that have introduced routine screening to detect plagiarism and inappropriate image manipulation have reported marked increases in the number of cases of unethical behaviour which are revealed by these techniques, suggesting that many problems go undetected.18
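The text-similarity screening described above can be illustrated with a minimal sketch. The shingle-based Jaccard measure used here is a common generic approach to duplicate-text detection; it is our illustrative assumption, not the algorithm used in the cited Medline analysis, and the threshold and example texts are hypothetical:

```python
# Illustrative duplicate-text screen: compare word n-gram ("shingle")
# sets with the Jaccard index. A generic sketch of how similarity-
# detection software works, not the cited study's actual method.

def shingles(text: str, n: int = 3) -> set:
    """Return the set of overlapping word n-grams in `text`."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard index of the two texts' shingle sets (0 = disjoint, 1 = identical)."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# Submissions scoring above a chosen threshold would be flagged for
# editorial review (example texts are invented for illustration).
original = "the trial enrolled patients with chronic disease over two years"
overlap = "the trial enrolled patients with chronic disease at three sites"
unrelated = "editors were surveyed about their awareness of ethics guidelines"

assert jaccard_similarity(original, original) == 1.0
assert jaccard_similarity(original, overlap) > jaccard_similarity(original, unrelated)
```

In practice such screening tools compare a submission against large reference corpora rather than single documents, which is why their introduction tends to reveal previously undetected cases.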
While many of the editors surveyed admitted that they were unsure of the frequency of ghost or guest authorship at their journals, recent high-profile cases involving major pharmaceutical companies suggest that these problems certainly exist and may be more frequent than editors suspect.19 20
Available evidence about the prevalence of publication and research misconduct suggests that all editors of science journals should take these issues seriously. However, while many editors are concerned about publication ethics and ensure that their journals adopt policies and systems likely to reduce ethical problems, the attitudes revealed by this survey indicate that at least some editors of science journals may be unaware of many of the potential ethical problems that may arise.
We thank Helena Korjonen (of University College, London) for reviewing the questionnaire and Ben Ulph (of Wiley-Blackwell) for coordinating the survey.
Funding: The survey was funded by Wiley-Blackwell (originally initiated by Blackwell). EW and IR received payment from Wiley-Blackwell for their work on the survey and its publication. SF, CG and AR are employees of Wiley-Blackwell.
Competing interests: None declared.
Contributors’ statement: EW contributed to the design of the survey, analysed some of the findings and wrote the first draft of the paper. SF, CG and AR contributed to the design of the survey and its implementation and to developing the paper. IR contributed to the design of the survey, did the main data analysis, wrote the initial report and commented on the paper.
Provenance and Peer review: not commissioned; externally peer reviewed.