Article Text

Ethics of generative AI
  1. Hazem Zohny,
  2. John McMillan,
  3. Mike King
  1. Bioethics Centre, The University of Otago, Dunedin, New Zealand
  1. Correspondence to Professor John McMillan, Bioethics Centre, University of Otago, Dunedin 9054, Otago, New Zealand; john.r.mcmillan68{at}


Artificial intelligence (AI) and its introduction into clinical pathways presents an array of ethical issues that are being discussed in the JME.1–7 The development of AI technologies that can produce text that will pass plagiarism detectors8 and appear to have been written by a human author9 presents new issues for medical ethics.

One set of worries concerns authorship and whether it will now be possible to know that an author or student in fact produced submitted work. That seems likely to be a general worry for secondary and higher education, as well as for all academic journals. Thus far, generative AI chatbots do not seem able to produce a fully referenced and well-argued ethics article, but they probably could generate a blog or student essay that would be hard to detect after very minor edits. Many schools and universities have moved to online forms of assessment, and generative AI seems likely to cast doubt on the integrity of these; we might see a reversion to handwritten examinations as a solution.

As well as these immediate and perhaps obvious ethical concerns, generative AI highlights conceptual challenges that pose more profound ethical questions. JME is committed to publishing high-quality articles that further the ethical analysis of an issue within healthcare. Some of the content that the journal publishes reports empirical findings. An article that, for example, describes qualitative findings and then develops an analysis of some normative issues could not be solely authored by generative AI: it cannot do qualitative research. However, generative AI can find publicly available sources and produce ethical arguments and syllogisms. What does that imply about the nature of ethical analysis? If ethical analysis, fundamentally, involves assembling, organising and evaluating words, then perhaps generative AI could replace ethicists. At present, generative AI cannot produce the nuance, depth or originality of a quality ethics article, but it may be only a matter of time before such systems pass a medical ethics version of the Turing test.

While those who rely on more analytic approaches to ethics might view this as an ethical apocalypse, there are subtler ways in which generative AI might be used in authorship that are more positive.

For instance, if you experiment with ChatGPT9 for a while, you might find that, when you know what you want to say in a paragraph or subsection, you can prompt it to form the argument you want to make and it will generate a fairly well-drafted paragraph. For authors and for postgraduate students, this might be useful for ‘throat clearing’ sections, which simply lay out the terrain for the reader before proceeding.

AI chatbots might also play a helpful devil’s advocate: once you make a point in a paragraph, you can ask them to generate a rebuttal. If you experiment with ChatGPT, you might find that it not only generates obvious counterarguments but often raises points that you might not have immediately thought of. So perhaps generative AI has the potential to pose questions like those that might be raised at a seminar while a paper or book is being refined.

Much of the work involved in writing good, innovative and original ethical analysis involves wrestling with high-level ideas. If generative AI can aid authors in drafting articles, then perhaps it saves intellectual effort that could be directed toward the ‘big picture’.

Many publishers, including the BMJ, are committed to encouraging authors from the Global South to write for journals where the authorship has been primarily from the Global North. Publishing in journals such as the JME can be more difficult when English is not an author’s first language. Because of the speed at which generative AI can present an author’s ideas in what is close to idiomatic English, it has the potential to significantly open up authorship, especially for humanities-style journals like the JME. Journal and copy editors might also save time and effort when correcting articles.

These potential benefits will no doubt come with trade-offs. For instance, struggling to articulate a point in writing can be part of a process that leads to a new insight. Perhaps generative AI runs the risk of making that part of the writing process too easy, leading authors to miss out on opportunities for insight. While that seems like a valid worry, it might be analogous to the changes in writing that resulted from journals becoming readily available online. Those of us old enough to recall writing before online publishing will have spent time trudging between library stacks and searching hard copies of journals to find a paper, stumbling across other papers and journals along the way, which may have led to new ideas. Complete reliance on online databases has probably reduced those moments, but the trade-off clearly still favours their use.

Universities, publishers and journals are likely to be exercised about what this will mean for authorship. How can we know whether work was created by the author or by generative AI? While understandable, these worries might be overstated given what these systems can do at present. While they might find some publicly available sources to support claims, at present they cannot adequately cite, and the quality of the content they produce is wholly dependent on the user asking the right questions or having the right ideas. For now, they can be seen as a very sophisticated thesaurus, and we do not worry about authors using one of those.

Perhaps the biggest concern is that some predatory journals may feed off the speed with which low-quality manuscripts can now be generated by AI chatbots, and use the phenomenon to publish huge numbers of mostly useless or misleading ethical analyses. This could flood the journal market and undermine trust in research publications. It may be curbed by introducing AI output detectors, though it may also encourage greater (and much needed) scepticism towards publications, with readers and news writers paying closer attention to where the paper they are reporting on was published and what the publisher’s standards are.

Editorially, journals need to and will continue to be concerned with authorship, but our main focus is on the quality and originality of ideas. It seems likely that generative AI is here to stay and will develop further, so journals will need to figure out how to work with it.

Ethics statements

Patient consent for publication



  • Twitter @hazemzohny

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; internally peer reviewed.