The concise argument

SIZE MATTERS

The JME publishes papers across the whole range of methodology in medical ethics: papers based on pure philosophical analysis as well as papers reporting empirical studies, and you will find both kinds in this issue. It is, nevertheless, rare that we publish studies with a sample size of 3959 respondents; the empirical studies we publish are usually somewhat smaller. We are therefore happy to publish the paper by Swartling et al analysing the views of parents whose children participate in a Swedish longitudinal child cohort. This paper contributes to a live debate in research ethics and provides valuable empirical evidence. Let me highlight just two of their important findings. When asked at what age children should begin to participate in decisions about study participation, the median answer was 8 years and the 90th percentile 10 years (table 2), showing significant trust in the decision-making abilities of young(ish) children. But when asked which interests it is most important to protect in longitudinal paediatric research, “Child autonomy and decision-making” came at the bottom of the list, with “Transparency and generation of trust” at the top (see page 450).

LIMITS TO RESEARCH RISK

The paper by Miller & Joffe also contributes to a very active discussion in research ethics about how research ethics regulatory systems should handle research risk. The two poles in this debate are, on the one hand, the view that there should be no limits to the risk that research participants can consent to and, on the other hand, the view that such risks should be regulated very strictly. Miller & Joffe argue that both of these views are wrong.

Against the restrictive view, they argue that there is no in-principle way of setting definite limits for allowable risk; against the permissive view, they argue that the social value of any particular research project is too uncertain to justify “…clinical research that imposes high net risk…”. They further argue that exposing research subjects to a high risk of harm may diminish public confidence in research in general. If we allow high-risk research, there will be cases where the risks eventuate: research participants are harmed, but no important knowledge is generated. If and when such cases become public, they are very likely to lead to negative publicity.

The article is, however, not only negative; it also contains a positive argument. The authors argue that, by finding an appropriate comparator activity outside the research context, a research regulator may be able to make informed judgements about acceptable risk in relation to specific research projects (see page 445).

WHAT DO YOU EXPECT!?

In the debate about genetic enhancements, it is often claimed that genetic and environmental enhancements of the same human function are morally equivalent—for instance, that there is no morally relevant difference between genetic enhancement of intelligence and sending your child to a good school. On the basis of this claim, it is then argued that the state has no justification for restricting only one of these classes of enhancement.

The interesting paper by Kelly Sorensen attempts to undermine this line of argument by pointing out that there is actually a morally relevant difference between genetic and (some) environmental enhancements. The difference she identifies is a difference in legitimate, settled expectations: parents have legitimate, settled expectations in relation to choice of school, but not in relation to choice of genetic enhancement. She argues further that, if a given practice is new and likely to worsen inequality, society is justified in regulating it, provided there are no prior settled expectations of being allowed to pursue it.

Such expectations are, on her analysis, a matter of social fact rather than of abstract rights, a view she also ascribes to the US Supreme Court (see page 433).

ARE ETHNICALLY TARGETED WEAPONS WORSE THAN OTHER WEAPONS?

Genetic engineering may make it possible to develop biological weapons that primarily target certain ethnic groups. It is often claimed that there is something particularly pernicious about such weapons, for instance because they are “racist”. Jacob Appel analyses whether ethnically targeted weapons are morally worse than other biological weapons, which are untargeted except in the sense that all weapons are used in a targeted way to kill or injure an enemy. Appel argues that targeted bioweapons are not worse than non-targeted ones in the context of warfare. Two main arguments sustain this conclusion: first, that in warfare enemy soldiers are killed primarily because they are enemy soldiers, not because they belong to a particular ethnic group; and second, that “Any harm done as a result of ethnic categorisation is dwarfed by the deaths of thousands or millions that are likely to result from biological warfare…”.

Appel calls for further discussion and, given the provocative nature of his conclusion, this will surely be forthcoming. One outstanding question, for example, is whether Appel’s arguments hold outside classical warfare: are they valid in cases of ethnically based insurgency or, more generally, if bioweapons are used for internal repression? (see page 429)
