Biosecurity and the division of cognitive labour
Thomas Douglas, Associate Editor


The last 12 years have seen historically high levels of interest in biosecurity among life scientists, science policymakers, and academic experts on science and security policy. This interest was triggered by the 9/11 terrorist attacks, the ‘anthrax letters’ attack of the same year, and two virology papers, published early last decade, that were thought to raise serious biosecurity concerns.1 Ethicists have come relatively late to the game, but, in recent years, a lively debate has developed on ethical issues raised by biosecurity policy, and, more generally, on the ethics of producing and disseminating ‘dangerous’ biomedical knowledge. Unsurprisingly, this debate has taken on an increased sense of urgency over the last 18 months as the journals Science and Nature, the United States National Science Advisory Board for Biosecurity, and the World Health Organization, among others, have been considering whether and how to publish two academic papers reporting means of enhancing the transmissibility of H5N1 influenza, or ‘bird flu’ (see, for discussion, Evans' paper in this issue).

We hope that this issue of the Journal of Medical Ethics will substantially advance the emerging ethical debate in this area. The issue features five articles on the ethics of biosecurity: a feature article, by Allen Buchanan and Maureen Kelley (see page 195, Editor's choice); three brief replies to this article, by Michael Selgelid (see page 205), Thomas May (see page 206), and Nicholas King (see page 207); and a stand-alone paper by Nicholas Evans (see page 209), which discusses the recent H5N1 controversy and analyses the appeals to scientific freedom that have been made by some of its protagonists. In this ‘concise argument’, I can, regrettably, comment only on the Buchanan and Kelley piece.

Buchanan and Kelley on the framing of the biosecurity debate

Buchanan and Kelley aim to broaden and reframe the existing debate on biosecurity. They begin by noting that ‘science policy discussions [regarding biosecurity] have focused largely on “the dual use problem”: how to preserve the openness of scientific research while preventing research undertaken for the prevention or mitigation of biological threats from being used to cause harm by non-state terrorists or aggressive dictators’. On this characterisation of ‘the dual use problem’, biomedical scientists must consider whether and, if so, to what extent the commitment to ‘open science’ ought to be compromised.

They then offer a thoroughgoing critique of this framing, of a number of assumptions that appear to underpin it, and of some of the policies and practices to which it has given rise in the USA. More positively, they offer several interesting proposals for reforming existing debate and policy. Let me discuss two of these here: one conceptual, the other disciplinary.

The dual-use problems and optimisation

The first, conceptual proposal is to distinguish clearly between two variants of the dual-use problem, which Buchanan and Kelley call DU1 and DU2. The former arises when biological research could be used by ‘non-state terrorists or aggressive state actors’ to inflict harm; the latter when it could be used to inflict harm ‘by one's own government’ (on the assumption that one is a citizen of a more-or-less non-aggressive state, though one that could potentially become aggressive). We need to distinguish these, the authors argue, because measures that mitigate one dual-use problem may fail to mitigate, and may even exacerbate, the other.

However, recognising these two distinct dual-use problems should be only a first step, in Buchanan and Kelley's view. DU1 captures a possible conflict between two values: scientific openness and security from terrorism and attack by aggressive states. DU2 adds a third value to the analysis: security from ‘non-aggressive’ states. But these are not the only values that are or could be substantially affected by biosecurity policy. Other relevant values include the protection of human and animal research subjects, the alleviation of infectious disease among the world's poor, and the restraint of government power. Debate on biosecurity policy should see its questions as requiring not (or not merely) the resolution of one or two bilateral trade-offs, but rather the resolution of a far more complex value-optimisation problem: ‘it is not simply that there are two dual use problems, not one … the more fundamental conclusion is that the dual use problems … are only aspects of a larger optimisation problem’.

The relevance of social epistemology

A second proposal made by Buchanan and Kelley is to bring social epistemology to bear on biosecurity discussions. Social epistemology is, roughly, the study of how institutional design and behaviour affect the creation of knowledge. Its importance has been a theme in much of Buchanan's recent work. Indeed, he has sought to demonstrate not only that social epistemology is important, but that its importance extends beyond the realms in which it has typically been applied. Social epistemology has traditionally focused on the ways in which scientific institutions aid or thwart the acquisition of empirical knowledge, but Buchanan has argued that the discipline also has implications for the development of moral knowledge—or at least, justified moral beliefs—and has thus called for the development of a social moral epistemology.2–4

Though they do not explicitly frame it this way, Buchanan and Kelley's feature article could be viewed as an example of social moral epistemology at work. Decisions about how to resolve DU1, DU2 and the broader optimisation problem of which they are a part are moral decisions, and much of their article explores ways in which institutional design could influence the quality of the moral decisions that society is making, and will make, in the face of these problems.

Their discussion here is rich and wide-ranging, and an editorial summary could not do it justice. But let me mention one lesson that they draw: that we should not expect all agents and all institutions to adopt the same stance towards biosecurity. For example, though Buchanan and Kelley broadly endorse attempts to introduce risk-benefit analysis into biosecurity policy, they worry that proponents of this approach often ‘propose that a number of different parties, who in fact occupy quite different roles, including the scientific researchers themselves, scientific journal editors and perhaps government officials as well, should follow the same risk-benefit assessment guidelines and apply them to the same thing, namely, the dissemination of particular research results… [But b]etter outcomes might be achieved if different agents, depending on their institutional roles, engage in different activities, following different guidelines’.

This follows from a more general thought: that overall institutional goals are often best realised when ‘[d]ifferent agents, occupying different roles, … contribute to the achievement of institutional goals by acting on different and even sometimes conflicting norms. These norms do not direct agents to “achieve institutional goal G, G1, etc”, but instead prescribe specific actions or processes, which, taken together in the overall operation of the array of institutions, tend to promote institutional goals’.

This idea has often been referred to as ‘the division of cognitive labour’, by analogy with ‘the division of labour’ that is normally thought to be necessary in economic systems. In economics, it is standardly thought that the best results can be achieved when an economy assigns different goals and norms to different economic agents, rather than having them all strive directly to bring about the socially optimal outcome. Buchanan and Kelley argue that an analogous point may hold for biosecurity.

A tension?

While strongly endorsing Buchanan and Kelley's call for debate on biosecurity policy to heed the relevance of social (moral) epistemology in general, and of a possible division of cognitive labour more specifically, I do wonder whether this second proposal might be in tension with their first—the proposal that debate on biosecurity policy should acknowledge a wide range (perhaps the full range) of values that might be affected by such policy, unifying these into a more general value-optimisation model.

Buchanan and Kelley do consider and reject one objection to their value-optimisation model. This objection maintains that the USA is currently in a state of national emergency in which the threats posed by bioterrorism are so great that they justify ignoring all other values, with the exception of the value of open science.

But the authors’ appeal to the idea of a cognitive division of labour might seem to pose another threat to their value-optimisation approach. One might argue that debate on biosecurity policy should address only a limited range of values—say, those expressed in DU1 and DU2—because this would be part of the most efficient overall division of responsibilities; other values, such as concerns for the global burden of disease and the protection of research subjects, are best dealt with elsewhere. For example, perhaps they are best dealt with by those engaged in debating or making policies on research governance, intellectual property and international aid.

Buchanan and Kelley argue persuasively that these latter values are morally relevant to biosecurity policy, but given their endorsement of a possible need for a cognitive division of labour, a further step would be needed to show that these values should actually be incorporated into the biosecurity policy debate. If this debate were to narrowly focus on DU1 and perhaps DU2, this might lead to policies that threaten research subjects and the global poor. But perhaps the most efficient response to this problem would be to alter policies on research governance, intellectual property and international aid to compensate for this effect, not to complicate the biosecurity debate itself. Of course, it may turn out that the complex value-optimisation approach to biosecurity policy is what social epistemology would recommend. The point is simply that this can't be assumed. Once one adopts the social epistemology perspective, the view that specific policy debates should ignore some morally relevant values becomes a live possibility.

I believe that, to resolve this tension—or, perhaps more accurately, possible tension—Buchanan and Kelley may need to distinguish between two different biosecurity debates. One is the debate currently taking place between individuals seeking to influence biosecurity policy in the USA and elsewhere. The other is the sort of theoretical debate that moral philosophers might engage in—debate that aims not to influence policy (at least, not directly) but rather to answer questions such as ‘what is the best biosecurity policy?’ and ‘what fundamental moral considerations bear upon this?’ At the level of theoretical debate, it seems clear that Buchanan and Kelley's first proposal is correct: this debate should consider all values that bear on biosecurity issues. But Buchanan and Kelley are at least as interested in the more practical debates that are actually taking place and are currently shaping policy, and here, the optimality of their optimisation approach is less clear.

References
