Epistemic paternalism in public health
K Grill, S O Hansson

Philosophy Unit, Royal Institute of Technology, Stockholm, Sweden

Correspondence to: K Grill, Philosophy Unit, Royal Institute of Technology, Teknikringen 78B, 100 44 Stockholm, Sweden


Receiving information about threats to one’s health can contribute to anxiety and depression. In contemporary medical ethics there is considerable consensus that patient autonomy, or the patient’s right to know, in most cases outweighs these negative effects of information. Worry about the detrimental effects of information has, however, been voiced in relation to public health more generally. In particular, information about uncertain threats to public health, from, for example, chemicals, is said to entail social costs that have not been given due consideration. This criticism implies a consequentialist argument for withholding such information from the public in their own best interest. In evaluating the argument for this kind of epistemic paternalism, the consequences of making information available must be compared to the consequences of withholding it. Consequences that should be considered include epistemic effects, psychological effects, effects on private decisions, and effects on political decisions. After giving due consideration to the possible uses of uncertain information and rebutting the claims that uncertainties imply small risks and that they are especially prone to entail misunderstandings and anxiety, it is concluded that there is a strong case against the withholding of information about uncertain threats to public health.

  • MCS, multiple chemical sensitivity
  • REACH, Registration, Evaluation, and Authorisation of Chemicals
  • epistemic paternalism
  • public health
  • withholding of information
  • uncertain information


It is usually taken for granted that access to information is a good thing. To be informed means to be closer to the truth and to be able to make informed decisions. However, information can also have negative effects. A much discussed example is the negative psychological effects on a patient of information about a bad prognosis. Previously, a paternalistic approach to such information had a strong influence on medical practice and on medical ethics. According to this approach, a physician often best serves a patient’s interests by withholding negative information that the patient has asked to be told. In the last few decades, such epistemic paternalism1 has for the most part been replaced by a strong emphasis on the patient’s right to know.i

Today, the prevalent presumption is that the patient has a right to know what the physician knows about her condition. This right to know is explicitly related to the right to make decisions about one’s own welfare. Treatment should normally be given only with the patient’s informed consent, which is only possible if she has full access to information about her condition.2

Contrary to this development in medical ethics, there is in some quarters a growing concern over information to the public about general health threats. Sceptics about the value of such information argue that it can be extremely costly both in monetary terms and in terms of the social costs of needless worrying. These apprehensions are aggravated by the increased capacity to detect potential risk factors. As was noted by Viola Vogel, “[n]ew nanoanalytical tools are pushing detection limits down to the single molecule level, which is scientifically a huge success but could be a potential headache to regulators. Ultrasensitive detection of toxins and pollutants will alarm the public.”3

In a recent article on the regulation of chemicals, Bill Durodié suggests that the dissemination to the public of information about potential adverse effects of chemicals has predominantly negative effects, because it contributes to making people anxious and depressed.4 These views are expressed in relation to the European Commission’s proposed new system for the Registration, Evaluation, and Authorisation of Chemicals (REACH).5

In his criticism of REACH, Durodié attacks what he rightly describes as a contemporary consensus on the public’s right to know:

One common assumption in much of the current debate on issues relating to scientific reporting and decision making is that the public have a “right to know” and should be informed whenever and wherever there is any scientific uncertainty associated with products and processes (Durodié,4 p 393).

According to Durodié, “this ‘right’ would appear to suggest that consumers should be permanently bombarded with reams of information” (Durodié,4 p 393). In his view, this will have both short run and long run negative effects. In the short run, “enhanced risk awareness” could “leave us feeling more sorry than safe” (Durodié,4 p 394). In the long run, public anxiety and fear in the face of technological development could stifle progress:

[B]ringing up a generation of people in fear of everyday products, questioning the ability of science to improve their lives, and hence doubting the desirability of innovation and change, has a social cost which has yet to be calculated (Durodié,4 p 389).

It is not obvious exactly what Durodié is critical of. Misgivings about certain public awareness campaigns are seemingly taken to imply a negative appraisal of risk awareness in general and the REACH proposal in particular (Durodié,4 pp 393–4). Such campaigns have, however, little in common with the commission’s proposal to make information about chemical testing available in a central database for “free and easy access”. According to the current proposal of the commission, non-confidential information on registered substances will be kept available on the internet by a proposed new European chemicals agency. This information will include chemical nomenclature; physicochemical data; the results of each submitted toxicological study; no effect levels and no effect concentrations when available; most of the information on the safety data sheet, and guidance on safe use provided by the company.6

The main focus of Durodié’s critique is on the “social and hidden cost that these proposals [by the European Commission] entail” (Durodié,4 p 393). Central to the critique is the (reasonable) assumption that many of the risks that will be inferable from the proposed database will be uncertain, and will remain so even after extensive research. Durodié calls for “a more measured approach to risk communication”, as opposed to “feeding the climate of risk aversion” (Durodié,4 p 394, p 396).

Though not explicitly advanced in favour of withholding of information, Durodié’s sharp critique of the proposed extent of communication implies an argument for some kind of limiting of the dissemination of information about uncertain threats to public health. Being supposedly in the best interest of the people concerned, such a proposal amounts to epistemic paternalism. It is the purpose of the present paper to develop the argument for epistemic paternalism and to evaluate it from an ethical point of view. In section two, we further develop the notion of making information available and distinguish it from actively communicating information. Section three introduces our methodology and some preconditions for the analysis that follow from it. After that we investigate the major types of positive and negative effects of withholding information on public health—namely effects on knowledge (section four); psychological effects (section five); effects on individual decision making (section six), and effects on political decision making (section seven). Our overall conclusions are summarised in section eight.

Making information available

Our focus of attention is on the dissemination of information about public health—that is, general health information that has not been adjusted to the individual recipient’s health status, but nevertheless has (prognostic or preventive) relevance at least for segments of the population. Typical examples are “smoking is bad for your health” and “acrylamide in fried potatoes may increase the risk of contracting cancer”. We will pay particular attention to public health information that is based on less than conclusive scientific evidence, and therefore uncertain.

Our discussion presupposes that a distinction can be made between actively providing a person with a piece of information and (just) keeping that information available for her to seek out herself. In clinical medicine, a reasonable case can be made that the physician has at least a prima facie obligation to actively inform the patient about her health status and ensure that she has understood the information. Inessential details, such as individual laboratory values, should be made available to her upon request. In public health, authorities and companies have corresponding obligations to actively disseminate the more important information. All affected members of the public should be actively informed of serious risks. Hence, if the tap water in an area becomes undrinkable, all those who have access to this water have an obvious right to be informed about this. Lesser risks, such as a small increase in the concentration in tap water of some contaminant that is still far below agreed upon levels of concern, should perhaps not entail active informing, but such information should none the less be made available so that those who seek it may find it.

How to draw the line between information that should be actively communicated and information that should not be actively communicated but only made available is an interesting ethical issue. That is not, however, the topic of the present contribution. Instead, we will discuss whether there is another line to be drawn—namely between on the one hand public health information that should be made available, and on the other hand public health information that should not even be made available but rather be withheld from the public. The proposal in REACH to post information about registered substances on the internet is a clear example of making public health information available without necessarily communicating it actively to potential readers.

Whether or not information should be made available may seem to depend on how the information is managed once made available. We will, however, not discuss the management of information. Our narrow focus on making available or withholding is motivated by several circumstances. First, the problem has practical significance, since information holders such as scientists, companies, and public officials often face the choice of whether to make some piece of information available for further dissemination—for example, by putting out a press release—without having much influence on how the information is managed once released. Second, although ineffective awareness campaigns should obviously be avoided, questions of research priorities and information management are largely empirical and very complex, demanding—for example, a discussion of the role of the free press. The more limited issue of whether or not information should be withheld can, we believe, be successfully dealt with somewhat independently and in the abstract. Finally, a clear presentation and evaluation of the case for withholding information will hopefully shed some light on related, more complex, issues.

It is, of course, not entirely clear what it means to make information available without actively disseminating it. Information can be made available in quite different ways. At one extreme, the information holder can actively inform the public about what type of information is available and how one can find it (without spreading the information itself). At the other extreme, information may be kept available simply by being buried in some unindexed but publicly accessible archive. It is, however, doubtful whether the last-mentioned practice qualifies as making the information available. The REACH proposal of providing information for “free and easy access” on the internet is at the more open end of this spectrum.

An individual scientist cannot keep information to herself without sacrificing the fundamentally important scientific discussion. Strict withholding of scientific information would therefore demand that individual scientists avoid actively providing information to people outside of the scientific community by—for example, answering questions from journalists or other non-scientists (however the division into scientist and non-scientist would be made). The scientific community would also have to develop codes of behaviour that exclude the publication of uncertain public health information, or else the government would have to impose legal restraints on reporting scientific results indicating uncertainties about public health. Such arrangements would have to be internationally agreed upon in order to be efficient. Clearly, such a system would hamper scientific progress by reducing the flow of information within the scientific community as well as in the wider world.

Information may, however, be withheld to some extent without going to such lengths. Questions may be answered, but incompletely. Publications can be made, but not popularised and not discussed in public fora. We will discuss indirect effects of withholding information, with the understanding that its effects will in practice depend on how strict the withholding is and how smoothly the practice of withholding is defended, denied, or downplayed in communication with the public. In the process of defending a practice of withholding, there is always the risk of resorting to outright deception. Deception destroys trust much more thoroughly than the withholding of information. Having been kept in the dark, you could still trust what information has come and will come from the information holder. Confronted with liars, however, there is nothing left to trust.7 Government information or scientific results will be met with deeper scepticism the more the government or the scientists are engaged in deceitful practices. The same thing applies to these groups as to doctors: “Were trust to decline so that patients did not believe what was being said to them, not only reassurance but also genuine support during an illness would become impossible.”8

Methodology

At the core of liberal political thought is the individual’s right to direct her own life. Though a choice between two unknowns would still be a choice, what we value is people’s right to make informed choices about their own lives. The right to direct your life in a meaningful way, to make informed choices about your own life, thus implies some version of a right to know. Much liberal thought, of course, goes on to argue that the right to direct your own life is an absolute right that cannot justifiably be infringed. If the same goes for the right to know, then the case for justified withholding of harmful information fails trivially, since no amount of harm can override such a right. We will not pursue this line of thought further, but rather consider the more difficult case of consequentialist ethics.

Consequentialism is the “hard case” for the defence of the public’s right to information. In order to explore the best arguments for the withholding of information, we will therefore assume a form of methodological consequentialism. In other words, we will assume that the issue of whether or not information on public health should be held available for the public has to be determined by weighing the positive against the negative consequences of making it available.

Methodological consequentialism does not preclude a discussion in terms of rights and duties. Deontological language is convenient to describe the moral status of disseminating information. We use this terminology with the understanding that, from a consequentialist point of view, rights and duties are not foundational moral principles, but rather action guiding rules that are adequate to the extent that they tend to maximise the good. Clearly, from that point of view, to the extent that there is a right to know, this is only a prima facie right that can be overridden by other considerations. For reasons of terminological convenience, we use the phrase “right to know” to denote the right to have access to information rather than the right to actually receive it. A right to know is, in this sense, in principle compatible with a possible right not to actually receive the information (if one chooses not to ask for it). Such a right is sometimes inferred either from the right to direct one’s own life or from some version of a right not to be harmed. For an overview, see Chadwick.9

Furthermore, we will not reduce the value of the consequences to some unitary value, such as happiness or preference satisfaction. Instead, we will operate with values that have intuitive appeal and that may either be considered intrinsic or instrumental in contributing to more basic values. As mentioned in the introduction, four types of effects will be considered. Corresponding to Durodié’s concern with short run negative effects, we will consider effects on people’s psychological wellbeing. Corresponding to his concern with long run negative effects, we will consider effects on political decisions. We will furthermore consider effects on knowledge, primarily for their instrumental value. Finally, we will consider effects on private decisions, where the connection to informed control over one’s own life is most obvious.

We take none of these areas to be supremely important. Rather, we consider each area in its own right, on the assumption that they can in some way be incorporated into a wider consequentialist framework. Should the making available of some pieces or types of information lead to great enough harm, not countered by positive consequences of the same magnitude, such information could justifiably be withheld. This is a paternalistic stand that we find quite reasonable from a consequentialist point of view. The question is thus whether the balance will in practice be in favour of withholding information about public health or in favour of making it publicly available.

Effects on knowledge

Apart from its possible intrinsic value, improved knowledge typically contributes to a person’s decision making capacity. Positive practical effects of public health information will ensue only if access to (correct) information of this kind gives rise to better knowledge. It is therefore important to consider the effects of public health information on a person’s state of knowledge.

We take for granted that under most circumstances access to more accurate information will improve a person’s knowledge. There are cases, however, in which (true) information confuses the recipient or causes her to have a more distorted view of the state of the world. This can happen if the new information connects with false beliefs already held—for example, there seem to be persons who believe that (ionising) radiation is “contagious” so that an irradiated object will itself emit radiation. If a person with this incorrect belief is told that a certain spice product has been subjected to irradiation, then she may conclude that the product is radioactive. Similar mistakes are the subject of numerous studies of risk communication between experts and lay persons. For an interesting exposition see Thomson.10

This problem may also arise in relation to uncertain information. Hence, in our example, if this person receives information saying that there is uncertainty whether or not a specific product has been irradiated, she will probably conclude that there is uncertainty whether or not it is radioactive. It is essential to note, however, that this problem is not specific to uncertain information. A policy of withholding information in order to avoid negative effects of information on knowledge would imply withholding both certain and uncertain information in those areas where misinterpretation is most likely. These would be areas where the meaning of terms differs between expert and lay use, where there is widespread prejudice, or where there is a general lack of knowledge, which is compensated for via superstition or wild guesses. The sheer magnitude of prevalent misconceptions and possible misunderstandings, and—not least—their unpredictability, makes such a project of withholding a rather implausible undertaking.

Let us, however, for argument’s sake, consider the possibility that information about uncertain threats to public health is one of those areas where misunderstandings are most likely to have negative effects on knowledge, or where the effects are most severe. There would then be a case for the withholding of such information in order to avoid adding to existing confusion. How strong the case is depends on the likely outcome of withholding.

An obvious danger with withholding is that awareness of the lack of information will create greater confusion than would the information itself. The knowledge or suspicion that information is withheld can induce false beliefs that involve the attribution of exaggerated weight to withheld information: “if they don’t tell us, it must be important”. Furthermore, such information as would become available under a practice of withholding would not only tend to be partial and unbalanced, but would also be likely to be misinterpreted owing to suspicion and to a lack of relevant background information. The attempt to limit uncertainty might thus create even greater uncertainty about the state of things. As long as information is made available about the uncertain threats to health that science discovers, then at least we know (or can find out should we want to) what these uncertain threats are, and we need not speculate about other threats that scientists might possibly have discovered. If they are not made available, the scope for such speculation is wide open.

Generally speaking, when a person or group of persons have misconceptions that give rise to adverse effects of information, withholding information that can be misconceived seems to be a dubious strategy. The obvious solution is instead to provide information that counteracts the misconception, both in general and in connection with particular pieces of information that may otherwise be misunderstood. In the above example of irradiated food, this means that (i) general education about the physical characteristics of radiation should be promoted, and (ii) information about the irradiation of food products should when necessary be accompanied by information that, as far as possible, forestalls potential misunderstandings.

Psychological effects

Information often threatens our wellbeing and peace of mind. Bad news can be harmful simply by making us aware of displeasing facts. In everyday life, we act under the presumption that such harm is outbalanced by the usefulness of being aware of the state of things. In most cases this presumption is empirically well founded. This is why we do not in general need to hesitate to tell a student that she has failed an exam or a patient that she has diabetes. There are cases, however, where we tend to hesitate, such as telling a friend that her partner cheats on her or a patient that she has a deadly, incurable disease. In these and some other situations it can reasonably be claimed that an affected person would be better off without the information.

How harmful, then, is information about uncertain threats to public health? This is, at least in part, an empirical issue. However, there does not seem to be an adequate information base for answering it. Durodié refers to an article by Winters et al, who have carried out a series of experiments in which adverse symptoms, mainly from the respiratory organs, were induced by inhalation of air with enhanced CO2 concentration.11 By adding an odour to the CO2 enriched air, subjects were conditioned to exhibit these symptoms also when exposed to the odour alone. This associative (Pavlovian) learning was substantially enhanced in subjects who were, prior to the exposure, given a leaflet that, according to the authors, contained “information similar to that found on websites and other media about environmental pollution and a description of a patient with MCS (multiple chemical sensitivity)”.11 Durodié also cites Simon Wessely as claiming that Sweden, with its restrictive chemicals regulation, has one of the highest levels of self reported sensitivities to chemicals in the developed world, but we could not find this claim in the paper referred to. (Wessely S. Psychological, social and media influences on the experience of somatic symptoms. Workshop paper, 1997. Manuscript received from author, via personal communication, in 2004.)

According to Durodié, these results indicate that official recognition of health threats contributes to the proliferation of such diseases as MCS (Durodié,4 p 394). The experiment conducted by Winters et al does not, however, establish a link between information about chemical substances of the type proposed by regulatory agencies and psychosomatic diseases such as MCS, and this for at least two reasons. First, the leaflet contained sweeping formulations about chemicals in general, very different from the specific science based statements about potential problems with specific substances that are intended, for instance, in the REACH proposal criticised by Durodié. Second, MCS was not reported by Winters et al.11 It remains to be shown that MCS or related psychosomatic disorders can be caused by Pavlovian learning or by some other mechanism for which results from Pavlovian learning are relevant.

As with effects on knowledge, the psychological harms of information need to be evaluated in relation to the harms of withholding information. Withholding must be organised in some fashion and indirect effects will ensue. Lack of information may induce not only confusion but also anxiety, perhaps in particular if one is aware of the fact that information is being withheld in order to prevent anxiety. We find it hard to believe that the type of anxieties that may contribute to MCS would be relieved if the present situation were replaced by one in which the public knew or suspected that there was (uncertain) scientific information about possible threats to their health that were being withheld from them. Furthermore, it is difficult to see how professionals could bring these patients to trust them under such circumstances.

In conclusion, the psychological effects are to a large extent unknown. Epistemic paternalism may lead to a situation in which the public knows that there is scientific uncertainty about the health effects of certain chemicals, and also knows that the identities of these chemicals are not disclosed to them. We propose that the latter situation is likely to have more serious psychological effects than one in which the public receives full and correct information about scientific uncertainties. Hence, there is a real possibility that epistemic paternalism will be counterproductive and withholding must be evaluated with this possibility in mind.

Effects on individual decision making

Although Durodié is certainly right that the potential threats to public health from chemicals in products and processes are often uncertain, it does not follow that these uncertain risks are better ignored. Without going into detail, Durodié dismisses uncertain risks as an object of rational decision making: “The emphasis, promoted by some, on what could be, rather than on what is, removes human action, understanding, competence, and will from the equation” (Durodié,4 p 393). This quote catches in a nutshell the assumption that uncertainties are not part of “what is”.

Durodié claims to be writing “in the spirit of” an earlier article by Chauncey Starr, which deals with uncertainties in a somewhat more stringent manner. Durodié quotes with approval Starr’s description of such alleged threats to health as global warming, radiation, and genetic modification as examples of “amplification of a minor popular concern into an apocalyptic dogma”.12 Starr is concerned with public fears that are “hypothetical”—meaning that “the guesstimate of either their probability, their consequences, or both has an indefinitely wide range of uncertainty”. The wide range of uncertainty is implicitly assumed to imply that the risks are certainly minor.12

In opposition to the above, it must be emphasised that uncertainty does not equal complete lack of knowledge. When scientific findings imply that some apparently harmless substance might be hazardous, this means something more than the mere logical possibility that it may be hazardous. The range of uncertainty is not indefinitely wide. Maybe the substance has proven hazardous to rats, or maybe its molecular structure indicates that it can be harmful. Such indications, and others that scientists in the field deem to be relevant for risk assessment, do tell us something. A scientific judgment that there is an uncertain risk associated with some product or process is a qualified judgment, not to be confused with the situation where no such judgment has been made, and certainly not with the situation where a product or process has been judged harmless.

Hence, when the risks associated with the use of some chemical, for example, are uncertain, then this means that the situation with respect to this substance differs both from the situation of substances that are known to have serious adverse effects and from that of substances that we can be reasonably certain have no such effects. For a rational decision maker who strives to avoid negative health effects, these differences should have practical consequences. Everything else being equal, she should prefer the substance known to be harmless to the substance with unknown properties, and the latter to the substance known to have serious negative effects.

Information is directly useful to the degree that it actually guides concrete choices. Being informed, however, also contributes to your capacity to make informed choices whether or not you use it. The capacity to make informed choices and thus direct your own life is what is usually called autonomy—for example, if you wish to avoid uncertain health threats, but do not know which those threats are, you are effectively prevented from acting in accordance with that wish. In general, information is a prerequisite for meaningful choice.

Good reasons can be given for avoiding uncertain threats to health when the costs of doing so are sufficiently small. Avoiding uncertain threats to health will in the long run improve our chances of staying well, although we do not know to what degree. The point of balance will differ between individuals depending on the values on which they base their decisions. Starr mocks the precautionary principle, saying it is caused by “a primitive instinct to suspect the unknown” (Starr,12 p 804). Individuals’ negative evaluations of risks and uncertainties can more neutrally be referred to as aversions. Granted that individuals should be allowed to choose what products to use and what activities to take part in based on their own values, including their own degrees of risk aversion and aversion to uncertainty, they can obviously make use of information about the possible health risks connected with different products and activities.

Effects on political decision making

In a democratic society the people must have access to adequate information on the performance of the government. To be able to evaluate the government, people need to be informed of the consequences of government policy. Issues such as public health, consumer safety, and the status of the natural environment are common and fully legitimate citizen concerns. Another use of information about threats to public health is thus political.

Of course, citizens are never fully informed, and the government can be evaluated on the basis of what people do know even if this is not all there is to know. However, every bit of information that is concealed makes citizens less informed and thus less able to fulfil their political role. People are interested in different matters and attach different degrees of importance to different types of facts. Each citizen should be free to look into those aspects of government that she finds most important or most neglected, while at the same time ignoring other aspects. To accept, however, that not everyone knows everything is one thing; excluding some matters from public scrutiny altogether is a completely different matter. (There are policy areas, such as foreign and security policy, where information is withheld from the public for reasons that are not relevant to issues of public health. We will not discuss these practices here.)

Most obviously, citizens need information about uncertain health threats in order to be able to participate in the political process as it concerns regulatory issues. They also need such information in order to be able to discuss the government’s research priorities and the priority given to research in comparison with other areas. Information about uncertain threats to public health is in fact an integral part of the information base on which political choices should be made. To the extent that there is such a thing as a citizen right or duty to make informed choices among political alternatives (parties, presidential candidates, or alternatives in a referendum), there is also a derived right or duty to be informed of the relevant facts.

In an open society, the sharing of information is a driver of progress, not an obstacle. New discoveries are constantly made in all segments of society. The sharing of these discoveries, and the building on each other’s achievements, is what makes possible the rapid technological development that we currently enjoy. It was this sheer effectiveness of information sharing that prompted John Stuart Mill, despite his otherwise sharply critical view of centralised government, to champion “the greatest dissemination of power consistent with efficiency; but the greatest possible centralisation of information, and diffusion of it from the centre”. Mill believed that the central government “should have a right to know all that is done, and its special duty should be that of making the knowledge acquired in one place available to others”.13 Particularly in the age of the internet, one need not endorse the proposed role of central government in order to sympathise with the underlying purpose of this proposal: to improve the quality of decisions throughout society by making as much information as possible available to all decision makers. As Mill would have been eager to point out, in a democracy all citizens are decision makers.

Durodié and Starr fear that technophobia will grow and that progress will suffer. But we need to avoid not only technophobia but also technomania; a proper balance must be struck between safety and development. To the extent that the public should be able to take part in political decision making, they need access to full scientific information, including information about scientific uncertainties.


In summary, we have found epistemic paternalism about public health to be a very problematic position, possibly to the degree of being self-defeating. Withholding information about threats to public health interferes with people’s ability to make informed choices in both the private and the political parts of their lives. Furthermore, such a practice is likely to give rise to more confusion and unwarranted anxiety than it can prevent. This applies even if the practice of withholding is restricted to information that is uncertain.

The argument for epistemic paternalism would be stronger if great uncertainty about health effects implied small risks. Unfortunately, things are not that simple. History is rich with examples of threats to health that were first considered non-existent, or simply not considered, were later regarded as uncertain, and are now known to be harmful. Tobacco and asbestos are among the best known examples.

It should be conceded that the policy we propose has the unavoidable disadvantage that warnings will sometimes go out about things that are later found to be quite safe. An instructive case is the famous Berg letter to Science in 1974, in which several distinguished scientists went public with their fear of the “potential hazards” of in vitro recombination of genes.14 The letter initiated a large and sometimes untidy public and scientific debate about the risks of gene splicing. Since scientists, including the authors of the Berg letter, now agree that the potential hazards were not actual, we can conclude that the spread of these scientific doubts did little good and caused much anxiety and, possibly, false beliefs. This does not mean, however, that it was wrong to inform the public of the potential problem. It lies in the nature of uncertainties that they may later become certainties, in either direction. The appropriateness of informing about uncertainties cannot be evaluated on the basis of scientific results that were unavailable at the time. We agree with Keith Boone, writing about the controversy in retrospect, that the strong reactions toward the Berg authors, the later triumphant demeanour of the critics, and the repentant attitude of the authors were unfortunate and misplaced. Events such as the Berg letter are an integral part of an openness that has some negative consequences but is on the whole superior to secrecy. The controversy over the Berg letter was, it should also be noted, amplified by earlier policies of secrecy in the USA, which had diminished trust in public and professional figures generally.15

Withholding, like manipulation, might be the best way to achieve a short term, well defined objective, such as avoiding anxiety. In the long run, however, openness and truthfulness have better consequences. The REACH proposal to make information about chemical testing available to the public is a prominent example of such openness. To lessen the harm done by information, we should look at how information is communicated, doing what we can to avoid ill considered campaigns and exaggerated media reports. Even if such efforts prove unsuccessful, however, it is still better to make information about uncertain threats to health available to the public than to adopt a policy of keeping it secret. As Roger Higgs so eloquently put it: “The antidote to fear is not silence but open discussion”.8


We would like to thank an anonymous reviewer for valuable comments.



  • i The term “epistemic paternalism” is borrowed from Goldman. He first introduces the term as referring exclusively to withholding of information in the subject’s best epistemic interest (Goldman,1 pp 118–19), but later includes extra-epistemic reasons for withholding under the same concept (Goldman,1 p 127).
