Recent decades have witnessed significant changes in the management of medical and scientific ethics. In Britain and worldwide, members of several professions consider contentious issues that were traditionally the preserve of doctors or scientists. In government committees and ethical review boards, best practice is now determined by a diverse array of actors, including philosophers, lawyers, doctors, scientists, health-care managers, theologians, social scientists, and representatives of patient and charitable groups. Instruction in ethics, once a matter of professional etiquette, takes place on dedicated courses, with emphasis on law and moral philosophy. A growing body of interdisciplinary journals discusses problems that were previously encountered in the correspondence pages of the Lancet or the British Medical Journal. And in media discussion of biomedical research and clinical practice, the relevant ethical issues are now more likely to be highlighted and discussed by philosophers and lawyers than by doctors or scientists.

Like the approach it signifies, the name given to this enterprise is a recent invention. The term ‘bioethics’ was first coined in 1970 by the American biochemist Van Rensselaer Potter. However, it designated an approach that is unfamiliar to us today. For Potter, ‘bioethics’ involved a system of humanist ethics derived from biology and medicine – a ‘science for survival’ (Potter, 1970). Quite independently, only months later, the obstetrician André Hellegers and the politician Sargent Shriver used the same term to name a new Institute for the Study of Human Reproduction and Bioethics at Georgetown University, a private Jesuit institution in Washington DC. Hellegers and Shriver's definition is the one we recognize today. To them, ‘bioethics’ constituted the examination of biomedicine by people outside its professional confines (Cooter, 2004). As Stevens has outlined, it quickly became the label for growing external scrutiny of science and medicine, with philosophers, lawyers and theologians serving on federal commissions and working in dedicated centres for bioethics (Stevens, 2000). Few would deny that this new configuration has had a sizeable impact on public and political life in the past 30 years. As Rose details, in regulatory commissions, national and international committees, and in the public discussion of professional practices, we have ‘witnessed a bioethical encirclement of biomedical science and clinical practice’ (2007, p. 30). And this, as Salter argues, represents a fundamental shift in the location and exercise of biopower: with new actors determining the development of policies and biomedical technologies that, in turn, play a crucial role in governing the health of individuals and populations (2004).

Bioethics appears to be a rich subject for historical investigation: revealing new regulatory strategies, changing relations between professions and new notions of ethical expertise. But our understanding of its emergence remains sketchy. As Cooter and Rosenberg detail, most historical accounts have been written by bioethicists themselves (Rosenberg, 1999; Cooter, 2000). These adhere to an explanatory framework one sociologist has recently called the ‘origin myth’ model (Armstrong, 2007). They claim that scholars in several fields, enthused by 1960s civil rights and countercultural politics, increasingly began to take an interest in issues that had previously been monopolized by doctors and scientists; and they present bioethicists as oppositional critics of the biomedical establishment – responding to unprecedented ethical dilemmas on behalf of patients and society, and reviving disciplines like moral philosophy in the process (for example, Toulmin, 1982; Jonsen, 1998; Harris, 2001).

These celebratory narratives have not gone unchallenged. In recent years, several anthropologists, sociologists and medical historians have criticized bioethics. These dissenting voices roughly fall into two camps. Some critique bioethics for an overriding reliance on formalistic philosophical principles that, they claim, are divorced from the expectations of the patients it claims to represent. These critiques do not challenge bioethics in and of itself, but argue it would be better served by adopting ethnographic, sociological or historical perspectives (for example, Kleinman, 1999; Belkin, 2004; Lopez, 2004). Others, meanwhile, attack the bioethical enterprise itself: claiming that instead of providing a challenge to biomedicine and acting on behalf of patients, it serves to insulate science and medicine from threatening questions about new technologies and, through an increasingly bureaucratic process, provides ‘ethical warrants’ that allow research to proceed (Rosenberg, 1999; Stevens, 2000; Evans, 2002). To Francis Fukuyama, ‘bioethicists have become nothing more than sophisticated (and sophistic) justifiers of whatever it is the scientific community wants to do, having enough knowledge of Catholic theology or Kantian metaphysics to beat back criticisms by anyone … who might object strenuously’ (2002, p. 204).

But both forms of critique regularly fail to identify the broad assumptions and mechanisms that underpinned the emergence and growth of bioethics in particular times and places. As Ashcroft claims, a more interesting and challenging analysis would begin by posing the question: ‘if bioethics is the answer, what was the question?’ (2004, p. 158). This, he continues, would then allow us to investigate precisely what interests were served, and linked, by external arbitration of biomedicine and, likewise, to identify the various parties who stood to benefit from the development of particular answers. Seeking to answer these questions through self-serving ‘origin myths’ will not suffice. For one, we cannot simply explain the growth of bioethics by pointing to the inherently controversial nature of new technologies or practices. Issues such as animal experimentation, reproductive medicine and human experimentation caused discord well before the 1970s, but did not necessitate outside arbitration. In other words, the ‘bioethical’ aspects of particular practices and objects were not self-evident, but were the product of specific socio-political contexts and professional agendas in the late twentieth century (Cooter, 2000). But, additionally, when looking to the broader factors that made specific issues ‘bioethical’, we cannot fall back on references to countercultural and civil rights politics. Although they may partly account for the growth of bioethics in America, they cannot account for its emergence elsewhere.

This is certainly the case in Britain. Here, ‘bioethics’ was considered an American neologism during the 1970s, and doctors and scientists continued to control professional ethics and decision making (Cooter, 2004). Bioethics did not gain currency – either as a term or an approach – until the 1980s, when philosophers and lawyers became actively involved in the public discussion and, crucially, the regulation of biomedicine. It soon proved pervasive. By 1990, Britain had a dedicated bioethics council, prompting the Guardian to talk of an ‘ethics industry’, and its growing band of external arbiters had become respected public and political figures (Anon, 1991a).

This article investigates the broad factors that created the demand for outside involvement with science and medicine in Britain, and charts how particular individuals fashioned themselves into ‘bioethical’ experts. My analysis centres on the moral philosopher Mary Warnock, who was appointed chair of a government inquiry into human fertilization and embryology in 1982, and went on, as Jasanoff notes, to become ‘synonymous with British bioethics’ (2005, p. 152). Although Hedgecoe has rightly argued that we should avoid focusing on critical events and notable figures in the history of bioethics, as it carries the danger of recapitulating ‘origin myths’, I believe that studying the debate that preceded Warnock's appointment, and her subsequent discussion of reproductive medicine, provides an excellent window onto the interplay between professional actions and socio-political concerns that fostered the growth of the British ‘ethics industry’ (Hedgecoe, 2009). As we shall see, Warnock's selection as chair of the government inquiry came amidst, and reflected, growing calls for external oversight of science and medicine that dovetailed with the incumbent Conservative government's desire to make professions accountable to newly empowered end-users – be they patients, parents or shareholders (Rose, 1993, 1999; Power, 1997; Strathern, 2000). Through an analysis of official papers, I show that officials at the Department of Education and Science prioritized the selection of ‘an outside chair’ to lead their inquiry. This task, as we shall see, presented Warnock with the chance to fulfil her long-standing ambition of applying philosophy to practical affairs. What is more, she used her position as chair to publicly endorse greater oversight of science and medicine, arguing the public were ‘entitled to know, and even to control’ professional practices (1985a, p. xiii). With this in mind, and by charting how the lawyer Ian Kennedy led calls for appointment of an ‘outside’ chair in the first place, I claim that we need to be attuned to the central role that putative bioethicists played in generating and harnessing the demand for bioethics in Britain, as much as they reacted to it.

But, crucially, I also show that Warnock did not present external oversight as opposed to biomedical interests. She instead promised researchers it would confer legitimacy on their work, by proving to politicians and the public that it could proceed ‘without being put to morally intolerable uses’ (1988a, p. 1627). Many clinicians and researchers agreed: believing new oversight regimes would make their research ‘socially palatable’, and supporting Warnock's call for a national bioethics council (Anon, 1983b). I argue that Warnock's importance lay in the way she presented external arbitration as an essential intermediary between the political demand for accountability and the professional desire for legitimacy. This historical analysis strengthens the view that we can explain the demand for bioethics by seeing it, to quote Rose, as ‘a necessary supplement to the imperatives of political decision making concerning the life sciences’: which claims to represent the public interest, while simultaneously legitimating biomedical research (Salter and Jones, 2005; Rose, 2007, p. 30).

Mary Warnock, Practical Philosophy and ‘Test-Tube Babies’

Mary Wilson was born in Winchester on 14 April 1924, seven months after her father died from diphtheria. Despite being one of six children in a single-parent family, the young Mary enjoyed a comfortable childhood. Her family remained wealthy thanks to her maternal grandfather, a German-born banker, and she was educated at the prestigious St Swithun's School in Winchester (Warnock, 2000). After leaving this school in 1940, she spent three terms at Prior's Field School in Surrey, which was founded by the Huxley family and counted Julian and Aldous Huxley among its former pupils. In 1942 she won a scholarship to Lady Margaret Hall, Oxford, to study Classics. It was here that she met a fellow student, Geoffrey Warnock, who went on to become a renowned philosopher and Vice-Chancellor of Oxford. In 1949 they married, and the same year the new Mrs Warnock was appointed lecturer in moral philosophy at St Hugh's College, Oxford.

While Mary Wilson was growing up in interwar Britain, the ‘test-tube baby’ that later brought her to public attention featured in debates on population health, eugenics and industrial culture. In his 1924 essay Daedalus, or Science and the Future, the geneticist J.B.S. Haldane predicted that advances in tissue culture techniques would soon underpin the fertilization and growth of human embryos in laboratories, a process he termed ‘ectogenesis’. Haldane predicted that by the late twentieth century few children would be ‘born of woman’, and that ectogenesis would overcome biological degeneration by permitting selection of embryos that were ‘undoubtedly superior to the average’ (1924, p. 64). Reviews of Daedalus claimed that Haldane's predictions were justified ‘if what has been done with tissue culture is remembered’ (Anon, 1924, p. 740). And biologists who cultured mammalian embryos, such as Thomas Strangeways, publicly stated that the idea of the ‘test-tube baby’ was not inherently impossible (Strangeways, 1926). At the same time, however, others presented test-tube babies as symbols of subordination in modern industrial culture. Aldous Huxley's Brave New World, for one, envisaged a dystopian future where ectogenesis displaced humans from reproduction, with dehumanized clones mass-produced on production lines like Henry Ford's Model-T (Wilson, 2005).

Although the social implications of ectogenesis, and science in general, were publicly discussed during the 1920s and 1930s, philosophers were absent from these debates. The only one who seemed engaged with these concerns was the analytic philosopher Bertrand Russell, who warned of the harmful union between science and authoritarian politics in Icarus (1924) and The Scientific Outlook (1931), and who wrote on a range of political and moral issues into the 1960s. But Russell, tellingly, did not consider these books to be philosophy. Partly owing to his horror at the First World War, when the Germans and the British appealed to incommensurate notions of a just cause, and partly owing to efforts to endow philosophy with the same objective stance as the sciences, he claimed that moral judgements should be regarded as non-verifiable expressions of attitude that fell outside the domain of philosophy (Pigden, 2003). The belief that moral statements were subjective displays of emotion – no more objective than hissing or handclapping – underpinned an austere view of philosophy that lasted until the 1970s. Following Russell, influential philosophers like Ludwig Wittgenstein, Alfred Ayer and Charles Stevenson ignored substantive issues and occupied themselves with meta-ethical investigation of what counted as a ‘moral’ judgement. In 1954, Ayer claimed that people should not look to moral philosophers for practical guidance, as all they could properly demonstrate was that moral statements were ‘expressions of attitude and not statements of fact, and consequently that they cannot be true or false’ (1954, p. 246).

This standpoint clearly irked Mary Warnock. In a 1960 book on ethics, she complained that moral philosophy ‘as a serious subject has been left further and further behind’ by ‘the refusal of philosophers in England to commit themselves to any moral opinions’ (1960, p. 203). Disaffected with the state of her subject, and having decided that she was an ‘entirely unoriginal thinker’, Warnock took the position of headmistress at the private Oxford High School for Girls in 1966 (Warnock, 2004, p. 14). When she returned to academia in the mid-1970s she was more optimistic, declaring in a new book on ethics that ‘the most boring days were over’. Philosophy, she noted, was gradually becoming ‘a practical subject, and therefore more urgent and interesting’ (1978, p. 139). Warnock's newfound confidence reflected the fact that philosophers on both sides of the Atlantic had begun to discuss practical issues: motivated partly by radicalized students who wanted to discuss the rights and wrongs of the Vietnam War – not issues of meta-ethics (O’Neill, 2009; Warnock, 2009). Warnock had recently been presented with the chance to engage with practical affairs herself. In 1972, she was appointed to a committee of inquiry into regulation of the medical profession, chaired by the physicist Alec Merrison. And in 1974, she was appointed chair of an inquiry into special education by Margaret Thatcher, then secretary of state for Education. This committee issued its report in 1978, and its recommendations were central to the Conservatives' 1981 Education Act.

In the United States, the philosophical engagement with substantive issues extended to an active role in the regulation of biomedicine. In 1972, the New York Times reported that researchers investigating the natural history of syphilis had intentionally withheld treatment from 400 African Americans with the disease in Tuskegee, Alabama, since 1932. In 1974, in response to the Tuskegee study and other controversies, including the disclosure of non-consented research on institutionalized children, President Nixon signed the National Research Act, which established a National Commission for the Protection of Human Subjects of Biomedical and Behavioural Research (Fox and Swazey, 2008). The Act stipulated that no more than five of the Commission's 11 members should be scientists or doctors – with the majority to be chosen from philosophy, law, sociology, theology, government and the general public. Several of these ‘outside’ members, including the British-born philosopher Stephen Toulmin, went on to become prominent authorities on scientific and medical ethics. And this, as Rothman remarks, ‘made apparent that the monopoly of the medical profession in medical ethics was over’ (1990, p. 189).

The situation in Britain, however, differed markedly. Here, research practices were also called into question following the 1967 publication of Maurice Pappworth's Human Guinea Pigs; but in contrast to the United States the regulation of clinical or basic research was not redefined as a matter that required external input (Ashcroft and Dixon-Woods, 2008, p. 382). Evoking the emphasis on clinical autonomy struck as part of the 1948 settlement that created the NHS, prominent doctors successfully argued that legal and ethical responsibility should remain ‘firmly on the shoulders of the medical profession’ (Hedgecoe, 2009, p. 338). Although growing numbers of hospitals formed research ethics committees in the late 1960s and 1970s, only one-fifth of the committees surveyed in 1972 contained a non-medical member (ibid.). And Merrison's inquiry on medical regulation endorsed this state of affairs: arguing that since ‘it is the essence of professional skill that it deals with matters unfamiliar to the layman’, the public would be best served by a regulatory body composed predominantly of doctors (Anon, 1975, p. 188). When scrutiny and criticism of professional actions arose, it came from within science and medicine: from the student doctors who ran regional Medical Groups, or leftist scientists in the British Society for Social Responsibility in Science (Whong-Barr, 2003; Werskey, 2007). Those philosophers and theologians who began to engage with ethical issues in this period – such as Alastair Campbell, Raymond Plant or Robin Downie – did not challenge professional paternalism and had no say in the regulatory process. Their role instead lay in teaching ethics to doctors, nurses and social workers: helping improve professional practices by outlining and clarifying particular moral dilemmas (Campbell, 2009).

The re-emergence of ‘test-tube babies’ encapsulated the differences between Britain and the United States in this period. In February 1969, the Cambridge physiologists Robert Edwards and Barry Bavister, and the Oldham obstetrician Patrick Steptoe, announced the first in vitro fertilization (henceforth IVF) and maturation of human oocytes (Edwards et al, 1969). Yet despite the growing scrutiny of basic and clinical research, IVF was not considered problematic in Britain during the 1970s. Media unease only peaked briefly after Edwards et al's 1969 announcement, with newspapers predicting that communist states would use the technique to clone armies of ‘supermen’ (Turney, 1997). In 1972, the biologist Steven Rose, founding member of the British Society for Social Responsibility in Science, argued that discussion of IVF diverted attention from the more pressing issues in medical ethics, such as the ‘sharp disparities in the application of medical care’ between social classes (Anon, 1972, p. 342). Indeed, IVF was not even considered the most problematic reproductive technique in this period. Attendees of a 1973 symposium agreed it would raise fewer ethical and legal problems than artificial insemination by donor (henceforth AID), because it did not involve a third party and did not raise questions surrounding paternity and anonymity (Wolstenholme and Fitzsimmons, 1973).

The only ‘outsider’ who discussed the ethics of IVF in any detail was the theologian Gordon Dunstan. In his 1974 book The Artifice of Ethics, Dunstan argued that the overriding ethical priority in IVF involved ensuring that sperm and egg were brought together responsibly in vitro; but he held that this duty should also underpin AID, as well as the actions of couples looking to conceive naturally (Dunstan, 1974). What was more, Dunstan saw no problem with the possible uses that could be made of in vitro embryos – including experimentation. He claimed experiments on embryos were vital ‘for research into recesses otherwise inaccessible … to study embryonic growth, for instance, with a view to detecting the origin of disorders and to find, perhaps, the means to correct or prevent them’ (ibid, p. 67). In the absence of any sustained critique, Robert Edwards commanded the ethical discussion of IVF himself. In public lectures and journal articles, he claimed that an infertile couple's right to have children outweighed any potential objection. ‘Fertilization in vitro followed by reimplantation into the mother does not pose any moral problems’, he argued, ‘and the right of couples to have their own children should not be challenged’ (1974, p. 16).

When Louise Brown, the first ‘test-tube baby’, was born in Greater Manchester during July 1978, the Guardian commented on the absence of ‘moral or ethical outrage’ (Tucker, 1978, p. 11). Press coverage was overwhelmingly positive: greeting the ‘Baby of the Century’ and claiming IVF provided hope to thousands of infertile couples (Turney, 1997). As the British Medical Journal detailed, this differed from the United States, where the major question was ‘not whether the baby would be a boy or girl but whether its presumably unprecedented manner of coming into being is ethical’ (Culliton and Waterfall, 1978, p. 1270). The contrast was held up as evidence of a distinctly American phenomenon: the flowering of bioethics. The British Medical Journal reported that American politicians increasingly based federal policy on the opinions of theologians, philosophers and lawyers – who acted ‘as society's conscience in matters once left entirely to the medical profession’ (ibid.). And the strident opinions of some bioethicists, as well as a politically active pro-life lobby, ensured American attitudes to IVF were far more critical. The Princeton theologian Paul Ramsey, for one, told Congress that IVF ‘should not be allowed by medical or public policy in the United States – not now, nor ever’ (ibid.).

‘We Must ALL Have a Say on Test Tube Babies’: Promoting External Oversight in the 1980s

But the British enthusiasm for IVF soon dissipated. By the early 1980s, newspapers claimed the procedure raised social and moral dilemmas that the Daily Express collectively termed ‘the aberrations of the baby revolution’: including the implantation of multiple embryos in one cycle, the use of IVF by unmarried couples, the prospect of commercial surrogacy and experimentation on embryos in vitro (Hadley, 1984, p. 10). The Daily Mail, which had welcomed the birth of Louise Brown, withdrew the money it had pledged for the clinic Edwards and Steptoe were building at Bourn Hall in Cambridgeshire (Warnock, 2004, p. 74). And political figures, such as the Conservative peer Lord Campbell, warned that IVF would ‘imperil the dignity of the human race, threaten the welfare of children, and destroy the sanctity of family life’ (Campbell, 1982, p. 1001).

Michael Mulkay argues this criticism can be explained by the ‘socio-political’ changes that followed the 1979 election of Margaret Thatcher's Conservative Party (Mulkay, 1997). Members of the new government regularly stressed the need to reaffirm social principles undermined by the ‘permissive’ Bills on homosexuality, abortion and capital punishment that Harold Wilson's Labour government passed during the 1960s. Their emphasis on ‘traditional’ morals, Mulkay claims, gave influence to pro-life organizations such as LIFE and the Society for the Protection of Unborn Children, which had remained marginal during the 1970s. Accorded a greater political and media profile, these groups helped problematize issues that had hitherto been uncontroversial. For instance, during a television interview in February 1982, Robert Edwards admitted to experimenting on embryos he had no intention of implanting into patients and claimed that ‘these spare embryos can be very useful … they can teach us things about early human life’ (Williams and Stevens, 1982, p. 314). Whereas Edwards and Gordon Dunstan had both endorsed embryo experimentation during the 1970s – with little resistance – LIFE now led calls for Edwards to be immediately prosecuted for ‘manipulation of life on a horrifying scale’ (Reynolds and Badford, 1982, p. 7). Indicating how opinion had swung against IVF, a grave editorial in The Times also stated that human embryos ‘ought not to be regarded as dispensable matter’, and called for an urgent debate on ‘which of the many strange possibilities now opening up are acceptable, which need controls, and which are unacceptable’ (Anon, 1982b).

Yet the emphasis on traditional morals only partly explains why IVF became contentious in the 1980s. Criticism of the procedure also reflected, and helped instantiate, a growing distrust of biomedical self-regulation: with several figures in Parliament and the media calling for new regimes of external oversight. Crucially, many of these demands emanated from lawyers who began to challenge the current state of medical ethics in Britain – calling for new modes of scrutiny and regulatory arrangements they termed ‘bioethics’. Foremost among these new critics was Ian Kennedy, founder of the Centre for Medical Law and Ethics at King's College, London. In his 1980 BBC Reith Lectures, Unmasking Medicine, Kennedy launched a stinging attack on biomedical paternalism. He claimed that the increasing discussion of medical ethics in the 1970s had amounted to little more than a fait accompli, as it continued to exclude patients and those in other professions (Kennedy, 1981a, p. 119). As a solution, he argued medical practices should come under increasing scrutiny from outsiders, as ‘it could be said that it is only somebody who is free from any claims which medical professional loyalty may make on his objectivity who can successfully examine the institution of medicine’ (ibid, p. viii). This scrutiny, Kennedy recommended, should involve ‘ethics and law, together with sprinklings of philosophy, sociology, and politics’ (ibid.). While he admitted that ‘here in the United Kingdom we do not have a label’ for the new approach, he noted it was known as ‘bioethics’ in the United States.

Although Kennedy found the term bioethics ‘unappealing’, he quickly became a firm advocate of increased oversight of science and medicine (ibid.). During 1981, in the Journal of Medical Ethics and a BBC documentary, he urged the government to form outside ‘inspectorates’ and argued: ‘If a profession by definition exists to serve the public interest, then clearly it must ultimately be the public who judge what that interest is and whether it is being served’ (Kennedy, 1981b, p. 206). By 1983, in a review of Kennedy's television series The Doctor's Dilemma, the Lancet remarked that his calls for external oversight had quickly become ‘ubiquitous’ (Anon, 1983a, p. 1026). And in 1984, Raanan Gillon, editor of the Journal of Medical Ethics, placed Kennedy at the forefront of a campaign that marked the end of ‘medicine's halcyon days when doctors – for the most part only very senior doctors – discussed the dilemmas of medical ethics in privacy and leisure’ (1984, p. 16).

IVF was one of several issues Kennedy highlighted in Unmasking Medicine, alongside care of the disabled and terminally ill, genetic counselling, provision of health care, and disclosure of information to patients. But it soon became the main focus of calls for external oversight. Responding to the growing criticism of embryo experiments in 1982, Kennedy wrote in The Times that discussion of IVF needed to be ‘dragged into the open’ and ‘cannot be simply left to one professional group’ (1982, p. 17). Another lawyer, Geoffrey Robertson, endorsed this in the Observer, claiming IVF raised issues too profound to be handled ‘behind a closed door marked “Medical Ethics – laymen and lawyers keep out” ’ (1982, p. 8). Robertson argued that the only way to resolve these ‘dilemmas of bioethics’ was through ‘interdisciplinary co-operation and insistence on public participation’ (ibid.). This struck a chord with the Labour MP Leo Abse, who claimed that inquiries recently established by the Medical Research Council (MRC), the British Medical Association, and the Royal College of Obstetricians and Gynaecologists, were insufficient as IVF raised issues ‘too enormous to be left to doctors’. The only adequate solution, he argued in Parliament, was to convene an ‘inter-departmental and inter-disciplinary inquiry’ (Anon, 1982a, p. 2).

These demands tallied with a Conservative enthusiasm for increased surveillance and regulation of professional actions. Drawing on neo-liberal theorists such as Friedrich von Hayek and William Niskanen, who argued that welfare states had allowed professions to become overly bureaucratic and self-serving, members of Thatcher's government sought, in the words of Nigel Lawson, to ‘break from the predominantly social democratic assumptions that have hitherto underlain policy in post-war Britain’ by exposing many professions and public services to ‘the disciplines of the market’ (1980, pp. 6–7). They promoted increased scrutiny, the adoption of performance indicators and greater competition as ways of making professions accountable to the demands of parents, patients, citizens and investors – whom they represented as increasingly autonomous consumers. Over the course of the 1980s, this fostered the development of what Michael Power labels the ‘audit society’. Across professions such as teaching, social services, local government, the financial sector and medicine, reliance on insider knowledge gave way to mechanisms of external oversight that promised to enforce transparency and public accountability (Jacob, 1991; Bartlett, 2000; Salter, 2000; Munro, 2004; Lowe, 2007). And in line with the Conservative commitment to what Lawson called ‘rolling back the frontiers of the State’, this oversight was not performed directly by the government, but was entrusted to an array of outside consultants and agencies who acted as proxies for consumer interests (Lawson, 1980, p. 5; Rose, 1993; Power, 1997).

The government's committee of inquiry into human fertilization and embryology, announced in July 1982, can be seen as an enactment both of this governmental ethos and of the broader critiques of biomedical paternalism. As correspondence from the Department of Education and Science to the MRC indicates, ministers elected to form an inquiry as they did not consider any of the existing professional inquiries to be ‘sufficiently broadly based or sufficiently representative’ of public opinion (Norton, 1982). By contrast, ministers wanted their committee to include representatives from several professions, with members having diverse religious backgrounds.

The immediate priority, however, was to appoint what civil servants termed an ‘outside chairman’ (ibid.). During April and May, staff at the Departments of Education and Science (DES) and Health and Social Security (DHSS) cast around the names of possible chairs. None, importantly, had any connection with IVF or ‘Education and Science’. The suggestions were Sir Norman Lindop, an osteopath, James Sutherland, a solicitor, Lady Gillian Wagner, chair of Dr Barnardo's, and Mary Warnock, then senior research fellow at St Hugh's College, Oxford (Newton, 1982). Warnock was the preferred candidate from the outset – identified by civil servants as ‘very well qualified for the job’ (ibid.). But these qualifications stemmed only partly from her status as an ‘outsider’. While Warnock's selection would have fulfilled the demand for outside involvement with science and medicine, she also fulfilled many of the traditional requirements for chair of a government inquiry. She had led a previous committee, was known for her organizational skills, and was typical of the well-connected, Oxbridge-educated figures that civil servants looked to when selecting committee members – known as the ‘Great and the Good’ (Warnock, 1988b; Hennessy, 1990).

Warnock saw the inquiry as a chance to engage practically with the philosophical question of when human life could be said to begin, and accepted the government's invitation in early June (Warnock, 2004). She then worked with civil servants and Norman Fowler, the secretary of state for Health and Social Security, to select other members. This resulted in the appointment of seven doctors and scientists, with different religious backgrounds, and eight individuals from other professions – including two solicitors and a court recorder, two social workers, two managers of a health-care trust, a theologian, and the vice-president of the UK Immigrants Advisory Service. Warnock's appointment as chair, and the fact that doctors and scientists were outnumbered by members of other professions, met the government's desire for a ‘broadly based’ committee. Announcing the inquiry's formation on 23 July 1982, Fowler took pains to distance it from the ‘examinations already underway by medical bodies’. Its membership, he stressed, was ‘broad-based’ and would hear from many ‘lay and religious viewpoints’ (Fowler, 1982, p. 329).

Committee members also viewed their diverse backgrounds as a means of ensuring public accountability. During their second meeting, in December 1982, they criticized representatives from the MRC for only having one non-scientist on their inquiry, and noted that the individual in question, the Bishop of Durham, had previously been a scientist anyway. This, they claimed, would simply increase distrust as ‘it might be seen by the public as a situation when scientists who had an interest in this research quite naturally gave it their approval’ (Warnock Committee, 1982). In these initial meetings, the Committee also endorsed Fowler's demand that opinion on IVF should be sought from a ‘wide range of interested bodies’ (DHSS correspondence, 1982). Committee secretaries were instructed to invite written or spoken evidence from over 300 organizations and individuals: from scientists such as Robert Edwards; from anti-abortion, family planning and feminist groups; from lawyers such as Ian Kennedy; from marriage counsellors and adoption agencies; from many university departments, including law, theology and medicine; and from representatives of all major religious denominations (ibid.).

While Warnock's committee was hearing evidence, media reports continued to demand greater public scrutiny of, and influence over, IVF. An editorial in The Observer stated that test-tube babies were ‘now a public subject’ and claimed that if scientists were allowed to proceed unchecked, ‘then we can hardly complain at the lack of faith shown by the public [in science]’ (Anon, 1984a, p. 18). And an editorial in the Mail on Sunday, entitled ‘why we must ALL have a say on test tube babies’, argued that ‘the time has come for the public to be involved in the decisions which are being made in the laboratory’ (Anon, 1984b, p. 16). At the same time, despite the fact that her committee was not due to issue its recommendations for two years, Warnock became one of the strongest supporters of external oversight herself. In a 1983 edition of the Philosophical Quarterly, she argued that her committee's main priority was to ensure that the discussion, and even the regulation, of IVF took place not in the private but in the public sphere (1983, p. 249). The only way to render IVF publicly accountable, she claimed, was to establish a ‘system of surveillance’ that allowed it to be ‘constantly watched, not merely by the medical profession and the research biologists, but the lay as well, the stupid, the prejudiced, the sentimental, the religious and the moralistic’ (ibid.).

This emphasis on oversight clearly influenced the committee's thinking. In a newspaper interview before its report was published, Warnock admitted that the key proposal involved the formation of a ‘monitoring body to keep all innovations and technical developments under constant review’ (Lowry, 1984). Issued in July 1984, the report framed this authority as the ‘most urgent’ of its 64 recommendations, and stressed that it must not be ‘exclusively, or even primarily, a medical or scientific body’. To ensure it would not be ‘unduly influenced’ by professional interests, the report recommended a wide-ranging membership and specified, crucially, ‘that the chairman must be a layperson’ (Warnock, 1985a, p. 79). Warnock justified this proposal in the New Scientist by framing the public as increasingly empowered stakeholders in science and medicine. When research raised a moral dilemma, she claimed,

there is no reason why scientists should be responsible by themselves for solving it … A society in which what might or might not be done was decided solely by those committed to the advance of knowledge would not be acceptable to those of us who are not scientists. There are other values to be considered. Increasingly, and rightly, people who are not experts expect, as of right, to help determine what is or is not a tolerable society to live in. (1984, p. 36)

But Warnock, crucially, did not present the growing emphasis on oversight as an impediment to science or medicine. She argued, rather, that devolving regulatory power to outsiders gave researchers more time to pursue their work, and that ‘many scientists want the onus of deciding what is and what is not morally acceptable to be partially lifted from their shoulders’ (1985b, p. 514). In the British Medical Journal, she claimed oversight acted as a safeguard for research by showing it ‘can be regulated without being banned, that knowledge can be pursued without being put to morally intolerable uses’. In a climate of public and political distrust, she continued, it had become essential ‘if we are to continue, as we must, to push back the frontiers of science’ (1988a, p. 298).

Warnock was certainly right to claim that many scientists supported external scrutiny of IVF. A 1983 editorial in Nature had argued it would help make the procedure ‘socially palatable’, and recommended the formation of a statutory body to ‘exert a supervisory influence, consider difficult questions as they arise, and keep the general public informed’ (Anon, 1983b, p. 735). Following the publication of the Committee's report, the British Medical Journal also claimed that scientists ‘will welcome the suggestion that a new licensing authority should be set up to regulate infertility services, monitor new developments, and vet individual research projects’ (Anon, 1984c, p. 207). Although the Lancet was more cautious about increased oversight, warning it must not impede research, it begrudgingly accepted that declining faith in professional expertise and an increasing ‘consumer movement’ made it inevitable (Anon, 1986, p. 1016).

Following the publication of her committee's report, Warnock became synonymous with the approach increasingly labelled as ‘bioethics’ (see Figure 1). After being appointed to the House of Lords as a cross-bench peer in 1985, she contributed two articles to the new journal Bioethics, publicly discussed the ethics of issues like IVF, gene therapy and animal experimentation, and was appointed as the British representative on a new European Commission on Bioethics (Warnock, 1987a, 1987b; Jasanoff, 2005). The association was strengthened further by Warnock's calls for a permanent ethics committee that, as she wrote in the British Medical Journal, would constantly monitor ‘a wide range of ethical problems, arising in both medical practice and research’ (1988a, p. 1626). Again, she took pains to present this as a benefit to biomedicine, claiming that research would continue to be publicly criticized unless decision making became ‘highly visible’ (ibid.). Having been endorsed by several MPs and biomedical journals, Warnock's proposal was discussed at length by civil servants at the Cabinet Office, as well as delegates at conferences sponsored by the CIBA and Nuffield foundations (CIBA Foundation, 1989; Lock, 1990). By 1990, however, it was clear that any national committee would not have parliamentary links, owing to political scepticism toward another quasi-official body, or ‘quango’, and wariness about possible ministerial interference (O’Neill, 2009). Instead, after encouragement from the molecular biologist Sir David Weatherall, the Nuffield Foundation established an independent Council on Bioethics in December 1990: with the former DHSS permanent secretary Sir Patrick Nairne as chairman, and a diverse membership comprising scientists, doctors, lawyers, philosophers, economists, industrialists and journalists (Anon, 1991b).

Figure 1: ‘To be or not to be?’ Illustration to a 1994 Sunday Telegraph profile of Mary Warnock, which detailed her ‘wide and extraordinary influence’ over ethical debate in Britain (Anon, 1994). Reproduced courtesy of Edward Collet.

The formation of a permanent bioethics council highlighted that scientists and doctors were no longer the sole arbiters of ethical expertise in their respective fields. Following Warnock's appointment as chair of the government inquiry in 1982, bioethics quickly became the norm in regulatory commissions and public debate. The marked growth and influence of what the Guardian called the ‘ethics industry’ appeared bound up with the widespread demand for oversight of science and medicine in the 1980s. As we have seen, politicians and public figures like Ian Kennedy endorsed it as a way of ensuring public accountability, while many doctors and scientists promoted it to colleagues as a means of protecting research. And belief in its value was consolidated by the way that new external arbiters, like Mary Warnock, positioned themselves as vital intermediaries between these two views: promising to represent the public interest and legitimate research.

‘No Moral Experts’: Moral Pluralism and Critiques of the Warnock Committee

But we must not assume that this external oversight was universally accepted. To do so would subscribe to the ‘origin myth’ model, by parading what Cooter terms a ‘positive narrative of moral progress’ (2000, p. 453). In fact, Warnock's engagement with science and medicine prompted considerable criticism; and it took six years for her committee's recommendations to be passed into law, in the 1990 Human Fertilisation and Embryology Act. This stemmed from intractable differences of opinion regarding embryo experimentation, which became apparent to Warnock as soon as her committee began to hear evidence. Supporters and opponents of experiments on embryos both mobilized equally valid, but incommensurable, claims to support their case. Groups like the Royal Society, the MRC and the British Medical Association endorsed research on utilitarian grounds: claiming that experiments on small numbers of embryos were essential to overcoming the developmental abnormalities that afflicted thousands of children. They argued experiments could be justified at an early developmental stage, as the embryo equated to little more than a bundle of cells and was not recognizably human (Pallott, 1983). However, as a committee memo noted, this ran counter to the ‘substantial body of opinion’ that totally opposed any research on embryos (Warnock Committee, 1983). Groups like LIFE, the Guild of Catholic Doctors, the Women's Institute and the general practitioners' association asserted that human life began at, and deserved legal protection from, conception. Rejecting the argument that early embryos were not recognizably human, these groups claimed that developmental biology provided evidence for an outright ban – as it showed that ‘the genetic coding is laid down on fertilization and [is] discernable as human on the first mitosis’ (Spencer, 1983, p. 1823).

Perhaps unsurprisingly, given its ‘broad based’ composition, similar divisions emerged within the committee. Three members argued that embryos should never be used in research, while others believed experiments were essential (although they could not agree whether embryos should be created specifically for research, or where in development any cut-off should be drawn). Warnock quickly realized there would be no way of arriving at a proposal that satisfied all the committee, or the broader groups for which it was proxy. Instead, as she outlined in a 1986 lecture, the solution lay ‘in the messier, less tidy business of compromise … of attempting to come up with a satisfactory solution which, while retaining as many of the calculated benefits to society as possible, will nevertheless offend and horrify people as little as possible’ (1987a, p. 8). Here, for all the emphasis on its non-scientific members, the committee fell back on the expertise of the developmental biologist Anne McLaren – who Warnock has since identified as ‘indispensable’ (2004, p. 80). McLaren advised the committee to adopt 14 days as a cut-off for embryo experiments. Around this point, cells in one pole of the rudimentary embryo condense to form the ‘primitive streak’, which then differentiates into the antecedents of the spinal cord and nervous system. McLaren claimed that before the primitive streak formed, there was no possibility of an embryo experiencing pain. And she argued that the primitive streak could be framed as the beginning of individual development instead of fertilization, as it marked the last point at which the embryo could cleave to form twins (Warnock, 2004, pp. 81–83).

McLaren's arguments satisfied those committee members, including Warnock, who sought to permit embryo experiments up to a specific stage in development. The committee report subsequently recommended 14 days as the cut-off for research, presenting it as the point at which a ‘loosely packed configuration of cells’ developed the ‘first features of the embryo proper’ (Warnock, 1985a, pp. 58–59). But the trio who opposed experiments at any stage of development refused to support this proposal, and set out their objections in an appendix (ibid, pp. 90–92). This undermined Warnock's efforts to present 14 days as an acceptable cut-off, and provided ammunition for those groups, like LIFE, who argued embryo experimentation was ‘not in keeping with the respect due to human life’ and campaigned for legislation banning all research (Hiley, 1984, p. 90). For a while it appeared that Parliament would do just this. Following a series of debates in which many MPs and Lords criticized the 14-day limit, the Ulster Unionist MP Enoch Powell introduced a private member's Bill late in 1984, which, if passed, would have prohibited all research on in vitro embryos (Mulkay, 1997). This Unborn Children (Protection) Bill was only defeated after a pro-research lobby, including Mary Warnock, distinguished the early ‘pre-embryo’ from an ‘unborn child’, and warned politicians that a total ban would stifle essential research (Warnock, 1985c).

At the same time, others attacked the 14-day proposal as too restrictive. Editorials in Science and Nature urged the government to ‘devise more liberal legislation’, while Robert Edwards claimed ‘many fundamental studies on differentiation, human anomalies and other major advances may require more days in vitro’ (Edwards, 1984; Sattaur, 1984; Anon, 1984e). And the Oxford philosopher Michael Lockwood described the 14-day limit as ‘unfortunate’, suggesting that research should be permitted on embryos up to six weeks after fertilization (1985, p. 187). Others, meanwhile, used the controversy to make a specific point about the types of expertise needed for ethics committees. Another Oxford philosopher, Richard Hare, claimed that disagreement surrounding embryo experiments was likely ‘to go on inconclusively’ without thorough utilitarian consideration of the benefits and harms that followed from particular moral positions (1987, p. 71). Hare believed philosophers had a crucial role in helping other committee members, politicians and the public to derive ‘clear answers’ – persuading them to give reasons to support their stance, or to agree that there was a better course of action. Rather damningly, however, he claimed Warnock

was content with the second best alternative, which was perhaps all she could manage. This was to find some conclusions which the members of the committee, or as large a majority as possible, would sign, and not bother about finding defensible reasons for them. Since the members were fairly typical in their moral attitudes or prejudices, it might be hoped that conclusions to which they would agree would also be acceptable to the public. (ibid, p. 82)

Hare concluded that by not ensuring unanimity in her committee, Warnock simply ensured that opposing groups ‘were having a field day and the public is still floundering’ (ibid, p. 88).

Responding to this criticism, and the controversy surrounding embryo research, Warnock claimed that moral disagreement was ‘unavoidable’ as pluralist societies lacked an ‘agreed set of principles which everyone, or the majority, or any representative person, believes to be absolutely binding’ (1985a, p. xi). To Warnock, it followed from this that no field – including philosophy – should dominate ethical oversight and decision making. Rebutting Hare, and echoing Thatcher's claim that ‘choice is the essence of ethics’, she argued that

In matters of life and death, of birth and the family, no-one is prepared to defer to judgements made on the basis of a superior ability in philosophy. For these are areas that are central to morality, and everyone has a right to judge for himself. Such issues indeed lie at the heart of society; everyone not only wants to make their own choices but are bound to do so. And this is why there cannot be moral experts. Everyone's choice is his own. (ibid, p. 96; cf Dowden, 1978).

With this in mind, Warnock countered that if her committee had ‘been undivided then it would inevitably also have been unrepresentative, perhaps seen as biased’ (1985b, p. 519). As the divisions on embryo experimentation were irreconcilable, she continued, the solution lay not in striving for a ‘correct’ answer but in proposing ‘something practical, regretted no doubt by some as too lax, by others as too strict, but something to which, whatever their reservations, everyone would be prepared to consent’ (ibid, p. 521).

In line with the prevailing emphasis on public accountability, and the associated distrust of professional expertise, Warnock claimed that simply replacing the expertise of doctors and scientists with that of philosophers was ‘not only out of place, but totally unacceptable’ (1985a, p. 96). As she detailed throughout the 1980s and 1990s, she believed ethics committees should provide a form of ‘corporate decision-making’ in which various professions and interest groups formulated acceptable solutions to contentious issues (Warnock, 1992, p. 31). This line of thought has proved influential – and helps answer the vexed question of what bioethics is. To the philosopher Onora O’Neill, an ex-member of the Nuffield Council on Bioethics and former student of Mary Warnock:

Bioethics is not a discipline, nor even a new discipline; I doubt it will ever become a discipline. It has become a meeting ground for a number of disciplines, discourses and organizations concerned with ethical, legal and social questions raised by advances in medicine, science and technology. The protagonists who debate and dispute on this ground include patients and environmentalists, scientists and journalists, politicians and campaigners, and representatives of an array of civic and business interests, professions and academic disciplines. (2002, p. 1)

As Roger Brownsword notes, moreover, we should not consider moral pluralism a problem for bioethics, but should see it as the source of its socio-political utility: providing ‘outsiders’ with the chance to broker compromises and facilitate ‘the process of practical decision-making’ (2008, p. 29). But this state of affairs is by no means self-evident, and is rather the product of considerable negotiation by bioethicists themselves. Here, again, lies the value of historical studies of bioethics. Examining how a figure like Warnock turned criticism of her committee's deliberative process into an argument for continued, interdisciplinary oversight of biomedicine goes a long way to helping us appreciate precisely why bioethics remains such a visible, and valued, enterprise.

Conclusions

I have argued that studying debates on reproductive medicine can help us appreciate the demand for bioethics in Britain since the 1980s. Bioethics gained currency in this period because it fulfilled, and linked, the ambitions of politicians, scientists and doctors, and those figures in law and philosophy who went on to become ‘ethical experts’. The prospect of externally policing biomedicine tallied with the Conservative desire to audit hitherto self-regulating professions and increase their accountability to empowered ‘consumers’. Correspondence from the DHSS and DES indicates that this extended to the appointment of an ‘outside’ chair and a ‘broad-based’ membership for the government's inquiry into human fertilization and embryology.

At the same time, the political demand for external scrutiny of professions fulfilled Warnock's belief that philosophers should apply themselves to practical matters. And once selected as chair of the government inquiry, she became a strong advocate of what became known as ‘bioethics’: criticizing biomedical paternalism and extolling the benefits of external oversight. Like Ian Kennedy's, her rhetoric was not simply a reaction to the growing calls for oversight in this period, but was fundamentally constitutive of them. This offers strong evidence that the principal figures in this history generated or perpetuated the demand for bioethics as much as they responded to it. This hardly comes as a surprise. As Downie and MacNaughton have outlined, bioethics had, and continues to have, obvious allure to philosophers looking to tackle substantive issues and play a role in shaping their culture; and its appeal is heightened in an increasingly competitive funding climate, where research councils prioritize practical relevance (2007, p. 32). Those who engage with bioethics no doubt draw encouragement from the considerable benefits that Warnock and Kennedy reaped from their early groundwork: both went on to serve on further regulatory commissions, were honoured by the government, and remain respected authorities on the ethics of science and medicine.

But as Downie and MacNaughton continue, bioethics does not just appeal to those philosophers or lawyers seeking practical relevance. It also offers assurance to doctors and scientists who, since the 1980s, have been challenged to justify new technologies and practices to politicians and patient groups. As we have seen, Warnock promised researchers that external oversight was an essential means of insulating their work from political and public criticism. Many doctors and scientists clearly agreed: endorsing her calls for oversight and a national bioethics council. Indeed, by analysing the skilful way that Warnock positioned herself as an essential broker between political and biomedical concerns, we can go a long way to explaining the growth of bioethics in Britain. From the outset, this new form of oversight was as concerned with legitimating research as it was with ensuring public accountability. And this reaffirms Rosenberg's claim that, contrary to its ‘origin myths’, bioethics is not, and has never been, a ‘free-floating, oppositional and socially critical reform movement’ (1999, p. 38). In Britain, as elsewhere, it was ultimately about bridging divides, not exacerbating them: deriving workable solutions without fundamentally questioning the forms of power or control invested in modern biomedicine.

Some predict this lack of critical edge will spell bioethics’ downfall, but I would argue precisely the opposite (Cooter, 2004). If we see bioethics as a ‘mediating element’ between politics, the public and science, then contemporary society provides it with fertile ground (Rosenberg, 1999, p. 38). The biomedical sector is increasingly seen as a prized component of the so-called ‘knowledge economy’, with politicians and private investors placing great stock in the progress of research (Rose, 2007). And successive New Labour governments have extended the trend toward public scrutiny and accountability in many policy areas (Keane, 2009). This is especially the case for science and medicine, where, amidst what Franklin calls a ‘crisis of bad faith’, political and media discussion of GM crops, stem cells and retained organs continues to prioritize external oversight and public participation in the regulatory process (Franklin, 2003). By simultaneously providing assurance to politicians, social groups and scientists – while continuing to generate and sustain public debate – bioethics will no doubt remain influential for years to come.