
When the boss turns pusher: a proposal for employee protections in the age of cosmetic neurology
J M Appel

Correspondence: Dr J M Appel, 140 Claremont Ave #3D, New York, NY 10027, USA; jacobmappel{at}gmail.com

Abstract

Neurocognitive enhancement, or cosmetic neurology, offers the prospect of improving the learning, memory and attention skills of healthy individuals well beyond the normal human range. Much has been written about the ethics of such enhancement, but policy-makers in the USA, the UK and Europe have been reluctant to legislate in this rapidly developing field. However, the possibility of discrimination by employers and insurers against individuals who choose not to engage in such enhancement is a serious threat worthy of legislative intervention. While lawmakers should not prevent individuals from freely pursuing neurocognitive enhancement, they should act to ensure that such enhancement is not coerced. This paper offers one model for such legislation, based upon a proposed US law, the Genetic Information Nondiscrimination Act of 2008, to address precisely this particular pitfall of the impending neuroscience revolution.


If “the twenty-first century will be the century of neuroscience,” as a panel of leading scientific intellectuals predicted in 2004, then the bioethics of the coming era is likely to be dominated by neuroethics.1 Much of the controversy in this field now focuses on neurocognitive enhancement—often referred to as “cosmetic neurology,” a term coined by University of Pennsylvania neurologist Anjan Chatterjee in a seminal 2004 article on the subject.2 Chatterjee argued that Western medicine stands on the brink of an inevitable neuro-pharmacological revolution in which healthy people will be “treated” with brain-enhancing drugs in order to improve performance in such fields as attention, learning and memory.3 In a series of subsequent articles, Chatterjee has documented dozens of different ways in which therapeutic agents may also be harnessed to augment the mental abilities of individuals without illnesses—ranging from commercial airplane pilots whose performance in simulated emergencies improved when trained on a reversible acetylcholinesterase inhibitor, donepezil, to the use of selective serotonin reuptake inhibitors in order to foster “affiliative behaviour” in healthy, non-depressed adults.4 5 These possibilities are so far-reaching and momentous that one leading commentator, drawing comparisons with the significant part neurologists once played in promoting the concept of brain death, has called for his colleagues to take charge of the discussion surrounding cosmetic neurology because, in doing so, they will “assume a role in shaping the debate about what it means to be fully human”.6

The champions of neurocognitive enhancement often make for strange bedfellows. Transhumanist philosophers, such as Nick Bostrom and Max More, have long sought in medical technology an opportunity to overcome “traditional human limitations” for the benefit of society.7 8 Libertarian bioethicists argue for access to these advances on autonomy grounds, demanding that individuals be permitted to use any available technology for self-improvement.9 Anita Silvers, the prominent San Francisco State University-based bioethicist and disability-rights advocate, writes that such modifications are a basic human right and the very “essence of freedom”.10 Military physicians, citing the dangers of sleep-restricted environments, claim an entitlement—and even a moral duty—to “help healthy individuals optimise their cognitive potential”.11 In contrast, conservative critics of neurocognitive enhancement, such as Francis Fukuyama, see unchecked tinkering with the healthy brain as an unnatural threat to the “human essence”; Fukuyama fears that such cosmetic interventions could ultimately lead to an entrenched inegalitarianism that would undermine democratic institutions.12 In addition, University of Rochester philosopher Richard Dees objects to Chatterjee’s claim that a neurocognitive revolution is an inevitable result of military pressures and market forces. Dees views this outlook as the surrender of ethics to power—an abnegation of moral duty—and he calls for the use of “democratic checks” in order to “collectively control our own destinies” in the face of the neurocosmetic onslaught.13 Yet while ethicists and physicians have debated the merits of these new technological possibilities, legislators and policy-makers in the USA, UK and continental Europe have largely steered clear of the issue. For all practical purposes—except within the limited realm of state-run schools—neurocognitive enhancement remains no more regulated today than any other basic medical or pharmacological intervention.

Opponents of unlimited neurocognitive enhancement tend to advance four different sets of concerns, three of which offer decidedly poor grounds for government regulation or restrictive legislation.i First, objectors argue that neurocognitive enhancement is anti-egalitarian because these technologies are expected to be costly and the wealthy will have significantly more access to them. This is indeed likely to be the case—unless society chooses to subsidise enhancement, as it does public education and (outside the USA) healthcare. However, similar inequalities are generated by private grammar schools, tutors for the SAT (a college and university admission test) and Ivy League universities, yet few suggest outlawing these threats to distributive justice. As the University of Virginia’s Jonathan Moreno saliently points out, “We don’t stop people from giving their kids tennis lessons.”14 The procrustean sacrifice of autonomy needed to achieve equality of outcome, where fundamental rights or needs are not concerned, is one that Western societies have long since rejected on both ethical and practical grounds.

A second set of concerns about unlimited neurocognitive enhancement is advanced by objectors who assert that neurocognitive enhancements are both unnatural and a threat to good character. In other words, suffering is an essential part of the human experience, and if it does not kill us, it makes us stronger. Whether or not this is true—and some radical Christians might argue that it is—the Kafkaesque notion that the state should impose suffering for the sake of suffering is obviously not generalisable to other circumstances. If we are to ban cognitive enhancement for the sole purpose of building stronger psyches, we might just as well ban analgesics or even comfortable shoes, both of which are equally “unnatural” to man’s presocietal condition. Nothing about medicine—from aspirin to x rays—can be claimed as “natural” under those criteria. Moreover, the very legitimacy of the state in the post-Enlightenment era derives from its ability to reduce individual human misfortune—that, presumably, is why we accept the rule of law; and to have the government intentionally do otherwise, barring precise and compelling circumstances, seems a far greater threat to the democratic institutions that Fukuyama cherishes than does cosmetic neurology.

A third group of critics resists neurocognitive enhancement on safety grounds. Since these interventions are entirely elective, some opponents believe that the risk of an unknown future harm outweighs any short-term intellectual benefit. (This set of objections can be—and often is—levelled against cosmetic surgery as well.) The problem with this line of reasoning is that many forms of pleasure entail considerable hazards—from eating a cheeseburger to riding a motorcycle. While neurocognitive enhancement should certainly be regulated on safety grounds to the same extent as other medical goods and services, ensuring that products are well tested and consumers kept reasonably informed, why extra precaution should be taken with regard to this one set of interventions, barring any documented evidence of specifically heightened risks, remains unclear. A balanced response to safety concerns ought not be outright prohibition, but rather the commitment of resources—either public or private—to ensure that research and development organisations take necessary care in evaluating their products.15 The reality is that most people engaging in neurocognitive enhancement, considering the elevated stakes and economic costs involved, are probably far more likely to investigate the various benefits and dangers of their choices than the average motorcyclist or patron of McDonald’s. As a last resort, the state would be far wiser either to mandate long-term insurance for those opting to enhance, or to set up a taxpayer-funded compensation pool, such as that used to protect victims of vaccination reactions, rather than minimising individual choice in the face of a theoretical and, as yet, unsubstantiated risk.

The one area in which objectors can make a good case for legislative intervention is with regard to coercion. If the goal of good social policy is to maximise autonomy while minimising suffering—and I believe that it is—then the threat of individuals being pressured into unwanted enhancement must be examined seriously. This is particularly true regarding inherently unbalanced relationships, such as those between employer and employee, where the inequality of bargaining power often limits meaningful employee choice. For example, what if hospitals started to demand that medical residents dose up on methylphenidate, a drug used to improve concentration, as a prerequisite for employment? Or if fast-food chains insisted that all counter employees consume serotonin reuptake inhibitors to keep them “affiliative” when confronted by dissatisfied customers? To some, these technologies offer an opportunity to maximise employee productivity and enhance society’s overall quality of life. To others, they bring us one step closer to the dystopia of Brave new world. A minority of philosophers, motivated solely by utilitarian concerns and without any interest in individual will, theoretically might endorse overtly compulsory enhancement as a means of collective betterment. However, once autonomy is accepted as a desirable value, even the strongest supporters of cosmetic neurology must agree with Arthur Caplan that it is essential to ensure that “enhancement is always done by choice, not dictated by others.”16

Concerns about forced enhancement have already arisen in the context of the public schools, where parents and administrators have been battling for more than a decade over whether educators can mandate drugs such as Ritalin (methylphenidate hydrochloride) and Prozac (fluoxetine) for students with behavioural difficulties. In a rare exception to the general trend against restrictive legislation, Minnesota and Connecticut in 2001 became the first of a growing number of states that prohibit schools from forcing treatment for attention-deficit/hyperactivity disorder on students.17 The US Congress amended the Individuals with Disabilities Education Act (IDEA) in 2004 to impose these restrictions nationwide.18 However, as other neurocognitive enhancement technologies remain in their infancy, lawmakers have proved far less willing to extend similar restrictions to private employers.

The dilemma with regard to employment is complex, in that it pits the rights of some potential employees to choose to enhance against the rights of others to be free from the coercive pressure to enhance. As the experience of doping in professional sports has demonstrated, those who choose synthetic augmentation place those who do not at a competitive disadvantage. The Economist compared the ethical concerns posed by the neuroscience revolution to those generated by the genetic revolution. Like geneticists, “neuroscientists may soon be able to screen people’s brains to assess their mental health; to distribute that information … to employers or insurers; and to ‘fix’ faulty personality traits with drugs or implants on demand.”19 Denying some individuals the opportunity to enhance in this way clearly undermines their right to do with their bodies as they choose. However, to permit some to engage in these enhancements may lead to an inevitable race to the bottom—or top—in which employers and market forces pressure more and more American workers to place their brains at the disposal of their bosses. We could look forward to a job market where prospective employees either enhance their brains or confront discrimination against unaugmented cognitive ability.

Fortunately, as Richard Dees points out, “the campaigns for work-safety rules and for the 40-hour work week demonstrated that we need not bow to the massive power of the market.”13 Instead, at least in the short term, some form of compromise legislation remains possible. One promising model for such a law in this arena is the Genetic Information Nondiscrimination Act (GINA) of 2008.20 This measure, recently passed by the US House of Representatives, would prohibit the use of genetic information to discriminate in either insurance or employment; it was first proposed by President Bill Clinton in 2000, at the same time as Prime Minister Tony Blair pitched such a law for the UK.21 (Clinton had previously issued an executive order banning such discrimination in federal employment.) GINA makes a crucial distinction between genetic information and present characteristics. Under its rules, a prospective employer could neither test for APOE4, a gene thought to be implicated in Alzheimer disease, nor use knowledge of a potential employee’s APOE4 status—such as information gained from medical records—in making a hiring decision. However, testing prospective employees’ memory skills would still be perfectly permissible under the statute. An analogous distinction might make good sense with regard to neurocognitive enhancement: forced enhancement should be prohibited, while employers should be permitted to continue outcome-based assessments. While it might be good policy to prevent airlines from requiring pilots to train using donepezil, these same companies could still test the performance of pilots in simulated emergencies when rendering employment-related decisions. Some pilots might still choose to dope up on donepezil, giving themselves an advantage, but those choosing not to do so would not be excluded from employment via a bright-line test. This compromise distinction might prove ineffective in circumstances where the enhancement confers an extraordinary advantage—but for the time being, most of the cognitive benefits enumerated by Chatterjee appear to be moderate. Although an enhanced individual might garner an advantage, an unenhanced yet highly talented individual can often still perform at a comparable level.

The one exception to this general prohibition might be in circumstances where legislators specifically authorise certain forms of mandatory neurocognitive enhancement for the public good. In such cases, the democratic machinery of society—rather than self-interested employers—would conduct the moral balancing test between private freedom and public safety. Furthermore, this option should be permitted only in cases where the need is compelling and no other reasonable alternatives exist to achieve the same policy end. For example, requiring medical residents to consume methylphenidate to stay alert is not a compelling societal need because reduced hours of service could easily achieve the desired results. However, soldiers on active military duty, confronting circumstances where sleep deprivation is unavoidable, might more reasonably be expected to use such drugs. Shifting the control of such enhancement authority into public hands may not prevent all abuse, but at least this approach minimises the likelihood of coercive practices motivated by individual self-interest or private economic gain.

If policy-makers intend to prevent private-sector employees from facing unreasonable pressures to indulge in cosmetic neurology, the time for legislative action is now. In moving quickly towards passage of the Genetic Information Nondiscrimination Act, the USA is trying to place itself ahead of the curve when it comes to the potential ethical pitfalls of the genetic revolution. However, failure to adopt similar legislation regarding neurocognitive enhancement discrimination—before such discrimination becomes widespread—reflects considerable shortsightedness. Of course, lawmakers may not wish to act at all. They may prefer a world in which neuroenhancement is the occupational norm and medical residents are tested for mandatory amphetamines before being permitted onto hospital wards. But if policy-makers do intend to intervene, they should do so before neurocosmetic technology gains an economic foothold and the neurologically enhanced workforce really does become an inevitability. Eventually, without preventive legislative action, employers will begin to demand that their employees accept neurological enhancement as a condition for employment or promotion—and the working stiffs of the world will not have the financial power to resist. That’s a no-brainer.

REFERENCES

Footnotes

  • Competing interests: None declared.

  • i Chatterjee enumerated the four concerns as relating to safety, character and individuality, distributive justice and coercion. Ronald Bailey had earlier laid out eight objections,14 which neatly overlap with those advanced by Chatterjee. See also Chatterjee (2004).3