
Unconscious emotional reasoning and the therapeutic misconception
A Charuvastra,1 S R Marder2

1 NYU Child Study Center, New York City, New York, USA
2 UCLA Psychiatry & Biobehavioral Science, West Los Angeles Healthcare Center, Los Angeles, California, USA

Correspondence to: Dr A Charuvastra, NYU Child Study Center, 577 First Ave, New York, New York 10016, USA; ACharuvastra{at}gmail.com

Abstract

The “therapeutic misconception” describes a process whereby research volunteers misinterpret the intentions of researchers and the nature of clinical research. This misinterpretation leads research volunteers to falsely attribute a therapeutic potential to clinical research, and undermines informed decision making, thereby compromising the ethical integrity of a clinical experiment. We review recent evidence from the neurobiology of social cognition to provide a novel framework for thinking about the therapeutic misconception. We argue that the neurobiology of social cognition should be considered in any ethical analysis of how people make decisions about participating in clinical trials. The neurobiology of social cognition also suggests how the complicated dynamics of the doctor-patient relationship may unavoidably interfere with the process of obtaining informed consent. Following this argument we suggest new ways to prevent, or at least mitigate, the therapeutic misconception.

  • therapeutic misconception
  • psychiatry
  • social cognition
  • clinical trial
  • ethics


The term “therapeutic misconception” describes a phenomenon that interferes with the process of informed consent in clinical trials where the human subjects of an experiment are also patients. First described by Appelbaum et al in 1982, it provides a lens with which to further scrutinise the ethical limits of human experimentation.1 2 The concept of the therapeutic misconception comes into play when there are no obvious external influences on a patient’s decision to participate in research (eg, family influences compromising a patient’s autonomy), and there are no obvious impediments to a patient reading and understanding an informed consent document (eg, dementia or mental retardation). Under the therapeutic misconception, even with ideal conditions patients may not appreciate the distinction between clinical care and clinical research. Patients may misconstrue a therapeutic intention when asked to participate in a randomised clinical trial, and therefore discount the possibility of receiving either no therapy or sub-optimal therapy, despite an apparent comprehension of all the terms in the informed consent document. Several studies have shown that such misconceptions are common among research participants, and are difficult to dispel.3 4 Researchers themselves do not always accept or appreciate the distinction between care-giving and research, leaving both patients and researchers vulnerable to the therapeutic misconception.5

A recent editorial argued that the “therapeutic misconception will not go away” because decision making about participation in clinical experiments reflects not just reason but also emotion, and that certain types of “cognitive/emotional states” can lead to misconceptions about clinical trials.6 The misconception is specifically therapeutic because it involves subjects ascribing positive, helpful intentions to researchers, when such intentions may be absent. The misconception may be related to emotions subjects have towards researchers, since clinical trial subjects are also patients, and researchers are also often care-givers, and the doctor-patient relationship contains considerable emotion. Finally, the misconception may be shared between subject and researcher because researchers may themselves believe they are acting in the best interests of the patient. This paper argues that ethical inquiry into the therapeutic misconception should understand how beliefs and action are linked to emotional states, and that this understanding should take into account findings from psychology and neuroscience.7 8

EMOTION AND COGNITION IN BIOETHICS

Within the field of bioethics, emotion is often viewed as a source of irrationality, or an impediment to “rational decision making”. This view of emotion may reflect the influence of principlism, which as an ethical framework does not address the relationship between emotion and ethical reasoning.9 10 The therapeutic misconception compromises patient autonomy by depriving individuals of the information necessary to make an independent decision, and thus is the kind of problem with which principlism is most concerned, since among the four principles of autonomy, beneficence, non-maleficence, and justice, autonomy is the “first among equals.”11 One approach to mitigating the therapeutic misconception is to improve explanations of the nature and purpose of clinical experiments.12 But unless an understanding of method actually reduces emotional influence in decision making, we believe that further explanations will not reduce the therapeutic misconception.

Other ethical approaches acknowledge emotions as important and valid determinants of ethical behaviour. Daniel Callahan writes, “Our reasons ordinarily embody and express some emotions just as our emotions embody some cognitive judgments. Our untutored emotions are useful signals, repugnance or gut attraction a possibly meaningful flag to catch our moral attention, requiring closer examination.”9 The therapeutic misconception provides a starting place to better understand how emotions influence morally important decisions.

The evaluation of information is affected by the source of the information—evaluation always requires a social judgment. We evaluate facts reported in the National Enquirer differently from how we evaluate facts in the New York Times, even though the information may be formally the same. This is known as the “frame effect”. Beyond what patients read about the risks and benefits of a research protocol, other beliefs about the people requesting their participation will influence their decisions. Common sense suggests that a subject’s judgments about whether a researcher is familiar, trustworthy, or just nice to be around are all powerful influences on the process of deciding to participate in research. Social psychology has long shown that unconscious social evaluations are robust and reliable.13 14

Yet contemporary ethical analyses of informed consent tend to ignore both “common sense” and social psychology. We believe several factors contribute to this. First, due to the speculative excesses of psychoanalytic psychiatry, many scientists no longer consider subconscious processes to be legitimate objects of inquiry. A related issue is the tremendous bias in science towards giving authority to things that are visible and measurable.15 Only with the recent invention of functional MRI has it been possible to measure brain activity associated with unconscious and conscious emotional processes. As it is widely believed that such brain activity is the neural substrate of emotion, this technology provides a way to measure the physical bases of emotional processes and further define the relationship between neurophysiology and subjective experience. In contrast, for many years we have been measuring the rational functions of the brain with standardised and validated neuropsychological tests (eg, the Wechsler Intelligence Test). Compared to the nuances of assessing and quantifying a person’s inner emotional state, describing and quantifying a person’s cognitive impairment has been more straightforward. It is not impossible to be scientific about inner subjective states, only difficult. Thus, most ethical analyses of mental impairment address cognitive problems (eg, how differing degrees of mental retardation affect autonomy) rather than emotional problems.

Finally, many bioethical arguments assume that persons are aware of and can identify their beliefs before they make decisions. Consider, for example, the “standard” case of the Jehovah’s Witness in need of a transfusion.16 17 How does an ethicist approach this scenario when the patient exhibits ambivalence about the transfusion, perhaps changing his mind several times without being able to say why? Perhaps the patient “secretly” feels no trust in his faith but is “inhibited” in asking for a transfusion by communal norms. In the standard scenario, the patient knows his feeling, and reveals it in private to his physician. But imagine a second scenario, where the feeling of mistrust or doubt in faith is “subconscious” because the communal norms are so strongly felt that the patient cannot acknowledge his doubts. The patient’s behaviour is the same in both scenarios. The difference is that the standard case is “solvable” with respect to autonomy under most bioethical frameworks—the patient is able to reveal his true desire within the confines of doctor-patient confidentiality, and autonomy is preserved. The second scenario is unsolvable with respect to autonomy until the patient’s subconscious belief goes away: either eliminated, perhaps through the reassurance of a trusted minister, or brought into consciousness, perhaps through simple private questioning by a considerate physician.18 Thus, one possible reason bioethics has ignored evidence of unconscious emotional reasoning is that bioethical frameworks do not work very well if they admit that unconscious beliefs exist or have moral significance.

Recent experiments in social neuroscience force a re-examination of unconscious emotional reasoning. Functional MRI data provide physical evidence that the human brain processes social information for emotionally salient details, but that this emotional salience is not necessarily available to conscious awareness. Psychological methods implicitly measure automatic judgments and non-conscious bias, but functional MRI methods allow direct measures of non-conscious emotional processes. Direct measures reveal connections between brain systems involved in reward, fear, memory and emotional awareness that are active even when subjects are not consciously making decisions or appraisals. The experiments described below suggest models of decision making that may not be apparent from psychological studies, and suggest new methods for studying how persons make decisions about emotionally or ethically provocative situations.

TRUST, FAMILIARITY, AND ATTRACTION

Evaluations of a person, in terms of whether they are threatening or trustworthy, can bypass conscious awareness, or even contradict conscious valuations. With regard to race, for example, people clearly have learned emotional responses to members of other races. The fearful aspect of these responses is mediated by the amygdala, an area of the subcortical brain involved in detecting emotionally relevant information present in contextual and social emotional cues.19 The appraisal of trust is also emotional, as it reflects a feeling of “non-threat”. Neuroimaging studies of social cognition and racial bias show that amygdala activation is frequently subconscious.20–22 Racial bias may be an unconscious but powerful influence on how a patient hears what a researcher has to say about the benefits and risks of a clinical trial. Another study of how people perceive trustworthiness in faces found that a person’s amygdala activates in proportion to how “untrustworthy” that person finds a given face. This activation occurs even when subjects are not asked to assess the trustworthiness of the face. Furthermore, this unconscious activation is not a function of familiarity with the faces.23

Locating the neural correlates of such social judgments is significant for several reasons. First, it demonstrates a physical basis for the concept of unconscious emotional reasoning and shows that we can measure such processes physically. Second, such studies reveal new levels of complexity in social appraisal. Certain brain areas are selectively active during explicit evaluations that call upon consciously held beliefs, while other areas (eg, the amygdala) are activated even when a person is not consciously making evaluations. Third, emotional biases are constantly present. Amygdala activation occurs even when people are not consciously assessing the trustworthiness of a face. In a different study of “good-bad” evaluations of morally provocative topics (eg, “murder”, “abortion”, “welfare”), the amygdala was always active in proportion to subsequent conscious reports of the emotional intensity associated with each topic, even when subjects were not consciously making evaluations.24 Social neuroscience shows that brain areas associated with emotion are activated simultaneously with areas associated with reasoning. These findings suggest that we reframe the therapeutic misconception as an empirical question about cognitive/emotional processing: to what degree can individuals modulate their automatic emotional responses when making complex decisions in ethically provocative situations?

Experiments studying the experience of familiarity provide an example of how neuroimaging might improve our understanding of the complexity of social decision making. We know without looking at brain states that familiarity can lead to positive evaluations of people.25 Neuroimaging allows us to dissect the complex process of recognising a familiar face and to ask questions about how this may predispose to a therapeutic misconception. For example, evaluating unfamiliar faces activates the amygdala, consistent with the idea that the amygdala signals potential threats and evokes a state of alertness to threat.26–29 One implication relevant to the therapeutic misconception is that experiencing someone as familiar also involves some anticipation of that person’s intentions based on memories of the familiar person. The physical basis of emotional influence due to familiarity may be measurable by looking at amygdala activation in relation to areas of the brain involved in the recognition and representation of the intentions of others (eg, posterior superior temporal sulcus or anterior paracingulate cortex).30 We may be able to quantify how a feeling of familiarity is related to a sense of security and safety, and how these two feeling states are related to ethically relevant decision making. The unconscious evaluation of familiarity and trustworthiness may be as important as the conscious rational evaluation of the facts exchanged during the process of informed consent, and may explain why patients have difficulty concluding that they may receive suboptimal treatment from treatment providers (eg, doctors) in clinical trials.

Another example of how social neuroscience may aid ethical analysis comes from a study of facial beauty. Common sense tells us that study subjects will prefer to work with an attractive researcher over an unattractive researcher. But can we even characterise this factor of “attractiveness”, and if so, how much importance should it have in our ethical analysis? It would be simpler to dismiss attractiveness (or the attraction a subject feels) as an unquantifiable subjective state. However, functional neuroimaging provides physical evidence that, at least among heterosexual males, the perception of beauty triggers the neural circuitry of reward in measurable ways.31 Seeing faces that are sexually attractive selectively triggers the nucleus accumbens, a part of the subcortical brain that is involved in dopaminergic pathways and pathways related to reward and pleasure. These findings are consistent with the idea that there is a common subcortical and cortical network for processing rewarding stimuli. Attractiveness appears to trigger this network in a way similar to other rewarding stimuli, such as drugs, food, or money. Interactions with attractive research staff may therefore be reinforcing at an unconscious level. This suggests how the rewards of participating in research may be unrelated to any treatment received or other benefit listed in the informed consent document, but may be just as real to the participant. Similarly, the cost of not participating may not be apparent to researchers, but may be felt as a loss of reward by the subject, and therefore influence cognitive rationalisations that allow the subject to consent to participation.

SOCIAL NEUROSCIENCE – WHY IT MATTERS TO ETHICS

People first learn to evaluate treatment providers in a care-giving context, often at an early age with their paediatricians, and the vast majority of treatment encounters over a person’s life will occur in a care-giving context. Subsequent evaluations of any offers of treatment will rely upon already established social knowledge about getting treatment. Recent experiments on the ventromedial prefrontal cortex, which links the amygdala to “higher” brain regions involved in reasoning and remembering, bear this out. This region is activated during anticipatory states and by outcomes associated with reward or punishment.32–34 Ventromedial prefrontal cortex damage may have no consequence for intellectual function but results in patients making personally disadvantageous decisions. The region links feeling states associated with past decisions to contemplations about future decisions of a similar nature. Individuals with ventromedial prefrontal cortex damage seem not to evoke appropriate feeling states as they contemplate various behavioural choices in new situations, and fail to generate typical anxiety reactions in tasks where they ponder potentially risky choices.35 36 The evocation of past feeling states may be as important for decision making as the capacity for abstract reasoning. Indeed, a person with a cognitively intact but emotionally impaired brain may be severely compromised in situations requiring informed consent. Thus, when a patient is reading an informed consent document, he is also seeing a person in a white coat and appreciating that he is in a hospital or medical centre, and his evaluation of the intention of the researcher and the benefits and risks of his relationship with this researcher will reflect to some degree all his prior social encounters with similar people in similar white coats in similar settings. A therapeutic misconception is even more likely to take place if the person proposing the research is someone the patient already knows, and especially if it is someone from whom the patient already receives care.

Social neuroscience experiments generally involve normal subjects, but patients with medical and mental illness are the persons making informed consent decisions about clinical trials. Two considerations exist regarding potential differences between patients and normal subjects. First, these experimental findings should generalise to patients because social evaluations of threat, attractiveness, and safety are fundamental parts of human behaviour likely to be preserved across a wide range of medical and psychiatric impairments. Second, if persons with potentially impaired decision making capacity require special protection then it should be a priority to determine if patients with mental illnesses are different from “normal controls” in how they evaluate threat or reward, and whether these differences predispose them even further to therapeutic misconceptions.

These results provide a different framework for analysing the therapeutic misconception. The study volunteer may be emotionally experiencing a researcher as rewarding, trustworthy, safe, and familiar, and these emotional experiences may be more salient (yet unconscious) than the factual content conveyed in the researcher’s speech. Subsequent conscious beliefs may simply serve to rationalise these unconscious feelings. Despite having extensive discussions about the possibility of receiving no treatment in a randomised placebo controlled trial, a patient may continue to express the sincere belief that he will receive treatment, because his trust in the doctor leads him to conclude that the doctor would never put him at risk. A wide range of social information provides the frame within which patients then evaluate the verbal information a researcher provides. How this frame changes the perception of the picture inside the frame is an important ethical question. A recent study has explored the neurobiology of this frame effect in economic situations. Studying how an economic context affects a subject’s risk-taking or risk-aversion, the authors showed that how a risk-decision is framed (with a positive or negative emotional valence) has a pervasive effect on decision making. Activation of the amygdala was driven by the combination of a subject’s risk-taking and the emotional context of the frame, suggesting that the amygdala is involved in integrating the positive or negative emotional context of the frame during decisions to take certain risks.7

We are not, however, suggesting a kind of emotional determinism. Rather, we are arguing for a more complicated view of decision making that acknowledges the interplay between reason and emotion. For example, in the frame effect study, amygdala activity did not predict susceptibility to a frame effect. Rather, a robust inverse correlation existed between activation in the orbito-medial prefrontal cortex and susceptibility to the frame effect. The orbito-medial prefrontal cortex is reciprocally connected to the amygdala and is involved in integrating and evaluating the incentive value of predicted future outcomes to guide behavioural choices. Greater activity in this part of the brain may be associated with more awareness of one’s own emotional feelings. Possibly, more “rational” subjects have more activity in this part of the brain, inhibiting their more automatic responses while evaluating the meaning of different choices at more abstract levels. Similarly, in studies of racial bias, when Black faces were presented to White subjects long enough to allow some conscious awareness and thought, amygdala activation was significantly reduced. This reduction was associated with greater activity in frontal cortex areas associated with emotion regulation. There appear to be distinct neural pathways for automatic and more consciously controlled processing of social information, and there is some conscious regulation of automatic responses to social stimuli.20 A remaining empirical question is the degree to which the automatic amygdala response, which may result in a feeling of mistrust or fear, can be regulated by conscious thought, and how the activation or suppression of the amygdala correlates with actual decisions made in ethically relevant situations.

CONCLUSIONS AND FUTURE DIRECTIONS

Social neuroscience research suggests that our patients have difficulty making isolated and neutral evaluations of fact when it comes to research participation. Empirical ethics research also shows that patients place enormous amounts of unqualified trust in clinical researchers, often because their relationship to the researchers occurs in a therapeutic context.37 Patients may be influenced by a whole host of internal, often unconscious, pressures. Some of these pressures are, undoubtedly, a patient’s values and morals, and are indisputably important for decision making. But other pressures may be the kinds of biases that might interfere with a patient’s well-being.

This analysis suggests that an effective way to address the therapeutic misconception would be to explore and correct patients’ feelings about the researchers, rather than their beliefs about research. Social evaluations involve assessments of intentions, trustworthiness, familiarity, and potential reward. Consent documents could stress all of these aspects, rather than solely focusing on the methods of research. For example, a consent document could clearly state the researcher’s intention to gather data, rather than to provide optimal care, and reasons why the subject should trust the researcher despite this intention (eg, a list of the researcher’s prior experiments and the number of prior adverse events), or reasons why the subject might not trust the researcher (eg, familiarity can breed misplaced trust). Such a radical reworking of consent documents might be warranted if current interventions that stress education about the nature of the research methods fail to improve therapeutic misconceptions. Additionally, patients may be misled by researchers’ therapeutic misconceptions. Interventions to specifically assess and address the intentions and self-perceptions of researchers may help both patients and researchers in the process of informed consent. A number of structural changes may alter the social-emotional frame and thus decrease emotional biases on the part of patients. Perhaps only unfamiliar or unattractive staff should recruit and obtain consent, or perhaps staff could simply wear a distinct kind of uniform to clearly distinguish research staff from clinical staff. Contrary to the view of many institutional review boards, offering more money to subjects may actually improve the ethical status of research, by making it more obvious that research is qualitatively different from treatment. Paying subjects for lunch and transportation may seem like an extension of clinical care—the researcher and patient are exchanging “favours”—but paying subjects several hundred dollars makes it clear that what is happening is different from usual care.

Functional neuroimaging provides a method for generating testable hypotheses about how persons make ethically important decisions, and how these decisions are influenced by various environmental, emotional, and cognitive factors. Future research should extend investigation of the frame effect to ethically interesting scenarios involving medical rather than economic risks.7 Non-imaging research could also simply assess differences in subjects’ willingness to give consent to proposed experiments while varying aspects of the frame, including the qualities of the researcher obtaining consent (eg, height, “attractiveness”, white coat or no coat, age, race, or gender). Similarly, we could study how deliberately provoked emotional states (eg, provoking a negative or positive mood state with words or images) bias subjects’ decision making during informed consent. Applying neuroimaging to such experiments might help us better understand both the nature of moral reasoning in practice, and how, if at all, these processes differ from other kinds of mental processes already studied. These questions are important because even when patients have all the factual information about a research project, their decisions will be strongly influenced by emotional factors inherent in the doctor/researcher-patient/subject relationship—trust, familiarity, and attraction.

Acknowledgments

We would like to thank K Wells for valuable feedback on early drafts of this paper.

REFERENCES

Footnotes

  • Competing interests: None.
