
‘What's Psychology got to do with it?’ Applying psychological theory to understanding failures in modern healthcare settings
  1. Michelle Rydon-Grange
  1. Correspondence to Dr Michelle Rydon-Grange, North Wales Clinical Psychology Programme, School of Psychology, Bangor University, Bangor, Gwynedd LL57 2DG, UK; michellerydongrange@gmail.com

Abstract

The National Health Service (NHS) has, for over four decades, been beset with numerous ‘scandals’ relating to poor patient care across several diverse clinical contexts. Ensuing inquiries proceed as though each scandal is unique, with recommendations highlighting the need for more staff training, a change of culture within the NHS based upon a ‘duty of candour’, and proposed criminal sanctions for employees deemed to have breached good patient care. However, mistakes recur and failings in patient safety continue. While inquiries describe what went awry in each case, questions of how and why such failures came to be remain unanswered. Psychology has a role in answering these questions. Applying psychological theory can guide an understanding of the causes that lead to catastrophic failures in healthcare settings. Indeed, what is often neglected in inquiries is the role of human behaviour in contributing to these failures. Drawing upon behavioural, social and cognitive theories, a psychological analysis of the key factors typically present in clinical contexts where serious failures of care occur is presented. Applying theory and models from the field of psychology can guide further understanding of the precipitants to poor care.

  • Psychology
  • Applied and Professional Ethics
  • Behavioural Research

The value of psychological theory in safety-critical industries such as aviation and nuclear power has long been acknowledged and is based upon the recognition that certain employee behaviours are required to maintain safety. The significant contribution psychological theory can make in illuminating the pathways leading to failings in healthcare settings has only recently been recognised.1–3 Several areas of psychology have been hypothesised as relevant in helping understand the often incomprehensible breaches in patient care that occur in our hospitals. For example, Newdick and Danbury2 draw upon cognitive psychology to help better understand how clinicians and managers make complex decisions within the healthcare setting, which may unintentionally encourage an organisational culture that puts patient safety at risk. Whitby and Gracias3 reference behavioural theory, explaining that in modern healthcare settings rewards are reserved almost exclusively for non-caring activities, such as compliance with performance indicators and teaching. They argue that, by contrast, there are no rewards available for providing decent patient care. Under these circumstances, they state, it is understandable that good patient care may wane. Kapur1 refers to a range of psychological studies, including those from social psychology, drawing parallels between the well-established finding that humans often stand by and fail to help victims in critical situations and the inaction of clinicians in substandard healthcare settings.

The reasons for the occurrence of these ‘scandals’ have been picked over countless times across numerous inquiries, often prompting more regulation, policies and protocols, but achieving very little in terms of a real understanding of the causes of poor care. Perhaps this is because what is often neglected in inquiries into failings in patient care is the role of human behaviour in contributing to these failures. For example, Saunders4 refers to the situation at Mid-Staffordshire as ‘a failure of practical ethics’. Thus, it seems pertinent to ask: what predisposes employees—managers, clinicians and nurses alike—to behave in a manner so contrary to the caring, safe and reliable patient experience they wish to provide? What are the psychological processes that might facilitate a withdrawal from compassionate care? What are the psychological and behavioural consequences for employees when they work in an environment characterised by a lack of effective leadership, low morale and a dogged focus upon targets? What are the ‘psychological risk indicators’ in a healthcare setting, and how might they contribute towards and maintain poor clinical standards? Psychological theory can help shed light on how human behaviour operates under such circumstances.

At first glance, several high-profile National Health Service (NHS) inquiries have very little in common in terms of the clinical contexts in which failures of care occurred. Despite their apparent diversity, the conditions under which lapses in patient care happen share striking similarities. Typically, when failures occur they do so under working conditions characterised by: (1) a lack of leadership, (2) an organisational culture based upon fear of condemnation and (3) low staff morale. Several inquiries into failings in patient safety in the NHS (eg, the Duerden report,5 the Francis report,6 the Bristol report7 and the Stoke Mandeville Hospital report8) share these three features. These high-profile inquiries are the focus of the remainder of this paper.

It is presupposed that the majority of healthcare professionals care greatly about their patients and are dedicated and motivated employees of the NHS. How, then, can we more readily understand employee behaviour that appears so contrary to the ethos of a caring, compassionate and healing NHS? As Saunders identifies, the crucial question is: “why do good people do bad things?”4 Several psychological factors are hypothesised as relevant in explaining the pathways leading to failures in patient care. This paper is presented in three sections, each focusing on one of the three features noted above, and relevant psychological theories are applied with the aim of creating a better understanding of how and why healthcare scandals occur.

Lack of leadership

The Bystander effect and diffusion of responsibility

When everyone is responsible, no one is responsible.9

Each high-profile inquiry cites a ‘lack of effective…leadership’,7 confusion over job responsibilities, and complex and unclear accountability arrangements (locally and nationally) as significant causal factors leading to breaches in patient care. Essentially, employees were unclear about who had responsibility for what. In most cases, it was already known that there were significant problems with the control of infection,5,8 inadequate standards of patient care,6 alarmingly high mortality rates7 and concerns about governance and staffing,6,7 so why did nobody—Board members, clinicians or nurses—intervene to clarify responsibilities or pursue their concerns with vigour? Empirical studies have shed light on the psychological mechanisms underlying human inaction in a critical situation.10 The bystander effect refers to the phenomenon that an individual's likelihood of intervening decreases when passive bystanders are present.11,12 Research shows the bystander effect to be a robust phenomenon, observed across many domains (eg, serious emergencies13). Significant moderator variables include the number of bystanders (more bystanders lead to less help), the ambiguity of the situation (high ambiguity leads to less help) and the similarity of the bystander to the victim (greater similarity leads to more help14).

While there are no studies examining the bystander effect in a clinical context, staff behaviour in such a setting may be readily explained by this psychological phenomenon. First, the large number of staff employed in a main hospital such as Stoke Mandeville, Ysbyty Glan Clwyd (YGC) in North Wales, or Mid-Staffordshire Hospital made it less likely that any individual member would take action. Second, the Kennedy8 and Duerden5 reports state there were conflicting messages regarding healthcare-associated infection (HCAI) statistics in YGC and Mid-Staffordshire, creating a high-ambiguity situation, which according to bystander theory inhibits intervention. Finally, the extent to which hospital employees identify with their patients, much as bystanders do (or do not) with victims, may contribute to the passivity of hospital staff. For example, the commodification of patients in modern healthcare settings,2 and the overemphasis on targets and bottom-line achievements, is likely to breed depersonalisation. Patients become anonymous, thereby cultivating an environment in which staff are able to psychologically distance themselves from their patients. Under such circumstances, the process of becoming a bystander is likely facilitated.

A psychological process related to the bystander effect—diffusion of responsibility11—seems particularly relevant to the context outlined in a number of inquiries. Diffusion of responsibility refers to the tendency to (subjectively) divide the personal responsibility to intervene in a critical situation by the number of bystanders: the more bystanders, the less personal responsibility any individual bystander will feel. Similarly, the individual bystander will feel responsible for only a portion of the cost to the victim of non-intervention. Consequently, diffusion of responsibility has been used to explain empirical findings showing that members of a group feel less responsible for negative consequences11,13 than when acting alone (ie, high-responsibility conditions). Diffusion of responsibility may explain why nobody intervened in infection control matters at YGC or Stoke Mandeville: each person assumed that someone else would take responsibility. Additionally, given the number of staff in a busy main hospital, individual staff members were likely to feel little personal responsibility for the weakness of the infection prevention and control (IPC) service, even when they knew about it.
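
The ‘1/N’ intuition behind diffusion of responsibility can be made concrete with a toy calculation. The sketch below is purely illustrative: the 80% base rate for a lone bystander and the assumption that felt responsibility is shared equally are invented for the example and are not taken from the cited studies. It simply shows how, under those assumptions, a large group can be collectively less likely to produce even one intervener than a single person acting alone.

    # Toy illustration of diffusion of responsibility (assumed parameters,
    # not drawn from the bystander literature).
    def felt_responsibility(n_bystanders: int) -> float:
        """Assume perceived personal responsibility is shared equally."""
        return 1.0 / n_bystanders

    def p_any_intervention(n_bystanders: int, base_rate: float = 0.8) -> float:
        """Probability that at least one bystander acts, if each acts
        independently with probability proportional to felt responsibility."""
        p_individual = base_rate * felt_responsibility(n_bystanders)
        return 1.0 - (1.0 - p_individual) ** n_bystanders

    for n in (1, 5, 50, 500):
        print(n, round(felt_responsibility(n), 2), round(p_any_intervention(n), 2))

Under these assumptions the chance that anyone intervenes falls from 0.8 for a lone bystander to roughly 0.55 for a large group, despite the group containing far more potential helpers.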

A culture of fear, blaming and shaming

Mid-Staffordshire, Stoke Mandeville and YGC were all hospitals under pressure—pressure to achieve Foundation Trust status and, in the case of Stoke Mandeville and YGC, pressure to reduce rates of HCAIs. In the scramble to achieve these targets, unwelcome news was either downplayed or blatantly ignored. For example, according to the Duerden review, one of the core factors contributing to the outbreak at YGC was the “false assurance and complacency”5 from hospital staff regarding HCAI rates. The review continued, “there were not thought to be serious issues with infection rates”.5 However, HCAI rates in YGC were high—higher than in most other Welsh hospitals.15 In spite of this, assurances regarding HCAI rates were made. Similarly, Bristol Royal Infirmary was ‘awash with data’,7 but statistics indicating an unacceptably high mortality rate spanning several years at the paediatric cardiac surgery unit were ignored. Accepting the position that healthcare professionals are dedicated to protecting patient safety, the key question is: what motivated employees to provide such false assurances and compromise patient safety? What would be the consequences of calling attention to what the statistics were really saying?

A recent review into improving the safety of NHS patients in England stated, “fear is toxic to…safety…make sure pride and joy in work, not fear, infuse the NHS” (ref. 9, emphasis added). This recognises that an organisational culture based upon fear of reproach—typical in the NHS7,8—inhibits safe practice. As Berwick stated, “‘better not to know’ becomes the order of the day”.9 Given the increasing recognition that fear of reproof pervades the NHS, it seems pertinent to ask how working in such an organisation might affect its employees’ behaviour. Under such circumstances, are employees motivated by honesty and transparency? Or is the stronger motivation to suppress and downplay unwelcome information that might negatively affect their personal welfare? Behavioural theory16 can help illuminate the motivations underlying staff behaviour, even when such behaviour clearly undermines patient safety.

Aversive control and negative reinforcement

A person who has been punished is not…simply less inclined to behave in a given way; at best, he learns how to avoid punishment.16

Behavioural theory posits that all human behaviour is learnt through interaction with the environment. Decades of psychological experimentation have demonstrated that when an organism is placed in an environment characterised by the presence of an aversive stimulus (eg, an electric shock), avoidance behaviour is learnt (eg, an animal will learn to avoid an electric shock by performing a specific behaviour, such as running from one side of a compartment to the other). According to behavioural theory, avoidance learning occurs, and is maintained, through a negative reinforcement contingency.i Because avoiding the aversive stimulus becomes the primary motivator for behaviour, any response that leads to the successful removal or termination of the aversive stimulus is more likely to occur in the future.
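
To make the contingency explicit, the sketch below simulates a negative reinforcement loop in which an avoidance response, whenever it successfully removes the aversive stimulus, becomes more probable on subsequent trials. The starting probability and learning rate are assumptions chosen for illustration, not values from the behavioural literature.

    # Minimal sketch of a negative reinforcement contingency (illustrative only;
    # parameter values are assumptions, not estimates from experiments).
    import random

    p_avoid = 0.05        # initial probability of emitting the avoidance response
    learning_rate = 0.1   # how strongly successful avoidance strengthens the response

    for trial in range(100):
        responded = random.random() < p_avoid
        if responded:
            # The aversive stimulus is removed: the response is negatively
            # reinforced and becomes more likely in the future.
            p_avoid += learning_rate * (1.0 - p_avoid)
        # If no response is emitted, the aversive stimulus simply occurs;
        # this simplified model does not represent any other consequence.

    print(f"Probability of the avoidance response after 100 trials: {p_avoid:.2f}")

The same loop reads naturally in organisational terms: if suppressing bad news reliably removes the threat of blame, suppression is the response that is strengthened.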

Akin to animals in Skinner boxes,ii staff behaviour in these hospitals was governed by aversive contingencies: fear of blame, vilification and subsequent public shaming all operated to suppress unwelcome news, including the true incidence of HCAIs or mortality rates. As such, professionally questionable behaviours, such as misreporting key statistics, became negatively reinforced; they served to avoid the negative consequences associated with the truth. Thus, the lack of clinical transparency so detrimental to patient safety in these hospitals can be explained through the process of negative reinforcement. Moreover, the suppression of bad news was likely negatively reinforced at every level of the organisation, from ward to Board: Board members, too, were reinforced for avoiding censure, which may have conflicted with any desire for transparent and timely reporting of statistical data.

Exhortations for an ‘end [to] NHS blame games’,17 to be replaced by ‘a culture of openness and learning from mistakes…’,17 would indeed change the environmental contingencies currently operating in the NHS, such that staff would no longer be negatively reinforced for misreporting figures or taking comfort from inaccurate ones. However, the emphasis on meeting targets in modern healthcare systems, and the negative consequences of failing to do so, may preclude such a change in culture. Until this changes, the recent introduction of ‘whistleblowing’ procedures in the NHS is unlikely to effect any change. Thus, according to behavioural theory, where aversive control is used to manage a system, self-preservation (ie, avoidance of blame and shame) inevitably becomes, through negative reinforcement, an organisational norm.

Low morale

According to the Duerden review, at the time of the Clostridium difficile outbreak, the IPC service at YGC was “…short staffed, (and) low in morale…”5—a situation not unique to this hospital. The Francis report, for example, also noted “low morale among staff”.6 Several other high-profile inquiries into hospital failings highlight the ubiquity of low staff morale in modern healthcare settings.6–8 Thus, it seems pertinent to ask: through what process does low morale develop, and how is it maintained? The theory of ‘learned helplessness’ can help address these questions.

Learned helplessness

Learned helplessness18 describes the finding that animals exposed to uncontrollable electric shocks show (A) reduced behavioural initiation (ie, passivity), (B) deficits in subsequent escape/avoidance learning and (C) emotional stress. In one of their earliest experiments, Seligman and Maier18 examined the effects of escapable shocks (ie, pressing a button terminated the shock) compared with inescapable shocks (ie, pressing a button did not terminate the shock) on subsequent escape and avoidance learning in dogs. Dogs in the escapable shock condition learnt to avoid shock by pressing the button; that is, reinforcement (ie, termination of the shock) was contingentiii upon their response (ie, pressing the button). Conversely, dogs in the inescapable shock condition eventually ceased responding. Unlike dogs in the ‘escapable’ condition, these dogs learnt that their environment was uncontrollable—button pressing was independent of the shocks they received. In further experiments, dogs that had been exposed to inescapable shock subsequently failed to learn shock-termination responses when placed in escapable shock conditions.19 Seligman and Maier18 argued that the mechanism responsible for these impairments in learning was the non-contingent relationship between responding and reinforcement: if an individual learns that their responses have no effect upon subsequent reinforcement in their environment, they may display behaviours associated with learned helplessness, such as passivity, detachment and apathy.
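
The core claim—that it is the learnt non-contingency between responding and outcomes, rather than the shocks themselves, that produces passivity—can be illustrated with a small simulation. The sketch below is a deliberately crude illustration: the ‘perceived control’ update rule, the effort penalty and all parameter values are assumptions made for this example, not the analysis used by Seligman and Maier.

    # Illustrative sketch: responding persists when outcomes are contingent on it,
    # and collapses when they are not. All modelling choices are assumptions.
    import random

    def run_condition(escapable: bool, trials: int = 500) -> float:
        ends_if_respond, respond_trials = 1, 2   # smoothed outcome counts
        ends_if_passive, passive_trials = 1, 2
        p_respond = 0.5                          # baseline tendency to respond
        for _ in range(trials):
            responded = random.random() < p_respond
            # Escapable: the shock ends only if the animal responds.
            # Inescapable (yoked): shock termination is unrelated to behaviour.
            shock_ends = responded if escapable else (random.random() < 0.5)
            if responded:
                respond_trials += 1
                ends_if_respond += shock_ends
            else:
                passive_trials += 1
                ends_if_passive += shock_ends
            # Perceived control: how much does responding change the outcome?
            control = ends_if_respond / respond_trials - ends_if_passive / passive_trials
            # Responding is effortful, so with no perceived control it falls below baseline.
            p_respond = min(0.95, max(0.05, 0.5 + control - 0.3))
        return p_respond

    print("escapable:  ", round(run_condition(True), 2))
    print("inescapable:", round(run_condition(False), 2))

With contingent outcomes the simulated response rate settles near its ceiling; with non-contingent outcomes it settles well below baseline, a computational analogue of ‘fatalistic acquiescence’.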

While learned helplessness theory has typically been applied to explain the development and maintenance of psychopathology in clinical populations, it has more recently been applied to organisations.20 For example, a recent study21 revealed significant relationships between self-reported learned helplessness and employee perceptions of procedural and distributive justice, emotional exhaustion and cynicism in a sample of 217 banking employees. Earlier models of helplessness in the workplace22 suggest it leads to negative attitudes and behaviours, such as job dissatisfaction and turnover intentions.

Much of what is highlighted in these inquiries suggests that learned helplessness had become a significant factor in these hospitals. The detachment, apathy and passivity of staff were clear: general disengagement from key IPC audits (hand hygiene, environmental cleaning, intravenous line care, etc), disenfranchised clinicians reluctant to take on senior managerial roles, high levels of sickness absence and a withdrawal from basic patient care—patients were not fed, were left in soiled bedding and call bells went unanswered. As Francis6 noted, many factors, such as a lack of resources and an uncompromising focus upon targets, make it difficult to provide appropriate care. These factors may operate to ‘numb’ staff into believing that nothing can ever be done, leaving them to lapse into ‘fatalistic acquiescence’.6 This ‘numbing’ process may also take hold when employees feel they are not listened to—a feature common to all of these inquiries. Indeed, the non-contingency between staff behaviour (the repeated raising of concerns, whether about mortality rates, unsafe practices or high HCAI figures) and environmental outcomes (eg, Boards failing to respond to those concerns in a timely manner) potentially precipitated and maintained learned helplessness: staff had tried, but failed, to be heard. It is not hyperbole to state that staff had tried for years to raise concerns regarding poor care in their workplaces, only to be met with silence or ‘threats and humiliation’8 from senior managers, which ultimately may have fostered the perception of uncontrollability and, with it, learned helplessness.

Cognitive dissonance

Perhaps some of the most pressing questions arising from these inquiries concern why suboptimal decisions were made: why were clinicians and nurses so passive and pliant in a system focused upon targets and self-preservation to the detriment of patient care?23 How did healthcare professionals reconcile their values and beliefs in good patient care, with events occurring at their hospitals? How did they justify their inaction in the face of egregious lapses in patient care, and the misreporting of key statistics? Cognitive dissonance theory can help shed light on these questions.

Cognitive dissonance theory states that when an individual holds cognitions that are inconsistent with one another, they will experience dissonance. Dissonance is an unpleasant psychological state, and humans are naturally motivated to reduce it by one of three routes: (1) removing the dissonant cognitions, (2) adding new consonant cognitions or (3) reducing the significance of the dissonant cognitions. Moreover, the motivation to reduce dissonance increases with the magnitude of the dissonance.
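
Festinger's account is often summarised in textbooks (though not in the sources discussed here) by a simple ratio, offered below only as a reading aid rather than as a formula used in this paper:

    magnitude of dissonance = (importance-weighted dissonant cognitions)
                              / (importance-weighted dissonant cognitions
                                 + importance-weighted consonant cognitions)

On this reading, adding consonant cognitions (route 2) or trivialising dissonant ones (route 3) both shrink the ratio, which is why they relieve the discomfort.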

In one of the earliest studies to investigate the theory experimentally, Festinger and Carlsmith24 required participants who had undertaken a tedious experimental task to describe it as interesting in order to persuade another participant to complete it, thus inducing two dissonant cognitions: ‘the task is dull’ and ‘I convinced another participant that the task would be interesting’. Festinger and Carlsmith24 argued that dissonance could be reduced by cognitively restructuring one's evaluation of the task: ‘the task is more interesting than I first thought’. Additionally, the magnitude of dissonance between the two cognitions was experimentally manipulated: participants were paid either a minimal ($1) or a substantial ($20) sum for telling the next participant that the task was interesting. Only participants in the minimal payment condition showed evidence of dissonance reduction, reporting more favourable attitudes towards the task than participants in the substantial payment condition. According to Festinger and Carlsmith,24 the payment of $1 was insufficient to justify deceiving a potential participant, so participants added a consonant cognition: “I said the task was interesting, not because I was paid a lot of money, but because I believed it was”.

Thus, cognitive dissonance theory implies that a powerful motivation to maintain cognitive consistency can give rise to unsound, and occasionally maladaptive, behaviour. For example, behaviour (eg, contributing to an unsafe patient environment by misreporting HCAI statistics) that contradicts a belief (eg, ‘I care about patients and their safety’) generates a state of psychological discomfort, which has the motivational power to change cognitions; a clinician might, for instance, come to believe that ‘nobody else seems bothered about patient safety’ in an effort to reduce the significance of the dissonant cognitions. Applying cognitive dissonance theory to the events that unfolded at these hospitals renders seemingly incomprehensible behaviour understandable. Equally, a personal ethical value system that is incongruent with the ethical culture of the employing organisation can result in cognitive dissonance, impacting negatively upon job satisfaction. Healthcare professionals at each of these hospitals—embedded within an organisational culture where patient safety appeared to be of the lowest priority—are likely to have experienced cognitive dissonance, and to have succumbed to the powerful incentive to maintain cognitive consistency through whatever means necessary. Within a dissonance framework, employee behaviours such as misreporting statistics, failing to implement IPC procedures (eg, hand hygiene) and a general withdrawal from activities contributing to good patient care—all so contrary to patient safety—become more readily understandable.

Conclusions

Unacceptable behaviour was allowed to flourish in the hospitals at the centre of several high-profile inquiries—patients were not given help to meet their basic needs, clinicians disengaged from key IPC practices and managers ignored, silenced or misreported statistics they did not want to hear. This paper has highlighted how psychology can help ‘articulate the unacceptable’. As Newdick and Danbury2 point out, we need a ‘better understanding of the circumstances that can lead to…outcomes’ such as those observed in these high-profile inquiries. Applying psychological theories may underpin the ‘better understanding’ so desperately needed. Building a thorough understanding of the precipitants to poor care, be they systemic or individual, or both, should be a priority. Premature recommendations, based upon a superficial understanding of what went wrong and why, may achieve little more than demoralisation when subsequent failures occur, as they inevitably do. If progress is to be made, we need to rethink the way in which we respond to failures when they occur. Currently, contingencies in a typical hospital environment operate to negatively reinforce the silencing of unwelcome news. Even when such news is shared, it is often not acted upon, and learned helplessness becomes an organisational risk. Complex and unclear accountability structures within hospitals facilitate diffusion of responsibility, as clinicians struggle with morally loaded decisions (eg, ‘do I speak up and risk censure, or do I stay silent and potentially collude with a system putting patients at risk?’). On the premise that the vast majority of NHS employees are committed to providing the very best care for their patients, a debate is needed about the factors that motivate or discourage particular staff behaviours within healthcare organisations. Psychology can contribute to this debate.

Acknowledgments

The author thanks Professor Robert Jones for commenting on earlier drafts of this manuscript.

References

Footnotes

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • i Negative reinforcement occurs when a behaviour is followed by the removal of an aversive stimulus, making that behaviour more likely to recur. Behaviour that functions to avoid an aversive stimulus is therefore strengthened through negative reinforcement.

  • ii A Skinner box (also referred to as an operant conditioning chamber) is a laboratory apparatus used in the experimental analysis of animal behaviour. An animal is placed in the chamber and receives reinforcement (eg, food or water) upon performing a specific behaviour (eg, pressing a lever in response to a sound or light signal). In some cases, the chamber delivers a punishment (eg, a mild electric shock) following missed or incorrect responses.

  • iii A contingency refers to a relationship between a response (eg, pressing a lever) and a consequence (eg, delivery of a food pellet) in which the consequence is presented only if the response occurs.
