Abstract
When it comes to using patient data from the National Health Service (NHS) for research, we are often told that it is a matter of trust: we need to trust, we need to build trust, we need to restore trust. Various policy papers and reports articulate and develop these ideas and make very important contributions to public dialogue on the trustworthiness of our research institutions. But these documents and policies are apparently constructed with little sustained reflection on the nature of trust and trustworthiness, and therefore are missing important features that matter for how we manage concerns related to trust. We suggest that what we mean by ‘trust’ and ‘trustworthiness’ matters and should affect the policies and guidance that govern data sharing in the NHS. We offer a number of initial, general reflections on the way in which some of these features might affect our approach to principles, policies and strategies that are related to sharing patient data for research. This paper is the outcome of a ‘public ethics’ coproduction activity which involved members of the public and two academic ethicists. Our task was to consider collectively the accounts of trust developed by philosophers as they applied in the context of the NHS and to coproduce an argumentative position relevant to this context.
- information technology
- interests of health personnel/institutions
- public health ethics
- research ethics
- confidentiality/privacy
Data availability statement
No data are available.
When it comes to using patient data from the National Health Service (NHS) for research, we are often told that it is a matter of trust: we need to trust, we need to build trust, we need to restore trust. Our research institutions need to be trustworthy. Often the response to worries about trust is governance, regulation and oversight. This response seems to make sense, particularly when there are big crises of data misuse like Facebook/Cambridge Analytica, Google DeepMind or Care.data.
The UK government’s policy paper ‘The future of healthcare: our vision for digital, data and technology in health and care’ released in 2018 has as one of its guiding principles: ‘a critical need to build and maintain public trust’.1 As a part of the Data Ethics Framework presented in the paper, the sixth principle suggests that researchers should make their work transparent and accountable. An Academy of Medical Sciences report published in November 2018 suggests that a radical culture change in the NHS is needed in order to retain public trust and that a guiding principle should be to ‘maintain trustworthiness in the responsible and effective stewardship of patient data within the NHS’. Throughout the report it is made clear that ‘transparency, accountability and communication are … essential’.2
Each of these examples demonstrates the centrality of contemporary concerns about trust and trustworthiness. Policy papers and reports like this can contribute to increasing public dialogue about the trustworthiness of our social institutions like the NHS, which in turn can increase public trust in those institutions. But these documents and policies are apparently constructed with little reflection on the nature of trust and trustworthiness, and therefore are missing important features that matter for how we manage concerns related to trust. We suggest below that what we mean by ‘trust’ and ‘trustworthiness’ matters and should affect the policies and guidance that govern data sharing in the NHS.
Over two weekends in March 2019, we gathered as a group of 11 members of the public (not representatively sampled) and two academic ethicists to collectively consider how philosophical accounts of trust might apply within the context of data sharing in the NHS.1 This paper is the outcome of this ‘public ethics’ coproduction activity. For 3 days, we thought together about what trust and trustworthiness are, rather than whether we trusted. This was not an exercise in surveying attitudes designed to elicit ‘what the public think’3 or a citizens’-jury-style activity designed to make an authoritative decision about policy.4 Our task was to consider collectively the accounts of trust developed by philosophers as they applied in the context of the NHS and to coproduce an argumentative position relevant to this context. As a method, this is, as far as we know, unique; we drew inspiration from deliberative methods,5 but worked towards a theoretical contribution rather than a collective decision.6 What follows are considerations that arose from our many discussions, which we agreed were key issues worth articulating.
O’Neill’s Reith Lectures
In the BBC Reith Lectures of 2002, Onora O’Neill laid down a challenge about how we maintain, ensure and cultivate trust in societal institutions.7 This challenge was to avoid trying to address deficiencies of trust in institutions by using strategies that defeat and undermine our ability to trust them. The difficulty she pointed to is that accountability, openness and transparency are ways in which we reduce uncertainty, lower the risk of harm and maintain control. All of these things run counter to trust and, in her view, prevent trust because they remove conditions that are required for trust. In short, she argued, it seems that if we require accountability, openness and transparency, then we are not prepared to trust. The examples mentioned above seem to ignore O’Neill’s challenge by directly and straightforwardly linking trust and trustworthiness with accountability, openness and transparency.
In our discussions, we noticed that while accountability, openness and transparency do clearly run counter to trust, they may still be important features of how institutions demonstrate that they are trustworthy: through demonstrating that they are reliable.8 Being reliable is an important part of being trustworthy and one obvious way for institutions to demonstrate reliability is by being accountable, open and transparent. So, we might think that there is an important, ongoing cycle which involves accountability, openness and transparency, which demonstrates reliability and allows people to have the grounds for trusting. However, being trustworthy is not only about being reliable. Trustworthy institutions should also have the appropriate values and commitments: for example, for the NHS, a commitment to patient care and improving patient care above all else.
The nature of trust and trustworthiness
Philosophers like O’Neill have been thinking about the nature of trust and trustworthiness for some time. There is some consensus about key features of trust and its relationship to trustworthiness that can and should make a significant difference to the policies and strategies that we adopt in using patient data for research.
Trusting a person or an institution involves uncertainty and risk.9 Trusting someone involves a ‘leap of faith’. When we trust our doctor, it involves the risk that we will not become healthy or that we might even get worse. Because of our uncertainty and lack of expertise when it comes to medicine, we put our trust in clinicians to act in our interests.10
Trusting is not the same as relying or depending. We rely on things that are regular, fixed and determined, and we feel angry or disappointed when they are not that way (eg, when the bus does not arrive at 09:22 as usual).11 Trusting is associated with gratitude when it is vindicated and betrayal when it is not.12 Reliance can be thought of as a ‘dependence based on the likely prediction of the other’s behaviour’ while trust involves both reliance and the sense that the trusted individual or institution has the appropriate commitments and values.13 In some cases, reliance is appropriate: we rely on buses and bus drivers, but do not trust them. We distinguish between trusting a person and needing assurances or guarantees. When we trust a person or an institution, that trust is likely to survive a number of ‘failures’, so long as they are not too significant or of a certain kind. When we rely on a person or an institution, they often cease to be reliable after one ‘failure’.9
Trust and trustworthiness can, and often do, come apart. As O’Neill emphasises, being trustworthy does not guarantee that trust will be granted, and when trust is placed, it is not always in trustworthy individuals or institutions.7 Trustworthiness depends on features of the object of trust, whether they are in fact reliable in the right kind of way and have appropriate commitments and values, while the act of trusting will depend on features of the person placing their trust, and will be shaped by their experiences, beliefs and knowledge.
Trust, trustworthiness and NHS research
While philosophers have primarily thought about trust in the context of interpersonal relationships, these features of trust and trustworthiness, we suggest, can help us think through their role in the context of using patient data for research in the NHS. Each brings out an important consideration about this setting.
Complexity: trusting institutions
Just as uncertainty and risk underlie the act of trusting another person, these features matter in the context of institutions. However, trusting an individual looks different from trusting an institution.14 While one’s trust in an individual is generally only impacted by the actions of that individual, one’s trust in an institution can be impacted by several different factors. This is especially true in the case of the NHS (and research in the NHS context), a vast and complex institution that involves many distinct but related sectors: information governance, data protection, information technology structures, research ethics committees, researchers and research groups and clinical staff (doctors, nurses and administrators). The larger and more complex the institution, the more uncertainty exists about how to understand its behaviour, and the more complex judgements of trust become. For example, it can be difficult to say when a particular individual is acting ‘on their own’ or when their behaviour reflects on the institution. This can matter for trust; we may not trust the individual, but we may continue to trust the institution if we distinguish the two. In such a complex system, each interaction with the system can impact inclinations towards trust or distrust, and one act by a single actor can impact perceptions of overall trustworthiness.
This complexity means that we need to pay much more attention to the different roles and functions that different individuals and groups play within the NHS and its research activities. It also means that what it is to be trustworthy will differ for these groups and individuals. A complex system needs to be well understood by those who might trust it before we can collectively try to make policies for the better15—and this is all the more important when what is at stake is the trustworthiness of the institution itself.
The appropriateness of trust
As mentioned above, sometimes relying on an individual or institution is more appropriate than trusting them/it. Particularly in cases in which consistent and guaranteed performance is required, we may be better off relying rather than trusting. For example, the data systems and infrastructure which house patient data should be as secure as possible as well as being as robust and well curated as possible. We want assurances that these systems function in a way that protects patient data from misuse, error and corruption. Having such assurances is not a matter of trust or trustworthiness, but one of reliability.
This is in sharp contrast to how we think about the role of the data controllers and, in particular, the role that the Caldicott Guardian plays in relation to data sharing in the NHS. The Caldicott Guardian makes decisions about who is to be granted access to what patients’ data and under which conditions. These decisions are guided by a set of principles16 but importantly, require judgement: the principles need to be applied in particular cases and various risks and benefits need to be assessed, weighed and judged with each application. This means uncertainty: the way in which these principles are interpreted in specific cases is variable and so uncertain to those outside the process. Given this uncertainty, trustworthiness is crucial for Caldicott Guardians. Importantly, this is why the character of those individuals who are given the role of the Caldicott Guardian matters: they must be demonstrably trustworthy, not merely reliable. They must have the right commitments and values.17
Trustworthiness is also an appropriate aim for clinician researchers: they serve their current patients as clinicians and future patients as researchers. They must routinely make decisions in the context of uncertainty and risk. Research clearly takes place in the context of uncertainty and is designed to reduce risks (or understand them) through the acquisition of knowledge.18 In the clinical setting, a decision is often required about whether the offer of participation in research is appropriate for a particular patient. Because of this uncertainty, it is also important for clinician researchers to be trustworthy, and not merely reliable; patients must be able to trust that their doctor will act in their best interest, and according to their wishes and values, even when the future is unclear.
However, the same is not true for the corporate partners of the NHS. While we recognise the importance of commercial involvement in the research endeavours of the NHS, it seems plain that this involvement should be reliable, as well as carefully governed and overseen. We should not be required to trust corporate partners but should be confident relying on them or the structures which govern them. This is not a negative conclusion, but it does acknowledge the clear but necessary conflict of interest between healthcare and commercial interests.
Distinguishing trust from trustworthiness
Finally, it is important to clearly distinguish between trust and trustworthiness in thinking about data sharing in the NHS. While the NHS can strive to be trustworthy with respect to the use of data for research, there is no guarantee that it will be trusted. Trusting requires a leap of faith. It requires accepting the uncertainty and the risk, and trusting that the other will act to the best of their ability and in good faith. Not everyone can take this leap even when it is warranted, and some people are less inclined to trust in particular circumstances.19
This means that the NHS and the research institutions that are associated with it should strive to be trustworthy and should be trustworthy. In general, thinking about the values associated with research in the NHS (like the direct relationship to patient benefit) and developing processes that reflect and secure these values, works towards trustworthiness. The guidance offered around the Data Ethics Framework1 and the Wellcome Trust’s Understanding Patient Data initiative20 provides a good starting point in articulating such values and processes.
Significantly, no matter how trustworthy the NHS is, some people will still not trust. And this may not be anyone’s fault. Things will not always go as we expect. Things will go wrong and sometimes this will be someone’s fault and other times it will not. If we require guarantees against anything going wrong, then trust is not the right thing to aim at. When we trust, we have faith: things can go wrong that do not undermine our trust. How should the NHS handle those who do not trust it despite its best efforts? It seems to us that this is a case where no one should be disregarded. Part of what is associated with being trustworthy is not being prepared to leave behind those who do not trust us.
Conclusions: being careful about trust
Overall, it is easy to make claims about trust and trustworthiness and about what needs to be done in order to maintain or restore trust. But there is a danger that significant complexities can be overlooked particularly when we do not attend to important features of trust and trustworthiness. As O’Neill has suggested, policies and strategies which ignore these features run the risk of damaging the very institutions they are designed to protect. We have largely taken a lead from her suggestions in an effort to encourage reflection on what we are aiming at when we think about how to manage patient data in the NHS.
We have pointed to a number of complexities which we think make an important difference to the way in which we think about what we aim for when we aim at trust or trustworthiness and indeed, in some cases, whether trust or trustworthiness are appropriate aims. The size and complexity of the NHS makes it difficult to understand when and where to place trust in it. Related to this, in some cases trustworthiness is appropriate, as with the Caldicott Guardian, but sometimes we require guarantees and reliability, as with data systems. A complex system needs to be well understood by those who might trust it before we can collectively try to make policies for the better—and this is all the more important when what is at stake is the trustworthiness of the institution itself.
We should also be sensitive to the distinction between trust and trustworthiness: institutions should aim to be trustworthy, but when people do not trust, it is not always an institutional failure. This is a comment on how we handle failures of trust: not everyone will always trust the NHS, but we should not build our strategies with this as our aim.
Clear and significant steps have been taken in the attempt to explicate processes and values that are important parts of trustworthiness for the NHS in this context. Frameworks, like the Data Ethics Framework and the Understanding Patient Data initiative, need to be framed more clearly in terms of trustworthiness or reliability as appropriate: between those contexts where the institution aims at trust and those where it provides guarantees.
Ethics statements
Patient consent for publication
Acknowledgments
MS and PF are grateful for the support of the Oxford NIHR Biomedical Research Centre. This research was funded by the Oxford NIHR Biomedical Research Centre.
Footnotes
Twitter @mark_sheehan_ox
Contributors The workshop was organised and led by MS and PF. MS drafted an initial series of bullet points arising from the discussion and then drafted the first version of the paper. All authors contributed to the discussion over the 3 days of the workshop and shaped the ideas that went into the final version. All authors commented on the paper across the series of drafts.
Funding The workshop was funded by the Oxford NIHR Biomedical Research Centre.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
One participant withdrew after the first day. Two others have not replied since the meeting.