In our recent paper ‘Trust and the Goldacre Review: Why TREs are not about trust’1 we argue that trusted research environments (TREs) reduce the need for trust in the use and sharing of health data, and that referring to these data storage systems as ‘trusted’ raises a number of concerns. Recent replies to our paper have raised several objections to this argument. In this reply, we seek to build on the arguments presented in our original paper, address some of the misunderstandings of our position expressed in these replies, and sketch out where further research is needed.
Trustworthiness signalling
One of the central arguments of our original paper was that the language we use to describe data-sharing initiatives matters, because the way things like TREs are presented and framed makes a difference to what people can, and should, expect from them. However, there is a more basic and obvious problem with referring to these institutions as ‘trusted’: it begs the question of whether these institutions are, in fact, trusted. We cannot know, in advance of these initiatives being implemented and established and appropriate assessments completed, whether they are actually trusted. Further questions abound: whom might TREs eventually be trusted by? If some people end up trusting them, but not others, is it appropriate to call them ‘trusted’? At best, this is simply a case of wishful thinking (and a more accurate name would be ‘hopefully-TREs’). More problematically, this may be an instance of ‘trustworthiness-signalling’: an attempt to convince others to trust, without meeting the requirements for genuine trustworthiness. By calling TREs ‘trusted’, the implication seems to be that placing our trust in them is perfectly fine: they are the kinds of things that are trusted by others and can safely be trusted by us. But not only does simply calling something ‘trusted’ …
Footnotes
Twitter @rjmilne
Contributors MG provided substantial contributions to the conception and design of the work, and wrote the initial and subsequent drafts. RM and MS provided substantial contributions to the conception and design of the work, and critically revised the initial draft for important intellectual content. PF provided substantial contributions to the conception and design of the work. All authors have given final approval of the published version.
Funding This study was funded by the Wellcome Trust (108413/A/15/D, 203132/Z/16/Z).
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.