In our recent paper ‘Trust and the Goldacre Review: Why TREs are not about trust’1 we argue that trusted research environments (TREs) reduce the need for trust in the use and sharing of health data, and that referring to these data storage systems as ‘trusted’ raises a number of concerns. Recent replies to our paper have raised several objections to this argument. In this reply, we seek to build on the arguments presented in our original paper, address some misunderstandings of our position expressed in these replies, and sketch out where further research is needed.
One of the central arguments of our original paper was that the language we use to describe data-sharing initiatives matters, because the way things like TREs are presented and framed makes a difference to what people can, and should, expect from them. However, there is a more basic and obvious problem with referring to these institutions as ‘trusted’: it begs the question of whether these institutions are, in fact, trusted. We cannot know whether they are actually trusted before these initiatives have been implemented and established, and appropriate assessments completed. Further questions abound: whom might TREs eventually be trusted by? If some people end up trusting them, but not others, is it appropriate to call them ‘trusted’? At best, this is simply a case of wishful thinking (and a more accurate name would be ‘hopefully-TREs’). More problematically, this may be an instance of ‘trustworthiness-signalling’: an attempt to convince others to trust, without meeting the requirements for genuine trustworthiness. By calling TREs ‘trusted’, the implication seems to be that placing our trust in them is perfectly fine: they are the kinds of things that are trusted by others and can safely be trusted by us. But not only does simply calling something ‘trusted’ …
Contributors MG provided substantial contributions to the conception and design of the work, and wrote the initial and subsequent drafts. RM and MS provided substantial contributions to the conception and design of the work, and critically revised the initial draft for important intellectual content. PF provided substantial contributions to the conception and design of the work. All authors have given final approval of the published version.
Funding This study was funded by the Wellcome Trust (108413/A/15/D, 203132/Z/16/Z).
Competing interests None declared.
Provenance and peer review Not commissioned; internally peer reviewed.