Background: Concern has been expressed about the process of consent to clinical trials, particularly in phase I “first-in-man” trials. Trial participant information sheets are often lengthy and technical. Formula-based readability testing of sheets, which is often required to obtain research ethics approval for trials in the USA, is limited and cannot indicate how information will perform.
Methods: An independent-groups design was used to study the user-testing performance of the participant information sheet from the phase I TGN1412 trial. Members of the public were asked to read it, then find and demonstrate understanding of 21 key aspects of the trial. The participant information sheet was then rewritten, redesigned and tested on 20 members of the public, using the same 21-item questionnaire.
Results: On the original TGN1412 participant information sheet, participants often could not find answers, and some of the information they did find was not understood. Six of the 21 questions, including those relating to the placebo, follow-up visits and the emergency phone number, were found and understood by eight or fewer of the 10 participants. The revised information sheet performed better: the answers to 17 of the 21 questions were found and understood by all 20 participants.
Conclusions: Tests showed that the TGN1412 participant information sheet may not inform participants adequately for consent. Revising its content and design led to substantial improvements. Writers of materials for trial participants should take account of good practice in information design. Performance-based user testing may be a useful method to indicate strengths and weaknesses in trial materials.
A series of research papers and commentaries over the past decade have expressed concern about the process of consent for participants in clinical trials. For example, Jenkins and colleagues1 observed participants being recruited to cancer trials and noted that important information was often not stated by the recruiting doctor and, in most cases, the patient’s understanding of the trial was not checked when taking consent. Studies that have surveyed patients’ understanding at the end of a trial have found suboptimal comprehension, such as one-fifth not knowing the name of the medicine being tested2 and 30–40% of patients not knowing that they could withdraw at any time.3 4 A recent study of participants in five clinical trials found that, while almost all participants were satisfied with the consent process, one-third did not understand that the primary purpose of the trial was research.5 These findings are confirmed by two secondary data studies. First, a systematic review of communication and informed consent in phase I cancer trials6 suggested that some aspects relevant to participants, such as risks and benefits associated with the tested medicine and the right to withdraw consent, were understood less well than others. The review concluded that “patients do not appear to be adequately informed of the aims of phase I trials” (p304).6 A second literature review showed that understanding of trials was poorer among older patients and those with fewer years in education.7
The concern about participant understanding of a trial was heightened as a result of the serious adverse events experienced by six healthy volunteers in the TeGenero (TGN1412) phase I trial at Northwick Park.8 This followed other serious incidents among participants in phase I trials in the USA.9 The Expert Scientific Group on Phase I Trials, formed as a result of the TGN1412 incident, reported that, although the process of informed consent and clarity of participant information were “not within [the Group’s] remit” they were “extremely important” and should be “taken up as a high priority and considered in detail”.10 The Royal Statistical Society report on the incident criticised the information sheets provided to participants, particularly for the use of difficult, technical words and a lack of clarity about the treatment allocation schedule.11
For phase I trials, in which detailed and often complex information has to be provided to healthy volunteers, the role of the participant information sheet would seem to be particularly important. Several studies have reviewed and tested such participant information sheets using so-called “readability formulae”, such as the Flesch–Kincaid Grade Level.12 One 1980s review of 200 participant information sheets, using readability formulae, found that they were written at the “college graduate level”.13 A review of more recent sheets suggests that readability has improved but that most still require too high a standard of literacy in readers.14 Readability formulae are often used in the USA to assess the participant information sheet, not least because many institutional review boards require that sheets score at a certain level of readability before they will approve the trial.15
Readability formulae are easy to use and are available on many mainstream word processors, and a result can be obtained quickly. The scores they generate largely depend on word and sentence length; therefore, writers of complex information can attain a lower (ie, easier) readability score simply by shortening sentences and words. However, formulae cannot assess meaning—the sentence “intravenously given be will drug the” will attain the same readability score as “the drug will be given intravenously”—and, crucially, they cannot indicate how a piece of information will perform. In the case of clinical trials, for example, they can give no sense of whether potential participants would be likely to understand information about trial procedures, possible safety issues or randomisation. The limited value of readability formulae in assessing the quality of the participant information sheet has led to recommendations to replace them with a method of performance-based testing.16 17
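The point that such formulae see only surface counts can be sketched in a few lines. This is a minimal illustration, not the exact routine any word processor uses; in particular, the syllable counter below is a naive vowel-group heuristic assumed for the sake of the example:

```python
import re

def count_syllables(word: str) -> int:
    """Naive heuristic: count runs of consecutive vowels (minimum 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level, computed only from sentence length
    and syllables per word: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

# Word order is invisible to the formula: the meaningless scramble
# scores exactly the same grade level as the intelligible sentence.
original = "The drug will be given intravenously."
scrambled = "Intravenously given be will drug the."
assert fk_grade(original) == fk_grade(scrambled)
```

Because only word, sentence and syllable counts enter the calculation, a writer can lower the grade simply by splitting sentences, without making the content any clearer.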
Testing how written patient information performs is a relatively recent activity.18 19 It has gained impetus since 2005, when European Union (EU) law required medicine manufacturers to test the patient information leaflets (which now come inside every medicine pack).20 Without a successful and documented test, authorisation to market a new drug will not be granted. Consequently, in the past 4 years, medicine leaflets across Europe, including several thousand in the UK alone, have been tested for readability using a performance-based method. The method used is, almost universally, one referred to as “user testing”, described in EU guidance documents.20 User testing involves potential medicine users reading the information materials under test and then being asked to find and show understanding of 12–15 items of information. (The term user testing can be misunderstood—it is the users who are testing the information, rather than the users being tested.) Participants are usually potential users of the information, rather than those currently taking the medicine, who would bring prior knowledge to the testing. Perhaps most importantly, user testing is intended to be formative and iterative, with the information materials being revised after rounds of 10 participants, to remedy any significant flaws that each round of testing identifies. The EU legislation has applied user testing to medicine leaflets in a summative way. The EU standard is that the final iteration of the leaflet should have been tested on two rounds of 10 people, that each item of information is found by at least 90% of participants and that, of those found, 90% are understood.21 This process has been applied to medicine information leaflets in Australia since the 1990s.22
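The EU pass criterion for a single item can be made concrete as follows. The function and parameter names are illustrative assumptions; the guidance specifies only the two 90% thresholds:

```python
def meets_eu_criterion(n_participants: int, n_found: int, n_understood: int) -> bool:
    """EU user-testing standard for one questionnaire item:
    at least 90% of participants must find the information, and
    at least 90% of those who find it must understand it."""
    if n_found < 0.9 * n_participants:
        return False
    return n_found > 0 and n_understood >= 0.9 * n_found

# 19 of 20 found (95%), 18 of those 19 understood (~95%): passes.
assert meets_eu_criterion(20, 19, 18)
# Only 17 of 20 found (85%): fails on the finding threshold alone.
assert not meets_eu_criterion(20, 17, 17)
```

Note that understanding is assessed as a proportion of finders, not of all participants, so an item can pass overall with as few as 81% of participants both finding and understanding it.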
This paper reports the adaptation of the user-testing method to the participant information sheet provided to healthy volunteers in the TGN1412 trial, to assess whether members of the public could find and understand key pieces of information related to the trial. The test results were then used to inform the revision of the participant information sheet.
Thus this study tests the participant information sheet provided to participants in the trial and also the application of user testing to a form of patient information other than a medicine information leaflet.
An independent-groups design was used, with each participant seeing only one version of the information.
Thirty healthy members of the public were recruited via media advertising and promotional flyers. Participants were men aged 18–40 years, to match the volunteers in the actual TGN1412 trial. We excluded people who had taken part in any medicine trial or readability testing study in the previous 6 months. We aimed to ensure that each round of 10 participants had a similar profile in terms of likely influences on testing: age, educational attainment and occupation type.
The original TGN1412 trial participant information sheet comprised 11 pages of single-sided A4 paper and contained 5588 words (see online only figs 1, 2 and 3 for example sections). This was obtained from the CIRCARE (Citizens for Responsible Care and Research) website.23 All content identifying individuals or organisations involved in the trial was replaced with pseudonyms.
A revised version of the TGN1412 participant information sheet was then produced, retaining its meaning but with revised format, appearance and wording. See below for further detail on this revision process (see online only figs 4, 5 and 6 for example sections).
Participants’ ability to find and understand 21 key points of information in the sheets (see table 1). The 21 items were drawn from four categories of information, being those that would apply to trials of any phase:
the nature and purpose of the trial (three questions);
the process and meaning of consent (six);
trial procedures (seven);
safety and efficacy of the tested medicine (five).
The authors independently selected the “key points” for questions, based on the predefined categories, with any differences reconciled by consensus. The questionnaire was then written, based on the selected items of information. As is normal practice in user testing, questions were arranged so that their order did not correspond with the order of the information in the participant information sheet. Each of the 21 items was scored for finding information: “yes”, “no” or “found with difficulty” (ie, taking more than 3 minutes) and, if found, for understanding (“yes” or “no”). The time taken to read the information sheets and to answer all the questions was measured.
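The scoring scheme above can be tallied per item across a round of participants; in such a tally, “found with difficulty” counts towards finding but is reported separately, and understanding is recorded only for those who found the information. The function and field names below are illustrative, not taken from the study’s own analysis:

```python
from collections import Counter
from typing import List

def summarise_item(finding: List[str], understood: List[bool]) -> dict:
    """Tally one questionnaire item for a round of participants.
    finding: one of "yes", "no" or "difficulty" (>3 minutes) per participant;
    understood: one bool per participant who found the information."""
    counts = Counter(finding)
    n_found = counts["yes"] + counts["difficulty"]  # "difficulty" still counts as found
    return {
        "found": n_found,
        "found_with_difficulty": counts["difficulty"],
        "understood": sum(understood),
    }

# Hypothetical round of 10: 7 found easily, 2 with difficulty, 1 not found;
# 8 of the 9 finders demonstrated understanding.
round1 = summarise_item(
    ["yes"] * 7 + ["difficulty"] * 2 + ["no"],
    [True] * 8 + [False],
)
assert round1 == {"found": 9, "found_with_difficulty": 2, "understood": 8}
```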
The study comprised three stages:
Testing of the original information. The information was tested using participants who were asked to imagine that they had expressed an interest in being part of a drug trial. They were told that, after having time to read the information through, they would be asked some questions about it. They were left alone to read the sheets and were asked to tell the interviewer when they had finished reading. Then each of the 21 user-test questions was put in turn and the participants were asked, first, to find the answer to the question in the information sheet and, second, to give their answer and, where required, to explain what the information meant. No upper time limit was placed on answering each question, and the interviewer moved on only if the participant requested it or when it became clear that they could not find the answer. After the 21 structured questions, participants were asked for their opinions on the information sheets, with particular focus on the wording and appearance. Interviews were audio-recorded and transcribed.
Rewording and redesign of the participant information sheet. Revision of the information was based on three sources: participants’ user-test questionnaire data and their opinions on the information sheet; best practice in information wording and design;24 and the authors’ experience and expertise in information writing and design. Care was taken to retain the original meaning of the information. The revised information was 5682 words long, in Linotype Frutiger Next font and presented as an eight-page folded booklet of A4 size (210×297 mm). Examples of the changes are illustrated in online only figs 4, 5 and 6 and included:
adding both a summary of the most important points and a brief table of contents to the first page;
introduction of 10 section headings;
addition of page numbers;
shortened sentences (where necessary);
use of lay language;
removal of text repetition;
where related information had previously been dispersed, bringing it together under a relevant heading;
use of typographical bullets to indicate lists.
Testing of the revised participant information sheet (as per stage (1)). After completion of the second and third rounds of testing, participants were asked to briefly read a copy of the original participant information sheet and were asked which of the two versions they preferred and why.
The original information was tested on 10 male participants, aged 19–38 years (mean 29 years). Three of the 10 participants were higher education graduates, and three participants were unemployed or had occupations that did not involve regular use of written documents.
Participants took a mean of 22 minutes (range 10–35) to read the information sheet. The period taken to complete the 21 structured questions was a mean of 40 minutes (range 22–68 minutes).
Participants took a long time to find answers to many questions (indicated by “difficulty” ratings) (see table 1). Six questions were found and understood by eight or fewer of the 10 participants. These questions related to:
the presence of a placebo group,
action required if medically insured,
informing the GP of participation,
the number of follow-up clinic visits, and
the post-discharge emergency telephone number.
The problems noted in finding and understanding scores, the time taken to answer questions and participants’ evaluative comments all indicated that the original information sheet was not performing well. This meant that revision of the information was required, followed by further rounds of testing.
The revised information (see online only figs 4, 5 and 6 for examples of changes) was tested on two rounds of 10 male participants. The 20 participants were aged in the range 19–38 years (mean 29 years). Six (of 20) participants were higher education graduates, and seven (of 20) participants were unemployed or had occupations that did not involve regular use of written documents.
The 20 participants took a mean of 22 minutes to read the document, almost the same time as taken for the original version. The period taken to answer the 21 structured questions was a mean of 25 minutes (range 12–47 minutes), a mean reduction of 15 minutes compared with the original version.
Combining the data for rounds 2 and 3 (see table 1), the answers to 17 of the questions were found and understood by all 20 participants. No item caused such difficulty as to require revision of the document. The data from the first round of 10 participants who read the revised sheet showed that it was performing well. We tested the sheet on a second round of 10 participants in order to confirm the data pattern.
Participants’ comments about the revised participant information sheet were mostly favourable.
After having the opportunity to review the original version of the participant information sheet, the 20 participants were asked for their preference. The revised version was preferred by 17 of the 20 participants.
Performance-based readability testing of the participant information sheet for the TGN1412 trial showed that it performed poorly. Members of the public found it hard to find information on important aspects of the trial, including its nature and purpose, consent, trial procedures and information about the tested medicine. When information was found, it was not always understood. Some important aspects of the trial, including the presence of a placebo group, the number of follow-up visits participants would have to make and the emergency telephone number, were not found or not understood by three of the 10 participants, meaning the sheet scored below the conventional threshold set for user testing of medicine information leaflets.
Revising the document—by rewriting and redesigning it, while retaining its meaning—led to meaningful improvements in its performance, as shown by the further rounds of testing. Almost all pieces of information were found by all 20 participants, and almost all the found information was understood correctly. This supports the views of Ancker17 and others that the analysis of the participant information sheet needs to be based on its performance, rather than on a number obtained from a readability formula. At worst, documents can be manipulated to score better on readability formulae with little or no real change in the ease with which participants can use the information.
As outlined in the Methods section above, revision of the participant information sheet was based on three sources: best practice in information design and writing, the authors’ expertise and the data that resulted from testing the original version. One pertinent question is whether testing data are necessary, or whether a document can be adequately revised by drawing on expert knowledge and best practice alone. The published studies suggest that testing would be required: experts are not good at predicting the problems that readers will face when trying to read and understand a document.25 The revision of the TGN1412 materials in this study did make explicit reference to the participant data obtained; for example, we paid attention to the detail on clinic follow-up visits (see online only figs 5 and 6) because the testing data showed that participants misunderstood this information in the original participant information sheet.
For three questions (numbers 6, 8 and 13) out of 21, proportionally more people could not find (or had difficulty in finding) information in the revised sheet than in the original version. The same applies to one question (number 12) for “understanding”. It is hard to explain these data other than as variation between individual participants: there is no obvious amendment that could be made to the revised version of the participant information sheet to improve scores. However, the difference in scores between the original and revised versions is very slight for these questions. In our experience, it is rare for a document to perform in user testing such that all questionnaire items are found and understood by all participants. The emphasis in the interpretation should be on the pattern of scores obtained across the questionnaire.
The performance of the original TGN1412 participant information sheet raises a question about the extent to which a participant in this phase I trial would have been adequately informed before giving consent. The large amount of information in the TGN1412 participant information sheet might have had a predictable impact on people’s ability to read and understand it. The process of obtaining consent should include the significant involvement of a clinician. However, even when a clinician is involved, adequate and effective supporting written information is necessary, as all the information can never be imparted in the time that a consultation would allow. In addition, most people quickly forget a third to a half of the information they are given in, for example, a medical consultation.26 27
Further evidence from the field of prescribed medicines is that patients prefer to receive information on treatments in a spoken form from a clinician; they want the written information to be a complement.24 In the process of giving consent, a recent systematic review reported that extended discussion is the most effective intervention in improving patient understanding.28 A trial’s written participant information sheet could be seen to perform several functions: before consent, it informs the potential participant and provokes questions; after consent, it acts as a memory aid and a record for the participant of what they have agreed to do. The aspiration of health information design and writing is that a document should be good enough for the individual patient or trial participant to read and understand it without the need to seek clarification from a clinician. The user-testing method evaluates the performance of written information under such circumstances.
User testing could be criticised for not adopting the quantitative approach applied to the drugs themselves in a clinical trial. Rather, it is a qualitative and iterative process, well established in the field of information design.18 When used in practice, it has been shown that the main deficiencies in a document can be identified after interviewing just 10 potential users.19 However, it would require a study comparing the original and revised participant information sheet versions, and including random allocation of participants, to confirm the pattern of scores reported in this study.
It is important to note that this study does not address the appropriateness of the content of the TGN1412 participant information sheet: it simply shows that the information that the writers wished to communicate to the participants was not well understood in the original document. There was some criticism that the information provided did not give the true picture and underplayed the risks.11 Such concerns need to be addressed through a process of content assessment, followed by user testing to ensure that the appropriate information can be found and understood.
Several previous intervention studies have examined the effect of making changes to trial participant information sheets. Shortening the information about an asthma trial led to greater understanding of a number of aspects (including randomisation, duration and benefits).29 Rewriting participant information into an “easy-to-read” format led to greater satisfaction and less anxiety among participants,30 although understanding of the information was not assessed. However, one study found no effect of offering different explanations of aspects of trials, such as equipoise and random allocation.31
This study suggests that the materials provided for participants in this phase I trial might fail to inform them adequately. It is not known whether the TGN1412 participant information sheet was longer or more complex than those used in other phase I trials, since the sheets are commercially sensitive and are hard to obtain. However, anecdotal evidence suggests that phase I sheets are frequently long and complex and that investigators struggle to write for a lay audience. It would not be surprising if the sheets for other phase I trials performed in user testing as the TGN1412 sheets did, but only further research would give an answer.
Some phase III trials now involve potential participants in the development of trial materials; indeed it may happen in as many as one-third of trials.32 Such involvement can result in significant amendments to the participant information.33 The value of performance-based testing, when combined with the application of good writing and information design, is that it provides a structure for the evaluation and assessment of participant materials. Its use might lead to meaningful increases in the proportion of trial participants who understand how the trial will run, know what they must do to fulfil their obligations, and can give adequately informed consent. As the Expert Scientific Group on Phase I Trials commented, these aspects should “be taken up as a high priority”.10
We thank Jean Godino, Sarah Hilton and Simon O’Hare for their help with participant recruitment and data collection. We also acknowledge Professor Jenny Hewison, Professor John Young and two anonymous reviewers, whose comments on previous versions of this article have greatly improved it.
▸ Figs 1–6 are published online only at http://jme.bmj.com/content/vol35/issue9
Competing interests DKR is a director of LUTO Research Ltd, a University of Leeds spin-out company that provides information writing and testing services to the pharmaceutical industry. JS is chair of an NHS research ethics committee, but the views expressed are those of the authors, not necessarily those of the National Research Ethics Service. BP provides graphic design services to the pharmaceutical industry and the NHS.