Ethical concerns about randomising persons to a no-treatment arm in the context of the Ebola epidemic led to consideration of alternative designs. The stepped wedge (SW) design, in which participants or clusters are randomised to receive an intervention at different time points, gained popularity. Common arguments in favour of this design are that (1) it is indicated when an intervention is likely to do more good than harm, (2) all participants receive the experimental intervention at some time point during the study and (3) it may be preferable for practical reasons. We examine these assumptions in the context of Ebola vaccine research. First, prompted by the claim that a stepped wedge design is indicated when an intervention is likely to do more good than harm, we reviewed published and ongoing SW trials to explore previous use of this design to test experimental drugs or vaccines, and found that the SW design has never been used for such trials. Given that the Ebola vaccines were all experimental, with no prior efficacy data, the use of a stepped wedge design would have been unprecedented. Second, we show that it is rarely true that all participants receive the intervention in SW studies; rather, depending on certain design features, all clusters receive it. Third, we explore whether the SW design is appealing for feasibility reasons and point out its significant complexity. In the setting of the Ebola epidemic, spatiotemporal variation would have posed substantial challenges to a stepped wedge design for vaccine research. Finally, we propose a set of points to consider for scientific reviewers and ethics committees regarding proposals for SW designs.
- Research Ethics
- Clinical trials
- Policy Guidelines/Inst. Review Boards/Review Cttes.
- Drugs and Drug Industry
Interest in developing safe and effective therapeutic and vaccine interventions for Ebola virus disease (EVD) increased in late 2014 as the Ebola epidemic continued to cause devastating illness and many deaths in West Africa. Some argued that although placebo randomised controlled trials are recognised as the gold standard for evaluating the safety and efficacy of new interventions, in the setting of a public health emergency like Ebola, using a non-treatment control group in research raises ethical concerns.1,2 A spate of controversy and commentary ensued.1–5 Some recommended consideration of alternative designs, including the less familiar stepped wedge (SW) design.6,7
In SW trials, clusters or individuals are randomised to receive the rolled-out intervention at different predetermined study time points. The first systematic review of SW designs described their advantages as follows:
“by the end of the study, all participants will have received the intervention, although the order in which participants receive the intervention is determined at random. The design is particularly relevant where it is predicted that the intervention will do more good than harm (making a parallel design, in which certain participants do not receive the intervention unethical) and/or where, for logistical, practical or financial reasons, it is impossible to deliver the intervention simultaneously to all participants”.8

These major assumptions about SW designs—that they are useful for interventions likely to do more good than harm;7 that they allow all participants access to experimental interventions,7,9,10 thus improving acceptability;11 and that they have practical and logistical advantages—were appealing in the case of Ebola vaccine research.7 Although SW designs have primarily been used to evaluate intervention implementation or to collect effectiveness data while rolling out an intervention,12 the SW design has also been proposed as relevant to addressing ethical tensions in the midst of pandemics because it expands access to interventions compared with traditional parallel randomised trials.13
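The basic allocation logic can be sketched in a few lines of code. The following is a minimal illustration only, assuming the simplest common layout: one all-control baseline period, with one cluster crossing from control to intervention at each subsequent step, in a randomly drawn order.

```python
import random

def stepped_wedge_schedule(n_clusters, seed=None):
    """Randomly order clusters to cross from control (0) to
    intervention (1), one cluster per step. Returns a matrix of
    rows schedule[cluster][period]."""
    rng = random.Random(seed)
    order = list(range(n_clusters))
    rng.shuffle(order)                      # random crossover order
    n_periods = n_clusters + 1              # all-control baseline period
    schedule = [[0] * n_periods for _ in range(n_clusters)]
    for step, cluster in enumerate(order, start=1):
        for t in range(step, n_periods):
            schedule[cluster][t] = 1        # intervention from its step onwards
    return schedule

for row in stepped_wedge_schedule(4, seed=1):
    print(row)
```

Each row switches from 0 to 1 exactly once and never switches back: by the final period every cluster is exposed, which is why control data accumulate early in the study and intervention data accumulate late.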
Our interest in the SW design arose from its consideration as an option in the design of EVD vaccine trials. Three randomised Ebola vaccine trials were ultimately started,14 and although a stepped wedge design was considered in one of these trials,15 none of the three used a stepped wedge design. As the Ebola epidemic in West Africa has now ended, it is a good time to consider lessons learned about feasible and ethical trial designs in the setting of an emergency to help prepare for future public health emergencies. In this paper, we aim to provide stakeholders with a better practical understanding of SW trials by examining the aforementioned assumptions about SW designs for vaccine research and their application in discussions about Ebola vaccine trials. Finally, we propose a set of points to consider for multiple stakeholders, such as scientific reviewers and ethics committees, regarding proposals for SW designs.
Grounds for considering that an experimental Ebola vaccine would do more good than harm
The most frequently reported rationale for the use of a stepped wedge design in the existing literature is a belief or preliminary evidence of a beneficial effect of the intervention.8,12,16 A large percentage of identified studies described the motivation for using SW as “a belief or empirical evidence suggesting that the intervention would do more good than harm; denying the intervention to any participant was therefore regarded as unethical or socially or politically unacceptable”.12 To examine whether this idea of doing more good than harm might apply to a trial of an experimental Ebola vaccine, we review the usual developmental path of experimental vaccines and describe how SW designs have previously been used in testing vaccines or drugs.
Investigational preventive vaccines are usually tested on large numbers of healthy individuals who, although at risk of the targeted infection, may never become infected even without a vaccine. The specific vaccine trial design differs based on the phase of development15,17 (table 1). When data from phase I or II studies suggest that an experimental vaccine is safe and immunogenicity is promising, phase III preventive vaccine trials study whether the vaccine prevents new infections in healthy volunteers at risk of infection. To obtain licensure, vaccines are most commonly tested against placebo in phase III randomised trials when no effective method of preventing infection is available. Phase III efficacy trials are needed and justifiable since immunogenicity does not always translate into efficacy, and additional safety issues undetected in earlier phases might still arise. A recent example was an exploratory efficacy trial (phase IIB) comparing an adenovirus vector vaccine against placebo to protect against HIV infection, where the incidence of HIV infection was higher in vaccinated subjects than in the placebo group.18,19
To understand the experience of using SW trials to study experimental drugs and vaccines, we examined published literature and clinical trial registries. We identified completed trials through published reviews of SW trials8,12,16 and planned or ongoing trials through three publicly available registries: (1) the ClinicalTrials.gov database, (2) the ‘International Standard Randomized Controlled Trial Number’ (ISRCTN) registry and (3) the European Union Clinical Trials Register. We searched for all records as of 2 April 2015 containing the word ‘step’ (or ‘stepped’) and ‘wedge’. AD searched the registries and selected the SW studies; both authors independently identified and reviewed studies of drugs or vaccines. Three reviews reported protocols or published descriptions of 12 SW trials by 2006,8 an additional 18 by 201012 and 37 SW studies published between 2010 and 2014.16 Our registry search retrieved 83 further records: 59 from ClinicalTrials.gov, 24 from the ISRCTN registry and 0 from the European Union Clinical Trials Register; 8 were not using a stepped wedge design and were excluded, leaving 75 additional studies.
Among the >140 publications or registrations reviewed, the majority of SW studies evaluated diverse educational, health services or public health interventions; only 6 studies involved drugs and 1 involved a vaccine. These seven drug and vaccine studies were all in the field of infectious disease and tested the implementation of drugs or vaccines for which evidence of efficacy was already established, supporting the idea that the intervention was likely to do more good than harm (table 2). No clinical trials of experimental drugs or vaccines used a stepped wedge design, and no SW studies used a double-blind placebo-controlled design.
This finding is relevant to design considerations for Ebola vaccines, and whether a trial could be motivated by a belief or evidence that the vaccine candidate would do more good than harm in the setting of the EVD epidemic. At the time SW designs were of potential interest for studying experimental Ebola vaccines, there were very few data about the safety or immunogenicity of any of the vaccine candidates. Therefore, proposals to use SW designs were based on hope (rather than preliminary data) that the vaccines might do more good than harm. Risk–benefit assessment in vaccine research, grounded in ethical principles of beneficence and non-maleficence, typically involves considering the severity of the condition, the baseline risk of infection, existing alternatives, and expected risks and benefits of the experimental vaccine itself. The risk of Ebola infection, a highly lethal disease, may have altered the threshold of potential acceptable risk to study participants and led some to promote study designs where most participants would access the experimental vaccine,20 despite very limited data about safety and efficacy. Indeed, some encouraged designs in which all participants would receive the experimental intervention.10 The rationale for extending access to experimental Ebola vaccines was more likely due to the psychological tension of enrolling controls in the setting of a lethal epidemic than to evidence about the benefit of the experimental vaccines: indeed, such evidence was lacking at the time the randomised trials started.
Access to the experimental vaccine for participants in the SW design
Some commentators expected that for Ebola vaccine research people would be more comfortable with a stepped wedge design because “everyone in such a study gets the Ebola vaccine”.20 However, this is a misunderstanding: it is rarely true that all participants receive the intervention in a stepped wedge study. Rather, the proportion of participants who eventually receive the intervention depends on three major features of SW designs: the unit of randomisation; the method of recruitment and follow-up; and the primary end point. Understanding which participants eventually receive the intervention in a stepped wedge study requires characterising the study according to these three features. In addition to determining who receives the intervention, these three features influence the risk/benefit ratio for individual participants and participating communities, subject selection and the informed consent process, and are thus key features for scientific and ethical trial review.
In most SW studies, the unit of randomisation is the cluster; that is, the timing of the intervention is allocated by site (such as a village or a health post) rather than by individual. Individual randomisation in SW trials is rare: only one of the seven SW studies of a drug or vaccine used individual randomisation.21

The second feature that affects which participants receive the intervention is the process of participant recruitment and follow-up. Subtypes of SW trials include (1) closed cohort designs, (2) repeated cross-sectional designs and (3) open cohort designs.22,23 In a closed cohort SW study, all participants are identified at the start of the study, each participant serves as a control before receiving the intervention later in the trial, and each participant provides longitudinal follow-up and repeated measurements of the outcome. This type of design can be implemented with cluster or individual randomisation. In a closed cohort SW design, each participant who remains in the study may receive the experimental intervention by the end of the study. An example is shown in box 1. In contrast, in the repeated cross-sectional SW design (also called the continuous recruitment short exposure design), different participants are recruited at each step of the study. Participants are part of either the control group or the experimental group depending on when the intervention is rolled out to their cluster. Each cluster, but not each participant, receives the intervention by the end of the trial. An example in which half of a cohort of infants were not vaccinated is shown in box 2. A third recruitment design, called the open cohort or mixed design, is also possible: subjects might be in the control group, access the intervention, or both, depending on when they are enrolled in the study.

The third feature that influences who receives the intervention in a stepped wedge trial is the type of primary end point.
Certain trials measure terminal events as end points, that is, events that one can experience only once, such as death or an incurable infection (eg, HIV infection). Other trials measure end points that might differ over time, such as HIV viral load. Terminal events impact the probability of receiving an experimental vaccine in a closed cohort SW design: each participant would receive the vaccine only if she has not experienced the outcome during the control period. All participants except those who become infected might receive the vaccine.
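The effect of a terminal event on access can be quantified with a back-of-the-envelope calculation. The sketch below assumes a constant daily infection hazard and illustrative numbers (a 0.2% daily hazard and 21-day steps, neither taken from any actual trial): under these assumptions, the later a closed cohort cluster crosses over, the smaller the fraction of its participants who remain uninfected and therefore able to receive the vaccine.

```python
import math

def p_receives_vaccine(hazard_per_day, control_days):
    """Probability that a closed-cohort participant is still
    event-free (eg, uninfected) at the end of their control period,
    assuming a constant hazard, and so can receive the vaccine."""
    return math.exp(-hazard_per_day * control_days)

# Illustrative numbers only: clusters crossing over at 21-day steps,
# with an assumed daily infection hazard of 0.2%.
control_periods = [21 * k for k in range(1, 6)]
probs = [p_receives_vaccine(0.002, d) for d in control_periods]
print([round(p, 3) for p in probs])
```

The probabilities fall step by step, making concrete the point that in a closed cohort SW trial with a terminal end point, all clusters but not all participants can be guaranteed access.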
Isoniazid preventive treatment for tuberculosis in HIV+ subjects
This individually randomised closed cohort stepped wedge study was implemented in South Africa in 1999.21 At that time, isoniazid preventive treatment (IPT) was recommended for the prevention of tuberculosis in HIV-infected persons by WHO but was not implemented, mainly due to operational obstacles. The study aimed to implement the IPT among HIV+ employees of a gold mining company. Starting in September 1999, HIV+ individuals were identified and invited to be screened for tuberculosis and receive the IPT for 6 months. The invitation was randomly staggered over time, so that between September 1999 and September 2001, each individual contributed data both before and after being offered the intervention. Preintervention data were obtained from the cohort between September 1999 and their first visit at the clinic, and subsequent data were post intervention. The outcome was the incidence of tuberculosis infection, and individuals could have several episodes over the follow-up period. Each participant could access the IPT, and stepwise enrolment resulted in participants contributing varying time to the control data and the experimental data.
However, some participants ultimately contributed preintervention control data but no postintervention data, for reasons such as employment termination, death or the development of contraindications to IPT. As a result, 1655 subjects contributed preintervention data, 1016 subjects who attended the clinic at least once contributed postintervention data, but only 679 subjects started IPT. Attrition over time is a potential limitation in interpreting the results of such studies.
The Gambia Hepatitis Study: a repeated cross-sectional stepped wedge (SW) design
The only published example of a vaccine trial using a stepped wedge design is a trial of hepatitis B virus (HBV) vaccine in The Gambia8,12 that aimed to evaluate the effectiveness of HBV vaccination in infancy in preventing liver cancer later in life.45,46 At the time it was designed in the late 1980s, data existed showing the efficacy of the vaccine against acute HBV infection. However, there were no data about the durability of vaccine-induced immunity beyond 5 years, nor about the long-term impact of the vaccine on cirrhosis and liver cancer. Despite sufficiently strong indirect evidence to justify deploying a vaccination strategy in infants, limited availability and cost precluded immediate implementation in the whole country.
This study was implemented to deploy HBV vaccination over 4 years and to study the long-term effectiveness of the vaccine against liver cancer and cirrhosis, up to 30–40 years later. Data were recorded for all infants born during this 4-year period to enable linkage with surveillance data such as cancer registries. The HBV vaccine was added to the routine vaccines delivered by 17 teams (the clusters) in a random order to ensure comparability. As a result, approximately 60 000 infants born over that 4-year period were vaccinated with the HBV vaccine, and an equivalent number served as unvaccinated controls (figure 1). The study is still ongoing, and the recent publication of immunogenicity results does not discuss whether later HBV vaccination was indicated.47
A cluster randomised, closed cohort SW trial was considered in preliminary discussions of one Ebola vaccine trial.15 Such a design would have required identifying participants in clusters such as villages or health posts and randomising each cluster to be vaccinated at a different time after enrolment. Concern was expressed that spatiotemporal variation in the epidemic between clusters would make such a study challenging, even if feasible. Given that EVD infection occurs only once and is thus a terminal event, a closed cohort SW trial could not guarantee that all subjects would access the experimental vaccine: some participants would develop infection during their control period.
Ultimately, none of the three Ebola vaccine trials used a stepped wedge design; two used an immediate versus delayed intervention design (boxes 3a and 3b), and the third used a parallel design (box 3c). All three Ebola vaccine trials were randomised and employed an unvaccinated control group. In the two immediate versus delayed intervention trials, participants in the control group had delayed access to the experimental vaccine,24 unless they acquired infection during the control period. Neither a stepped wedge design nor an immediate versus delayed vaccination design avoids this necessary condition of controlled trials testing vaccine efficacy: some individuals will be at risk before they access the experimental intervention, even if the majority eventually receive it. Rid and colleagues illustrated this point for one Ebola vaccine trial: although the design mitigated the tension of having unvaccinated controls by eventually vaccinating control group participants, control group individuals were at risk of infection during the 21 days before being vaccinated, and indeed some became infected.25 Of note, the immediate versus delayed trials targeted individuals at higher risk of infection than the participants of the parallel randomised trial, which allowed for a smaller sample size. As a result, the absolute number of planned vaccinated participants was lower in the immediate versus delayed trials than in the parallel trial.
Randomised controlled vaccine trials implemented during the 2014 Ebola virus disease epidemic
(a) The ‘Ebola ça suffit’ trial48 in Guinea tested the efficacy of a recombinant, replication-competent vesicular stomatitis virus-based vaccine expressing a surface glycoprotein of Zaire Ebola virus (rVSV-ZEBOV) by immediately vaccinating contacts of an index case compared with waiting 3 weeks to vaccinate contacts.49,50 It is an open-label randomised trial in which the randomisation unit is the index infected case, whose contacts are randomised to either immediate or deferred vaccination. This strategy is called a ‘ring’ trial because it tests the efficacy of vaccinating the contacts around a case, a strategy similar to that used for smallpox eradication. Participants are followed for 84 days after vaccination. The trial was designed to include 190 rings (95 rings per arm) of 50 subjects each, with an interim analysis after 100 rings. The 21-day window was chosen because epidemiological data “suggested that a 21 day delay [was] the incubation period in which 95% of Ebola cases arise; therefore this time window could be sufficient to determine efficacy, while meeting the requirement to minimize study participants’ time without vaccination”.50 An interim analysis showed promising results,49 and all participants were subsequently vaccinated immediately upon enrolment. The trial is no longer enrolling, and the final analysis is being completed.51
(b) The Sierra Leone Trial to Introduce a Vaccine against Ebola (STRIVE) tested the effect of immediate vaccination of health workers with rVSVΔG-ZEBOV compared with vaccination after a 3–6 month waiting period in an open-label randomised trial.52–54 Subjects are individually randomised, and follow-up lasts 6 months after vaccination. STRIVE was designed as a phase II/III trial with an expected enrolment of 6000, and the study is no longer recruiting. The trial was initially envisioned as a stepped wedge design, but that design was ultimately not adopted owing to both the waning of the epidemic and power considerations.15
(c) The Partnership for Research on Ebola Vaccines in Liberia is a phase II/III parallel double-blind placebo-controlled randomised trial.55,56 The trial was designed to randomise individuals to one of two vaccines (rVSV-ZEBOV and ChAd3-EBO Z) or placebo (1:1:1), with 12 months of follow-up, and was expected to enrol 27 000 healthy volunteers. Owing to the waning of the epidemic, the trial was modified into a safety/immunogenicity trial and is still recruiting.
Potential practical advantages might be in tension with the complexity of a stepped wedge design
The third most frequent reason offered for use of SW designs is feasibility or practicality, often the advantage of implementing the protocol intervention in one site at a time because of limited supply or other constraints. For example, in the Gambia Hepatitis B study described above, not enough vaccine was available to vaccinate everyone at one time (box 2).
The argument of practicality may be appealing but needs to be balanced against other important aspects of study feasibility. Recent qualitative data report challenges in the conduct of cluster SW studies.26 For instance, phased implementation is not necessarily easy, and roll-out requires significant coordination. If implementation takes longer than expected, it might lead to cumulative delays in later clusters. The last units to be rolled out might even drop out. Challenges also arise if the way the intervention is administered changes with experience, owing to a learning curve or intervention fatigue. Prost et al26 note that “phased implementation pose challenges that need to be appraised on a case-by case basis, and it is not entirely clear that SW trials win in terms of logistic convenience”.
In addition, other methodological points are important when considering study feasibility and validity, as the SW design is complex. For instance, the collection of more control data at the start of a stepped wedge study than later in the study is a potential source of bias specific to SW studies. Indeed, if some units drop out before being allocated to the intervention, this might threaten the comparability achieved by the initial randomisation. In addition, the burden of collecting repeated measurements over time, particularly if the study involves clinical visits, could increase the risk of missing data.
Planning SW trials requires comparative analyses of power, sample size and duration between SW designs and other designs (such as parallel cluster or individually randomised controlled trials) when choosing among several designs. Indeed, the existing literature does not provide definitive answers about the trade-offs of SW designs for each specific trial setting. First, although some simulation studies have compared SW designs with other designs,23,27–36 most compared cluster-randomised repeated cross-sectional SW studies with cluster parallel studies, and show that “if substantial cluster-level effects are present (that is, larger intra-cluster correlations) or the clusters are large, the stepped wedge design will be more powerful than a parallel design”.23 However, few data are available for closed cohort SW designs or individual randomisation.27,28 Second, published simulation studies do not cover every relevant scenario, and SW designs differ from one another in various ways: in addition to the design distinctions described earlier, they differ on multiple specific parameters, such as the number of steps, the number of clusters per step, the time between steps, and the duration of the pre-roll-out, roll-out and post-roll-out periods.22 Three additional aspects should be addressed: the intraclass correlation coefficient for cluster studies, the correlation among repeated measurements and a method to account for the potential confounding effect of changes over time. The last is particularly important in SW studies, as changes in the conditions being studied might occur over time and be a source of bias in the comparison of control and intervention data. Furthermore, when considering the use of a SW design for experimental drugs or vaccines, interim analyses and stopping rules for efficacy, safety and futility might be necessary. To our knowledge, no methods for implementing interim analyses in a stepped wedge design have yet been studied.
Therefore, case-specific simulation studies and analysis of the advantages and disadvantages of the SW design may be necessary to appropriately evaluate the trade-offs in the choice between designs.24,26,27,37 Overall, extra effort and methodological support are necessary to guarantee that a stepped wedge trial is appropriately designed, implemented and analysed so that it yields reliable and useful conclusions.
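For the cross-sectional cluster SW design, one widely cited closed-form result is the variance of the treatment effect derived by Hussey and Hughes (2007), from which approximate power follows directly. The sketch below implements that formula for the simplest layout (one cluster crossing per step); the cluster count, variances and effect size are illustrative assumptions, not values from any Ebola trial.

```python
from math import sqrt
from statistics import NormalDist

def sw_power(n_clusters, effect, sigma2, tau2, alpha=0.05):
    """Approximate power of a cross-sectional stepped wedge trial
    via the Hussey-Hughes (2007) closed-form variance.
    sigma2: variance of a cluster-period mean; tau2: between-cluster
    variance. One cluster crosses over per step."""
    I = n_clusters
    T = I + 1                               # periods incl. all-control baseline
    # X[i][t] = 1 once cluster i has crossed over (cluster i crosses at step i+1)
    X = [[1 if t > i else 0 for t in range(T)] for i in range(I)]
    U = sum(sum(row) for row in X)
    W = sum(sum(X[i][t] for i in range(I)) ** 2 for t in range(T))
    V = sum(sum(row) ** 2 for row in X)
    var = (I * sigma2 * (sigma2 + T * tau2)) / (
        (I * U - W) * sigma2 + (U**2 + I * T * U - T * W - I * V) * tau2
    )
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(abs(effect) / sqrt(var) - z)

print(round(sw_power(n_clusters=12, effect=0.5, sigma2=1.0, tau2=0.1), 2))
```

Re-running such a calculation across candidate designs (varying the number of clusters, steps and assumed correlations) is exactly the kind of case-specific comparison argued for above; a fuller analysis would also need simulation for closed cohort variants and interim analyses, for which no closed form is available.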
The appropriateness of the SW design for testing experimental interventions in the setting of a pandemic had been questioned,13,38,39 but no specific simulation data supported these analyses. A simulation study set in the context of a potential Ebola vaccine trial in healthcare workers compared a parallel risk-prioritised roll-out (individuals in geographic zones with higher incidence of infection randomised first) with a closed cohort cluster SW design.28 The simulation concluded that the parallel design was more powerful, and that the power of the cluster SW design would have been undermined by spatiotemporal variation in the incidence of infection. The impact of spatiotemporal variation in Ebola cannot be overstated: in a stepped wedge design, more control data are collected at the start of the study, and intervention data accumulate as the intervention is rolled out. Therefore, any phenomenon that causes incidence or risk to vary over time and location, such as waning of the epidemic through its natural course or differential implementation of case containment strategies, could compromise evaluation of the effect of the experimental vaccine. While statistical models are recommended to account for spatiotemporal trends,23 it is unclear whether such models could capture substantial changes and appropriately control for bias in a setting with such major spatiotemporal variation.
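The mechanism of this bias can be demonstrated with a toy simulation (all numbers below are illustrative assumptions, not epidemic data): a vaccine with no true effect appears protective in a naive before/after comparison, simply because intervention periods fall later in the study, when background incidence has waned.

```python
import random

def naive_sw_estimate(n_clusters=10, n_per_cluster=200, decay=0.7, seed=0):
    """Toy cross-sectional SW simulation with a NULL vaccine effect.
    Background infection risk decays each period (waning epidemic);
    cluster i crosses to 'vaccine' at period i+1. Returns the naive
    risk difference: control risk minus intervention risk."""
    rng = random.Random(seed)
    T = n_clusters + 1
    ctrl = [0, 0]   # [cases, n] in control cluster-periods
    vacc = [0, 0]   # [cases, n] in intervention cluster-periods
    for i in range(n_clusters):
        for t in range(T):
            risk = 0.10 * (decay ** t)        # epidemic wanes over time
            cases = sum(rng.random() < risk for _ in range(n_per_cluster))
            bucket = vacc if t > i else ctrl  # vaccine has NO true effect
            bucket[0] += cases
            bucket[1] += n_per_cluster
    return ctrl[0] / ctrl[1] - vacc[0] / vacc[1]

print(round(naive_sw_estimate(), 3))
```

With the assumed decay the naive estimate is clearly positive (spurious 'protection'), whereas setting `decay=1.0` (no time trend) returns an estimate near zero. A model with period effects can in principle adjust for this, but, as noted above, it is unclear how well such adjustment holds up under major spatiotemporal variation.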
Finding the appropriate trial design to test experimental vaccines in the setting of a public health emergency such as EVD is essential for scientific, ethical and sociopolitical reasons. The controversy over the appropriate design for testing Ebola vaccines was confounded by disagreement and some misunderstanding of the various designs under discussion. In clinical research, there is an inherent struggle between the need to efficiently and rigorously answer questions that are valuable for society and future patients and the need to respect and protect the welfare of the individuals invited to participate in this endeavour. This struggle is heightened in the setting of a deadly epidemic, where concern for those who are sick and dying or at risk of a lethal disease pulls strongly towards helping them and protecting them from the burdens of contributing to research from which they might not personally benefit.
Our analysis focuses on the SW design, which was invoked as a possible way to mitigate the tension between scientific and ethical concerns. Yet there were several misunderstandings about the advantages and limitations of SW designs. In our review of past and current studies, we found that SW designs have not previously been used to test the safety and efficacy of experimental drugs or vaccines, and have been used only when there was evidence that an intervention was likely to do more good than harm. Although several publications reinforced the idea that in a stepped wedge design “by the end of the study, all participants will have received the intervention”,7–10 this is not true of all SW designs: access to the intervention depends on the unit of randomisation, the methods of recruitment and follow-up, and the study end points. Most SW trials use a cluster-randomised repeated cross-sectional design, in which all clusters but only a fraction of participants ultimately receive the intervention; in a closed cohort SW design, all participants would eventually receive the intervention, provided they do not become infected first. Finally, designing and conducting a stepped wedge study is complex and challenging.
SW designs were considered for Ebola vaccine trials in West Africa but ultimately not adopted. Previous commentators described the SW design as one that could address ethical tensions in the midst of pandemics, yet that analysis did not fully account for the threat that spatiotemporal bias poses to validity, and it rested partly on the misguided belief that SW designs ensure that all participants receive the intervention.13 A future proposal of a stepped wedge design to study an experimental vaccine or drug would be unprecedented: based on our review, no such study has ever been done. Our review has some limitations, however. While our registry search strategy was highly specific for SW studies, it might have underestimated the total number of ongoing SW studies, because some trials use other descriptions (such as ‘phased implementation’, ‘waiting list design’ or ‘one-way cross-over’8,23); another inherent limitation is the level of detail provided about study design in the registries. Nevertheless, the three published systematic reviews provide a comprehensive account of published SW studies.
Finally, to help stakeholders when reviewing the scientific and ethical aspects of a stepped wedge design proposal, we propose a list of points to consider (box 4). A more general research ethics framework should also be used,40 given that our analysis focused on three frequent underlying assumptions about SW designs. Our list of points to consider is preliminary, and we hope that general ethical recommendations, such as the Ottawa statement, and an extension of the Consolidated Standards of Reporting Trials statement for reporting SW studies will be developed in the future.23,42–44 Of note, we considered the case of vaccine trials for Ebola when no vaccine was available; thus we did not question the justification for a randomised controlled trial in this setting. Equipoise and the justification for randomisation in SW studies have been discussed elsewhere.26,41
Points to consider in the ethical and scientific analysis of a stepped wedge (SW) design
Justification and rationale for the choice of a stepped wedge design:
Is the research question clear? What is the goal? (eg, efficacy, effectiveness or implementation)
Is the design appropriate for answering a socially valuable question in a rigorous way? Are the pros and cons of the design appropriately evaluated?37
What is the rationale to believe that the intervention will do more good than harm? What is the preliminary evidence?
Is randomisation at the level of cluster or individual? “Researchers should clearly justify their choice of cluster rather than individual randomization. Acceptable reasons include the evaluation of a cluster level intervention or group effects of an intervention; the need to avoid experimental contamination, reduce costs, enhance compliance, or secure cooperation of investigators; and administrative convenience”.43 ,44
Simulation studies and/or appropriate sample size calculations comparing different designs should support the choice of design. The analysis should provide details about: the sample size necessary for a given effect size and power; the number of clusters/participants in each group; the number of steps; and the study duration.
The impact of spatiotemporal variations
Practical conduct of the study and analysis:
Who will access the intervention?
All participants? (closed cohort)
Most participants but not all? (closed cohort with terminal event as end point)
All clusters but a subset of the participants? (repeated cross-sectional design)
Who is giving consent (individuals, community)? Is the consent process explicit about who will access the intervention?
What are the risks/benefits/burdens for participants and/or clusters? For example,
Is the burden of the collection of repeated measurements acceptable in closed cohorts?
What is the risk/benefit balance of exposing all clusters (or most participants, if applicable) to the intervention by the end of the study?
Is the study feasible? Are there sufficient resources to conduct it as designed, for consent, follow-up, data collection and analysis?
Is there a compelling rationale for access to the intervention for all clusters or all participants?
Is interim analysis and/or a data safety monitoring board needed?
Does the statistical analysis account for the specifics of a stepped wedge study? Data analysis needs to account for clustering, the potential confounding effect of calendar time, carry-over effects and repeated measures. The impact of censoring due to terminal events or drop-outs must be considered in closed cohorts.23 ,42 ,63
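Two of the points above, simulation-based sample size support and analysis that adjusts for clustering and calendar time, can be illustrated with a minimal sketch. The following hypothetical Python example (all function names and parameter values are our own illustrative assumptions, not drawn from any actual Ebola trial) estimates power for a cross-sectional stepped wedge design by Monte Carlo: cluster-period means are analysed with a linear model containing period dummies (calendar time), cluster dummies (clustering) and the treatment indicator.

```python
import numpy as np

def sw_power_sim(n_clusters=12, n_steps=4, n_per_period=20,
                 effect=0.4, sd_cluster=0.3, sd_indiv=1.0,
                 n_sims=500, seed=0):
    """Monte Carlo power estimate for a cross-sectional stepped wedge
    design, analysed on cluster-period means with a linear model that
    adjusts for calendar time (period dummies) and cluster (dummies)."""
    rng = np.random.default_rng(seed)
    n_periods = n_steps + 1                  # baseline + one period per step
    per_step = n_clusters // n_steps         # clusters crossing over per step
    # Treatment schedule: group g starts the intervention at period g+1.
    X = np.zeros((n_clusters, n_periods))
    for g in range(n_steps):
        X[g * per_step:(g + 1) * per_step, g + 1:] = 1.0
    # Design matrix on cluster-period means: intercept, period dummies,
    # cluster dummies (reference coding), treatment indicator (last column).
    rows = []
    for i in range(n_clusters):
        for t in range(n_periods):
            per = [1.0 if t == k else 0.0 for k in range(1, n_periods)]
            clu = [1.0 if i == k else 0.0 for k in range(1, n_clusters)]
            rows.append([1.0] + per + clu + [X[i, t]])
    D = np.array(rows)
    XtX_inv = np.linalg.inv(D.T @ D)
    treat_col = D.shape[1] - 1
    n_sig = 0
    for _ in range(n_sims):
        u = rng.normal(0.0, sd_cluster, n_clusters)      # cluster effects
        mean_se = sd_indiv / np.sqrt(n_per_period)       # SE of a cell mean
        y = (effect * X + u[:, None]
             + rng.normal(0.0, mean_se, (n_clusters, n_periods))).ravel()
        beta = XtX_inv @ D.T @ y
        resid = y - D @ beta
        dof = len(y) - D.shape[1]
        se = np.sqrt(resid @ resid / dof * XtX_inv[treat_col, treat_col])
        # Approximate two-sided test at the 5% level (z = 1.96).
        if abs(beta[treat_col] / se) > 1.96:
            n_sig += 1
    return n_sig / n_sims
```

Varying `n_clusters`, `n_steps` and `n_per_period` shows how the design features listed above (number of steps, study duration, cluster size) drive power, which is why comparative simulations of this kind should accompany a stepped wedge proposal.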
In response to heated discussions about SW and other designs for investigating experimental vaccines in the midst of the EVD epidemic, our paper addresses and corrects some common assumptions about the use of SW designs. Our review should help stakeholders better understand SW designs, identify key elements for justification of a stepped wedge design and help them review study proposals and informed consent documents in the future.
The authors thank Seema Shah and Joe Millum for comments on earlier drafts of this paper.
Contributors AD and CG both contributed to the design of the study. AD did the review of the literature and extracted the data from the literature and the trial registries. AD and CG independently reviewed the eligible studies to select the drug/vaccine studies, and both independently extracted the data from the drug/vaccine studies. AD first drafted the manuscript. Both authors contributed to the final version and validated the manuscript.
Disclaimer The views expressed are those of the authors and do not necessarily reflect those of the Clinical Center, the National Institutes of Health, the Public Health Service or the Department of Health and Human Services.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
Data sharing statement The list of stepped wedge studies collected through the registries search is accessible to researchers by contacting the first author.