In 1973, Rittel and Webber coined the term ‘wicked problems’, which they viewed as pervasive in the context of social and policy planning.1 Wicked problems have 10 defining characteristics: (1) they are not amenable to definitive formulation; (2) it is not obvious when they have been solved; (3) solutions are not true or false, but good or bad; (4) there is no immediate, or ultimate, test of a solution; (5) every implemented solution is consequential: it leaves traces that cannot be undone; (6) there are no criteria to prove that all potential solutions have been identified and considered; (7) every wicked problem is essentially unique; (8) every wicked problem can be considered to be a symptom of another problem; (9) a wicked problem can be explained in numerous ways, and the choice of explanation determines what will count as a solution; and (10) the actors are liable for the consequences of the actions they generate.1
One needs only a passing familiarity with the history of HIV prevention research, and with the intellectual traditions of research ethics, to appreciate that the perils and opportunities arising from proposals to conduct research with people who inject drugs (PWID) in some of the most precarious social and political circumstances around the world, and the challenges associated with implementing the findings, satisfy Rittel and Webber's criteria for ‘wicked problems’. HIV prevention research has contributed important new knowledge about the feasibility, efficacy or relative efficacy of various prevention strategies in a variety of contexts around the world. But the pathways and timelines by which this knowledge has contributed to improvements in public health practice and/or to the establishment of policies that ensure unfettered access to appropriate healthcare services for PWID are less clear and decidedly non-linear. One account of the transition from trial to policy sums it up concisely: “far from being strictly evidence-driven, HIV prevention policies result from a politically negotiated aggregation of competing, frequently non-optimizing rationalities”.2
The kinds of labyrinthine challenges reflected in the wicked problems criteria are precisely what led to the emergence of ‘implementation science’, which, in essence, is about trying to use research strategies to gain a better understanding of the complex array of structural and human factors that can determine whether new programmes or interventions will work as intended. But implementation science is also an acknowledgement that the range of scientific questions that ‘count’ as legitimate and significant in HIV prevention and many other fields and the methods we use to address and analyse them are tightly constrained by convention. In particular, our current obsession with ensuring that every human action is ‘evidence-based’ is rooted in a narrow conception of evidence promulgated successfully by the evidence-based medicine (EBM) movement,3 with frustratingly little energy devoted to understanding what the claim actually means in any given context.4 As a result, the question ‘what is the comparative efficacy of Intervention X vs Intervention Y at reducing the incidence of HIV infection among PWID?’ is viewed as providing higher quality evidence than the questions ‘what social and political interests are blocking the successful implementation of policies to prevent HIV among PWID in Context Z that have already been shown to be effective in other settings?’ and ‘what conditions would need to be true for these barriers to be overcome?’ The hierarchical tenets of EBM dictate that the methods required to pursue the latter questions are, de facto, inferior to those required to answer the former in terms of the quality of evidence they are able to produce, despite warnings by some of the founders of the EBM movement against this very conclusion.5 We have a great deal of ‘high quality’ evidence about the relative efficacy of various interventions, but seemingly very little of the ‘lower quality’ variety that might help us understand why the necessary changes in policy and practice have been slow to materialise in some settings.
This excursion into the simmering dispute about the hegemony of EBM is necessary because of the dependent relationship between research ethics and conventions of research methodology. The core of the problem described by the authors in their paper, ‘Addressing ethical challenges in HIV prevention research with people who inject drugs’,6 is that there is no way to know, in advance, whether conducting HIV prevention trials with PWID will result in net benefits to individual participants and/or improved services and more hospitable policies for PWID more generally, in the settings in question.
Where should we expect to find the answers to these questions? Despite the emphasis in international research ethics for the past 20 years on ensuring that research is relevant and responsive to host country priorities and contributes to improved healthcare and research capacity in the low-income and middle-income countries that host research, there is almost no literature documenting the extent to which these ethical aspirations have actually been realised. There is no doubt that such contributions have been made and that many of them have been practically and ethically significant, but the insights have not been systematically disseminated and therefore we have no easy access to this information. Whether this deficit is a by-product of deeply entrenched methodological biases, or of inordinate confidence that elegant conceptual analyses will automatically precipitate real-world solutions, or of the simple inability of the field to produce the necessary data, or of a combination of many factors, we simply lack the evidence we need to support or challenge the validity and utility of the analyses we typically employ in research ethics.
Some commentators have argued that HIV-driven implementation science is “challenging academic institutions to look beyond their traditional core roles and consider how to contribute more fully to the public good”.7 The question for research ethics is whether we are also actively embracing this challenge. Research ethics needs its own implementation science. The authors' proposal to consult or engage with PWID and relevant advocacy organisations for “substantive discussions about the relevance and acceptability of the research plan and execution” [6, p. 32] is a sensible way to begin but deserves a few specific points of elaboration.
First, the substantive discussions with PWID and advocacy organisations about the relevance and acceptability of the research plan and execution are presumably valuable, in part, because they bring to light perspectives and interests of stakeholders that might not otherwise be clear or obvious. In other words, community (or stakeholder) engagement (CE) is a way to generate a unique species of evidence about the way trials and interventions affect the interests of stakeholders. Second, although engaging with PWID and relevant advocacy organisations will invariably produce explanations about why politicians and policymakers are opposed to the improvement of services for PWID, recent findings suggest that such presumed explanations can, in some instances, be inaccurate2 and can inadequately reflect the complex processes that determine the meaning and implications of any intervention for stakeholders.8 Wicked problems come with an extensive web of stakeholders and a complex set of interests. CE offers a strategy and logic for engaging—or attempting to engage—with the full range of relevant stakeholders, including opponents, in order to gain a better understanding of how their interests are affected and whether there are any potentially viable pathways to success.
Third, CE offers a way to expand our current thinking about the ethics of HIV prevention research and to contribute to the transition to a more effective implementation science. In our own work on community engagement, we have argued that there is a prima facie obligation on the part of researchers to engage with any individual or organisation that has a legitimate interest in the conduct or outcomes of the proposed research, in order to: (1) identify non-obvious interests and factors that may affect the feasibility or ethical integrity of the research; (2) extend working notions of respect in research beyond respect for the autonomy of individual participants, to recognise that research has implications for other stakeholders as well and to identify opportunities to be responsive to their interests; and (3) enhance the legitimacy of the research.9 In the case of the proposed HIV prevention trials with PWID, this broader account of stakeholder engagement offers the distinct advantage of moving beyond assumptions and aspirations about whether trials will prompt constructive action, to provide the opportunity to ask stakeholders—including opponents—about how their interests might be affected and about what other interests and hazards might lie undetected on the path to better policies and services. This is inevitably what happens when implementation is successful, but it occurs too infrequently. CE needs to be pursued more deliberately and systematically, perhaps especially in the most difficult circumstances, where the prospects of success seem most remote.
In these ways, CE affords us the opportunity to produce a unique set of insights about interests that begin to move us more plausibly into the inner workings of the ‘wicked problems’ we aim to study and the real-world ethics—competing interests and values—that can determine the ultimate impact and value of HIV prevention trials with PWID but that are invariably non-obvious in their nature or complexity during the design and roll-out of trials. This is consistent with the pragmatic ethos of implementation science7 and with the recognition that conventional HIV prevention trials may answer important questions, but that these might not be the questions that ultimately lead to effective implementation. Looking beyond the dominant research ethics paradigm that focuses exclusively on the welfare of individual research participants, to better understand the full landscape of interests at stake, might be the most ethical path we can pursue in research ethics for HIV prevention trials with PWID. The authors are right to push us in that direction.
Footnotes
Competing interests: None declared.
Provenance and peer review: Commissioned; internally peer reviewed.