This is a guest blog written by Dylan Kneale from the EPPI-Centre at the Institute of Education, UCL following his recent visit to Exeter.
Last month I was delighted to visit Exeter for the first time and present in the ESMI Guest Lecture Series. Not only was I really impressed with the welcome I was given (which included a trip to an unofficial Harry Potter themed bar…another first!) but it gave me the opportunity to share some of the ideas and questions we’ve been coming up with at the EPPI-Centre around the generalisability of evidence from systematic reviews of public health interventions.
The presentation included a number of questions and challenges that have emerged from our work including:
- whether we are conceptualising generalisability in the right way;
- how as reviewers we should do more to describe the context in which interventions take place;
- how the methods we currently have to hand to explore and understand generalisability are lacking or are open to misuse or misinterpretation;
- and how using large-scale data sources might offer a route to exploring the generalisability of evidence further.
This matters because one of the emergent findings of a project currently underway on decision-making in public health is that although systematic reviews are highly regarded and trusted, they are not frequently utilised directly. As we have argued elsewhere, the production of a systematic review that is not useful or used in decision-making increases potential research wastage.
Part of the reason for this potential wastage may be down to the challenge that we, as evidence generators, set decision-makers: we produce systematic reviews that often summarise the global evidence base on the effectiveness of an intervention, with the expectation of influencing and informing decisions in local areas. These local areas may have a unique set of needs and challenges. The question of ‘will it work in my area?’ is best left to public health practitioners, who are ultimately best placed to make this decision (and who may also seek the support of knowledge brokers). But is there more that we, as systematic reviewers, could and should be doing to support local decision-makers in making this assessment? Should we be doing more to unpack the generalisability of the studies included in a review and explore how the transferability of the effect could vary according to the context of implementation?
Among many of the technical questions around how to do this, my visit to Exeter helped me realise that the underlying questions we are ultimately asking revolve around the purpose of a systematic review and considering where the role of a systematic reviewer ends. Is a systematic review purely about taking stock of the scientific literature or should reviews be more applied in nature?
It is clear that there are many reasons why systematic review evidence can fail to fulfil its potential contribution to public health decision-making (and why other sources may be more appropriate), but failing to convey the generalisability of the evidence from systematic reviews is likely to be an important factor leading to low usage.
Addressing questions about generalisability could mean starting with an incremental approach, beginning with better reporting of intervention contexts, which is unlikely to challenge the current remit of systematic reviews or reviewers.
But whether the additional steps proposed in the lecture, including the use of additional secondary data to explore the generalisability of the evidence, represent a fundamental departure from the remit of systematic reviews and reviewers, or a natural extension of it, is an issue I’ll happily leave untouched…at least until 2018!
Kneale D, Rojas-García A, Raine R, Thomas J. The use of evidence in English local public health decision-making. Implementation Science. 2017;12(1):53.