Sensibly employed, targeted reviews have a valuable role alongside systematic approaches
It’s no secret that systematic literature reviews (SLRs) are a comprehensive undertaking. Cochrane reviews, the gold standard, can be mind-bogglingly exhaustive: searching databases large and niche alike with broad strategies; including hundreds of studies for data extraction; even contacting authors directly to request additional data. It’s hard not to be simultaneously impressed and daunted by their dedication when perusing methodologies. Even more so given that they can be a thankless task.
Fortunately, the world of business and industry allows for some pragmatism, but there is still considerable expense involved. By the book, every record needs to be reviewed independently, at least twice, by humans (usually), with data scrutinised and painstakingly extracted into complex grids, checked for errors and omissions by another individual, and then analysed in depth. This all requires time. And lots of it. And time is money, as the cliché goes.
SLRs are certainly popular. Most health technology assessment (HTA) and regulatory outfits demand them of manufacturers when submitting new technologies. It is rare to find a meta-analysis or indirect treatment comparison that doesn’t employ one to initially identify data. At the time of writing, I conducted a cursory query of “systematic review” versus “targeted” or “pragmatic review” in the titles and abstracts of records in MEDLINE, the bibliographic database that underpins the National Institutes of Health’s PubMed platform. The hit counts favoured SLRs by two orders of magnitude. Tens of thousands more are published each year. The scientific community, it seems, values the grit and rigour of the systematic approach.
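For the curious, a hit-count comparison like this can be scripted rather than run by hand. Below is a minimal sketch that builds queries against NCBI’s E-utilities `esearch` endpoint, which can return just the number of matching PubMed records; the search terms shown are illustrative, not my exact strategy.

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(term: str) -> str:
    """Build an E-utilities esearch URL that asks for the hit count only."""
    params = {
        "db": "pubmed",       # search the PubMed database
        "term": term,         # the query, using PubMed search syntax
        "rettype": "count",   # return only the <Count> element, not record IDs
    }
    return f"{EUTILS}?{urlencode(params)}"

# [tiab] restricts a term to titles and abstracts
slr_url = esearch_url('"systematic review"[tiab]')
tlr_url = esearch_url('"targeted review"[tiab] OR "pragmatic review"[tiab]')
# Fetching each URL returns a small XML document whose <Count> element
# holds the number of matching records.
```

Comparing the two counts reproduces the rough exercise described above; the same endpoint also accepts date filters should you want hits per year.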
One reason for their popularity, I think, is that systematic can be misconstrued as definitive. As in, when conducting an SLR, any research question can be answered. Anyone who has conducted an SLR, or even skimmed the abstract of one, can attest this isn’t quite true. The scientific literature is heterogeneous, packed with variations in standard of care, diseases that are poorly understood, and patients (and researchers) that are very much human. Instead, the “S” in “SLR” should be interpreted to mean comprehensive, transparent, and reproducible: making best possible efforts to identify data in support of your decision problem, and minimising potential bias associated with SLR conduct and analysis as far as is reasonable. But often, it’s not a review that’s required: it’s an answer. An SLR is merely a means to an end. Could a targeted literature review (TLR) be a better fit?
The term targeted is often dismissed as querying Google Scholar or PubMed until you find the data you’re after. Whilst this is a possible approach to discovery (“desk research” in the parlance), targeted is perhaps as misconstrued a term as systematic. TLRs can still make use of a structured approach, including search terms not unlike those you would see in an SLR. Relevant supplementary sources may still be searched, omitting, say, conference abstracts or registries of clinical studies (which, in my experience, tend to bring limited value). And reviewers are still beholden to eligibility criteria adhering to PICO (Population, Intervention, Comparator, Outcomes). Whilst not all records are reviewed in duplicate, you can still employ someone else to adjudicate tricky decisions and review what data are being included. Nothing prevents switching to a systematic approach partway, either: come data extraction, with a clutch of relevant studies in hand, it pays to be more deliberate, extracting in duplicate to ensure data are recorded accurately. Many boxes ticked, and the greatest upshot of conducting a TLR: it’s quicker and cheaper.
So when do TLRs make most sense? When speed and cost are important, clearly. And while SLRs benefit from narrow, well-defined hypotheses, TLRs can additionally venture into the broad and woolly: questions aplenty, or the complete unknown. Indeed, a common product we pitch at Costello Medical is the “scoping review”. In effect, these are TLRs for uncertain scopes, coupled with a diminished appetite for the risk associated with a costly yet speculative SLR that may yield nothing. From this prospecting, if you strike lucky, an upgrade to an SLR beckons; if desolate, nothing ventured, nothing gained, and perhaps even a potential gap to be addressed with future research. Areas and indications that are unwieldy in size and breadth (such as oncology), where simply an understanding of themes or trends is required, also play to the strengths of a looser approach.
Of course, there are areas where SLRs are king. Submit a TLR of clinical evidence to the National Institute for Health and Care Excellence (NICE) and the assigned external assessment group (EAG) is unlikely to be merciful in their criticism. And evidently, the barrier for publication of a TLR is higher than that for an SLR. Whilst meta-analyses informed by TLRs can be published (as my colleagues have demonstrated), flaws attributable to missed evidence or reviewer bias may be compounded downstream in ways that cannot easily be remedied. It’s easier to defend missed data as an aberration in an SLR (we went to all reasonable lengths to identify data) versus a TLR (an SLR was not reasonable because timelines and budget did not allow it). With that said, TLRs need not be a one-off, disposable undertaking. Conducted transparently and methodically, if you later choose to augment your search strategy with additional terms, add another reviewer to every record, and add in quality appraisals, for instance, is the ensuing output distinguishable from an SLR? So I ask you, next time you’re faced with a research question, ask yourself: do I need an SLR, or do I need an answer?