Spotlight on Real World Evidence
Real world evidence (RWE) took centre stage at this year’s ISPOR International, with a packed programme of sessions covering everything from artificial intelligence (AI)1 to Z-codes.2 There was a noticeable shift from previous years: the overall acceptability of RWE is now established, and the conversation is moving on to detailed discussions of best practice, as well as cutting-edge methods and technology.
Learnings from the RWE Summit
Before the main conference started, we were delighted to attend the pre-conference RWE summit. A clear theme from the summit was that if RWE is worth doing, it’s worth doing properly. Unsurprisingly, there remains uncertainty about what that means in practice, although regulatory and health technology assessment (HTA) bodies are working hard to firm up their guidance on suitable RWE for their own contexts. Sebastian Schneeweiss, Professor of Medicine and Epidemiology at Harvard Medical School, noted that “everybody wants the most accurate evidence, but some have to make more compromises than others” – a welcome dose of reality!3
Full results from the RCT-DUPLICATE study were published shortly before ISPOR and were discussed at the RWE summit.4 The study found that when both the design and the variable measurement of a randomised controlled trial (RCT) can be closely emulated, the RWE results often match those of the RCT. However, available datasets still vary widely in quality and suitability for decision-making, and this is often the limiting factor in achieving high-quality RWE. Data owners need to provide better documentation of data provenance, curation methods, and the accuracy of key variables, so that researchers can understand a dataset’s intrinsic quality; even so, this will always need to be supplemented with a thorough, study-specific assessment. Some attendees pleaded for more alignment between agencies, for example on which datasets are fit for purpose, but the agency representatives pushed back: John Concato from the US Food and Drug Administration (FDA) confirmed that the FDA will not certify any dataset as “good enough”, because fitness for purpose is specific to the research question and context of each study, and Laurie Lambert from the Canadian Agency for Drugs and Technologies in Health (CADTH) pointed out that a study can have an excellent design and use high-quality data but still not be fit for Canadian decision-making.
With this in mind, another strong focus of the discussion was keeping thorough documentation of the decisions made throughout the whole process of an RWE study. The framework for any study should be built around envisaging the “perfect” data and methods, then acknowledging (a) how far you are from that target study, and (b) how much it matters: if the effect size is large and there are only a handful of expected confounders, then more uncertainty in the data may be tolerated.
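That last point can be made quantitative with the E-value of VanderWeele and Ding (2017), which was not presented at the summit but formalises the same reasoning: it gives the minimum strength of association an unmeasured confounder would need with both treatment and outcome to fully explain away an observed risk ratio. A minimal sketch:

```python
from math import sqrt

def e_value(rr: float) -> float:
    """E-value for an observed risk ratio (VanderWeele & Ding, 2017):
    the minimum risk-ratio association an unmeasured confounder would
    need with both treatment and outcome to explain the effect away."""
    rr = 1 / rr if rr < 1 else rr  # protective effects: invert first
    return rr + sqrt(rr * (rr - 1))

# Larger observed effects are much harder to explain away:
print(f"RR = 3.0 -> E-value = {e_value(3.0):.2f}")  # ~5.45
print(f"RR = 1.2 -> E-value = {e_value(1.2):.2f}")  # ~1.69
```

An observed risk ratio of 3.0 would require a confounder associated roughly 5.4-fold with both treatment and outcome to be a spurious finding, whereas a risk ratio of 1.2 needs one of only about 1.7 – which is exactly why more uncertainty in the data can be tolerated when the effect size is large.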