Review of Guidelines for Direct and Indirect Comparisons for Joint Clinical Assessment: Familiar Territory Through a Cautious Lens

The European Union (EU) Joint Clinical Assessment (JCA) aims to harmonise health technology assessment (HTA) processes at an EU level. Underlying the JCA process is an attempt to expedite access to new health technologies, which will demand increased collaboration between cross-functional teams and introduce considerable time pressure to develop a compelling submission.

Evidence synthesis based on clinical study data is often required in the HTA process to estimate the treatment effect of the intervention relative to relevant comparators from the available evidence. To this end, statistical methods have been developed to enable direct comparisons informed by multiple studies (meta-analysis) and indirect treatment comparisons (ITCs).

In line with international standards of evidence-based medicine, the JCA process requires that direct and indirect comparisons are based on the findings of a rigorous clinical systematic literature review (SLR). We have previously outlined how the narrow three-month window between finalisation of the PICO scope and the JCA submission date may reshape the approach to a JCA SLR; it will also require that direct and indirect comparisons based on a JCA SLR are conducted efficiently and in accordance with clear guidelines.

Accordingly, the JCA’s methodological and practical guidelines1, 2 adopt the existing EUnetHTA 21 methods and individual practical guidelines3, 4 without any meaningful changes to their content. These guidelines highlight:

  • The assumptions underlying direct and indirect comparison methods, and how they should be assessed
  • A wide range of more complex methods for use when assumptions are violated, together with conservative interpretation of results, for example the use of prediction intervals (for random effects models) and testing against a shifted null hypothesis (for population-adjusted methods)

Methodological and Practical Guidelines

Focused on the most commonly used analyses, the JCA’s methodological guidelines describe direct and indirect comparisons, outlining their main assumptions, strengths and weaknesses.1 Referencing industry-standard methods presented in the Cochrane Handbook and the National Institute for Health and Care Excellence (NICE) Decision Support Unit (DSU) Technical Support Documents (TSDs),5-8 the guidelines emphasise that the further analyses depart from anchored comparisons of similar randomised controlled trials (RCTs), the harder it is for the underlying assumptions to be met.

These assumptions are similarity, homogeneity and consistency. Terminology for these assumptions varies somewhat across other guidelines, such as the Cochrane Handbook,8 but the key ideas are the same; the assumptions are described in Table 1.

Table 1. Key assumptions of ITCs

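To make the anchored case concrete, the simplest ITC, the Bucher comparison, combines two relative effects that share a common comparator. A minimal sketch in Python follows; the effect estimates are hypothetical log hazard ratios, purely for illustration:

```python
import math

def bucher_itc(d_ac, se_ac, d_bc, se_bc):
    """Anchored (Bucher) indirect comparison of A vs B via a common comparator C.

    d_ac, d_bc: relative effects (e.g. log hazard ratios) of A vs C and B vs C
    from separate randomised trials. Because the two estimates come from
    independent studies, their variances add.
    """
    d_ab = d_ac - d_bc
    se_ab = math.sqrt(se_ac**2 + se_bc**2)
    ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)  # 95% confidence interval
    return d_ab, se_ab, ci

# Hypothetical log hazard ratios versus a shared comparator C
d_ab, se_ab, (lo, hi) = bucher_itc(d_ac=-0.40, se_ac=0.15, d_bc=-0.10, se_bc=0.20)
print(f"A vs B: {d_ab:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Note that the wider standard error of the indirect estimate, relative to either direct estimate, is one reason direct evidence is preferred where available.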

As such, it would be reasonable to expect heightened scrutiny of the justification for, and robustness of, more complex approaches (e.g. ITCs with population adjustment), which are often required when the assumptions for standard ITCs are not met, whether through the absence of common comparators or the presence of dissimilar trials. An overview of the methods for treatment comparisons and their underlying assumptions is provided in Figure 1.

The majority of the JCA’s practical guidelines concern ITCs and provide a large number of reporting requirements, including for more advanced population-adjusted methods such as matching adjusted indirect comparisons (MAICs) and simulated treatment comparisons (STCs); these should serve as a useful reference when conducting analyses to support a JCA submission. The practical guidelines’ requirement for conclusive statements on each of the assumptions may be supported by sensitivity analyses that explore the impact of removing studies from analyses, analysing subgroup data or conducting population-adjusted methods.

Figure 1. Methods for treatment comparisons and their underlying assumptions


Abbreviations: IPD, individual patient data; ITC, indirect treatment comparison; MAIC, matching adjusted indirect comparison; RCT, randomised controlled trial; STC, simulated treatment comparison
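For intuition about what a MAIC actually computes, the core weighting step (the method-of-moments approach of Signorovitch and colleagues) can be sketched in Python. The patient data below are simulated and purely illustrative:

```python
import numpy as np
from scipy.optimize import root

def maic_weights(ipd_covariates, aggregate_means):
    """Method-of-moments MAIC weights.

    Weights are w_i = exp(x_i' alpha), with alpha chosen so that the weighted
    covariate means of the individual patient data (IPD) equal the comparator
    trial's published aggregate means.
    """
    x = np.asarray(ipd_covariates, dtype=float)
    x_c = x - np.asarray(aggregate_means, dtype=float)  # centre on target means
    x_c /= x_c.std(axis=0)                              # standardise for numerical stability

    # First-order condition of the convex objective sum(exp(x_c @ a)):
    # the weighted sum of centred covariates must be zero.
    grad = lambda a: x_c.T @ np.exp(x_c @ a)
    jac = lambda a: (x_c * np.exp(x_c @ a)[:, None]).T @ x_c
    sol = root(grad, np.zeros(x_c.shape[1]), jac=jac)

    w = np.exp(x_c @ sol.x)
    return w / w.mean()  # rescaled weights, mean 1

# Hypothetical IPD: 500 patients with (age, biomarker) drawn at random,
# re-weighted to match a comparator trial reporting means of 62 and 0.45
rng = np.random.default_rng(0)
ipd = rng.normal(loc=[60.0, 0.40], scale=[8.0, 0.20], size=(500, 2))
w = maic_weights(ipd, aggregate_means=[62.0, 0.45])
print(np.average(ipd, axis=0, weights=w))  # weighted means ≈ [62.0, 0.45]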

Cautious Interpretation

Some requirements do stand out as calling for more conservative reporting of results than is often seen in practice. For example, without adequate justification that each study estimates the same common treatment effect, the guidelines advise that a random effects model should be used in an ITC. The use of fixed effects models for analyses of sparse networks of evidence (i.e. only one trial informing each pairwise comparison) still needs to be justified against the common treatment effect assumption and should not be the default. For direct comparisons, when random effects models perform poorly due to small numbers of studies, the guidelines also favour a qualitative summary of study results over a fixed effects model that cannot be fully justified. In our experience, NICE takes a less prescriptive approach, generally requiring that both fixed and random effects models are fitted, that model fit statistics such as the deviance information criterion are assessed, and that the submitting manufacturer then justifies their preferred choice of model. Moreover, while the Cochrane Handbook notes the value of prediction intervals for random effects models, these are rarely seen in the literature; the JCA will require that they are generated and assessed.
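As a sketch of what generating such a prediction interval involves, the following uses the common DerSimonian-Laird estimate of between-study variance and the widely used t-based prediction interval (as described by Higgins and colleagues); the five study estimates are hypothetical log hazard ratios, not drawn from any real analysis:

```python
import numpy as np
from scipy import stats

def random_effects_pi(y, se):
    """DerSimonian-Laird random effects meta-analysis with a 95% prediction interval.

    y:  study-level effect estimates (e.g. log hazard ratios)
    se: their standard errors
    """
    y, v = np.asarray(y, float), np.asarray(se, float) ** 2
    k = len(y)
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)           # fixed effects pooled mean
    q = np.sum(w * (y - mu_fe) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)          # between-study variance

    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)        # random effects pooled mean
    se_mu = np.sqrt(1.0 / np.sum(w_re))

    # The prediction interval for the effect in a new study combines the
    # between-study variance with the uncertainty in the pooled mean,
    # using a t distribution with k - 2 degrees of freedom
    t = stats.t.ppf(0.975, df=k - 2)
    half = t * np.sqrt(tau2 + se_mu**2)
    return mu, tau2, (mu - half, mu + half)

# Hypothetical log hazard ratios from five trials
mu, tau2, (lo, hi) = random_effects_pi(
    y=[-0.5, -0.3, -0.6, -0.1, -0.4], se=[0.15, 0.20, 0.18, 0.25, 0.16]
)
print(f"pooled {mu:.2f}, tau^2 {tau2:.3f}, 95% PI {lo:.2f} to {hi:.2f}")
```

The prediction interval is typically much wider than the confidence interval for the pooled mean, which is precisely why reporting it encourages more cautious claims about the effect expected in a new setting.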

In addition, submitting manufacturers will be required to demonstrate that plausible bias arising from population-adjusted methods, for example due to covariates missing from the model (e.g. an STC regression model), is accounted for. A “shifted null hypothesis” is proposed for testing treatment effects estimated from population-adjusted analyses: instead of testing whether an outcome is “better” for one treatment compared to another, it must be tested whether the outcome is “better” by a particular threshold. However, it is left to the manufacturer to determine this threshold and to justify that it is large enough to account for bias arising from the method. Since this will require a large treatment effect beyond achieving conventional statistical significance, submitting manufacturers relying on population-adjusted methods for treatment comparisons will likely be more limited in their ability to assert that results support a clear benefit for the effectiveness of their product.
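One common formulation of such a test, a one-sided z-test against a non-zero margin, can be sketched as follows; the effect sizes and the threshold are hypothetical, and we assume larger values of the estimate favour the new treatment:

```python
from scipy.stats import norm

def shifted_null_test(estimate, se, threshold):
    """One-sided z-test of a treatment effect against a shifted null.

    Instead of testing whether the effect differs from zero, test whether it
    exceeds `threshold` (a margin the manufacturer must choose and justify as
    large enough to cover plausible bias from the population adjustment).
    Assumes larger values of `estimate` favour the new treatment.
    """
    z = (estimate - threshold) / se
    return z, norm.sf(z)  # one-sided p-value for H1: effect > threshold

# Hypothetical: an effect of 0.50 (se 0.15) is clearly significant against a
# null of zero, but not against a shifted null of 0.30
z0, p0 = shifted_null_test(0.50, 0.15, threshold=0.0)
z1, p1 = shifted_null_test(0.50, 0.15, threshold=0.30)
print(f"vs 0:    z = {z0:.2f}, p = {p0:.4f}")
print(f"vs 0.30: z = {z1:.2f}, p = {p1:.4f}")
```

The example illustrates the practical consequence described above: an estimate that comfortably clears conventional significance may fail against the shifted null, so the choice and justification of the threshold become pivotal.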

Outlook

In general, the JCA requirements are clearly outlined and, despite the thoroughness of the approaches required, do not introduce significant extra demands for a carefully conducted analysis relative to existing industry-standard approaches. The challenge will come in aligning the evidence synthesis analysis with the overall JCA process: integrating it with the narrow time window for the SLR and the need to address multiple PICOs, and ensuring results are meaningful and carefully reported as per the guidelines to support a compelling value story.


References

  1. Member State Coordination Group on Health Technology Assessment. Methodological Guideline for Quantitative Evidence Synthesis: Direct and Indirect Comparisons. Available here. Last accessed: July 2024.
  2. Member State Coordination Group on Health Technology Assessment. Practical Guideline for Quantitative Evidence Synthesis: Direct and Indirect Comparisons. Available here. Last accessed: July 2024.
  3. EUnetHTA 21. Methods Guideline. D4.3.2: Direct and Indirect comparisons. (Version 1.0, 29.07.2022). Available here. Last accessed: July 2024.
  4. EUnetHTA 21. Individual Practical Guideline Document. D4.3.1: Direct and Indirect comparisons. (Version 1.0, 16.12.2022). Available here. Last accessed: July 2024.
  5. Dias S, Welton NJ, Sutton AJ, et al. NICE DSU Technical Support Document 2: a generalised linear modelling framework for pairwise and network meta-analysis of randomised controlled trials. 2011.
  6. Dias S, Sutton AJ, Welton NJ, et al. NICE DSU Technical Support Document 3: Heterogeneity: Subgroups, Meta-Regression, Bias and Bias-Adjustment. Report by the Decision Support Unit 2011.
  7. Dias S, Welton NJ, Sutton AJ, et al. NICE DSU Technical Support Document 4: Inconsistency in Networks of Evidence Based on Randomised Controlled Trials. 2014.
  8. Higgins JPT, Thomas J, Chandler J, et al. Cochrane Handbook for Systematic Reviews of Interventions version 6.4 (updated August 2023). Cochrane 2023.
  9. Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen. General Methods Version 7.0 (translation is based on the German document Allgemeine Methoden (Version 7.0) of 19 September 2023). Available here. Last accessed: July 2024.

If you would like to discuss the above approaches in more detail, or how Costello Medical can support you with the upcoming JCA process, please do not hesitate to contact Andrei Karlsson, Senior Statistician (LinkedIn) or Tristan Curteis, Consultant Statistician – Statistics Lead (LinkedIn). The views/opinions expressed here are their own and do not necessarily reflect those of Costello Medical’s clients/affiliated partners.