We compared the Institute for Clinical and Economic Review’s (ICER) ratings of comparative clinical effectiveness with the German Federal Joint Committee’s (G-BA) added benefit ratings, and explored what factors, including the evidence base, may explain disagreement between the two organizations.
Methods
Drugs were included if they were assessed by ICER under its 2020–2023 Value Assessment Framework and had a corresponding assessment by G-BA as of March 2023 for the same indication, patient population, and comparator drug. To compare assessments, we modified ICER’s proposed crosswalk between G-BA and ICER benefit ratings to account for G-BA’s extent and certainty ratings. We also determined whether each assessment pair was based on similar or dissimilar evidence. Assessment pairs exhibiting disagreement based on the modified crosswalk despite a similar evidence base were qualitatively analyzed to identify reasons for disagreement.
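The comparison logic can be illustrated with a short Python sketch. The G-BA and ICER rating labels below reflect the organizations' published scales, but the mapping itself is a hypothetical placeholder for illustration only; the actual modified crosswalk (which also folds in G-BA's extent and certainty ratings) is not reproduced here.

```python
# Illustrative sketch of a ratings-crosswalk comparison.
# The mapping below is a hypothetical placeholder, NOT the
# study's actual modified crosswalk.

# Hypothetical map from each G-BA added-benefit rating to the set of
# ICER evidence ratings treated as concordant with it.
CROSSWALK = {
    "major": {"A"},
    "considerable": {"A", "B"},
    "minor": {"B", "C+"},
    "non-quantifiable": {"C+", "C", "I"},
    "no added benefit": {"C", "D", "I"},
}

def ratings_agree(gba_rating: str, icer_rating: str) -> bool:
    """Return True if the ICER rating falls within the set of ratings
    mapped to the given G-BA rating (i.e., the pair is concordant)."""
    return icer_rating in CROSSWALK.get(gba_rating, set())

# Example assessment pairs (hypothetical drugs):
print(ratings_agree("minor", "C+"))             # True  -> agreement
print(ratings_agree("no added benefit", "A"))   # False -> disagreement
```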
Results
We identified 15 assessment pairs, of which seven were based on similar evidence. G-BA and ICER assessments disagreed for each of these seven drugs. For four of the seven, G-BA (but not ICER) determined the evidence was unsuitable for assessment: for two drugs, G-BA concluded the key trials did not appropriately assess the comparator therapy; for one, G-BA did not accept the results of a before-and-after study due to non-comparable study settings; and for one, G-BA determined that follow-up in the key trial was too short. Among the remaining three assessment pairs, where both organizations assessed the evidence, reasons for disagreement included concerns about long-term safety, generalizability, and study design.
Conclusions
This study underscores the role of value judgments within assessments of clinical effectiveness. These judgments are not always transparently presented in assessment summaries. This lack of clarity regarding value-based decisions highlights the need for improvements in transparency and communication, which are essential for promoting a more robust health technology assessment process and supporting the transferability of assessments across jurisdictions.
High-cost gene therapies strain the sustainability of healthcare budgets. Although certain gene therapies promise long-term savings, realizing those savings is complicated by uncertainty about the durability of treatment effect and by a lesser-discussed factor: the true potential for cost offset. Our study aims to assess the cost-offset uncertainty for US Medicaid regarding recently approved gene therapies for hemophilia A and B.
Methods
The analysis used 2018 to 2022 Colorado Department of Health Care Policy & Financing data to determine direct costs of standard of care (factor replacement therapy or emicizumab). Cost-simulation models over five- and ten-year time horizons estimated Colorado Medicaid costs if patients switched to gene therapy (valoctocogene roxaparvovec or etranacogene dezaparvovec) versus maintaining standard of care. Patients were included if they were aged 18 years or over and had an ICD-10-CM code of D66 (hemophilia A) or D67 (hemophilia B). In the base case, severe hemophilia A was defined as ≥6 factor VIII or emicizumab claims annually, and moderate/severe hemophilia B as ≥4 factor IX replacement therapy claims annually.
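The structure of such a cost-simulation model can be sketched in a few lines of Python. This is a minimal illustration only: the one-time gene-therapy price, the lognormal cost distribution, the absence of discounting, and the assumption of a durable response are all simplifying placeholders, not the study's actual model inputs.

```python
# Minimal Monte Carlo sketch of a cost-offset simulation.
# All parameters are illustrative placeholders, not study inputs.
import numpy as np

rng = np.random.default_rng(0)

N = 10_000                    # simulated patients
GENE_THERAPY_PRICE = 2.9e6    # hypothetical one-time acquisition cost (USD)
SOC_MEAN, SOC_SD = 426_000, 353_000  # annual SoC cost for hemophilia A (abstract)

def incremental_cost(horizon_years: int) -> np.ndarray:
    """Per-patient incremental cost of gene therapy vs. standard of care
    over the horizon (no discounting; durable response assumed)."""
    # Lognormal annual SoC costs matched to the observed mean and SD.
    sigma2 = np.log(1 + (SOC_SD / SOC_MEAN) ** 2)
    mu = np.log(SOC_MEAN) - sigma2 / 2
    annual = rng.lognormal(mu, np.sqrt(sigma2), size=(N, horizon_years))
    return GENE_THERAPY_PRICE - annual.sum(axis=1)

for horizon in (5, 10):
    inc = incremental_cost(horizon)
    print(f"{horizon}-year mean incremental cost: USD{inc.mean():,.0f}; "
          f"P(break-even) = {(inc <= 0).mean():.0%}")
```

In this framing, the probability of break-even is simply the share of simulated patients whose avoided standard-of-care costs exceed the one-time gene-therapy price within the horizon, which is why wider cost variation and looser eligibility thresholds move that probability so strongly.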
Results
Annual standard-of-care costs were USD426,000 (SD USD353,000) for hemophilia A and USD546,000 (SD USD542,000) for hemophilia B. Valoctocogene roxaparvovec (hemophilia A) had incremental costs of USD880,000 at five years and −USD481,000 at ten years; sensitivity analysis revealed a 23 percent chance of break-even within five years and 48 percent within ten years. Etranacogene dezaparvovec (hemophilia B) showed incremental costs of USD429,000 at five years and −USD2,490,000 at ten years; simulation indicated a 32 percent chance of break-even within five years and 59 percent within ten years. Varying the eligibility threshold (≥4 to ≥15 standard-of-care claims annually) notably affected break-even: for valoctocogene roxaparvovec, the chance of break-even within ten years ranged from 40 to 77 percent.
Conclusions
Our study highlights significant cost variation in the standard of care of patients eligible for gene therapies, adding to the uncertainty surrounding cost estimation and highlighting the importance of addressing this factor in risk-sharing agreements. The impact of varying eligibility criteria on cost offsets emphasizes the importance of carefully defining eligibility when using real-world data in the context of health technology assessment.
Artificial intelligence (AI)-based healthcare interventions continue to develop rapidly. However, published health economic evaluations (HEEs) of AI interventions have limitations, including limited reporting on the characteristics and development of the underlying algorithms. We developed an extension to the existing Consolidated Health Economic Evaluation Reporting Standards (CHEERS) to improve the consistency, transparency, and reliability of reporting in HEEs of AI interventions.
Methods
The Delphi method was used, following a prespecified study protocol. A steering group with expert oversight was formed to guide the development process. A long list of potential items was defined based on two recent systematic reviews of HEEs of AI-based interventions. The steering group identified and invited 119 experts to a three-stage survey. Participants were asked to score each item on a nine-point Likert scale and could also provide free-text comments. The final checklist was piloted on a random sample of nine HEEs of AI-based interventions.
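As an illustration of how a Delphi round of this kind is typically scored, a minimal sketch follows. The consensus rule used here (≥70 percent of respondents rating an item 7 to 9) is a commonly used convention assumed for illustration; the study's actual retention criteria are not stated in the abstract.

```python
# Illustrative scoring of one Delphi item on a 9-point Likert scale.
# The 70%/7-9 consensus rule is an assumed convention, not taken
# from the study protocol.
from statistics import median

def item_consensus(scores: list[int],
                   lower: int = 7, upper: int = 9,
                   threshold: float = 0.70) -> dict:
    """Summarize one item's scores: median rating and whether the
    share of high scores (lower..upper) meets the threshold."""
    share_high = sum(lower <= s <= upper for s in scores) / len(scores)
    return {
        "median": median(scores),
        "share_high": round(share_high, 2),
        "consensus_to_include": share_high >= threshold,
    }

# Example: 31 respondents scoring a candidate AI-specific item.
scores = [9, 8, 8, 7, 9, 6, 7, 8, 9, 7, 8, 5, 9, 8, 7, 7,
          8, 9, 6, 7, 8, 8, 9, 7, 4, 8, 7, 9, 8, 7, 8]
print(item_consensus(scores))  # high agreement -> item retained
```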
Results
Three stages of the Delphi survey were completed by 58, 42, and 31 multidisciplinary respondents, respectively, including health technology assessment specialists, health economists, AI experts, and patient representatives. The CHEERS-AI extension includes 18 AI-specific reporting items. Ten are entirely new items, covering considerations such as user autonomy, validation of the AI component, and AI-specific uncertainty. In addition, elaborations were added to eight existing CHEERS items to emphasize important AI-specific nuances. Some participants highlighted key benefits of CHEERS-AI; for example, it could help dispel the misconception that the predictive algorithms supporting AI-driven healthcare interventions are available for use without cost.
Conclusions
CHEERS-AI can help researchers, editors, and reviewers improve the quality of reporting when conducting or assessing HEEs of AI interventions.