
WHICH DATABASES SHOULD BE USED TO IDENTIFY STUDIES FOR SYSTEMATIC REVIEWS OF ECONOMIC EVALUATIONS?

Published online by Cambridge University Press:  16 November 2018

Mick Arber
Affiliation:
York Health Economics Consortium (YHEC)
Email: mick.arber@york.ac.uk
Julie Glanville
Affiliation:
York Health Economics Consortium (YHEC)
Jaana Isojarvi
Affiliation:
York Health Economics Consortium (YHEC)
Erin Baragula
Affiliation:
York Health Economics Consortium (YHEC)
Mary Edwards
Affiliation:
York Health Economics Consortium (YHEC)
Alison Shaw
Affiliation:
York Health Economics Consortium (YHEC)
Hannah Wood
Affiliation:
York Health Economics Consortium (YHEC)

Abstract

Objectives:

This study investigated which databases and which combinations of databases should be used to identify economic evaluations (EEs) to inform systematic reviews. It also investigated the characteristics of studies not identified in database searches and evaluated the success of MEDLINE search strategies used within typical reviews in retrieving EEs in MEDLINE.

Methods:

A quasi-gold standard (QGS) set of EEs was collected from reviews of EEs. The number of QGS records found in nine databases was calculated and the most efficient combination of databases was determined. The number and characteristics of QGS records not retrieved from the databases were collected. Reproducible MEDLINE strategies from the reviews were rerun to calculate the sensitivity and precision for each strategy in finding QGS records.

Results:

The QGS comprised 351 records. Across all databases, 337/351 (96 percent) QGS records were identified. Embase yielded the most records (314; 89 percent). Four databases were needed to retrieve all 337 references: Embase + Health Technology Assessment database + (MEDLINE or PubMed) + Scopus. Four percent (14/351) of records could not be found in any database. Twenty-nine of forty-one (71 percent) reviews reported a reproducible MEDLINE strategy. Ten of twenty-nine (34.5 percent) of the strategies missed at least one QGS record in MEDLINE. Across all twenty-nine MEDLINE searches, 25/143 records were missed (17.5 percent). Mean sensitivity was 89 percent and mean precision was 1.6 percent.

Conclusions:

Searching beyond key databases for published EEs may be inefficient, providing the search strategies in those key databases are adequately sensitive. Additional search approaches should be used to identify unpublished evidence (grey literature).

Information

Type: Method
Copyright © Cambridge University Press 2018
Table 1. Eligibility Criteria to Be Included in the Sample of Candidate Reviews

Table 2. Yield and Number of Unique References Identified for Each Database

Table 3. Yield for Specific Scenarios