Identifying Relevant Controlled Clinical Trials for Systematic Reviews Requires Searching Multiple Resources – and, Even Then, Comprehensiveness is Questionable

Gale Gabrielle Hannigan

Abstract


A review of:


Crumley, Ellen T., Wiebe, Natasha, Cramer, Kristie, Klassen, Terry P., and Hartling, Lisa. "Which resources should be used to identify RCT/CCTs for systematic reviews: a systematic review." BMC Medical Research Methodology 5:24 (2005). doi:10.1186/1471-2288-5-24 (available from: http://www.biomedcentral.com/1471-2288/5/24).


Objective – To determine the value of searching different resources to identify relevant controlled clinical trial reports for systematic reviews.


Design – Systematic review.


Methods – Seven electronic databases (MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Web of Science, Cochrane Library) were searched to April 2004; four journals (Health Information & Libraries Journal, formerly Health Libraries Review; Hypothesis; Journal of the Medical Library Association, formerly Bulletin of the Medical Library Association; Medical Reference Services Quarterly) were handsearched from 1990 to 2004; all abstracts of the Cochrane Colloquia (1993-2003) were handsearched; key authors were contacted and the references of relevant articles were screened. Two reviewers independently screened the results, using defined inclusion and exclusion criteria, for studies that compared two or more resources to find RCTs or CCTs. Two reviewers assessed study quality using four criteria: adequate descriptions of what the search was attempting to identify, of the methods used to search, and of the reference standard; and evidence that bias was avoided in the selection of relevant studies. Screening and assessment differences between reviewers were resolved through discussion. Using a standard form, one investigator extracted data for each study, such as study design and results (e.g., recall, precision); a second investigator checked these data. Authors were contacted to provide missing data. Results were grouped by the resources compared, and the comparisons were summarized using medians and ranges. Search strategies were categorized as Complex (a combination of types of search terms), Cochrane (the Cochrane Highly Sensitive Search Strategy, or HSSS), Simple (five or fewer search terms, which may combine MeSH terms, publication types, and keywords), or Index (one or two terms used to check whether a study is in the database).
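For readers less familiar with these retrieval metrics: recall is the proportion of all relevant trials (as defined by the reference standard) that a search retrieves, and precision is the proportion of retrieved records that are relevant. The following is a minimal sketch, in Python, of how such per-comparison metrics and the median/range summaries used in the review could be computed; the function names and the example figures are illustrative assumptions, not data or code from the study:

```python
from statistics import median

def recall(retrieved_relevant: int, total_relevant: int) -> float:
    """Proportion of relevant trials in the reference standard that the search found."""
    return retrieved_relevant / total_relevant

def precision(retrieved_relevant: int, total_retrieved: int) -> float:
    """Proportion of retrieved records that are actually relevant trials."""
    return retrieved_relevant / total_retrieved

def summarize(values: list[float]) -> tuple[float, float, float]:
    """Median and range (min, max), as used to pool comparisons in the review."""
    return median(values), min(values), max(values)

# Illustrative figures only -- not results from the review.
recalls = [recall(45, 60), recall(20, 80), recall(70, 75)]
med, low, high = summarize(recalls)
print(f"median recall {med:.0%}, range {low:.0%}-{high:.0%}")
```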


Main results – Sixty-four studies met the criteria for inclusion in the analysis. The four major comparisons were: MEDLINE vs. handsearch (n=22), MEDLINE vs. MEDLINE + handsearch (n=12), MEDLINE vs. other reference standard (n=18), and EMBASE vs. reference standard (n=13). Thirteen other comparisons had only one or two studies each. For the most common comparison, MEDLINE vs. handsearch, data from 23 studies and 22 unique topic comparisons showed a median recall of 58% (range 7-97%); data on precision from 12 studies and 11 unique topic comparisons indicated a median of 31% (range 0.03-78%). Across comparisons based on more than four studies, no median recall exceeded 75% (range 18-90%) and no median precision exceeded 40% (range 13-83%). Recall was higher for Trial Registries vs. reference standard (median 89%, range 84-95%), but these figures were based on only two studies and four comparisons; one study with two comparisons measured precision (range 96-97%) for Trial Registries vs. reference standard. Subgroup analyses indicated that Complex and Cochrane searches each achieved better recall and precision than Simple searches. Forty-two studies reported reasons why searches miss relevant studies; the reason cited most often for electronic databases was inadequate or inappropriate indexing.


Conclusion – The results of this systematic review indicate that no single resource yields particularly high recall or precision when searching for RCTs and CCTs.

