Quality, methods, and recommendations of systematic reviews on measures of evidence-based practice: an umbrella review

JBI Evid Synth. 2022 Apr 1;20(4):1004-1073. doi: 10.11124/JBIES-21-00118.

Abstract

Objectives: The objective of this review was to evaluate the quality of systematic reviews on evidence-based practice measures across the health care professions, and to identify differences among these reviews in the approaches used to assess the adequacy of the measures and in the measures they recommend.

Introduction: Systematic reviews on the psychometric properties of evidence-based practice measures guide researchers, clinical managers, and educators in selecting an appropriate measure for use. The lack of psychometric standards specific to evidence-based practice measures, in addition to recent findings suggesting the low methodological quality of psychometric systematic reviews, calls into question the quality and methods of systematic reviews examining evidence-based practice measures.

Inclusion criteria: We included systematic reviews that identified measures assessing evidence-based practice as a whole or its constituent parts (eg, knowledge, attitudes, skills, behaviors), and that described the psychometric evidence for any health care professional group, irrespective of assessment context (education or clinical practice).

Methods: We searched five databases (MEDLINE, Embase, CINAHL, PsycINFO, and ERIC) on January 18, 2021. Two independent reviewers conducted screening, data extraction, and quality appraisal following the JBI approach. A narrative synthesis was performed.

Results: Ten systematic reviews, published between 2006 and 2020, were included; they focused on the following groups: all health care professionals (n = 3), nurses (n = 2), occupational therapists (n = 2), physical therapists (n = 1), medical students (n = 1), and family medicine residents (n = 1). The overall quality of the systematic reviews was low: none assessed the quality of primary studies or adhered to methodological guidelines, and only one registered a protocol. Reporting of psychometric evidence and measurement characteristics differed across reviews. While all the systematic reviews discussed internal consistency, feasibility was addressed by only three. A variety of approaches was used to assess the adequacy of measures, and five systematic reviews referenced appraisal tools. Criteria for the adequacy of individual properties and of measures varied, but mainly followed standards for patient-reported outcome measures or the Standards for Educational and Psychological Testing. A total of 204 unique measures were identified across the 10 reviews. One review explicitly recommended measures for occupational therapists, three identified adequate measures for all health care professionals, and one identified measures for medical students. The 27 measures deemed adequate by these five systematic reviews are described.

Conclusions: Our results suggest a need to improve the overall methodological quality and reporting of systematic reviews on evidence-based practice measures in order to increase the trustworthiness of their recommendations and allow comprehensive interpretation by end users. Risk of bias was common to all included systematic reviews, as none assessed the quality of primary studies. The diversity of tools and approaches used to evaluate the adequacy of evidence-based practice measures reflects tensions in the conceptualization of validity, suggesting a need to reflect on the most appropriate application of validity theory to evidence-based practice measures.

Systematic review registration number: PROSPERO CRD42020160874.

Publication types

  • Review
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Evidence-Based Practice*
  • Health Personnel*
  • Humans
  • Psychometrics
  • Systematic Reviews as Topic
