Current best evidence for clinical care
OBJECTIVE: To review and critically appraise published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of becoming infected with covid-19 or being admitted to hospital with the disease.
DESIGN: Living systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group.
DATA SOURCES: PubMed and Embase through Ovid, up to 1 July 2020, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020.
STUDY SELECTION: Studies that developed or validated a multivariable covid-19 related prediction model.
DATA EXTRACTION: At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool).
RESULTS: 37 421 titles were screened, and 169 studies describing 232 prediction models were included. The review identified seven models for identifying people at risk in the general population; 118 diagnostic models for detecting covid-19 (75 were based on medical imaging, 10 to diagnose disease severity); and 107 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequent types of predictors included in the covid-19 prediction models are vital signs, age, comorbidities, and image features. Flu-like symptoms are frequently predictive in diagnostic models, while sex, C reactive protein, and lymphocyte counts are frequent prognostic factors. Reported C index estimates from the strongest form of validation available per model ranged from 0.71 to 0.99 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.54 to 0.99 in prognostic models. All models were rated at high or unclear risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and unclear reporting. Many models did not include a description of the target population (n=27, 12%) or care setting (n=75, 32%), and only 11 (5%) were externally validated by a calibration plot. The Jehi diagnostic model and the 4C mortality score were identified as promising models.
CONCLUSION: Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that almost all published prediction models are poorly reported and at high risk of bias, such that their reported predictive performance is probably optimistic. However, we have identified two promising models (one diagnostic and one prognostic) that should soon be validated in multiple cohorts, preferably through collaborative efforts and data sharing, to also allow an investigation of the stability and heterogeneity of their performance across populations and settings. Details on all reviewed models are publicly available at https://www.covprecise.org/. Methodological guidance as provided in this paper should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction model authors should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline.
SYSTEMATIC REVIEW REGISTRATION: Protocol https://osf.io/ehc47/, registration https://osf.io/wy245.
READERS' NOTE: This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 3 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.
| Discipline / Specialty Area | Score |
| --- | --- |
| Family Medicine (FM)/General Practice (GP) | |
| General Internal Medicine-Primary Care (US) | |
| Pediatric Emergency Medicine | |
| Occupational and Environmental Health | |
The models are at high risk for bias and many have poor discrimination, so front-line clinicians cannot use this information.
In the few months since COVID-19 was first diagnosed and described, these authors found 27 publications describing 31 prediction models for the disease. The main finding was that the included studies were at high risk of bias and probably overestimated the accuracy of the models. Models for diagnostic accuracy generally had higher C statistics than those for prognosis, but at this point I would not endorse the use of any of these unvalidated models.
This is a nice (and necessary) reminder that just because a study or review is published in a peer-reviewed journal doesn't mean it's ready for prime time. In fact, in some cases, early adoption may be harmful. The article makes a good argument that even during a global crisis (e.g., the COVID-19 pandemic), looking for new ways to conduct research with a quick turnaround is important, but it is just as important to ensure that the research that is published, and potentially implemented, is reliable.
The article reports that all reviewed studies were "appraised to have high risk of bias owing to a combination of poor reporting and poor methodological conduct for participant selection, predictor description, and statistical methods used." I gave it a decent score for newsworthiness because medical providers need to know that many of the hastily written articles may be misleading.
Extremely timely and useful information. Too early to draw conclusions about sensitivity and specificity of prediction models.
Sound advice to appropriately interpret diagnosis and prognosis prediction models for COVID-19. Until an adequately reported model is published, clinical judgment prevails.
This is an excellent review of the limitations of current data tools and models for COVID-19. There is more enthusiasm for data analytical tools than quality at this time, and this provides a handy guide to what's out there and what researchers should be doing to improve existing tools.
Good idea but data are too sparse, and what is available has significant methodologic concerns that limit clinical uptake of the results. This SR/MA demonstrates a need for more work in the field rather than having any impact on caring for COVID patients.
A thoughtful and rigorous systematic review limited by the poor quality of the studies included. It is almost certain, however, that it will be possible to construct predictive models to accurately identify patients presenting with symptomatic illness who have COVID-19, but the model will require laboratory studies and/or CT imaging of the chest.
Interesting compilation of prediction models. The variables that fall out in the models are not all that novel - they are pretty intuitive and well known. I do think the most interesting thing is that of these 31 models, only 1 used data from outside of China, which may explain why the prediction models are not very robust right now.
Even though this is titled as a "quick" systematic review, it seems premature to analyze studies at this time.
The main relevance aspect is in being critical about the state of the knowledge on the topic, and limitations on current efforts to create diagnostic and prognostic rules.
This is such a rapidly moving subject that I believe every systematic review is going to be out of date by the time it is published.
Although modelling is important, it is no surprise that numerous models are emerging and most with challenges. This is relevant background, but it is most useful among those whose role is mathematical modelling as opposed to applied front-line decision-making.
Indicates problems with recently published models for diagnosis and prognosis in COVID-19. Relevant information about issues with current models, but not to those in direct clinical practice.
The studies refer to adult data; considering the different clinical picture and prognosis in children affected by COVID-19, the findings are not transferable to children.
Unfortunately, there was a high percentage of bias in the studies, so interpretation of their results is questionable. The review highlights that so little is known about this condition. Improved longer-term data collection is necessary to provide more accurate prediction models.
Anyone with fair medical knowledge understands this; it is more important for non-medical readers to understand.
This is a systematic review of scientific articles on predicting the occurrence or progression of COVID-19 in a given patient. It is a good review of the current literature on this hot topic and will give any casual reader a good understanding of what's available for diagnostic or prognostic purposes. It also provides a needed caution regarding the high risk of bias in these early prediction models, as well as advice on how to conduct such a study well. On the whole, I think it will be of interest to most clinicians in any front-line or hospital-based specialty, but the results will be ephemeral: by June 2020 the findings of this paper will be obsolete.