COMMENTARY

Individual model forecasts can be misleading, but together they are useful

Caroline O. Buckee 1,3 · Michael A. Johansson 2,3

1 Department of Epidemiology, Center for Communicable Disease Dynamics, Harvard TH Chan School of Public Health, Boston, USA
2 Division of Vector-Borne Diseases, Centers for Disease Control and Prevention, San Juan, Puerto Rico
3 COVID-19 Modeling Team, Centers for Disease Control and Prevention, Atlanta, GA, USA

* Corresponding author: Caroline O. Buckee, [email protected]

Received: 11 July 2020 / Accepted: 17 July 2020 / Published online: 11 August 2020
© Springer Nature B.V. 2020

The findings and conclusions in this report are those of the authors and do not necessarily represent the views of the Centers for Disease Control and Prevention.

The broad use by media and governments of model forecasts to inform the COVID-19 response has been a prominent and controversial feature of the pandemic so far. In this issue, Chin et al. compare the accuracy of four high-profile models that, early during the outbreak in the US, aimed to make quantitative predictions about deaths and intensive care unit (ICU) bed utilization in New York [1]. They find that all four models, though different in approach, failed not only to accurately predict the number of deaths and ICU utilization but also to describe uncertainty appropriately, particularly during the critical early phase of the epidemic. While overcoming these methodological challenges is key, Chin et al. also call for systemic advances, including improving data quality, evaluating forecasts in real time before they are used for policy, and developing multi-model approaches. The authors reveal substantial variability in the “ground truth” data: the epidemiological surveillance data used both to build and to evaluate forecasting models. Coupled with uncertainty about basic epidemiological parameters of SARS-CoV-2, as well as limitations in model frameworks, it is not surprising that such models have the potential to generate inaccurate forecasts. Improved data quality can certainly help improve model predictions, but forecasts are often needed at moments when the surveillance systems that generate key data are new, imperfect, and rapidly changing.
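To make the multi-model idea concrete, the sketch below pools Monte Carlo samples from three hypothetical predictive distributions into an equal-weight ensemble (a simple linear opinion pool). Everything in it is illustrative: the model names, distributions, and numbers are invented placeholders rather than the forecasts reviewed by Chin et al., and equal-weight pooling is only one simple combination scheme, not the method of any particular forecasting group.

```python
# Illustrative sketch only: an equal-weight linear pool of three hypothetical
# forecast distributions. All names and numbers are invented placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictive distributions for 4-week-ahead deaths, each
# represented by Monte Carlo samples (parameters chosen arbitrarily).
model_samples = {
    "model_A": rng.lognormal(mean=np.log(600), sigma=0.3, size=10_000),
    "model_B": rng.lognormal(mean=np.log(250), sigma=0.2, size=10_000),
    "model_C": rng.lognormal(mean=np.log(900), sigma=0.4, size=10_000),
}

# Equal-weight linear pool: treat the ensemble as a mixture of the three
# distributions by pooling their samples, so disagreement between models
# contributes to the width of the combined prediction interval.
pooled = np.concatenate(list(model_samples.values()))

for name, samples in {**model_samples, "ensemble": pooled}.items():
    lo, med, hi = np.percentile(samples, [2.5, 50, 97.5])
    print(f"{name:>8}: median {med:7.0f}  95% interval ({lo:7.0f}, {hi:7.0f})")
```

In this toy example the three hypothetical medians disagree by several hundred deaths, so a decision keyed to any single model could be badly wrong; the pooled forecast instead reports one central estimate together with an interval that reflects both each model's own uncertainty and the disagreement among the models. Quantile averaging and performance-based weighting are common alternatives to the equal-weight pool shown here.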

These uncertainties need to be integrated into the forecast itself. Moreover, the additional sources of uncertainty associated with the model—parameter uncertainty and structural uncertainty—also need attention, and are often dealt with superficially. Taken together, these challenges may lead some to question the use of forecasts for policy making in the first place. But what the model comparison by Chin et al. highlights is an important principle that many in the research community have understood for some time: that no single model should be used by policy makers to respond to a rapidly changing, highly uncertain epidemic, regardless of the institution or modeling group from which it comes. Due to the multiple uncertainties described above, even models using the same underlying data often have results that diverge because th