LETTER TO THE EDITOR

COVID-19, AI enthusiasts, and toy datasets: radiology without radiologists

H. R. Tizhoosh 1,2 & Jennifer Fratesi 3

1 Kimia Lab, University of Waterloo, Waterloo, Canada
2 Vector Institute, MaRS Centre, Toronto, Canada
3 Department of Medical Imaging, University Health Network, Toronto, Canada

Received: 4 September 2020 / Revised: 23 September 2020 / Accepted: 2 November 2020
© The Author(s) 2020

In computer science, textbooks talk about the "garbage in, garbage out" (GIGO) concept: low-quality input data generates unreliable output, or "garbage." GIGO becomes an even more pressing issue when we are dealing with highly complex data modalities, such as radiographs and computed tomography scans. The performance of any deep network depends directly on the quality of the dataset it learns from. Reputable repositories such as the Cancer Imaging Archive [1], backed by a large body of expert work [2], are examples of reliable datasets. Adhering to DICOM standards and ensuring that images are properly linked to supporting metadata are obligatory for constructing a well-curated dataset.

In recent weeks, we have been observing a trend of hastily using ill-curated data to train deep networks for COVID-19. It seems AI enthusiasts impatiently create their own datasets of medical images without seeking clinical collaborators to guide them. These collections are rather "toy sets," assembled by manually gathering publicly accessible images (e.g., from online journals and preprints on non-peer-reviewed archives). Most of the time, AI researchers, with no clinical or medical competency, create their own experimental "toy" datasets to run initial investigations and establish a framework for algorithmic challenges. To be clear, a "toy dataset" from the medical imaging perspective is not a toy just because it is very small and does not comply with DICOM standards, but more importantly because it has been created by engineers and computer scientists, and not by physicians and medical/clinical experts. Such datasets of COVID-19 images have been emerging on the Internet and

used by AI enthusiasts to write blogs and non-peer-reviewed reports [3–7]. These so-called COVID Nets are trained on such toy datasets with no radiologist participation and without common validation schemes such as "leave-one-out" testing. In an attempt to overcome the small data size, AI enthusiasts mix the few adult COVID-19 images scraped from the Internet with many pediatric (bacterial) pneumonia images [5, 6]. Are these COVID Nets learning anything meaningful?

No one can curate a COVID-19 dataset while disregarding professional recommendations. The American College of Radiology (ACR) and the Canadian Association of Radiologists (CAR) currently do not recommend the use of x-ray or CT imaging to screen for or diagnose COVID-19 infection [8] because of the risk of spreading the infection, resource constraints, and the added logistics. However, CT in particular may be useful to expedite care in symptomatic p