Adversarial radiomics: the rising of potential risks in medical imaging from adversarial learning



EDITORIAL

Andrea Barucci 1 · Emanuele Neri 2

© Springer-Verlag GmbH Germany, part of Springer Nature 2020

This article is part of the Topical Collection on Advanced Image Analysis

* Correspondence: Andrea Barucci, [email protected]

1 CNR-IFAC Institute of Applied Physics “N. Carrara”, 50019 Sesto Fiorentino, Italy

2 Diagnostic and Interventional Radiology, Department of Translational Research, University of Pisa, Via Roma, 67, 56126 Pisa, Italy

Introduction

Radiomics is defined as the “high-throughput extraction of quantitative features that result in the conversion of images into mineable data and the subsequent analysis of these data for decision support” [1]. It is becoming a major field of research in medical imaging and, de facto, promises to be a cornerstone of precision medicine alongside the other omics sciences, having shown great potential in many different clinical applications [2–4]. An important step in the development of radiomics is its progressive shift towards deep radiomics, namely the use of deep learning to automatically extract features from images, classify disease, and predict outcomes. This deep approach is likely to overtake the approach based on hand-crafted features (i.e., traditional radiomics), a minimal sketch of which appears at the end of this introduction [5–7].

Despite the positive aspects and the promise of a significant impact of radiomics on clinical practice, we are also aware of some risks to which this technology may be subject. For example, the lack of reproducibility in radiomic studies is a well-known problem, essentially arising from the complex mixing of the many different steps of data acquisition, processing, and analysis, which translates into a sort of pipeline fingerprint that can affect the results of the analysis. The transfer of knowledge from other fields, however, suggests that a more devious threat exists [8, 9]. How will our trust

in radiomics change if, despite taking care of all sources of data confusion, changing just a few pixels of an image (not randomly), in a way that looks identical to the human eye, causes state-of-the-art classifiers to fail miserably? And what if, by perturbing some ad hoc pixel values in an image, we were able to manipulate a diagnosis, steering it into a user-predefined category?
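As a concrete illustration of the hand-crafted pipeline mentioned above, the following minimal Python sketch extracts traditional radiomic features with the open-source PyRadiomics library. The image and mask file names are hypothetical placeholders; a real study would also pin the extraction settings explicitly, which is one way to document the pipeline fingerprint discussed earlier.

    # Minimal sketch of traditional (hand-crafted) radiomic feature
    # extraction with the open-source PyRadiomics library. The file names
    # below are hypothetical placeholders for a CT volume and its lesion
    # segmentation.
    from radiomics import featureextractor

    # The default settings extract first-order, shape, and texture
    # features from the masked region of the image.
    extractor = featureextractor.RadiomicsFeatureExtractor()
    features = extractor.execute("patient_ct.nrrd", "lesion_mask.nrrd")

    for name, value in features.items():
        print(name, value)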

Adversarial machine learning

We have recently been alerted to such a potential situation by the machine learning community, where the so-called field of adversarial machine learning has been developing for years. Adversarial machine learning is a technique employed in the field of machine learning which attempts to fool models through malicious input [10]. Many studies have shown how a small (and possibly carefully designed) perturbation of the data is able to totally deceive models, regardless of the class of algorithm [11, 12]. Today, we are aware that every machine learning domain needs to face this threat, healthcare in primis, where issues related to personal, ethical, financial, and legal consequences are at stake.
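To make the kind of attack described above concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one well-known way to craft such perturbations; it assumes a generic PyTorch image classifier, and `model`, `image`, and `label` are hypothetical placeholders rather than part of any published radiomics pipeline.

    # Minimal sketch of an adversarial perturbation via the fast gradient
    # sign method (FGSM), assuming a generic PyTorch classifier. `model`,
    # `image` (a batched tensor with pixel values in [0, 1]), and `label`
    # are hypothetical placeholders.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.01):
        # Track gradients with respect to the input pixels.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Nudge every pixel by +/- epsilon in the direction that
        # increases the loss: imperceptible to the eye, yet often enough
        # to flip the classifier's prediction.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

Even a perturbation budget far below what a human reader can perceive on a display is often sufficient to change the predicted class, which is precisely what makes this threat so devious for image-based decision support.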