Artificial intelligence in medicine and the disclosure of risks
ORIGINAL ARTICLE
Maximilian Kiener
The Queen's College, Faculty of Philosophy, The University of Oxford, High Street, Oxford OX1AW, UK

Received: 24 May 2020 / Accepted: 7 October 2020
© The Author(s) 2020
Abstract
This paper focuses on the use of ‘black box’ AI in medicine and asks whether the physician needs to disclose to patients that even the best AI comes with the risks of cyberattacks, systematic bias, and a particular type of mismatch between AI’s implicit assumptions and an individual patient’s background situation. Pace current clinical practice, I argue that, under certain circumstances, these risks do need to be disclosed. Otherwise, the physician either vitiates a patient’s informed consent or violates a more general obligation to warn him about potentially harmful consequences. To support this view, I argue, first, that the already widely accepted conditions in the evaluation of risks, i.e. the ‘nature’ and ‘likelihood’ of risks, speak in favour of disclosure and, second, that principled objections against the disclosure of these risks do not withstand scrutiny. Moreover, I also explain that these risks are exacerbated by pandemics like the COVID-19 crisis, which further emphasises their significance.

Keywords Artificial intelligence · Medical disclosure · Risks · Informed consent · COVID-19
1 Introduction

Artificial intelligence (AI) increasingly executes tasks that previously only humans could do, such as driving a car or even performing complicated medical procedures. What is more, AI also outperforms humans in these tasks. On average, AI is the better driver, and in some domains of medical diagnosis (Bathaee 2018), drug development (Arshadi et al. 2020), and even the execution of treatment and surgery (Ficuciello et al. 2019; Ho 2020), AI already is, or soon promises to be, better than trained medical professionals. Unfortunately, the best AI also tends to be the least transparent, often resulting in a ‘black box’ (Carabantes 2019). We can see which data go into the AI system and also which come out. We may even understand how such AI systems work in general terms, i.e. usually through so-called deep neural networks. Yet, we often cannot understand why, on a certain occasion, the AI system made a particular decision, arrived at a particular diagnosis, or performed a particular move in an operation (Bathaee 2018; Carabantes 2019; Coeckelbergh 2020). This is because of the sheer complexity of these systems, which may base a single output on as many as 23 million parameters, e.g. ‘Inception v3’ developed by Google (Wang et al. 2019), and the fact that AI systems constantly change their own algorithms without human supervision (Bathaee 2018; Price 2017). Although there is growing research on so-called “eXplainable AI” (“XAI”) (Samek 2019), many aspects of AI are still unexplainable and, given the increasing sophistication of AI, are likely to remain so in the future.