Five things every clinician should know about AI ethics in intensive care
EDITORIAL
James A. Shaw1,2*, Nayha Sethi3 and Brian L. Block4
© 2020 Springer-Verlag GmbH Germany, part of Springer Nature
You have just admitted two patients to your intensive care unit (ICU) with coronavirus disease 2019 (COVID-19), both needing intubation. You only have the resources to offer mechanical ventilation to one of them. In your view, both are equally ill and warrant a trial of mechanical ventilation. Your hospital uses artificial intelligence (AI) to make recommendations for the allocation of scarce resources, to reduce subjectivity and remove treating clinicians from triage decisions. Without showing the data or reasoning behind its decision, the algorithm recommends offering mechanical ventilation to one of the patients, who is White, rather than the other, who is Black. You wonder why the algorithm made this recommendation and whether it is morally "right".

As applications of AI become a routine part of clinical practice, intensive care clinicians will need to develop an understanding of the ethics and responsibilities that come with healthcare AI. In this brief paper, we outline five things every clinician should know to inform the ethical use of AI technologies in intensive care (see Fig. 1 for a summary). We highlight issues that clinicians must understand to engage in ethical deliberation about the uses of AI more generally. Readers seeking additional information, and a principlist approach to issues of AI in healthcare, would do well to read other articles in this special series on AI or to consult other authoritative publications [1, 2].

First, clinicians should have a basic fluency with the technology underlying AI, because they will ultimately remain ethically and legally responsible for treatment decisions. As a general-purpose technology, AI refers to computer algorithms that run complex computations on data using advanced statistical analyses [3]. These algorithms are generally trained on large datasets, which permit more accurate predictions than can be made with other methodologies. Healthcare applications of AI range from clinician-facing tools that predict clinical deterioration in the ICU to patient-facing applications such as automated chat functions (chatbots) that families can use to ask questions [3]. The purpose of becoming familiar with the technology underlying AI is not to become an expert in developing such technologies. Rather, practicing clinicians must understand what algorithms can and cannot do, promote the appropriate use of healthcare AI, and recognize when the technology is not performing as desired or expected.

Second, clinicians should understand that patients and the public will not necessarily trust or embrace healthcare AI. A 2019 survey of members of the Canadian p

*Correspondence: [email protected]
1 Research Director of Artificial Intelligence, Ethics & Health, Joint Centre for Bioethics, University of Toronto, Toronto, Canada
Full author information is available at the end of the article