Surrogates and Artificial Intelligence: Why AI Trumps Family

Ryan Hubbard¹ · Jake Greenblum²

Received: 15 March 2020 / Accepted: 9 September 2020
© Springer Nature B.V. 2020

* Ryan Hubbard
  [email protected]

¹ Philosophy Department, Gulf Coast State College, Panama City, FL, USA
² University of Texas Rio Grande Valley, Edinburg, TX, USA

Abstract

The increasing accuracy of algorithms in predicting values and preferences raises the possibility that artificial intelligence technology will be able to serve as a surrogate decision-maker for incapacitated patients. Following Camillo Lamanna and Lauren Byrne, we call this technology the autonomy algorithm (AA). Such an algorithm would mine medical research, health records, and social media data to predict patient treatment preferences. The possibility of developing the AA raises the ethical question of whether the AA or a relative ought to serve as surrogate decision-maker in cases where the patient has not issued a medical power of attorney. We argue that in such cases, and against the standard practice of vesting familial surrogates with decision-making authority, the AA should have sole decision-making authority. This is because the AA will likely be better at predicting what treatment option the patient would have chosen. It would also be better at avoiding bias and would therefore choose in a more patient-centered manner. Furthermore, we argue that these considerations override any moral weight of the patient's special relationship with their relatives.

Keywords: Artificial intelligence (AI) · Decision-making · Surrogate · Biomedical · Ethics

Introduction

Experts in artificial intelligence (AI) claim that the exponential growth and development of AI "promises to change the landscape of medical practice" (Yu et al. 2018). This change will bring its own set of ethical questions, so it is crucial for bioethicists to anticipate the future integration of AI into medical practice. The increasing accuracy of algorithms in predicting personal values and preferences raises the possibility that artificial intelligence technology could, in the near future, be developed to serve as a surrogate decision-maker for incapacitated patients. Following Camillo Lamanna and Lauren Byrne (2018), we call this technology the autonomy algorithm (AA). Such an algorithm would mine medical research, electronic health records (EHR), sociodemographic data, and social network information to predict patient medical preferences. This raises the following question: Should the AA or a relative serve as surrogate decision-maker? We argue that the common principles employed to make decisions for incapacitated patients recommend that all relevant stakeholders defer to the AA's recommendation when the following conditions are met: (a) an adult patient without decision-making capacity has not issued a medical power of attorney, (b) relevant values and preferences can be inferred from their digital footprint