Open Access
EDITORIAL
The future of AI in critical care is augmented, not artificial, intelligence

Vincent X. Liu 1,2*

*Correspondence: [email protected]
1 Division of Research, Kaiser Permanente, 2000 Broadway, Oakland, CA 94612, USA
Full list of author information is available at the end of the article
The field of AI (artificial intelligence) has seen tremendous success over the past decade. Today, AI touches billions of lives each day through voice and text processing, computer vision, prediction algorithms, video games, and much more. Naturally, there has also been enormous interest in applying AI to health care and, in particular, to data-rich environments like the intensive care unit. Early examples of AI in healthcare and critical care have already shown great promise [1], but they also raise concerns that can be mitigated with preparation and foresight [2–4]. Recently, I put my own life into the hands of AI: it nearly killed me and, later, it also saved me. This harrowing experience was a potent reminder for me, an AI practitioner, that we must work to ensure this technology's formidable capabilities are used to produce 'augmented', rather than just 'artificial', intelligence. Augmented intelligence places clinicians, and ultimately patients, rather than algorithms, at its center. Where we successfully bridge the interface of clinician and machine intelligence, we have vast potential to make healthcare more effective, efficient, and sustainable. This will also ensure that health AI is safe, reliable, and equitable for all patients.

In December, I found myself driving a Tesla electric car from Seattle to the Bay Area. With its highly touted AI, which deploys sensors, computer vision, and deep learning to drive the car under its own guidance and has logged billions of driving miles, I anticipated a seamless transition between myself and the vehicle. What I experienced instead was a life and death struggle for
control. After I activated the AI, the car accelerated and took control of the wheel. Surprised, I searched for a way to disengage the technology. My first slight turn of the wheel proved ineffective. A more forceful attempt was interpreted by the vehicle as a course deviation. The AI immediately countered my turn, hurtling us toward a concrete barrier. Back and forth, the car swerved as the AI and I fought for control. Only after coming to a full stop on a busy highway was I finally able to regain control. In the end, the AI worked precisely as designed, following its algorithms. Yet, in succeeding at its task, it failed to produce a safe driving environment for its user.

Although rare, similar events have contributed to fatal car and airline accidents. In a recent example, aviation software algorithms left pilots struggling to take control of their aircraft, ultimately contributing to hundreds of deaths. Inexperience and a lack of training magnified the danger induced by AI-driven actions. The object lesson for critical care is that we must ensure that our clinicians are prepared to effectively