Hypotension Prediction Index: from proof-of-concept to proof-of-feasibility



EDITORIAL

Ilonka N. de Keijzer¹ · Jaap Jan Vos¹ · Thomas W. L. Scheeren¹

Received: 16 January 2020 / Accepted: 20 January 2020
© Springer Nature B.V. 2020

Intraoperative hypotension (IOH) is increasingly recognized as a major contributing factor in the development of postoperative complications, in terms of renal [1–6], myocardial [6–8] and possibly cerebral injury [9–11], despite substantial variability in the literature regarding its exact definition [12, 13]. IOH is associated not only with perioperative morbidity but also with perioperative mortality [5, 14–17], which is the third greatest global contributor to deaths after ischemic heart disease and stroke [18]; efforts should therefore be made to reduce both the incidence and the duration of IOH. Hence, a recent consensus statement by the Perioperative Quality Initiative-3 workgroup [19] advises that, for adults undergoing non-cardiac surgery, there is substantial evidence that mean arterial pressure (MAP) should be kept above 60–70 mmHg in order to reduce postoperative myocardial and renal injury, and death. Given that even brief periods of IOH may be harmful, e.g. after induction of anesthesia and before surgical incision [1], it may be beneficial to change our current practice from a reactive approach (monitoring the patient's actual hemodynamic status) [20, 21] to a proactive approach that predicts vital signs [22], especially since the longer a patient cumulatively spends in IOH, the more likely it is that this will adversely affect outcome [14].

Current advances in medical technology include the use of machine-learning based algorithms [23, 24] to analyze large datasets in order to provide clinically useful information. Such predictive analytics may help in substantiating such a proactive approach.

1 In a nutshell: machine-learning algorithms in (bio)medical research

In 1959, Arthur L. Samuel was the first to describe the concept of machine learning, defining it as "the programming of a computer to behave in a way which, if done by human beings or animals, would be described as involving the process of learning". The first such machines were programmed to play checkers, and in just a few hours they learned to play better than the persons who had programmed them [25]. Machine-learning algorithms differ substantially from traditional, rule-based algorithms. In a traditional algorithm, a pre-defined situation is handled by pre-defined criteria set by the programmer. While this may be accurate, e.g. the administration of anesthetics using target-controlled infusion algorithms [23], its performance depends on the exact definitions and criteria set by the programmer. In machine learning, by contrast, multiple input variables (features) are associated with output variable(s). Different forms of machine learning exist, yet conceptually in medical practice, it allows the analysis of large (patient-derived) datasets for given output variables,