SurgAI: deep learning for computerized laparoscopic image understanding in gynaecology

Surgical Endoscopy and Other Interventional Techniques

Sabrina Madad Zadeh1,2 · Tom Francois2 · Lilian Calvet2 · Pauline Chauvet1,2 · Michel Canis1,2 · Adrien Bartoli2 · Nicolas Bourdel1,2

Received: 4 June 2019 / Accepted: 24 December 2019
© Springer Science+Business Media, LLC, part of Springer Nature 2020

Abstract

Background  In laparoscopy, the digital camera offers surgeons the opportunity to receive support from image-guided surgery systems. Such systems require image understanding: the ability of a computer to understand what the laparoscope sees. Image understanding has recently progressed owing to the emergence of artificial intelligence and especially deep learning techniques. However, the state of the art of deep learning in gynaecology only offers image-based detection, reporting the presence or absence of an anatomical structure without finding its location. A solution to the localisation problem is given by semantic segmentation, which provides both the detection and the pixel-level location of a structure in an image. State-of-the-art results in semantic segmentation are achieved by deep learning, whose use requires a massive amount of annotated data. We propose the first dataset dedicated to this task and the first evaluation of deep learning-based semantic segmentation in gynaecology.

Methods  We used the deep learning method called Mask R-CNN. Our dataset has 461 laparoscopic images manually annotated with three classes: uterus, ovaries and surgical tools. We split our dataset into 361 images to train Mask R-CNN and 100 images to evaluate its performance.

Results  The segmentation accuracy is reported as the percentage of overlap between the regions segmented by Mask R-CNN and the manually annotated ones. The accuracy is 84.5%, 29.6% and 54.5% for uterus, ovaries and surgical tools, respectively. Automatic detection of these structures was then inferred from the semantic segmentation results, which led to state-of-the-art detection performance except for the ovaries. Specifically, the detection accuracy is 97%, 24% and 86% for uterus, ovaries and surgical tools, respectively.

Conclusion  Our preliminary results are very promising, given the relatively small size of our initial dataset. The creation of an international surgical database seems essential.

Keywords  Laparoscopic surgery · Artificial intelligence · Deep learning · Gynaecological surgery

Laparoscopic surgery has revolutionized surgery. In particular, it has brought a digital camera to the operating room. The acquired laparoscopic images provide a wealth of information that remains substantially underused, as processing this amount of information in real time exceeds the human brain's ability. The increased computational capabilities and

* Nicolas Bourdel
  [email protected]



1 Department of Gynaecological Surgery, CHU Clermont-Ferrand, 1 Place Lucie et Raymond Aubrac, 63000 Clermont-Ferrand, France



2 EnCoV, Institut Pascal, CNRS, Université Clermont Auvergne, Clermont-Ferrand, France
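
As context for the Methods and Results above: the abstract does not specify which Mask R-CNN implementation the authors used, nor the exact definition of the percentage-of-overlap metric. The sketch below is a minimal illustration, assuming torchvision's Mask R-CNN with both prediction heads replaced for the three annotated classes, and assuming intersection-over-union (IoU) as the overlap measure; both choices are assumptions, not the authors' confirmed setup.

```python
# Illustrative sketch only: the paper does not state which Mask R-CNN
# implementation or exact overlap metric was used. This assumes
# torchvision's Mask R-CNN and intersection-over-union (IoU).
import numpy as np
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 4  # background + uterus, ovaries, surgical tools


def build_model():
    # Start from a COCO-pretrained Mask R-CNN and replace both the box
    # and mask heads so they predict the 4 classes above.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_box = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_box, NUM_CLASSES)
    in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, NUM_CLASSES)
    return model


def overlap_percentage(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    # Overlap as IoU between two boolean masks, in percent (an assumption;
    # the abstract only says "percentage of overlap").
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return 100.0 * inter / union if union else 0.0
```

Under these assumptions, build_model() would be fine-tuned on the 361 training images, and overlap_percentage would be computed per class on the 100 evaluation images to obtain figures comparable to those reported in the Results.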