Surgical Tool Tracking and Pose Estimation in Retinal Microsurgery

1 Computer Aided Medical Procedures, Technische Universität München, Germany
2 Carl Zeiss Meditec AG, München, Germany
3 DISI, University of Bologna, Italy
[email protected]

Abstract. Retinal Microsurgery (RM) is performed with small surgical tools which are observed through a microscope. Real-time estimation of the tool's pose enables the application of various computer-assisted techniques, such as augmented reality, with the potential of improving the clinical outcome. However, most existing methods are prone to fail on in-vivo sequences due to partial occlusions and illumination and appearance changes of the tool. To overcome these problems, we propose an algorithm for simultaneous tool tracking and pose estimation that is inspired by state-of-the-art computer vision techniques. Specifically, we introduce a method based on regression forests to track the tool tip and to recover the tool's articulated pose. To demonstrate the performance of our algorithm, we evaluate it on a dataset comprising four real surgery sequences and compare against state-of-the-art methods on a publicly available dataset.
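The core idea of forest-based tip tracking can be sketched as follows. This is an illustrative toy example on synthetic data, using scikit-learn's RandomForestRegressor in place of the paper's own forest and features: patches sampled near the previous tip location each regress a 2D offset vote toward the tip, and the votes are aggregated into a new tip estimate.

```python
# Toy sketch of regression-forest tool-tip tracking (NOT the authors'
# implementation): a forest maps patch features to 2D offsets toward the tip.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic training data: each sample is a flattened 8x8 intensity patch;
# the target is the (dx, dy) offset from the patch centre to the tool tip.
# The relation below is fabricated purely so the demo has something to learn.
n_samples = 500
patches = rng.random((n_samples, 64))
offsets = patches[:, :2] * 10.0 - 5.0

# Multi-output regression forest: each tree votes for a (dx, dy) offset.
forest = RandomForestRegressor(n_estimators=20, max_depth=8, random_state=0)
forest.fit(patches, offsets)

# At test time, several patches sampled around the previous tip location
# each cast an offset vote; averaging the votes gives the new tip estimate.
test_patches = rng.random((5, 64))
votes = forest.predict(test_patches)   # shape (5, 2): one offset vote per patch
tip_estimate = votes.mean(axis=0)      # aggregated 2D tip offset
```

In the real setting, the patch features and the vote-aggregation scheme matter far more than this sketch suggests; the point here is only the structure of the predictor, i.e. patches in, offset votes out, votes pooled into one tip location.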

1 Introduction

Retinal Microsurgery (RM) is a delicate medical operation which requires extremely high handling precision of the utilized surgical instruments. Usually, the surgeon's visual control is restricted to a limited 2D field of view through a microscope. Problems such as lens distortion and the lack of depth information or haptic feedback complicate the procedure further. Recent research has aimed at assisting the surgeon by introducing smart imaging modalities such as Optical Coherence Tomography (OCT) [1], which visualizes subretinal structure information. In the current workflow, these devices have to be positioned manually on the region of interest, which is usually close to the tool tip. Extracting the position of the surgical tool tip in real time makes it possible to carry out this positioning automatically. Other applications that require tool tracking include surgical motion analysis and visual servoing. Estimating the articulated pose of the tool, rather than its position alone, allows us to measure the size of anatomical structures in the video sequence. Additionally, it paves the way for advanced augmented reality applications which provide, for example, proximity information of the tool tips to the retina.

© Springer International Publishing Switzerland 2015. N. Navab et al. (Eds.): MICCAI 2015, Part I, LNCS 9349, pp. 266–273, 2015. DOI: 10.1007/978-3-319-24553-9_33

Despite recent advances, vision-based tracking of the tool tip's location in vivo remains challenging, mainly due to lighting variations and variable instrument appearance. Moreover, tracking has to be real-time capable in order to be employed during a surgical procedure. These challenges have been addressed with different approaches, including color-based [2] and geometry-based methods [3–5]. Other relevant works [6–8] focus on a specific tool model (e.g. vitrectom