Image Based Surgical Instrument Pose Estimation with Multi-class Labelling and Optical Flow
Centre for Medical Image Computing, University College London, UK
Division of Surgery and Interventional Science, UCL Medical School, UK
Abstract. Image based detection, tracking and pose estimation of surgical instruments in minimally invasive surgery has a number of potential applications for computer assisted interventions. Recent developments in the field have resulted in advanced techniques for 2D instrument detection in laparoscopic images, however, full 3D pose estimation remains a challenging and unsolved problem. In this paper, we present a novel method for estimating the 3D pose of robotic instruments, including axial rotation, by fusing information from large homogeneous regions and local optical flow features. We demonstrate the accuracy and robustness of this approach on ex vivo data with calibrated ground truth given by surgical robot kinematics which we will also make available to the community. Qualitative validation on in vivo data from robotic assisted prostatectomy further demonstrates that the technique can function in clinical scenarios.
1 Introduction
Robotic minimally invasive surgery can facilitate procedures in confined and difficult-to-access anatomical regions. However, accessing the anatomy with robotic instruments reduces the surgeon’s ability to sense force feedback from instrument-tissue interactions, and the limited field of view of the surgical camera makes localization with respect to preoperative patient data challenging. Computer assisted interventions (CAI) can integrate additional information during the operation to help the surgeon, and knowing the 3D position and orientation of the surgical instruments during surgery is a critical CAI element. The instrument pose can additionally be used in robotic surgery to provide control enhancements with dynamic motion constraints, or to detect tool-tissue interactions and provide force feedback [13]. Image-based methods can potentially estimate instrument pose in the reference frame of the laparoscope without requiring electromagnetic or optical sensors [6,12]. This usually involves extracting image features such as edges, points or regions and then solving alignment cost functions which measure the agreement with parametrized models of the tool [10]. Gradient-based methods
are often preferred, but it is challenging to develop cost functions that do not easily become trapped in local minima and fail to find the correct pose [1,15]. Gradient-free optimization over color and texture features has been used for articulated instruments [9], but the chosen cost can be complex to optimize, resulting in slow and often inaccurate solutions. Another alternative is to use Random Forests (RF) to detect instrument parts [14], which gives promising results at low computational cost but has only been demonstrated as a 2D tracking method. Using robot kinematic information from the joint encoders has
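To make the alignment idea above concrete, the following toy Python sketch (our illustration, not the method of this paper: the rectangular silhouette model, the synthetic probability map and all parameter values are assumptions) aligns a 2D instrument silhouette to a per-pixel foreground probability map using SciPy's gradient-free Nelder-Mead optimizer.

import numpy as np
from scipy.optimize import minimize

H, W = 120, 160  # image size in pixels (arbitrary toy values)

def render_silhouette(pose, length=60, width=10):
    # Binary mask of a rectangle centred at (x0, y0) and rotated by theta:
    # a crude stand-in for the projected silhouette of a tool model.
    x0, y0, theta = pose
    ys, xs = np.mgrid[0:H, 0:W]
    c, s = np.cos(theta), np.sin(theta)
    u = c * (xs - x0) + s * (ys - y0)   # pixel coordinates in the tool frame
    v = -s * (xs - x0) + c * (ys - y0)
    return (np.abs(u) < length / 2) & (np.abs(v) < width / 2)

def region_cost(pose, prob_map):
    # Negative agreement between the rendered silhouette and a per-pixel
    # foreground probability map: reward foreground probability inside the
    # silhouette and background probability outside it.
    mask = render_silhouette(pose)
    return -(prob_map[mask].sum() + (1.0 - prob_map[~mask]).sum())

# Synthetic "segmentation" of a tool at a known pose, plus noise.
np.random.seed(0)
true_pose = (80.0, 60.0, 0.4)
prob_map = np.clip(render_silhouette(true_pose) + 0.1 * np.random.rand(H, W),
                   0.0, 1.0)

# The cost is piecewise constant in the pose (pixels flip discretely), so a
# gradient-free method such as Nelder-Mead is a natural choice here.
res = minimize(region_cost, x0=np.array([70.0, 55.0, 0.2]),
               args=(prob_map,), method="Nelder-Mead")
print("estimated pose:", res.x)  # should land near (80, 60, 0.4)

In a real pipeline the silhouette would come from projecting a 3D instrument model under the camera calibration, and the probability map from a learned pixel classifier; that is precisely the setting in which the local minima and slow convergence noted above become problematic.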