Multi-atlas Based Segmentation Editing with Interaction-Guided Constraints



1 Department of Radiology and BRIC, UNC at Chapel Hill, NC 27599, USA
2 Department of Computer Science, UNC at Chapel Hill, NC 27599, USA

Abstract. We propose a novel multi-atlas based segmentation method to address the editing scenario, in which an incomplete segmentation is given along with a set of training label images. Unlike previous multi-atlas based methods, which depend solely on appearance features, we incorporate interaction-guided constraints to find appropriate training labels and derive their voting weights. Specifically, we divide user interactions, provided on erroneous parts, into multiple local interaction combinations, and then locally search for the training label patches that best match each interaction combination as well as the previous segmentation. We then estimate the new segmentation by fusing the selected label patches, with weights defined by their respective distances to the interactions. Since the label patches are drawn from different combinations, our method can capture various shape changes even with limited training labels and few user interactions. Because our method needs neither image information nor expensive learning steps, it can be conveniently applied to most editing problems. To demonstrate its performance, we apply our method to editing the segmentations of three challenging data sets: prostate CT, brainstem CT, and hippocampus MR. The results show that our method outperforms existing editing methods on all three data sets.
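As a rough illustration of the interaction-guided label fusion described above, the following Python sketch edits a 2D binary segmentation given user clicks and pre-aligned training label images. It is a simplified stand-in, not the paper's algorithm: all names and parameters (edit_segmentation, patch_radius, sigma) are hypothetical, it selects a single best-matching label patch per interaction rather than forming interaction combinations, and it assumes the atlas labels are already registered to the target.

```python
import numpy as np

def edit_segmentation(prev_seg, interactions, atlas_labels,
                      patch_radius=5, sigma=10.0):
    """Hypothetical sketch of interaction-guided label fusion.

    prev_seg     : (H, W) binary array, the initial (erroneous) segmentation
    interactions : list of ((y, x), label) user clicks marking corrections,
                   with label 1 (foreground) or 0 (background)
    atlas_labels : list of (H, W) binary training label images, assumed
                   pre-aligned to the target
    """
    H, W = prev_seg.shape
    votes = np.zeros((H, W))
    weights = np.zeros((H, W))

    for (cy, cx), lab in interactions:
        y0, y1 = max(cy - patch_radius, 0), min(cy + patch_radius + 1, H)
        x0, x1 = max(cx - patch_radius, 0), min(cx + patch_radius + 1, W)
        ref = prev_seg[y0:y1, x0:x1]

        # Pick the atlas patch that respects the user label at the click
        # and best agrees with the previous segmentation around it.
        best, best_score = None, -np.inf
        for atlas in atlas_labels:
            if atlas[cy, cx] != lab:
                continue  # must be consistent with the interaction
            cand = atlas[y0:y1, x0:x1]
            score = (cand == ref).mean()  # agreement with previous seg
            if score > best_score:
                best, best_score = cand, score
        if best is None:
            continue

        # Vote with a weight that decays with distance to the interaction.
        yy, xx = np.mgrid[y0:y1, x0:x1]
        w = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
        votes[y0:y1, x0:x1] += w * best
        weights[y0:y1, x0:x1] += w

    out = prev_seg.astype(float).copy()
    m = weights > 0
    out[m] = votes[m] / weights[m]
    return (out >= 0.5).astype(np.uint8)
```

Here the Gaussian of width sigma plays the role of the distance-based voting weight; pixels far from every interaction simply keep their previous labels.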

1 Introduction

Automatic segmentation methods have been proposed for various applications. However, these methods often generate erroneous results in some areas of an image due to difficulties such as unclear target boundaries, large appearance variations, and shape changes. If errors could be edited with a few user annotations after automated segmentation, the total segmentation time could be significantly reduced. Many interactive segmentation methods [1,2] have been proposed to address the editing problem. These methods can produce improved results within a few seconds by using explicit user guidance and simple appearance models. However, it is difficult to apply them directly to the editing problem when only limited annotations on a small number of erroneous parts are allowed. For example, an appearance model constructed from a few interactions is often too limited to produce a reliable result, as shown in Fig. 1(b).


Several methods have been proposed to incorporate high-level information from training data into the editing framework to improve performance. Schwarz et al. [3] learned an active shape model (ASM) and incorporated it into an editing framework. When an incorrect landmark point is edited by the user, the adjacent landmark points are modified accordingly and regularized b