Learning-Based Shape Model Matching: Training Accurate Models with Minimal Manual Input
Recent work has shown that statistical model-based methods lead to accurate and robust results when applied to the segmentation of bone shapes from radiographs. To achieve good performance, model-based matching systems require large numbers of annotations
Introduction
There are many research questions which can be answered by analysing the large databases of tens of thousands of medical images which are now becoming available, both from large studies and from growing electronic archives of clinical data. A key first step in such analysis is often locating the outlines of the structures of interest. Recently, it has been shown that robust and accurate annotations can be obtained automatically using shape-based model matching algorithms [2,4,8,9]. Unfortunately, building such models requires accurate annotation of large numbers of points on several hundred images. This creates a significant bottleneck, hampering the ability to analyse large datasets efficiently. This paper addresses the problem of building effective shape model matching systems from large sets of images with as little manual intervention as possible.

© Springer International Publishing Switzerland 2015. N. Navab et al. (Eds.): MICCAI 2015, Part III, LNCS 9351, pp. 580–587, 2015. DOI: 10.1007/978-3-319-24574-4_69
We take a pragmatic but effective approach: a small number of manually annotated points on each image initialises a dense groupwise non-rigid registration (GNR) [10] that establishes correspondences across the set of images, allowing us to propagate a dense annotation from one image to all the rest, and then to build from these a detailed model to be matched to new images.

One alternative for generating dense annotations is fully automatic GNR. While this can work well for reasonably homogeneous datasets, problems often arise when images have been gathered from multiple sources. This tends to be the case in large epidemiological studies, where images are collected retrospectively, from various centres and without consistent imaging protocols. Furthermore, even when the registration works well, models built from such automated correspondences are generally less effective than those built from careful manual annotations, because the registration tends to smooth out details and to miss unusual variations. Fully automatic GNR often works well on examples close to the average but performs poorly on outliers. Most registration failures are of one of two types: (a) gross failures, where the registration has converged to entirely the wrong place, or (b) localised failures, where one part of the object has been poorly matched. Both types can be substantially mitigated if the user supplies a small number of manually annotated landmarks, integrating human expertise into the annotation procedure (see e.g. [6,11]).

Although registration techniques are widely used to establish correspondences and build shape models (particularly in 3D data) [1,3,5], we make a number of key contributions. We show, on the challenging but representative datasets that we examine, that (i) model matching systems built from GNR initialised only with the correct initial pose (from two manually annotated po
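The pipeline described above — propagate a dense reference annotation through registration-derived correspondences, then summarise the resulting shapes with a linear statistical model — can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation: the function names, the callable `warp` interface, and the PCA-based point distribution model are assumptions standing in for the paper's GNR machinery.

```python
import numpy as np

def propagate_annotation(dense_pts, warp):
    """Map dense reference landmarks into a target image using a
    deformation `warp` (here assumed to be a callable point -> point
    obtained from the groupwise registration)."""
    return np.array([warp(p) for p in dense_pts])

def build_shape_model(shapes, var_kept=0.98):
    """Build a linear point-distribution model from aligned shapes.

    `shapes` is (n_examples, 2 * n_points): flattened (x, y) coords.
    Returns the mean shape, the retained modes of variation, and
    the variance explained by each retained mode."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    # PCA via SVD of the centred data matrix.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s**2 / (len(shapes) - 1)
    cum = np.cumsum(var) / var.sum()
    t = np.searchsorted(cum, var_kept) + 1   # modes kept
    return mean, Vt[:t], var[:t]

def reconstruct(mean, modes, b):
    """Generate a shape from model parameters b: x = mean + b @ modes."""
    return mean + b @ modes
```

In this formulation, matching the model to a new image amounts to searching for the pose and parameter vector `b` whose reconstructed shape best explains the image evidence, which is where the quality of the training correspondences directly determines accuracy.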