IOSUDA: an unsupervised domain adaptation with input and output space alignment for joint optic disc and cup segmentation
Chonglin Chen¹ · Gang Wang¹,²
Accepted: 16 September 2020
© Springer Science+Business Media, LLC, part of Springer Nature 2020

¹ School of Statistics and Management, Shanghai University of Finance and Economics, Shanghai 200433, China
² Institute of Data Science and Statistics, Shanghai University of Finance and Economics, Shanghai 200433, China
Gang Wang: [email protected]
Abstract
The segmentation of the optic disc (OD) and the optic cup (OC) is an important step in glaucoma diagnosis. Conventional deep neural network models achieve good performance but degrade when facing domain shift. In this paper, we propose a novel unsupervised domain adaptation framework, called Input and Output Space Unsupervised Domain Adaptation (IOSUDA), to reduce the performance degradation in joint OD and OC segmentation. Our framework aligns both the input and output spaces. Specifically, we extract the shared content features and the style features of each domain through image translation. The shared content features are fed into the segmentation network, and we then conduct adversarial learning to promote the similarity of segmentation maps from different domains. Results of comparative experiments on three different fundus image datasets show that IOSUDA outperforms the other tested methods in unsupervised domain adaptation. The code of the proposed model is available at https://github.com/EdisonCCL/IOSUDA.

Keywords Adversarial learning · Fundus images · Image translation · Joint optic disc and cup segmentation · Unsupervised domain adaptation
1 Introduction

Optic nerve head examination is an important step in diagnosing glaucoma, typically by checking the cup-to-disc ratio (CDR) [27]. To assist ophthalmologists in screening for glaucoma, a model needs to segment the optic disc (OD) and optic cup (OC) regions accurately. For OD and OC segmentation, quite a few models, such as M-Net [9], perform well when the distribution of the test set is consistent with that of the training set. However, the generalization of these deep models still needs improvement: when models trained on a source domain are applied to different target domains, performance tends to degrade, i.e., domain shift occurs in OD and OC segmentation [46].
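To illustrate why accurate OD/OC masks matter downstream, the sketch below computes one common form of the CDR, the vertical cup-to-disc ratio, from binary segmentation masks. The mask names and the simple row-count definition of vertical diameter are assumptions for this example, not the measurement protocol of any particular study.

```python
import numpy as np

def vertical_cdr(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from binary 2D masks (1 = structure, 0 = background).

    Minimal sketch: the vertical diameter of each structure is taken as the
    number of image rows it occupies; names and definition are illustrative.
    """
    disc_rows = np.any(disc_mask > 0, axis=1)   # rows containing any optic disc pixel
    cup_rows = np.any(cup_mask > 0, axis=1)     # rows containing any optic cup pixel
    disc_height = int(disc_rows.sum())
    cup_height = int(cup_rows.sum())
    if disc_height == 0:
        raise ValueError("empty optic disc mask")
    return cup_height / disc_height
```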
Many unsupervised domain adaptation methods [7, 20, 38, 54] have been proposed to alleviate domain shift, for example by translating the target domain images into the style of the source domain and then segmenting the translated images with the model trained on the source domain [54]. The studies [7, 20], which rely on invariant features shared by the source and target domains, focus on the input space of the segmentation network. Others [18, 38] focus on the consistency of the output space of the segmentation network through adversarial learning, which encourages the segmentation maps of the source and target domains to share the same spatial and geometric structures.
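To make the output-space idea concrete, the following PyTorch sketch shows a generic adversarial scheme in the spirit of [18, 38]: a discriminator learns to tell source segmentation maps from target ones, while the segmentation network is updated to fool it on target images. The names `seg_net`, `disc_net`, the optimizers, and `lambda_adv` are hypothetical placeholders, and this is not the exact IOSUDA implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical components: seg_net maps an image to per-class logits,
# disc_net maps a softmax segmentation map to patch-wise real/fake logits.
def output_space_adaptation_step(seg_net, disc_net, seg_opt, disc_opt,
                                 src_img, src_label, tgt_img, lambda_adv=0.001):
    bce = nn.BCEWithLogitsLoss()

    # 1) Supervised segmentation loss on the labeled source domain.
    src_pred = seg_net(src_img)
    seg_loss = F.cross_entropy(src_pred, src_label)

    # 2) Adversarial loss: push target segmentation maps to look "source-like".
    tgt_pred = seg_net(tgt_img)
    d_tgt = disc_net(F.softmax(tgt_pred, dim=1))
    adv_loss = bce(d_tgt, torch.ones_like(d_tgt))   # try to fool the discriminator

    seg_opt.zero_grad()
    (seg_loss + lambda_adv * adv_loss).backward()
    seg_opt.step()

    # 3) Discriminator update: source maps labeled 1, target maps labeled 0.
    d_src = disc_net(F.softmax(src_pred.detach(), dim=1))
    d_tgt = disc_net(F.softmax(tgt_pred.detach(), dim=1))
    disc_loss = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))

    disc_opt.zero_grad()
    disc_loss.backward()
    disc_opt.step()
    return seg_loss.item(), adv_loss.item(), disc_loss.item()
```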