Deep learning-based image analysis methods for brightfield-acquired multiplex immunohistochemistry images
RESEARCH
Open Access
Danielle J. Fassler1†, Shahira Abousamra2†, Rajarsi Gupta3, Chao Chen3, Maozheng Zhao2, David Paredes2, Syeda Areeha Batool3, Beatrice S. Knudsen4, Luisa Escobar-Hoyos1,5, Kenneth R. Shroyer1, Dimitris Samaras2, Tahsin Kurc3 and Joel Saltz3*
Abstract

Background: Multiplex immunohistochemistry (mIHC) permits the labeling of six or more distinct cell types within a single histologic tissue section. The classification of each cell type requires detection of the unique colored chromogens localized to cells expressing biomarkers of interest. The most comprehensive and reproducible way to evaluate such slides is to apply digital pathology and image analysis pipelines to whole-slide images (WSIs). Our suite of deep learning tools quantitatively evaluates the expression of six biomarkers in mIHC WSIs. These methods address the current lack of readily available tools for evaluating more than four biomarkers and circumvent the need for specialized instrumentation to spectrally separate different colors. The use case for our methods is a study investigating tumor-immune interactions in pancreatic ductal adenocarcinoma (PDAC) with a customized mIHC panel.

Methods: Six differently colored chromogens were used to label T-cells (CD3, CD4, CD8), B-cells (CD20), macrophages (CD16), and tumor cells (K17) in formalin-fixed paraffin-embedded (FFPE) PDAC tissue sections. We leveraged pathologist annotations to develop complementary deep learning-based methods: (1) ColorAE, a deep autoencoder that segments stained objects based on color; (2) U-Net, a convolutional neural network (CNN) trained to segment cells based on color, texture, and shape; and (3) ColorAE:U-Net, ensemble methods that combine the outputs of both ColorAE and U-Net. We assessed the performance of our methods using structural similarity and DICE score to evaluate the segmentation results of ColorAE against traditional color deconvolution, and F1 score, sensitivity, positive predictive value, and DICE score to evaluate predictions from ColorAE, U-Net, and the ColorAE:U-Net ensemble methods against pathologist-generated ground truth. We then used the prediction results for spatial analysis (nearest neighbor).
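To make the evaluation and spatial analysis steps named above concrete, the sketch below shows how a DICE score between a predicted and a pathologist-generated binary mask, and nearest-neighbor distances between two detected cell populations, can be computed with NumPy and SciPy. This is an illustrative assumption, not the authors' released pipeline; the function names `dice_score` and `nearest_neighbor_distances` and all variable names are hypothetical.

```python
"""Minimal sketch of two quantities referenced in the abstract:
DICE overlap between segmentation masks and nearest-neighbor distances
between detected cell centroids. Illustrative only."""
import numpy as np
from scipy.spatial import cKDTree


def dice_score(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-8) -> float:
    """DICE = 2|A ∩ B| / (|A| + |B|) for two binary segmentation masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)


def nearest_neighbor_distances(source_xy: np.ndarray, target_xy: np.ndarray) -> np.ndarray:
    """For each centroid in source_xy (N x 2), return the distance to the
    closest centroid in target_xy (M x 2), e.g. each CD8+ T-cell to its
    nearest K17+ tumor cell."""
    tree = cKDTree(target_xy)
    distances, _ = tree.query(source_xy, k=1)
    return distances


if __name__ == "__main__":
    # Toy data standing in for model output and pathologist ground truth.
    rng = np.random.default_rng(0)
    pred = rng.random((256, 256)) > 0.5
    gt = rng.random((256, 256)) > 0.5
    print("DICE:", dice_score(pred, gt))

    t_cells = rng.random((50, 2)) * 1000      # hypothetical CD8+ centroids (pixels)
    tumor_cells = rng.random((80, 2)) * 1000  # hypothetical K17+ centroids (pixels)
    print("Mean NN distance:", nearest_neighbor_distances(t_cells, tumor_cells).mean())
```

In practice the masks would come from ColorAE, U-Net, or their ensemble, and the centroids from connected components of those masks; the random arrays here are placeholders.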
* Correspondence: [email protected]
† Danielle J. Fassler and Shahira Abousamra contributed equally to this work.
3 Department of Biomedical Informatics, Stony Brook University Renaissance School of Medicine, 101 Nicolls Rd, Stony Brook 11794, USA
Full list of author information is available at the end of the article
© The Author(s) 2020. This article is licensed under a Creative Commons Attribution 4.0 International License.