MULTI-SCALE EXPLAINABLE FEATURE LEARNING FOR PATHOLOGICAL IMAGE ANALYSIS USING CONVOLUTIONAL NEURAL NETWORKS
Kazuki Uehara, Masahiro Murakawa, Hirokazu Nosato, Hidenori Sakanashi
The use of computer-assisted diagnosis (CAD) systems for pathological image analysis constitutes an important research topic. Such systems should be accurate, and their decisions should be explainable to ensure reliability. In this paper, we present an explainable diagnosis method based on convolutional neural networks (CNNs). The method allows us to interpret the basis of the decisions made by the CNN from two perspectives, namely statistics and visualization. For the statistical explanation, the method constructs dictionaries of representative pathological features over multiple scales from the training data. It performs diagnoses based on the occurrence and importance of items in the dictionaries to rationalize its decisions. We introduce a vector quantization scheme into the CNN to enable it to construct the feature dictionary. For the visual interpretation, the method provides images of the learned features in the dictionary by decoding them from a high-dimensional feature space back to the pathological image space. The experimental results showed that the proposed network learned pathological features that contributed to the diagnosis, and the method achieved an area under the receiver operating characteristic curve (AUC) of approximately 0.89 for detecting atypical tissue in pathological images of the uterine cervix using these features.
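To make the dictionary-construction idea concrete, the sketch below shows one common way to attach a vector quantization (codebook) layer to a CNN encoder, in the style of VQ-VAE. This is a minimal illustrative assumption, not the authors' implementation: the class name, codebook size, feature dimension, and loss weighting are all hypothetical, and only the general mechanism (nearest-codeword assignment with a straight-through gradient) reflects the vector quantization scheme described in the abstract.

```python
# Minimal sketch of a vector-quantization (codebook) layer for a CNN encoder.
# Hyperparameters and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=128, code_dim=64, beta=0.25):
        super().__init__()
        # Codebook: each row acts as one "dictionary item" of learned features.
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta

    def forward(self, z):
        # z: encoder feature map of shape (B, C, H, W), with C == code_dim.
        b, c, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, c)  # (B*H*W, C)
        # Squared Euclidean distance from each feature vector to every code.
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2 * flat @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(1))
        indices = dist.argmin(dim=1)                 # nearest dictionary item
        z_q = self.codebook(indices).view(b, h, w, c).permute(0, 3, 1, 2)
        # Codebook loss + commitment loss (stop-gradient via detach).
        loss = F.mse_loss(z_q, z.detach()) + self.beta * F.mse_loss(z, z_q.detach())
        # Straight-through estimator so gradients reach the encoder.
        z_q = z + (z_q - z).detach()
        return z_q, indices.view(b, h, w), loss
```

Under this reading, the per-image histogram of assigned code indices would provide the occurrence statistics used for the statistical explanation, and a decoder mapping codebook vectors back to image space would provide the visual interpretation of each dictionary item.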