SPS
21 Apr 2023

Pathological image analysis must be accurate, and to ensure reliability, it must be possible to verify the basis for the decisions made. In this study, we propose an explainable neural network that can provide the basis for its decisions using a dictionary constructed by the network itself. The network explains a decision by retrieving items from its dictionary that are similar to local regions of the input image. The network learns histological patterns, such as pathological findings, that contribute to its decisions, and constructs the dictionary from representative patterns via multiple instance learning (MIL) and contrastive learning. MIL enables the network to learn histological patterns without pixel-wise annotations, whereas contrastive learning keeps the dictionary items consistent for interpretability. We perform pathological whole-slide image classification on a lung biopsy dataset, and the results show that interpretable dictionary items can be learned. Furthermore, the results show that our method achieves higher diagnostic accuracy than existing methods.
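The abstract names two mechanisms: MIL-style aggregation of patch features into a slide-level decision, and explanation by retrieving the dictionary items most similar to local regions. The paper's actual architecture is not given here, so the following is only a minimal NumPy sketch of those two ingredients under assumed shapes; the function names (`mil_attention_pool`, `retrieve_dictionary_items`) and the simple softmax-attention pooling are illustrative choices, not the authors' implementation.

```python
import numpy as np

def mil_attention_pool(patch_feats, w):
    """Aggregate patch embeddings (n_patches, dim) into one slide-level
    feature via softmax attention -- a common MIL pooling scheme."""
    scores = patch_feats @ w                      # one score per patch
    a = np.exp(scores - scores.max())             # numerically stable softmax
    a /= a.sum()                                  # attention weights sum to 1
    return a @ patch_feats, a                     # weighted slide feature

def retrieve_dictionary_items(patch_feat, dictionary, k=3):
    """Return indices and cosine similarities of the k dictionary items
    closest to a patch embedding -- the retrieval step used to explain
    a decision by example."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    q = patch_feat / np.linalg.norm(patch_feat)
    sims = d @ q                                  # cosine similarity per item
    order = np.argsort(sims)[::-1][:k]            # most similar first
    return order, sims[order]

# Usage with random stand-in features (real features would come from
# a trained encoder over whole-slide image patches):
rng = np.random.default_rng(0)
patch_feats = rng.normal(size=(8, 16))            # 8 patches, 16-dim embeddings
slide_feat, attn = mil_attention_pool(patch_feats, rng.normal(size=16))
idx, sims = retrieve_dictionary_items(patch_feats[0], rng.normal(size=(5, 16)))
```

The attention weights indicate which local regions drove the slide-level decision, and the retrieved dictionary items ground that decision in learned representative patterns.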
