
Federated Learning In Medical Imaging

Jayashree Kalpathy-Cramer, Holger Roth, Michael Zephyr

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 01:36:51
28 Mar 2022

Artificial intelligence (AI) and machine learning (ML) are transformative technologies for healthcare. They are being used across the healthcare spectrum, from improving image acquisition and workflows to diagnosis, detection, and assessment of treatment response. Recent advances in deep learning have come about through a confluence of advances in hardware, computational algorithms, and access to large amounts of (annotated) data. These algorithms have demonstrated extraordinary performance in the analysis of biomedical imaging data, including in radiology, pathology, ophthalmology, and oncology.

Despite such success, deep learning algorithms in medical imaging have also been shown to be brittle, performing poorly on data that differ from what they were trained on. Data heterogeneity can arise from differences in image acquisition, patient populations, geography, and disease prevalence and presentation. Such heterogeneity poses challenges for building robust algorithms. One way to address this challenge is to ensure that the training dataset is diverse and representative, ideally drawn from multi-institutional data sources. In healthcare, however, access to such large amounts of multi-institutional data can be difficult due to concerns around patient privacy and data sharing, regulatory affairs, and technical considerations around data movement, replication, and storage.

Recently, distributed learning approaches such as federated learning have been proposed to address some of these challenges. Federated learning allows learning from multi-institutional datasets without the need for data sharing. In classical federated learning, data reside at a consortium of sites, each with its own compute capabilities. Model architectures and common data elements are agreed upon ahead of time. Training occurs in rounds: each site (client) trains a model locally and sends its updated model weights to a central server. The central server aggregates the model weights and sends an updated model back to all clients. This process continues until convergence is achieved. Federated learning has been shown to improve both global and local model performance. Other configurations of federated learning include split learning, swarm learning, and cyclical weight transfer.
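The round structure described above (local training, weight upload, server-side aggregation, broadcast) can be sketched as a minimal federated averaging loop. This is an illustrative sketch only: the linear-regression clients, the function names, and the choice of weighting by local dataset size are assumptions for the example, not details from the tutorial.

```python
# Minimal federated averaging (FedAvg) sketch with simulated clients.
# Each "site" holds private (X, y) data; only model weights move.
import numpy as np

def local_train(weights, client_data, lr=0.1):
    # Hypothetical local update: one gradient step of linear
    # least-squares regression on the client's private data.
    X, y = client_data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fedavg_round(global_weights, clients):
    # Each client trains locally on the broadcast weights; the
    # server then averages the returned weights, weighted by
    # each client's local dataset size.
    updates, sizes = [], []
    for data in clients:
        updates.append(local_train(global_weights.copy(), data))
        sizes.append(len(data[1]))
    return np.average(updates, axis=0, weights=np.asarray(sizes, float))

# Usage: three simulated sites with differently sized datasets.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(200):  # training rounds until (approximate) convergence
    w = fedavg_round(w, clients)
```

After enough rounds the aggregated model approaches the weights that fit all sites' data, even though no site ever shared its raw data with the server or with other sites.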
