Review learning: Real world validation of privacy preserving continual learning across medical institutions
Jaesung Yoo and 11 other authors
Abstract: When a deep learning model is trained sequentially on different datasets, it often forgets the knowledge learned from previous data, a problem known as catastrophic forgetting. This degrades the model's performance across diverse datasets, a critical issue for privacy-preserving deep learning (PPDL) applications based on transfer learning (TL). To overcome this, we introduce "review learning" (RevL), a low-cost continual learning algorithm for diagnosis prediction using electronic health records (EHR) within a PPDL framework. RevL generates data samples from the model itself, which are used to review knowledge from previous datasets. Six simulated institutional experiments and one real-world experiment involving three medical institutions were conducted to validate RevL on three binary classification tasks using EHR data. In the real-world experiment with data from 106,508 patients, the mean global area under the receiver operating characteristic curve was 0.710 for RevL and 0.655 for TL. These results demonstrate RevL's ability to retain previously learned knowledge and its effectiveness in real-world PPDL scenarios. Our work establishes a realistic pipeline for PPDL research based on model transfers across institutions and highlights the practicality of continual learning in real-world medical settings using private EHR data.
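To make the abstract's core idea concrete, the sketch below illustrates one plausible reading of review learning: pseudo-samples are synthesized from the trained model itself (here by gradient ascent on random inputs toward a target label) and mixed into training on the next institution's data so that prior knowledge is rehearsed without sharing raw records. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation; the function names, hyperparameters, and the inversion-based synthesis step are all hypothetical illustrations.

import torch
import torch.nn as nn

def synthesize_review_samples(model, n_samples, n_features, target, steps=100, lr=0.1):
    # Hypothetical synthesis step: optimize random tabular inputs so the
    # frozen classifier assigns them the desired binary label. Assumes the
    # model maps (N, n_features) floats to (N, 1) logits.
    model.eval()
    x = torch.randn(n_samples, n_features, requires_grad=True)
    y = torch.full((n_samples, 1), float(target))
    opt = torch.optim.Adam([x], lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return x.detach(), y

def train_with_review(model, new_loader, n_features, epochs=1, lr=1e-3):
    # Synthesize a balanced "review" set before the weights move on to the
    # new institution's data, then train on both jointly.
    x_pos, y_pos = synthesize_review_samples(model, 64, n_features, 1)
    x_neg, y_neg = synthesize_review_samples(model, 64, n_features, 0)
    review_x = torch.cat([x_pos, x_neg])
    review_y = torch.cat([y_pos, y_neg])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    model.train()
    for _ in range(epochs):
        for xb, yb in new_loader:  # yb assumed shape (N, 1), float labels
            opt.zero_grad()
            # New-data loss plus a rehearsal term on synthesized samples.
            loss = loss_fn(model(xb), yb) + loss_fn(model(review_x), review_y)
            loss.backward()
            opt.step()

In a sequential PPDL pipeline of the kind the paper describes, each institution would receive only the model weights, run something like train_with_review on its local EHR data, and pass the updated model onward; no patient data leaves the institution.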
Submission history
From: Jaesung Yoo
[v1] Mon, 17 Oct 2022 19:54:38 UTC (1,555 KB)
[v2] Thu, 26 Jun 2025 04:44:25 UTC (4,000 KB)