
GEMS 2026: Toward Distribution-Robust Medical Imaging Models in the Wild


While deep learning has shown remarkable performance on medical imaging benchmarks, translating these results to real-world clinical deployment remains challenging. Models trained on data from one hospital or population often fail when applied elsewhere due to distribution shifts. Since acquiring new labeled data is often costly or infeasible due to rare diseases, limited expert availability, and privacy constraints, robust solutions are essential. This PhD project will develop methods for building reliable medical imaging models that generalize across distribution shifts without retraining. The project will focus on automated distribution-shift detection and monitoring, invariant and distributionally robust representation learning algorithms, and deployment-time calibration with uncertainty quantification using approaches such as conformal prediction. The outcome will be a robust pipeline for deploying medical imaging models that remain reliable and fair across diverse real-world clinical settings.
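To illustrate the deployment-time calibration component, the sketch below shows split conformal prediction for a classifier, one standard way to obtain prediction sets with a marginal coverage guarantee. This is an illustrative example, not the project's method: the function name, the choice of nonconformity score (one minus the softmax probability of the true class), and the synthetic Dirichlet "model outputs" are all assumptions made here for a self-contained demo.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification (illustrative sketch).

    Nonconformity score: 1 - predicted probability of the true class.
    Returns a boolean array marking, per test point, the classes kept in
    the prediction set, with ~(1 - alpha) marginal coverage."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level over the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(scores, min(q_level, 1.0), method="higher")
    # Keep every class whose score (1 - prob) does not exceed the threshold.
    return test_probs >= 1.0 - qhat

# Toy data standing in for hypothetical model outputs over 3 classes.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=500)
cal_labels = cal_probs.argmax(axis=1)   # pretend the model is accurate
test_probs = rng.dirichlet(np.ones(3), size=4)
sets = conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1)
print(sets.shape)  # one boolean row of included classes per test point
```

In a deployment-monitoring pipeline, the size of these prediction sets is itself a useful signal: under distribution shift, sets tend to grow or coverage degrades, which can trigger an alert without requiring new labels.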

Required knowledge

Strong background in machine/deep learning, computer vision, or applied statistics.

Solid programming skills in Python and experience with deep learning frameworks (e.g., PyTorch or TensorFlow).

Project funding

Other

Learn more about minimum entry requirements.