Different pretraining/finetuning strategies and how they impact calibration and uncertainty

Supervisor

Suitable for

MSc in Advanced Computer Science
Computer Science, Part B
Mathematics and Computer Science, Part C
Computer Science and Philosophy, Part C
Computer Science, Part C

Abstract

Medical data acquired in various modalities (CT, MRI, photography) and of various anatomical regions are used in clinical decision making. Increasingly, machine learning methods are applied to classification and segmentation tasks on these data. Yet neural networks are known to be miscalibrated, often producing overconfident uncertainty estimates. The goal of this project is to evaluate the impact of different pretraining strategies (e.g., contrastive learning, self-supervised learning) and different fine-tuning strategies (e.g., data augmentation, test-time augmentation, label smoothing) on model calibration.
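Calibration is commonly quantified with the Expected Calibration Error (ECE), which bins predictions by confidence and measures the gap between average confidence and accuracy in each bin. A minimal sketch of this metric (the function name and toy data are illustrative, not part of the project specification):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: the bin-weight-averaged absolute gap between mean
    confidence and accuracy over equal-width confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# Toy example: 80% confidence with 80% accuracy is perfectly calibrated,
# so the ECE is (numerically close to) zero.
conf = np.full(10, 0.8)
corr = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
print(expected_calibration_error(conf, corr))
```

An overconfident model (e.g., the same accuracy but 0.99 confidence) would instead yield an ECE near 0.19, which is the kind of gap the pretraining and fine-tuning strategies above aim to reduce.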