Generating explanations that involve uncertainty

Primary supervisor

Ingrid Zukerman

Research area

Vision and Language

This PhD project is part of a larger project that aims to explain the uncertainty of Machine Learning (ML) predictions. To this end, we must quantify uncertainty, devise algorithms that explain ML predictions and their uncertainty to different stakeholders, and evaluate the effect of the conveyed information. The expected outcome of this project is an innovative conversational agent that helps users (e.g., patients and loan applicants) understand the predictions of ML models and their uncertainty. The prospective candidate may focus on the ML aspects or the eXplainable AI (XAI) aspects of the project, in consultation with the supervisors.
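As a purely illustrative aside, one of the simplest ways to quantify the uncertainty of an ML prediction is the Shannon entropy of the model's predicted class distribution. The sketch below assumes a generic probabilistic classifier; the `predictive_entropy` function and the loan-approval example are hypothetical and are not part of the project description.

```python
# Minimal sketch: quantifying predictive uncertainty via Shannon entropy.
# Everything here is illustrative; the project does not prescribe a method.
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Entropy (in nats) of a class-probability vector; higher = more uncertain."""
    probs = np.clip(probs, 1e-12, 1.0)  # guard against log(0)
    return float(-np.sum(probs * np.log(probs)))

# Hypothetical loan-approval classifier that is fairly unsure:
probs = np.array([0.55, 0.45])  # [P(approve), P(reject)]
print(f"entropy = {predictive_entropy(probs):.3f} nats")  # near the 2-class maximum
```

A conversational agent of the kind envisaged here could translate such a score into words, e.g., mapping an entropy close to the two-class maximum of ln 2 ≈ 0.693 nats to "the model is nearly undecided".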

Required knowledge

The successful candidate must have expertise in one or both of ML/statistics and XAI, and must be at least conversant with the other area.

