
Fairness testing of AI-based systems

Primary supervisor

Aldeida Aleti

Research area

Software Engineering

Machine learning is increasingly used to make decisions that affect people's lives, such as filtering loan applicants, deploying police officers, and informing bail and parole decisions. These systems have been found to introduce and perpetuate discriminatory practices by unintentionally encoding existing human biases and creating new ones. In this project, we will develop automated testing approaches that can verify that machine learning models are not biased. A minimal sketch of one such idea appears below.
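To illustrate the kind of check such testing approaches might automate, the sketch below shows a simple metamorphic fairness test: flip a protected attribute for each individual and count how often the model's prediction changes. The synthetic "loan" data, the scikit-learn model, and the reporting are illustrative assumptions, not the project's actual method or data.

```python
# Illustrative sketch of a metamorphic fairness test (assumed setup, not the
# project's approach): flipping only a protected attribute should not change
# a model's prediction for any individual.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "loan application" data: income, debt, and a binary protected attribute.
n = 1000
income = rng.normal(50, 15, n)
debt = rng.normal(20, 5, n)
group = rng.integers(0, 2, n)              # protected attribute (0 or 1)
X = np.column_stack([income, debt, group])
y = (income - debt + rng.normal(0, 5, n) > 30).astype(int)

model = LogisticRegression().fit(X, y)

# Metamorphic relation: change only the protected attribute, keep everything else.
X_flipped = X.copy()
X_flipped[:, 2] = 1 - X_flipped[:, 2]

changed = model.predict(X) != model.predict(X_flipped)
print(f"Predictions changed for {changed.mean():.1%} of individuals")
# A real fairness test would compare this rate against a tolerance and
# report the failing (discriminatory) inputs to the developer.
```

In practice, automated fairness testing goes well beyond this sketch, for example by searching the input space for discriminatory instances rather than relying on a fixed dataset, but the metamorphic relation above captures the basic test oracle.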

Required knowledge

Software engineering, software testing, statistics, machine learning


Learn more about minimum entry requirements.