Primary supervisor

Waqar Hussain

Co-supervisors


Disruptive technologies such as artificial intelligence (AI) systems can have unintended negative social and business consequences if not implemented with care. Specifically, faulty or biased AI applications may harm individuals, risk compliance and governance breaches, and damage the corporate brand. An example of the potential harm inflicted on people is the case of Robert Williams, who was arrested in the US in 2020 because of a biased, insufficiently trained facial recognition system (see the New York Times link below).

Student cohort

Single Semester
Double Semester

Aim/outline

The AI literature establishes that, in order to address the ethical issues of AI, software developers need systematic processes and mechanisms to improve the way AI systems are developed. Given the popularity and effectiveness of checklists in improving the state of practice and safety in healthcare, manufacturing, and aviation, checklists have also gained popularity as a means to improve the risk-prone process of AI system development.

The objective of this project is to

  1. critically analyse existing ethical AI checklists and questionnaire-based guidelines,
  2. identify their strengths and weaknesses,
  3. design an improved (digital) checklist for managing AI training data that addresses the identified shortcomings, and
  4. evaluate the effectiveness of the proposed checklist through developer interviews or focus groups.

The focus of this project will be on the ethical issues of the data collection and processing activities required to train machine learning algorithms.
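To make the idea of a digital checklist for training data concrete, the sketch below shows one possible Python data model. The class names (ChecklistItem, TrainingDataChecklist), their fields, and the example questions are illustrative assumptions for discussion only; they are not deliverables or designs prescribed by this project.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: all names and questions below are assumptions,
# not part of the project brief.

@dataclass
class ChecklistItem:
    """One ethical question a team answers before using a dataset."""
    question: str            # the ethical question to be addressed
    stage: str               # e.g. "collection" or "processing"
    answered: bool = False   # whether the team has addressed the question
    evidence: str = ""       # note or link documenting how it was addressed

@dataclass
class TrainingDataChecklist:
    """A minimal digital checklist scoped to training-data activities."""
    project: str
    items: List[ChecklistItem] = field(default_factory=list)

    def open_items(self) -> List[ChecklistItem]:
        """Return the questions that still need attention."""
        return [item for item in self.items if not item.answered]

if __name__ == "__main__":
    # Hypothetical questions drawn from common ethical-AI concerns.
    checklist = TrainingDataChecklist(
        project="facial-recognition-prototype",
        items=[
            ChecklistItem("Was informed consent obtained for the collected data?", "collection"),
            ChecklistItem("Are demographic groups represented adequately for the use case?", "collection"),
            ChecklistItem("Is the labelling process documented and auditable?", "processing"),
        ],
    )
    for item in checklist.open_items():
        print(f"[{item.stage}] {item.question}")
```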


Required knowledge

Solid programming skills in machine learning (ML)

Able to work with, critically review, and analyse the machine learning and ethical AI literature

Basic knowledge of qualitative analysis