Combating Machine Bias in Teaching and Learning

Primary supervisor

Guanliang Chen

Education is undoubtedly one of the most fundamental means of personal and professional development. Given its importance, researchers and practitioners alike have, over the past decades, applied various technologies to build numerous educational systems and tools that facilitate teaching and learning. However, such systems and tools have been widely shown to display bias against certain groups of students. For instance, female students, compared to their male counterparts, are often predicted to be less likely to enroll in STEM courses and more likely to fail them. Such inaccurate and unfair predictions can harm minority and disadvantaged groups of students, which, in turn, hinders the social and economic development of society as a whole.

Therefore, this project aims to (i) investigate the cause of such biased predictions; and (ii) develop more accurate and fair systems and tools to facilitate teaching and learning at a large scale. Potential research questions in this project include but are not limited to:

  • To what extent do existing educational predictive models display algorithmic bias (i.e., machine bias) towards students of different demographic attributes?
  • What are the causes of algorithmic bias across different educational settings?
  • How can techniques such as Class Balancing and Active Learning be used to alleviate or even eliminate algorithmic bias?
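To make the first and third questions concrete, the sketch below shows one common way to quantify algorithmic bias and one simple mitigation. It assumes a toy, hypothetical dataset of per-group outcomes, uses the demographic-parity difference (the gap in positive-prediction rates between groups) as the bias measure, and applies random oversampling as an elementary form of class balancing; all names and numbers are illustrative, not drawn from any real educational system.

```python
from collections import Counter
import random

random.seed(0)

# Hypothetical toy predictions: (group, predicted_pass) pairs.
# 30% of group "F" but 60% of group "M" are predicted to pass.
records = [("F", 1)] * 30 + [("F", 0)] * 70 + [("M", 1)] * 60 + [("M", 0)] * 40

def positive_rate(rows, group):
    """Fraction of a group receiving the positive prediction."""
    labels = [y for (g, y) in rows if g == group]
    return sum(labels) / len(labels)

# Demographic-parity difference: gap in positive rates between groups.
gap = positive_rate(records, "M") - positive_rate(records, "F")
print(round(gap, 2))  # 0.3 — the model favours group "M"

def oversample(rows):
    """Random oversampling: duplicate examples of the minority
    outcome class until both classes are equally frequent."""
    counts = Counter(y for (_, y) in rows)
    target = max(counts.values())
    out = list(rows)
    for label, n in counts.items():
        pool = [r for r in rows if r[1] == label]
        out += random.choices(pool, k=target - n)
    return out

balanced = oversample(records)
```

A model retrained on `balanced` no longer sees the positive outcome as rare, which in practice often reduces (though rarely eliminates) the per-group rate gap; richer approaches oversample per group rather than per class.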

Required knowledge

  • Strong programming skills (e.g., Python)
  • Basic knowledge in Data Science, Natural Language Processing, and Machine Learning
  • The following can be a plus: (i) prior experience in applying Deep Learning models; (ii) strong academic writing skills; and (iii) strong motivation to pursue quality academic publications.

Learn more about minimum entry requirements.