Primary supervisor
Guanliang Chen

Research area
Data Science and Artificial Intelligence

As education systems increasingly adopt AI to support teaching and learning, the automation of assessment and feedback processes has emerged as a critical area of innovation. Large-scale learning environments—such as MOOCs, online degrees, and data-intensive Learning Management Systems—necessitate scalable solutions to provide timely, high-quality feedback. However, existing AI-powered assessment systems often raise ethical, pedagogical, and fairness concerns, including issues of bias, explainability, and learner agency. The rise of Generative AI and large language models (LLMs) further intensifies these challenges, calling for the development of responsible AI systems that are transparent, trustworthy, and aligned with human values in educational contexts.
This PhD project aims to design, develop, and evaluate responsible AI technologies that enhance the scalability, equity, and pedagogical soundness of educational assessment and feedback. The overarching goal is to investigate how AI systems can support formative assessment and personalised feedback while ensuring fairness, accountability, and transparency. The research will explore a combination of algorithmic design, human–AI interaction, and empirical evaluation in authentic learning settings. Potential research questions include:
- How can AI systems be designed to deliver high-quality, personalised feedback at scale while preserving learner agency and promoting self-regulated learning?
- What technical and sociotechnical mechanisms can be implemented to detect and mitigate bias in automated assessment systems?
- How can explainability and transparency be operationalised in AI-powered educational tools to support student trust and uptake?
- What are the ethical trade-offs involved in automating assessment, and how can responsible design frameworks guide these decisions?
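To give a concrete flavour of the bias-detection question above, one simple mechanism is to audit an automated grader's decisions for demographic parity, i.e., whether positive outcomes are distributed evenly across learner groups. The sketch below is purely illustrative: the function name, the group labels, and all decision data are invented for this example, not part of any existing system.

```python
# Minimal, hypothetical sketch of one bias-detection check for an
# automated assessment system: the demographic parity gap, i.e. the
# largest difference in positive-decision rates between learner groups.
# All data and names below are invented for illustration.

def demographic_parity_gap(decisions, groups):
    """Return the largest gap in positive-decision rate between groups.

    decisions: list of 0/1 automated outcomes (1 = pass/positive)
    groups:    parallel list of group labels for each learner
    """
    counts = {}
    for d, g in zip(decisions, groups):
        total, positive = counts.get(g, (0, 0))
        counts[g] = (total + 1, positive + d)
    rates = {g: pos / tot for g, (tot, pos) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Invented example: a grader that passes group "A" (4 of 5) far more
# often than group "B" (1 of 5), giving a gap of 0.8 - 0.2 = 0.6.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(round(demographic_parity_gap(decisions, groups), 2))
```

A gap near zero suggests similar outcome rates across groups; a large gap flags the system for closer inspection. In practice, a single metric like this is only a starting point — the research questions above concern which metrics are appropriate, and how detection connects to mitigation and to responsible design more broadly.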
Required knowledge
This project is open to candidates from diverse academic backgrounds, including computer science, data science, the learning sciences, or educational technology. While prior experience with programming (e.g., Python), machine learning, or educational data is beneficial, it is not a strict requirement; the project provides ample opportunities to develop these skills over time. What matters most is a strong interest in responsible AI, curiosity about educational systems, and a willingness to learn. Candidates who enjoy interdisciplinary research and are motivated to explore the ethical and technical dimensions of AI in education are particularly encouraged to apply.