Primary supervisor
Charith Jayasekara

Providing timely, individualised feedback is a persistent challenge in large-scale computing units.
This project investigates how Generative AI models can automatically produce pedagogically aligned, rubric-based feedback on student submissions.
A prototype system will interface with an LLM API (e.g., OpenAI GPT) and generate structured feedback, which will be evaluated for accuracy, usefulness, and tone against educator benchmarks.
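As a concrete but hypothetical starting point, a generation step of this kind could wrap the rubric and a submission into one structured prompt. The sketch below uses the OpenAI Python client; the model name, rubric text, and the generate_feedback helper are illustrative assumptions, not project decisions.

```python
# Minimal sketch of rubric-based feedback generation using the OpenAI
# Python client. The model name, rubric wording, and helper name are
# assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = """\
Criterion 1: Correctness of the solution (0-10)
Criterion 2: Code style and documentation (0-5)
Criterion 3: Testing and edge cases (0-5)
"""

def generate_feedback(submission: str, rubric: str = RUBRIC) -> str:
    """Ask the model for structured, rubric-aligned feedback on one submission."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a teaching assistant. Give feedback strictly "
                    "against the rubric below, one short paragraph per "
                    "criterion, in a constructive tone. Do not write or "
                    "correct code for the student.\n\n" + rubric
                ),
            },
            {"role": "user", "content": submission},
        ],
        temperature=0.2,  # keep feedback relatively consistent across runs
    )
    return response.choices[0].message.content
```

The system prompt doubles as the academic-integrity guardrail here: it instructs the model to critique against the rubric rather than supply corrected solutions.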
Aim/outline
- Design a feedback-generation pipeline linking uploaded student work to model-generated feedback.
- Engineer prompts that ensure feedback relevance and academic integrity.
- Implement a lightweight web interface for students and educators (see the interface sketch after this list).
- Evaluate generated feedback through qualitative and quantitative comparison with educator benchmarks (a metric sketch also follows this list).
- Provide design recommendations for ethical AI feedback systems in higher education.
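To make the interface bullet concrete, a first prototype could be a single Streamlit page that accepts an upload and displays the generated feedback for review. This is a minimal sketch that assumes the hypothetical generate_feedback helper above lives in a module named feedback; both names are assumptions.

```python
# Sketch of a lightweight Streamlit front end; assumes the hypothetical
# generate_feedback() helper from the earlier sketch lives in feedback.py.
import streamlit as st

from feedback import generate_feedback  # hypothetical module and helper

st.title("AI Rubric Feedback (prototype)")

uploaded = st.file_uploader("Upload a student submission", type=["py", "txt"])
if uploaded is not None:
    submission = uploaded.read().decode("utf-8", errors="replace")
    st.subheader("Submission")
    st.code(submission, language="python")  # assumes code submissions

    if st.button("Generate feedback"):
        with st.spinner("Querying the model..."):
            feedback = generate_feedback(submission)
        st.subheader("Generated feedback")
        st.markdown(feedback)
```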
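For the quantitative side of the evaluation, one simple baseline is lexical similarity between generated and educator-written feedback on the same submission. The sketch below uses scikit-learn's TF-IDF vectoriser and cosine similarity as an illustrative stand-in for whatever metrics the project ultimately adopts; qualitative judgements of tone and usefulness would still need human raters.

```python
# Sketch of one quantitative comparison, assuming paired (generated,
# educator-written) feedback texts are available. TF-IDF cosine similarity
# is an illustrative stand-in, not the project's chosen metric.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similarity_scores(generated: list[str], educator: list[str]) -> list[float]:
    """Cosine similarity between each generated/educator feedback pair."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(generated + educator)
    n = len(generated)
    return [
        float(cosine_similarity(matrix[i], matrix[n + i])[0, 0])
        for i in range(n)
    ]

if __name__ == "__main__":
    gen = ["Your loop handles the base case but misses empty input."]
    edu = ["Good loop structure; remember to handle the empty-input case."]
    print(similarity_scores(gen, edu))  # one score per pair, in [0, 1]
```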
Required knowledge
- Python (Flask or Streamlit preferred).
- Basic understanding of LLMs and prompt engineering.
- Familiarity with assessment rubrics and educational feedback principles.
- Interest in applied AI and human-centred evaluation.