Primary supervisor
Roberto Martinez-Maldonado

Research area
Human-Centred Computing

I am seeking PhD candidates interested in designing and connecting Multimodal Learning Analytics solutions according to the pedagogical needs and contextual constraints of teamwork occurring across physical and digital spaces.
The learning analytics challenge for this PhD is to research, prototype and evaluate approaches to automatically capture traces of team members’ activity, using multimodal analytics techniques to make sense of data from heterogeneous contexts. Depending on the trajectory that you take, examples of the questions that such a project could investigate include:
- How can multimodal analytics approaches be applied to gain a holistic understanding of team members’ activity in authentic learning / training spaces?
- How can the insights of team members’ activity in physical spaces be connected with higher-level pedagogies?
- How can these insights promote productive behavioural change?
- How can the facilitator be supported with this information to provide informed feedback?
- How can team members and facilitators be supported with data?
- What are the ethical implications of rolling out analytics in the classroom or team settings?
- How can this information support more authentic and holistic assessment?
- What are the technical challenges that need to be overcome?
- How do learning theories, teamwork theory and learning design patterns map to the orchestration of such analytics tools?
I would be particularly interested in supervising students focusing on two broad scenarios:
Analytics of the classroom physical space. This would include collecting data via sensors in authentic classrooms and developing mechanisms to analyse the data and communicate insights to teachers, students and/or decision makers. The following paper can serve as an illustrative example of this strand of research:
“I Spent More Time with that Team”: Making Spatial Pedagogy Visible Using Positioning Sensors. LAK 2019 [PDF]
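To give a flavour of this strand, here is a minimal sketch of one common analysis step: estimating how long a teacher spends near each team's table from indoor positioning traces. All names, coordinates and the proximity threshold below are illustrative assumptions, not taken from the paper.

```python
import math

# Hypothetical positioning traces: (seconds, x, y) for the teacher,
# plus fixed table locations for each team. All values are illustrative.
teacher_trace = [(0, 1.0, 1.0), (10, 1.2, 1.1), (20, 5.0, 4.8), (30, 5.1, 5.0)]
team_tables = {"Team A": (1.0, 1.0), "Team B": (5.0, 5.0)}
PROXIMITY_M = 1.5  # assumed radius (metres) counted as attending to a team

def time_near_teams(trace, tables, radius=PROXIMITY_M):
    """Accumulate the seconds the teacher spends within `radius` of each table."""
    totals = {team: 0 for team in tables}
    for (t0, x, y), (t1, _, _) in zip(trace, trace[1:]):
        for team, (tx, ty) in tables.items():
            if math.hypot(x - tx, y - ty) <= radius:
                totals[team] += t1 - t0
    return totals

print(time_near_teams(teacher_trace, team_tables))
```

In practice the raw sensor streams are noisy and would need filtering and smoothing first, but a simple aggregation like this is the kind of low-level step from which spatial pedagogy indicators can be built.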
Teamwork analytics. This would involve collecting multimodal data from collocated teamwork settings. A clear example would be teams of nurses training in simulated scenarios. Sensors such as positioning trackers, physiological wristbands, microphones and eye trackers could be used to model complex constructs from low-level multimodal data. Here is a paper that illustrates one potential scenario where this research could be applied:
Towards Collaboration Translucence: Giving Meaning to Multimodal Group Data. CHI 2019 [PDF]
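A recurring technical challenge in this scenario is that each sensor samples on its own clock, so streams must be aligned onto a common timeline before any higher-level construct can be modelled. The sketch below shows one simple alignment strategy (attach the most recent reading from a slower stream to each sample of a faster one); the stream names and values are hypothetical, purely for illustration.

```python
import bisect

# Hypothetical streams sampled at different rates; all data is illustrative.
heart_rate = [(0, 72), (1, 75), (2, 80), (3, 78)]   # (second, bpm) from a wristband
positions  = [(0.2, "bedside"), (2.4, "monitor")]   # (second, zone) from a tracker

def align(primary, secondary):
    """For each primary sample, attach the most recent secondary reading
    (last-observation-carried-forward), a common fusion step before modelling."""
    times = [t for t, _ in secondary]
    fused = []
    for t, value in primary:
        i = bisect.bisect_right(times, t) - 1
        zone = secondary[i][1] if i >= 0 else None  # None: no reading yet
        fused.append((t, value, zone))
    return fused

print(align(heart_rate, positions))
```

Real multimodal pipelines also have to handle clock drift, dropouts and very different sampling rates, but time alignment of this kind is the usual first step before mapping low-level signals to constructs such as stress or coordination.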
Required knowledge
Skills and dispositions required:
- A Masters degree, Honours distinction or equivalent, with above-average grades, in computer science, mathematics, statistics, or a related field
- Analytical, creative and innovative approach to solving problems
- Strong interest in designing and conducting quantitative, qualitative or mixed-method studies
- Strong programming skills in at least one relevant language (e.g. C/C++, .NET, Java, Python, R, etc.)
- Experience with data mining, data analytics or business intelligence tools (e.g. Weka, ProM, RapidMiner); experience with visualisation tools is a bonus
It is advantageous if you can evidence:
- Experience in designing and conducting quantitative, qualitative or mixed-method studies
- Familiarity with educational theory, instructional design, learning sciences or human-computer interaction/CSCW
- Peer-reviewed publications
- A digital scholarship profile
- Design of user-centred software