Research Group:
Multimodal Learning Technologies
Head of the Research Group:
Prof. Dr. Daniele Di Mitri
The Multimodal Learning Technologies research group investigates the integration of artificial intelligence, multimodal analytics, and immersive technologies to transform educational assessment and digital learning.
Our vision is to create adaptive, responsible, and human-centred learning environments by drawing on a wide range of data sources, such as video, audio, sensor streams, and behavioural analytics, with a particular focus on the unique opportunities offered by immersive technologies.
A central aspect of our work is the development of intelligent tutoring systems and digital platforms that provide adaptive and personalised feedback. By analysing behavioural and cognitive data, including body posture, speech, and emotional states, we can model learner progress in real time and deliver feedback that is both educationally meaningful and contextually relevant.
A prominent example of a multimodal “AI Tutor” is Presentable, an AI-based presentation training software that provides immediate feedback and guidance to help students develop their presentation skills.
Immersive technologies are at the heart of our approach, enabling us to design educational content and assessment tools that fully leverage the possibilities offered by virtual worlds. Our systems are designed to be integrated into extended reality, such as virtual and augmented reality, allowing learners to participate in realistic, interactive scenarios that closely reflect real-world challenges.
Advances in wearable technology and the Internet of Things further strengthen our ability to create engaging, data-rich learning experiences. By capturing and analysing multimodal data within these settings, we gain deeper insights into how physical behaviours and cognitive processes interact, supporting skill acquisition, collaboration, and engagement in ways that traditional methods cannot achieve.
All our research is underpinned by a strong commitment to responsible artificial intelligence. We prioritise ethical and privacy-preserving methods for data collection and analysis, and we use participatory design practices to ensure our technologies meet the needs of students, educators, and policymakers. Our work is guided by sustainable and equitable design principles, ensuring that the societal impact of our innovations is always carefully considered.
In summary, the Multimodal Learning Technologies group advances educational technology by:
- Creating adaptive, AI-powered tutors that deliver personalised feedback in both traditional and immersive learning environments
- Analysing how multimodal sensors and interfaces contribute to modelling learning
- Designing and evaluating innovative assessment tools for use in virtual, augmented, and mixed reality settings
- Employing ethical, privacy-preserving, and participatory design practices to ensure responsible innovation
Through these efforts, we bridge the gap between learning analytics, extended reality, and artificial intelligence for practical educational applications, positioning our research group as a leader in shaping the future of digital and immersive education.