Research Group

Inspired by Barsalou’s Grounded Cognition theory, our research group focuses on designing artificial agents that learn language by leveraging sensory information derived from interacting with the world and with other agents. We explore the intersection of natural language processing, computer vision, robotics, and cognitive science to develop AI systems that can understand and interact with their environment in meaningful ways.

Our work spans multiple domains, including:

  • Embodied AI: Developing agents that can perceive and act in physical environments
  • Situated Language Understanding: Creating systems that understand language in context
  • Grounded Intelligence: Building AI that connects language to real-world experiences
  • Human-Robot Interaction: Designing conversational AI for embodied applications
  • Multimodal Learning: Integrating vision, language, and action for comprehensive understanding

PhD Students

Our research group is fortunate to work with talented PhD students who are pushing the boundaries of embodied and situated AI:

Amit Parekh
Generalisation for Embodied AI

Sabrina McCallum
Learning from Multimodal Feedback in Embodied AI

Malvina Nikandrou
Continual Learning for VLMs

George Pantazopoulos
Designing and Implementing VLMs

Together, we are working on cutting-edge research that bridges the gap between language understanding and embodied experience, creating AI systems that can truly understand and interact with the world around them.

Alumni and Visiting Students

Students who have previously worked with or visited our group:

Alumni

Visiting Students