
Project: Continual Reinforcement Learning with Language Instructions


Continual reinforcement learning (CRL) is a key paradigm for building adaptive, lifelong learning agents. This project explores the intersection of CRL and natural language processing in 3D simulation environments. Language instructions add a new dimension to reinforcement learning, improving an agent's ability to understand and carry out complex tasks in dynamic scenarios. The research investigates how continual reinforcement learning and linguistic guidance can be combined to produce more robust, context-aware agents that navigate complex 3D environments efficiently and adaptively.


Continual Reinforcement Learning Framework: Develop a holistic framework that integrates continual learning, reinforcement learning, and language instructions. This framework should enable agents to build on their existing knowledge and adapt to new language skills and reinforcement learning tasks over time.
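The core of such a framework is a single agent trained on a sequence of tasks without resetting its knowledge between them. The following is a minimal sketch of that loop under toy assumptions: the tasks are single-state bandit problems identified by an instruction string, and the learner is a simple tabular Q-learner. Both are illustrative placeholders, not the project's actual framework.

```python
import random

class TabularQAgent:
    """Toy tabular learner keyed by (task, state, action)."""
    def __init__(self, n_actions, lr=0.5, eps=0.1):
        self.q = {}  # (task, state, action) -> estimated value
        self.n_actions = n_actions
        self.lr, self.eps = lr, eps

    def act(self, task, state):
        # epsilon-greedy action selection
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions),
                   key=lambda a: self.q.get((task, state, a), 0.0))

    def update(self, task, state, action, reward):
        # one-step bandit-style value update
        key = (task, state, action)
        old = self.q.get(key, 0.0)
        self.q[key] = old + self.lr * (reward - old)

def run_continual(tasks, episodes_per_task=200, seed=0):
    """Train one agent on a sequence of tasks, never resetting its knowledge."""
    random.seed(seed)
    agent = TabularQAgent(n_actions=2)
    for task_id, reward_fn in tasks:
        for _ in range(episodes_per_task):
            state = 0  # single-state toy environment
            action = agent.act(task_id, state)
            agent.update(task_id, state, action, reward_fn(action))
    return agent

# Two toy tasks with opposite optimal actions; after sequential training
# the agent should act correctly on both, i.e. without forgetting the first.
tasks = [("go_left", lambda a: 1.0 if a == 0 else 0.0),
         ("go_right", lambda a: 1.0 if a == 1 else 0.0)]
agent = run_continual(tasks)
```

The challenge the project targets is doing this with shared function approximation, where later tasks can overwrite earlier knowledge; the per-task tabular keys above sidestep that deliberately to keep the sketch short.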

World Modeling through Language: Create models that allow agents to construct world models through language understanding. Develop the capability to translate natural language descriptions into actionable insights for reinforcement learning in dynamic environments.
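One way to frame this objective: a world model predicts the next state not only from the current state and action but also from an embedding of a language description. The sketch below uses a toy bag-of-words embedding and a random linear predictor purely to show the conditioning pattern; the project would learn both components.

```python
import numpy as np

# Small illustrative vocabulary; a real system would use a learned text encoder.
VOCAB = ["door", "key", "red", "blue", "open", "locked"]

def embed(description):
    """Bag-of-words embedding of a natural language description."""
    words = description.lower().split()
    return np.array([float(w in words) for w in VOCAB])

def predict_next_state(W, state, action, description):
    """Language-conditioned dynamics: s' = W @ [state; action; embed(text)]."""
    x = np.concatenate([state, action, embed(description)])
    return W @ x

rng = np.random.default_rng(0)
state_dim, action_dim = 4, 2
# Untrained random weights, standing in for a learned dynamics model.
W = rng.normal(size=(state_dim, state_dim + action_dim + len(VOCAB)))

s = np.zeros(state_dim)
a = np.array([1.0, 0.0])
s_next = predict_next_state(W, s, a, "the red door is locked")
```

The key point is that two different descriptions yield different predicted futures for the same state and action, which is exactly what lets language carry information about dynamics the agent has not yet observed.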

Language-Guided Reinforcement Learning: Investigate the role of language instructions in guiding reinforcement learning tasks. Develop models that can interpret and execute natural language instructions to perform specific actions or tasks in dynamic environments.
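A concrete grounding mechanism for this objective is the gated-attention fusion of Chaplot et al. (AAAI 2018), listed in the references below: the instruction embedding is projected to one sigmoid gate per visual feature map and applied multiplicatively, so the instruction selects which visual features matter. The shapes and random projection here are illustrative; the original uses a convolutional image encoder and a GRU instruction encoder.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_attention(visual_features, instruction_embedding, W_gate):
    """
    visual_features: (channels, height, width) feature maps from the image.
    instruction_embedding: (d,) vector from the instruction encoder.
    W_gate: (channels, d) projection producing one gate per channel.
    """
    gates = sigmoid(W_gate @ instruction_embedding)   # (channels,), each in (0, 1)
    return visual_features * gates[:, None, None]     # broadcast over H and W

rng = np.random.default_rng(0)
channels, d = 8, 16
feats = rng.normal(size=(channels, 5, 5))   # stand-in for CNN feature maps
instr = rng.normal(size=d)                  # stand-in for an instruction embedding
W_gate = rng.normal(size=(channels, d))
fused = gated_attention(feats, instr, W_gate)
```

Because every gate lies in (0, 1), the fusion can only attenuate feature maps, never amplify them, which makes the instruction act as a soft attention mask over the visual representation.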

Pre-training with Large Language Models: Investigate the benefits of pre-training with large language models (LLMs) for reinforcement learning with language instructions. Explore how LLMs can serve as knowledge sources to facilitate language-guided reinforcement learning and provide the foundation for adaptive language skill development.
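One instance of this idea appears in Du et al. (ICML 2023), listed below: an LLM suggests plausible goals for the agent's current context, and the agent earns an intrinsic reward when its behavior matches a suggestion. The sketch replaces both the LLM call and the text encoder with toy stand-ins (a hard-coded suggestion list and a bag-of-words cosine similarity) to show only the reward-shaping pattern.

```python
import numpy as np

# Toy vocabulary for the illustrative embedding; a real system would
# embed text with a learned encoder.
VOCAB = ["chop", "tree", "drink", "water", "open", "chest"]

def embed(text):
    """Unit-normalized bag-of-words vector over the toy vocabulary."""
    words = text.lower().split()
    v = np.array([float(w in words) for w in VOCAB])
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def intrinsic_reward(achieved_caption, suggested_goals):
    """Reward = max cosine similarity between a caption of what the agent
    did and any goal the (placeholder) LLM suggested."""
    a = embed(achieved_caption)
    return max(float(a @ embed(g)) for g in suggested_goals)

# Stand-in for LLM output given the current observation.
suggestions = ["chop tree", "drink water"]
r = intrinsic_reward("chop a tree", suggestions)
```

The LLM thus steers exploration toward semantically sensible behaviors without hand-designed reward functions, which is the bridge this objective draws between LLM pre-training and reinforcement learning.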


Together, these objectives form a comprehensive approach that integrates continual learning, reinforcement learning, language guidance, world modeling, and pre-training with LLMs. The ultimate goal is a versatile system in which agents develop language proficiency and master reinforcement learning tasks in a dynamic, evolving context.


Tomilin, T., Fang, M., Zhang, Y., & Pechenizkiy, M. COOM: A Game Benchmark for Continual Reinforcement Learning. NeurIPS 2023

Lin, J., Du, Y., Watkins, O., Hafner, D., Abbeel, P., Klein, D., & Dragan, A. Learning to Model the World with Language. 2023

Du, Y., Watkins, O., Wang, Z., Colas, C., Darrell, T., Abbeel, P., & Andreas, J. Guiding Pretraining in Reinforcement Learning with Large Language Models. ICML 2023

Chaplot, D. S., Sathyendra, K. M., Pasumarthi, R. K., Rajagopal, D., & Salakhutdinov, R. Gated-Attention Architectures for Task-Oriented Language Grounding. AAAI 2018

Supervisor: Meng Fang
Secondary supervisor: Tristan Tomilin