Deep Modular Reinforcement Learning for Physically Embedded Abstract Reasoning

  • Karkus, Peter*; Mirza, Mehdi; Guez, Arthur; Jaegle, Andrew; Lillicrap, Timothy; Buesing, Lars; Heess, Nicolas; Weber, Theophane
  • Accepted abstract
  • [PDF] [Slides]
    Poster session from 15:00 to 16:00 EAT and from 20:45 to 21:45 EAT


Embodied agents must achieve abstract objectives using concrete, spatiotemporally complex sensory information and motor control. Tabula rasa deep reinforcement learning (RL) has tackled demanding tasks that require visual, abstract, or physical reasoning in isolation, but solving them jointly remains a formidable challenge. To address this challenge, we propose a Modular RL approach that partitions embodied reasoning into specialized modules for state estimation, planning, and control. The modules are trained with independent RL objectives and training regimes, yet they can adapt to one another during training. We show that Modular RL dramatically outperforms standard deep RL methods on Mujoban, a new, demanding domain that embeds Sokoban puzzles in physical 3D environments. Our results give strong evidence for the importance of research into modular designs: compared to black-box architectures, Modular RL can more directly incorporate additional learning signals, choose more efficient training regimes, and more flexibly adapt to changes in the task.
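The abstract's core design idea can be sketched in code. The following is an illustrative sketch, not the authors' implementation: all class and method names (`StateEstimator`, `Planner`, `Controller`, `ModularPolicy`) are hypothetical, and the module internals are stubbed out where the paper would use separately trained networks. It only shows the compositional structure: perception maps raw observations to an abstract state, a planner chooses an abstract action, and a controller turns that action into motor commands.

```python
# Hypothetical sketch of a modular policy in the spirit of the abstract.
# None of these names come from the paper; each module stands in for a
# component that would be trained with its own objective and regime.

class StateEstimator:
    """Maps raw observations to an abstract state (e.g. a Sokoban grid)."""
    def estimate(self, observation):
        # Stub: in a real system this would be a learned perception model.
        return observation["abstract_state"]

class Planner:
    """Chooses an abstract action (e.g. 'push box left') from the state."""
    def plan(self, abstract_state):
        # Stub: a learned or search-based planner would go here.
        return "move_left"

class Controller:
    """Turns an abstract action into low-level motor commands."""
    def act(self, abstract_action, observation):
        # Stub: a learned motor policy would go here.
        return {"torque": -1.0 if abstract_action == "move_left" else 1.0}

class ModularPolicy:
    """Composes the three modules into one observation -> action map."""
    def __init__(self, estimator, planner, controller):
        self.estimator = estimator
        self.planner = planner
        self.controller = controller

    def __call__(self, observation):
        state = self.estimator.estimate(observation)
        abstract_action = self.planner.plan(state)
        return self.controller.act(abstract_action, observation)

policy = ModularPolicy(StateEstimator(), Planner(), Controller())
command = policy({"abstract_state": "grid"})
```

Because each module exposes a narrow interface (abstract state, abstract action), each can in principle be swapped, retrained, or given extra supervision without touching the others, which is the flexibility argument the abstract makes against black-box architectures.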
