Thought Cloning:
Learning to Think while Acting by Imitating Human Thinking
NeurIPS 2023 (Spotlight)

  • Department of Computer Science, University of British Columbia
  • Vector Institute
  • Canada CIFAR AI Chair

Abstract

Language is often considered a key aspect of human thinking, providing us with exceptional abilities to generalize, explore, plan, replan, and adapt to new situations. However, Reinforcement Learning (RL) agents are far from human-level performance in any of these abilities. We hypothesize one reason for such cognitive deficiencies is that they lack the benefits of thinking in language and that we can improve AI agents by training them to think like humans do. We introduce a novel Imitation Learning framework, Thought Cloning, where the idea is to not just clone the behaviors of human demonstrators, but also the thoughts humans have as they perform these behaviors. While we expect Thought Cloning to truly shine at scale on internet-sized datasets of humans thinking out loud while acting (e.g. online videos with transcripts), here we conduct experiments in a domain where the thinking and action data are synthetically generated. Results reveal that Thought Cloning learns much faster than Behavioral Cloning and its performance advantage grows the further out of distribution test tasks are, highlighting its ability to better handle novel situations. Thought Cloning also provides important benefits for AI Safety and Interpretability, and makes it easier to debug and improve AI. Because we can observe the agent's thoughts, we can (1) more easily diagnose why things are going wrong, making it easier to fix the problem, (2) steer the agent by correcting its thinking, or (3) prevent it from doing unsafe things it plans to do. Overall, by training agents how to think as well as behave, Thought Cloning creates safer, more powerful agents.

Method

[Figure: Overview of the Thought Cloning framework]

In the Thought Cloning training framework, agents learn to produce natural language thoughts at each timestep and then condition their actions on those generated thoughts. Both thoughts and actions are learned in pretraining via imitation learning on human demonstration data. A minimal sketch of this two-level setup appears below.
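The following is a minimal PyTorch-style sketch of this idea, not the paper's exact architecture: module names, the recurrent encoders, and the loss weighting are illustrative assumptions. An upper-level component imitates the demonstrator's thoughts; a lower-level component imitates their actions conditioned on those thoughts, and the two imitation losses are combined.

```python
# Illustrative sketch of a Thought Cloning training objective (assumed design,
# not the paper's exact architecture or hyperparameters).
import torch
import torch.nn as nn

class ThoughtCloningAgent(nn.Module):
    def __init__(self, obs_dim, vocab_size, n_actions, hidden=256):
        super().__init__()
        self.encode_obs = nn.Linear(obs_dim, hidden)
        # Upper level: predicts the next thought token from the observation history.
        self.upper = nn.GRU(hidden, hidden, batch_first=True)
        self.thought_head = nn.Linear(hidden, vocab_size)
        # Lower level: predicts the action from the observation and the thought.
        self.embed_thought = nn.Embedding(vocab_size, hidden)
        self.lower = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.action_head = nn.Linear(hidden, n_actions)

    def forward(self, obs, thought_tokens):
        h_obs = self.encode_obs(obs)                       # (B, T, hidden)
        h_up, _ = self.upper(h_obs)
        thought_logits = self.thought_head(h_up)           # imitate thoughts
        h_th = self.embed_thought(thought_tokens)
        h_low, _ = self.lower(torch.cat([h_obs, h_th], dim=-1))
        action_logits = self.action_head(h_low)            # imitate actions
        return thought_logits, action_logits

def thought_cloning_loss(thought_logits, action_logits,
                         thought_targets, action_targets, alpha=1.0):
    # Dual imitation objective: action loss plus a weighted thought loss.
    ce = nn.CrossEntropyLoss()
    l_action = ce(action_logits.flatten(0, 1), action_targets.flatten())
    l_thought = ce(thought_logits.flatten(0, 1), thought_targets.flatten())
    return l_action + alpha * l_thought
```

The key design point this sketch illustrates is that the thought channel is supervised directly, rather than emerging implicitly, so the lower-level policy always acts conditioned on an explicit natural-language plan.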

Experimental results

Thinking in language helps humans generalize, explore, plan, replan, and combine old knowledge in new ways. Harnessing this power, Thought Cloning agents learn much faster than conventional Behavioral Cloning agents and better handle and adapt to novel situations.

[Figures: Experimental results comparing Thought Cloning with Behavioral Cloning]

Thought Cloning also contributes to AI Safety. Because we can observe the agent's thoughts, we can (1) more easily diagnose why things are going wrong, (2) steer the agent by correcting its thinking, or (3) prevent it from doing unsafe things it plans to do. We develop "Precrime Intervention", a simple method that lets users define unsafe behaviors even after training and halts the agent when it detects a dangerous thought. Precrime Intervention performs nearly perfectly in our testing, showing its potential for AI Safety.
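As a rough illustration of the idea, the sketch below halts the agent whenever its generated thought matches a user-declared unsafe behavior. The phrase list, the substring matching, and the `generate_thought`/`act` interface are all hypothetical assumptions for this example, not the paper's implementation.

```python
# Illustrative sketch of Precrime Intervention: halt the agent before it acts
# on a thought that describes a user-declared unsafe behavior.
UNSAFE_BEHAVIORS = ["pick up the key", "open the locked door"]  # example phrases

def is_unsafe_thought(thought: str) -> bool:
    """Return True if the generated thought matches a declared unsafe behavior."""
    thought = thought.lower()
    return any(phrase in thought for phrase in UNSAFE_BEHAVIORS)

def run_episode(agent, env):
    obs = env.reset()
    done = False
    while not done:
        thought = agent.generate_thought(obs)      # natural-language plan
        if is_unsafe_thought(thought):
            print(f"Halted before acting on unsafe plan: {thought!r}")
            break
        action = agent.act(obs, thought)           # act conditioned on the thought
        obs, reward, done, info = env.step(action)
```

Because the check operates on the agent's declared plan rather than on its low-level actions, unsafe behaviors can be specified in plain language after training, without retraining the agent.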

[Figure: Precrime Intervention halting the agent upon detecting an unsafe thought]

Citation

Acknowledgements

This work was supported by the Vector Institute, a grant from Schmidt Futures, a grant from Open Philanthropy, an NSERC Discovery Grant, and a generous donation from Rafael Cosman. We also thank Aaron Dharna, Ben Norman, and Jenny Zhang (sorted alphabetically) in our lab at the University of British Columbia for insightful discussions.

The website template was borrowed from Jon Barron.