Reinforcement learning allows autonomous agents to learn how to act in a stochastic, unknown environment through direct interaction. Deep reinforcement learning, in particular, has achieved great success in well-defined application domains such as Go or chess, where the task is clearly specified and there is an unambiguous success criterion. In this talk, I will focus on the potential role of reinforcement learning as a tool for building knowledge representations in AI agents whose goal is to perform continual learning. I will examine a key concept in reinforcement learning, the value function, and discuss its generalization to support various forms of predictive knowledge. I will also discuss the role of temporally extended actions, and their associated predictive models, in learning procedural knowledge. Finally, I will discuss the challenge of how to evaluate reinforcement learning agents whose goal is not just to control their environment, but also to build knowledge about their world.
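To make the central concept concrete: the value function estimates the expected discounted return from each state, and it can be learned incrementally from interaction alone. The sketch below is an illustrative TD(0) example on a hypothetical random-walk chain (the environment, state numbering, and parameters are my own assumptions for illustration, not anything from the talk):

```python
import random

def td0_value_estimate(n_states=5, episodes=2000, alpha=0.1, gamma=0.9, seed=0):
    """Estimate state values on a small random-walk chain with TD(0).

    Illustrative assumptions: states 1..n_states are non-terminal, states 0 and
    n_states+1 are terminal, and a reward of 1 is given only on reaching the
    right end. This is a standard textbook setup, not the speaker's example.
    """
    rng = random.Random(seed)
    V = [0.0] * (n_states + 2)  # value table; terminals stay at 0
    for _ in range(episodes):
        s = (n_states + 1) // 2  # start each episode in the middle
        while 0 < s <= n_states:
            s_next = s + rng.choice((-1, 1))  # unbiased random walk
            r = 1.0 if s_next == n_states + 1 else 0.0
            # Bootstrap from the successor's value unless it is terminal.
            target = r + (gamma * V[s_next] if 0 < s_next <= n_states else 0.0)
            V[s] += alpha * (target - V[s])  # TD(0) update
            s = s_next
    return V[1:n_states + 1]
```

The learned values increase toward the rewarding end of the chain, showing how a single scalar prediction per state summarizes long-term consequences of behavior. General value functions, mentioned in the abstract, extend this same update to predict signals other than reward.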
Doina Precup holds a Canada Research Chair, Tier I, in Machine Learning at McGill University, Montreal, Canada, and she currently co-directs the Reasoning and Learning Lab in the School of Computer Science. Prof. Precup also serves as Associate Dean, Research, for the Faculty of Science and as Associate Scientific Director of the Healthy Brains for Healthy Lives CFREF-funded research program at McGill. Prof. Precup’s research interests are in the area of artificial intelligence and machine learning, with emphasis on reinforcement learning, deep learning, time series analysis, and various applications of these methods. She is a Senior Member of the Association for the Advancement of Artificial Intelligence (AAAI).