Reward is central to models of reinforcement learning. However, humans also learn in the absence of reward, driven for example by curiosity: we acquire knowledge about the layout of a city simply by walking around. This raises several questions: can curiosity-driven behavior be integrated into the framework of reinforcement learning? Is curiosity the same as novelty-seeking, or is it surprise-seeking? Is intrinsic motivation just an internal reward, or something else?
In neuroscience and psychology, reinforcement learning algorithms are used to explain human brain activity and behavior, while in artificial intelligence they are used to learn to play complex games, among other applications. This workshop brings together researchers from both neuroscience and AI, since notions of curiosity and surprise have been used in both domains to address learning in volatile environments and in tasks where reward is sparse and of unknown value.
List of speakers (in alphabetical order):
- Andrew Barto, Computer Science, University of Massachusetts
- Marc G. Bellemare, Computer Science, Google Brain
- Wulfram Gerstner, Computational Neuroscience, EPFL
- Jacqueline Gottlieb, Neuroscience, Columbia University
- Tom Griffiths, Psychology, Princeton
- Etienne Koechlin, Cognitive Neuroscience, ENS Paris
- Alireza Modirshanechi, Brain-Mind Institute, EPFL
- Dirk Ostwald, Computational Cognitive Neuroscience, Berlin
- Deepak Pathak, Computer Science, CMU
- Eliana Vassena, Neuroscience, Donders
Registration is mandatory; there is no fee.