The ability to change a previously acquired behaviour in response to altered reinforcement contingencies is essential for successful adaptation to a changing environment. This process is called “extinction learning” and unfolds dynamically over time, just as any other form of learning does. Nevertheless, learning is typically quantified by comparing post- to pre-learning blocks, thus missing the learning process itself. In addition, the data of several individuals are often pooled into an average, thus occluding any inter-subject variability. By contrast, we study how learning develops in a trial-by-trial manner in individual subjects. To this end, we analyze both behavioral and neural data from rodent, avian and human subjects, which we obtain from collaborating labs.

In a second line of research, we use computational approaches, such as associative models, reinforcement learning, and artificial neural networks, to understand the mechanisms underlying learning and to account for the variability of the learning dynamics, both across time and across subjects. Recently, reinforcement learning has been combined with deep neural networks to achieve superior performance. However, the representations learned by these networks are seldom investigated. We study whether these representations and their dynamical changes during learning correlate with those of recorded neurons.
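To illustrate the kind of trial-by-trial associative modelling mentioned above, here is a minimal sketch of a Rescorla–Wagner update simulating acquisition followed by extinction. The function name and all parameter values (learning rate, trial counts) are illustrative assumptions, not the settings of any model actually used in this research.

```python
def rescorla_wagner(rewards, alpha=0.1, lam=1.0, v0=0.0):
    """Return the trial-by-trial associative strength V for a reward sequence.

    alpha: learning rate (assumed value for illustration)
    lam:   asymptote of conditioning supported by the reinforcer
    v0:    initial associative strength
    """
    v = v0
    trajectory = []
    for r in rewards:
        # Delta rule: update V by the prediction error between the
        # obtained outcome (lam * r) and the current expectation (v).
        v += alpha * (lam * r - v)
        trajectory.append(v)
    return trajectory

# Hypothetical experiment: 50 reinforced trials (acquisition)
# followed by 50 non-reinforced trials (extinction).
rewards = [1] * 50 + [0] * 50
vs = rescorla_wagner(rewards)
```

A trajectory like `vs` gives a prediction for every single trial, which is what allows such models to be fit to the learning process of an individual subject rather than to block averages.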
Emergence of complex dynamics of choice due to repeated exposures to extinction learning