SFB 1280
Animals must forage for food and water, find mates, avoid predators, and return to their resting location in order to survive. Spatial navigation and learning are thus vital to the success of many species, and a variety of navigation behaviors and strategies have been observed. Place cells, grid cells, and other diverse cell types discovered in the hippocampus and adjoining areas are thought to provide neural representations that support spatial navigation and learning. However, the mechanisms underlying the emergence of these diverse cell types, as well as the wide variety of observed navigation strategies, remain unclear. We study spatial navigation using deep reinforcement learning to understand how experimentally observed behaviors may emerge in an artificial agent in a virtual environment. To this end, we use simple standard navigation tasks, such as the Morris Water Maze, as well as more complex paradigms such as extinction learning. Once the spatial behavior is learned, we can study the spatial representations that emerge in the network and allow the artificial agent to navigate. These representations can then be compared to neural codes for space in the hippocampus.
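The sketch below illustrates the general two-step pipeline described above, not the project's actual code: a small policy network is trained with REINFORCE on a gridworld stand-in for the Morris Water Maze (hidden goal, random start positions), and the trained hidden layer is then probed at every location to obtain spatial "rate maps" that can be inspected for place-cell-like tuning. The environment, network size, learning algorithm, and hyperparameters are all illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

GRID = 10                      # 10x10 arena (assumption)
GOAL = (7, 2)                  # hidden platform location (assumption)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def obs(pos):
    # Normalised (x, y) coordinates serve as the agent's observation.
    return torch.tensor([pos[0] / (GRID - 1), pos[1] / (GRID - 1)], dtype=torch.float32)

class Policy(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(2, hidden), nn.Tanh())
        self.out = nn.Linear(hidden, len(ACTIONS))

    def forward(self, x):
        h = self.hidden(x)
        return torch.distributions.Categorical(logits=self.out(h)), h

def run_episode(policy, max_steps=60):
    # Random start position, small step cost, reward for reaching the hidden goal.
    pos = (np.random.randint(GRID), np.random.randint(GRID))
    logps, rewards = [], []
    for _ in range(max_steps):
        dist, _ = policy(obs(pos))
        a = dist.sample()
        logps.append(dist.log_prob(a))
        d = ACTIONS[a.item()]
        pos = (min(max(pos[0] + d[0], 0), GRID - 1),
               min(max(pos[1] + d[1], 0), GRID - 1))
        if pos == GOAL:
            rewards.append(1.0)          # platform found
            break
        rewards.append(-0.01)            # step cost encourages short paths
    return logps, rewards

def train(policy, episodes=3000, gamma=0.95, lr=1e-2):
    # Plain REINFORCE: maximise discounted return via the log-probability trick.
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(episodes):
        logps, rewards = run_episode(policy)
        returns, g = [], 0.0
        for r in reversed(rewards):
            g = r + gamma * g
            returns.insert(0, g)
        loss = -(torch.stack(logps) * torch.tensor(returns)).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()

def rate_maps(policy):
    # Hidden-unit activations at every location: array of shape (GRID, GRID, hidden).
    with torch.no_grad():
        return np.stack([[policy(obs((x, y)))[1].numpy() for y in range(GRID)]
                         for x in range(GRID)])

if __name__ == "__main__":
    torch.manual_seed(0)
    policy = Policy()
    train(policy)
    maps = rate_maps(policy)             # inspect individual units for spatial tuning
    print("rate map tensor:", maps.shape)
```

In an analysis along these lines, units whose activation is concentrated around particular locations would be the candidates for comparison with hippocampal place-cell firing maps.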
Publications
- Global remapping emerges as the mechanism for renewal of context-dependent behavior in a reinforcement learning model