Abstract:
Slow feature analysis (SFA) is an efficient algorithm for learning input-output functions that extract the most slowly varying features from a quickly varying input. It has been applied successfully to the unsupervised learning of translation, rotation, and other invariances in a model of the visual system, to the learning of complex-cell receptive fields, and, combined with a sparseness objective, to the learning of place cells in a model of the hippocampus.
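To make the slowness objective concrete, the following is a minimal sketch (not from this paper) of linear slow feature analysis: among all unit-variance linear projections of the input, find the one whose output has the smallest mean squared temporal derivative. The signal names and the toy two-channel mixture are illustrative assumptions.

```python
import numpy as np

def linear_sfa(x):
    """Linear SFA sketch. x: array of shape (T, n).
    Returns an (n, n) weight matrix whose columns give the
    projections ordered from slowest to fastest output."""
    x = x - x.mean(axis=0)                # zero mean
    cov = x.T @ x / len(x)                # input covariance
    evals, evecs = np.linalg.eigh(cov)
    whiten = evecs / np.sqrt(evals)       # whitening: cov(x @ whiten) = I
    z = x @ whiten
    # among unit-variance projections, minimize the squared time derivative
    zdot = np.diff(z, axis=0)
    dcov = zdot.T @ zdot / len(zdot)
    dvals, dvecs = np.linalg.eigh(dcov)   # ascending eigenvalues: slowest first
    return whiten @ dvecs

# Toy example (assumed setup): a slow sine hidden in a mixture with a fast one.
t = np.linspace(0, 2 * np.pi, 1000)
slow, fast = np.sin(t), np.sin(20 * t)
mix = np.stack([slow + fast, slow - fast], axis=1)
w = linear_sfa(mix)
y = mix @ w[:, 0]                         # slowest extracted feature, ~ slow signal
```

The slowest output `y` recovers the slow sine up to sign and scale, illustrating how SFA demixes features purely by their timescales.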
To arrive at a more biologically realistic implementation of this learning paradigm, we consider analytically how slow feature analysis could be realized with linear Poisson neurons. Surprisingly, we find that the appropriate learning rule reproduces the typical learning window of spike-timing-dependent plasticity: both its shape and its timescale are in good agreement with experimental measurements. This offers a new computational interpretation of the peculiar learning window of physiological neurons.