In this project we investigate methods that receive high-dimensional (e.g., visual) data and extract low-dimensional representations that are coherent under temporal proximity, or under any other symmetric similarity measure.
Previous work on the topic has shown the efficacy of such representations as a basis for goal-directed learning [1, 2] as well as for structural data analysis [3, 4]. Furthermore, temporal coherence has been proposed as a principle to model neurophysiological phenomena, such as the formation of distinct oriospatial responses in the mammalian hippocampus, a hypothesis that has been substantiated by theoretical analysis and simulated experiments.
However, most of this work was limited in its choice of function approximators for feature extraction: past models were either shallow or relied on stacking shallowly trained layers to obtain hierarchical representations. Using a differentiable whitening procedure, we were able to achieve comparable results in multiple proof-of-concept settings using deep feed-forward neural networks trained by stochastic gradient descent to optimize a global slowness objective. This hybrid approach allows us to tap not only into an extensive and active body of research on deep model design, but also into the well-understood theoretical foundations of slow feature analysis (SFA) and spectral embeddings.
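The core idea can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: features are whitened using a matrix computed from the batch covariance via an eigendecomposition (every step is expressible with differentiable operations in a deep-learning framework, so gradients can flow through the whitening into the network), and the slowness objective is the mean squared temporal difference of the whitened outputs. Function names and the toy signals are illustrative assumptions.

```python
import numpy as np

def whiten(Y, eps=1e-6):
    """ZCA-whiten a batch of features Y (T x d) using the batch covariance.

    In an autodiff framework the same steps (centering, eigendecomposition,
    rescaling) would be written with differentiable ops, so the whitening
    constraint is enforced inside the computation graph rather than as a
    separate post-processing stage.
    """
    Yc = Y - Y.mean(axis=0)                        # center the batch
    C = Yc.T @ Yc / (len(Y) - 1)                   # batch covariance
    w, V = np.linalg.eigh(C)                       # symmetric eigendecomposition
    W = V @ np.diag(1.0 / np.sqrt(w + eps)) @ V.T  # ZCA whitening matrix
    return Yc @ W

def slowness_loss(Y):
    """Mean squared temporal difference of the whitened features.

    Minimizing this over network parameters yields slowly varying,
    decorrelated, unit-variance features, as in SFA.
    """
    Z = whiten(Y)
    dZ = np.diff(Z, axis=0)                        # finite difference over time
    return float(np.mean(np.sum(dZ ** 2, axis=1)))

# Toy check: slowly varying signals incur a lower loss than white noise.
t = np.linspace(0, 2 * np.pi, 200)[:, None]
slow = np.hstack([np.sin(t), np.cos(t)])
fast = np.random.default_rng(0).standard_normal((200, 2))
print(slowness_loss(slow) < slowness_loss(fast))   # True
```

Because the whitening matrix itself depends on the network outputs, optimizing this loss by SGD trades the hard decorrelation constraint of classical SFA for an approximate, batch-wise one, which is what makes end-to-end training of deep extractors feasible.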
We believe that bridging the gap between these fields can yield both a better understanding of how spectral methods can be applied in high-dimensional machine learning and new tools for investigating coherence principles in neuroscientific modelling research.
Gradient-based Training of Slow Feature Analysis by Differentiable Approximate Whitening