This project has two concurrent research streams.
Markov chain SFA
Under mild assumptions, Markov chains, such as those induced by a reinforcement learning agent, admit a straightforward stochastic formulation of the SFA optimization problem. Leveraging this formulation, we investigate how well optimal slow features integrate with standard RL components, focusing in particular on their suitability for value function approximation.
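For concreteness, the sketch below runs classical batch linear SFA on a trajectory sampled from a simple Markov chain (a random walk on a ring). The chain, the one-hot observations, and the whitening-plus-eigendecomposition solver are illustrative assumptions for this example, not this project's stochastic formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Sample a trajectory from a simple Markov chain (random walk on a ring) ---
n_states, n_steps = 20, 50_000
state = 0
states = np.empty(n_steps, dtype=int)
for t in range(n_steps):
    states[t] = state
    state = (state + rng.choice([-1, 1])) % n_states

# One-hot observations of the visited states
X = np.eye(n_states)[states]

# --- Classical linear SFA on the trajectory ---
# Center and whiten the observations
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(Xc)
evals, evecs = np.linalg.eigh(cov)
keep = evals > 1e-10                        # drop null directions of the one-hot encoding
W_whiten = evecs[:, keep] / np.sqrt(evals[keep])
Z = Xc @ W_whiten

# Slowness: eigenvectors of the covariance of temporal differences,
# sorted by eigenvalue (smaller eigenvalue = slower feature)
dZ = np.diff(Z, axis=0)
dcov = dZ.T @ dZ / len(dZ)
d_evals, d_evecs = np.linalg.eigh(dcov)
slow_features = Z @ d_evecs[:, :3]          # the three slowest features

print("slowness of extracted features (smaller is slower):", d_evals[:3])
```

On a chain like this, the slowest features vary smoothly with the state, which is what makes them plausible basis functions for value function approximation.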
SFA as Variational Inference
Previous probabilistic formulations of Slow Feature Analysis enable its reinterpretation as a variational inference problem. We explore the implications of this perspective and examine how the extensive toolkit of probabilistic machine learning can enhance the extraction of slow features.
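As a rough illustration (not this project's specific model), probabilistic treatments of SFA typically posit a linear Gaussian state-space model in which the slow features are latent AR(1) processes with decay constants close to one; in the appropriate zero-noise limit, maximum likelihood in such a model is known to recover linear SFA, which is what makes a variational treatment natural. The sketch below samples from such a model; all dimensions and parameters are chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear Gaussian state-space model underlying one probabilistic view of SFA:
# slow latents follow independent AR(1) processes, observations are a noisy
# linear mixture of those latents.
n_latent, n_obs, n_steps = 3, 10, 1_000
lam = np.array([0.99, 0.95, 0.90])          # closer to 1 = slower latent
W = rng.normal(size=(n_obs, n_latent))      # mixing matrix
obs_noise = 0.1

x = np.zeros((n_steps, n_latent))
for t in range(1, n_steps):
    # AR(1) transition scaled so each latent has stationary unit variance
    x[t] = lam * x[t - 1] + np.sqrt(1 - lam**2) * rng.normal(size=n_latent)

# Observations: linear mixture of the slow latents plus Gaussian noise
y = x @ W.T + obs_noise * rng.normal(size=(n_steps, n_obs))

# Fitting this model to data y (e.g. with a variational posterior over x and
# an ELBO objective for W and the noise) yields slow features as the inferred
# latents; this is the entry point for the probabilistic ML toolkit.
```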