List of Publications about Slow Feature Analysis
(compiled by Laurenz Wiskott)

This list contains references to publications about Slow Feature Analysis, including those that build on and extend SFA or that compare such work to other approaches conceptually, analytically in terms of mathematics, or experimentally in terms of performance.

You can limit the list by search terms. You can use the Global QuickSearch or simply enter a search term into the respective field at the top of any column. If you don't want QuickSearch to search in abstracts (where available) as well, you can disable this via the 'Search Settings' button to the right. Note that you can use regular expressions in your search. For instance, to search for entries between 1990 and 1993, type '199[0-3]' into the Global QuickSearch; to find entries written by either Turner or Schrauwen, type 'Turner|Schrauwen' into the author search field.
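
If you keep an offline copy of this list (e.g. as exported BibTeX or CSV), the same patterns can be applied with any standard regular-expression engine. The following Python sketch merely illustrates the two examples above; the entry list shown is a made-up placeholder, not actual data from this page.

    import re

    # Placeholder (author, year) pairs for illustration only.
    entries = [
        ("Antonelo, E. & Schrauwen, B.", "2009"),
        ("Berkes, P. & Wiskott, L.", "2002"),
        ("Example, A.", "1993"),
    ]

    # '199[0-3]' matches any year from 1990 to 1993 (Global QuickSearch example).
    in_range = [e for e in entries if re.search(r"199[0-3]", e[1])]

    # 'Turner|Schrauwen' matches entries written by either author (author-field example).
    by_author = [e for e in entries if re.search(r"Turner|Schrauwen", e[0])]

    print(in_range)   # [('Example, A.', '1993')]
    print(by_author)  # [('Antonelo, E. & Schrauwen, B.', '2009')]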

You can also sort all entries in ascending or descending order by clicking once or twice, respectively, on a column's title. For instance, to sort by the year published, simply click on 'Year' in the top field of the year column.

If available, the Title fields also allow you to quickly access the BibTeX entry, Abstract, or a link to a .pdf version of the respective paper. [URL] usually refers to an official link to an abstract or full paper; [URL(2)] usually refers to an unofficial full-paper preprint; [URL(3)] usually refers to some additional material, such as a poster. (Note that some official full papers are subject to copyright restrictions, e.g. those published in Neural Computation. You may copy them but not post them elsewhere.)

Please send missing references and corrections to Laurenz Wiskott (see my homepage for the email address).




    Author Year Title Reference BibTeX type
    de Alcântara, M.F.; Moreira, T.P. & Pedrini, H. 2013 Motion silhouette-based real time action recognition Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications , 471-478.
    Publ. Springer Nature.
     
    incollection
    Abstract: Most of the action recognition methods presented in the literature cannot be applied to real life situations. Some of them demand expensive feature extraction or classification processes, some require previous knowledge about starting and ending action times, others are just not scalable. In this paper, we present a real time action recognition method that uses information about the variation of the silhouette shape, which can be extracted and processed with little computational effort, and we apply a fast configuration of lightweight classifiers. The experiments are conducted on the Weizmann dataset and show that our method achieves the state-of-the-art accuracy in real time and can be scaled to work on different conditions and be applied several times simultaneously.
    BibTeX:
    			
    			
                            @incollection{AlcantaraMoreiraEtAl-2013,
                              author       = {de Alc{\^a}ntara, Marlon F and Moreira, Thierry P and Pedrini, Helio},
                              title        = {Motion silhouette-based real time action recognition},
                              booktitle    = {Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications},
                              publisher    = {Springer Nature},
                              year         = {2013},
                              pages        = {471--478},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-642-41827-3_59},
                              url2         = {https://www.researchgate.net/profile/Marlon_Alcantara3/publication/296658600_Motion_Silhouette-Based_Real_Time_Action_Recognition/links/57675b8508aeb4b9980981eb.pdf},
                              doi          = {http://doi.org/10.1007/978-3-642-41827-3_59}
                            }
    			
    			
    					
    Antonelo, E. 2011 Reservoir computing architectures for modeling robot navigation systems Ghent University .
     
    phdthesis
    Abstract: Robot Navigation Systems: Autonomous mobile robots must be able to safely and purposefully navigate in complex dynamic environments, preferentially considering a restricted amount of computational power as well as limited energy consumption. In order to turn these robots into commercially viable domestic products with intelligent, abstract computational capabilities, it is also necessary to use inexpensive sensory apparatus such as a few infra-red distance sensors of limited accuracy. Current state-of-the-art methods for robot localization and navigation require fully equipped robotic platforms usually possessing expensive laser scanners for environment mapping, a considerable amount of computational power, and extensive explicit modeling of the environment and of the task. The research presented in this thesis is a step towards creating intelligent autonomous mobile robots with abstract reasoning capabilities using a limited number of very simple raw noisy sensory signals, such as distance sensors. The basic assumption is that the low-dimensional sensory signal can be projected into a high-dimensional dynamic space where learning and computation is performed by linear methods (such as linear regression), overcoming sensor aliasing problems commonly found in robot navigation tasks. This form of computation is known in the literature as Reservoir Computing (RC), and the Echo State Network is a particular RC model used in this work and characterized by having the high-dimensional space implemented by a discrete analog recurrent neural network with fading memory properties. This thesis proposes a number of Reservoir Computing architectures which can be used in a variety of autonomous navigation tasks, by modeling implicit abstract representations of an environment as well as navigation behaviors which can be sequentially executed in the physical environment or simulated as a plan in deliberative goal-directed tasks. Navigation attractors: A navigation attractor is a reactive robot behavior defined by a temporal pattern of sensory-motor coupling through the environment space. Under this scheme, a robot tends to follow a trajectory with attractor-like characteristics in space. These navigation attractors are characterized by being robust to noise and unpredictable events and by having inherent collision avoidance skills. In this work, it is shown that an RC network can model not only one behavior, but multiple navigation behaviors by shifting the operating point of the dynamical reservoir system into different sub-space attractors using additional external inputs representing the selected behavior. The sub-space attractors emerge from the coupling existing between the RC network, which controls the autonomous robot, and the environment. All this is achieved under an imitation learning framework which trains the RC network using examples of navigation behaviors generated by a supervisor controller or a human. Implicit spatial representations: From the stream of sensory input given by distance sensors, it is possible to construct implicit spatial representations of an environment by using Reservoir Computing networks. These networks are trained in a supervised way to predict locations at different levels of abstraction, from the continuous-valued robot’s pose in the global coordinate frame, to more abstract locations such as small delimited areas and rooms of a robot environment. The high-dimensional reservoir projects the sensory input into a dynamic system space, whose characteristic fading memory disambiguates the sensory space, solving the sensor aliasing problems where multiple different locations generate similar sen...
    BibTeX:
    			
    			
                            @phdthesis{Antonelo-2011,
                              author       = {Antonelo, Eric},
                              title        = {Reservoir computing architectures for modeling robot navigation systems},
                              school       = {Ghent University},
                              year         = {2011},
    			  url          = {https://biblio.ugent.be/publication/3177516/file/4335735},
                              url2         = {https://www.researchgate.net/profile/Eric_Antonelo/publication/292349529_Reservoir_computing_architectures_for_modeling_robot_navigation_systems/links/578a44ed08ae7a588eebc221.pdf}
                            }
    			
    			
    					
    Antonelo, E.A. & Schrauwen, B. 2009 Unsupervised learning in reservoir computing: modeling hippocampal place cells for small mobile robots International Conference on Artificial Neural Networks , 747-756.
     
    inproceedings
    Abstract: Biological systems (e.g., rats) have efficient and robust localization abilities provided by the so-called place cells, which are found in the hippocampus of rodents and primates (these cells encode locations of the animal’s environment). This work seeks to model these place cells by employing three (biologically plausible) techniques: Reservoir Computing (RC), Slow Feature Analysis (SFA), and Independent Component Analysis (ICA). The proposed architecture is composed of three layers, where the bottom layer is a dynamic reservoir of recurrent nodes with fixed weights. The upper layers (SFA and ICA) provide a self-organized formation of place cells, learned in an unsupervised way. Experiments show that a simulated mobile robot with 17 noisy short-range distance sensors is able to self-localize in its environment with the proposed architecture, forming a spatial representation which is dependent on the robot direction.
    BibTeX:
    			
    			
                            @inproceedings{AntoneloSchrauwen-2009a,
                              author       = {Antonelo, Eric A and Schrauwen, Benjamin},
                              title        = {Unsupervised learning in reservoir computing: modeling hippocampal place cells for small mobile robots},
                              booktitle    = {International Conference on Artificial Neural Networks},
                              year         = {2009},
                              pages        = {747--756},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-642-04274-4_77},
                              url2         = {https://pdfs.semanticscholar.org/a3bb/f2e5c0a5440f7bf1557a55918845dc39bd30.pdf},
                              doi          = {http://doi.org/10.1007/978-3-642-04274-4_77}
                            }
    			
    			
    					
    Antonelo, E.A. & Schrauwen, B. 2009 On different learning approaches with echo state networks for localization of small mobile robots Anais do 9. Congresso Brasileiro de Redes Neurais .
    Publ. Associacao Brasileira de Inteligencia Computacional - ABRICOM.
     
    inproceedings
    Abstract: Animals such as rats have innate and robust localization capabilities which allow them to navigate to goals in a maze. The rodent’s hippocampus, with the so called place cells, is responsible for such spatial processing. This work seeks to model these place cells using either supervised or unsupervised learning techniques. More specifically, we use a randomly generated recurrent neural network (the reservoir) as a non-linear temporal kernel to expand the input to a rich dynamic space. The reservoir states are linearly combined (using linear regression) or, in the unsupervised case, are used for extracting slowly-varying features from the input to form place cells (the architectures are organized in hierarchical layers). Experiments show that a small mobile robot with cheap and low-range distance sensors can learn to self-localize in its environment with the proposed systems.
    BibTeX:
    			
    			
                            @inproceedings{AntoneloSchrauwen-2009b,
                              author       = {Antonelo, Eric Aislan and Schrauwen, Benjamin},
                              title        = {On different learning approaches with echo state networks for localization of small mobile robots},
                              booktitle    = {Anais do 9. Congresso Brasileiro de Redes Neurais},
                              publisher    = {Associacao Brasileira de Inteligencia Computacional - {ABRICOM}},
                              year         = {2009},
    			  url          = {http://abricom.org.br/eventos/cbrn_2009/067_CBRN2009/},
                              url2         = {http://snn.elis.ugent.be/sites/snn/files/papers/CBRN2009_eric.pdf},
                              doi          = {http://doi.org/10.21528/cbrn2009-067}
                            }
    			
    			
    					
    Antonelo, E. & Schrauwen, B. 2009 Towards autonomous self-localization of small mobile robots using reservoir computing and slow feature analysis 2009 IEEE International Conference on Systems, Man and Cybernetics , 3818-3823.
    Publ. Institute of Electrical and Electronics Engineers (IEEE).
     
    inproceedings
    Abstract: Biological systems such as rats have special brain structures which process spatial information from the environment. They have efficient and robust localization abilities provided by special neurons in the hippocampus, namely place cells. This work proposes a biologically plausible architecture which is based on three recently developed techniques: Reservoir Computing (RC), Slow Feature Analysis (SFA), and Independent Component Analysis (ICA). The bottom layer of our RC-SFA architecture is a reservoir of recurrent nodes which process the information from the robot’s distance sensors. It provides a temporal kernel of rich dynamics which is used by the upper two layers (SFA and ICA) to autonomously learn place cells. Experiments with an e-puck robot with 8 infra-red sensors (which measure distances in [4-30] cm) show that the learning system based on RC-SFA provides a self-organized formation of place cells that can either distinguish between two rooms or detect the corridor connecting them.
    BibTeX:
    			
    			
                            @inproceedings{AntoneloSchrauwen-2009,
                              author       = {Antonelo, Eric and Schrauwen, Benjamin},
                              title        = {Towards autonomous self-localization of small mobile robots using reservoir computing and slow feature analysis},
                              booktitle    = {2009 {IEEE} International Conference on Systems, Man and Cybernetics},
                              publisher    = {Institute of Electrical and Electronics Engineers ({IEEE})},
                              year         = {2009},
                              pages        = {3818--3823},
    			  url          = {http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.705.3724&rep=rep1&type=pdf},
                              doi          = {http://doi.org/10.1109/icsmc.2009.5346617}
                            }
    			
    			
    					
    Antonelo, E. & Schrauwen, B. 2010 Learning slow features with reservoir networks for biologically-inspired robot localization .
     
    misc
    Abstract: This work proposes a hierarchical biologically-inspired architecture for learning sensor-based spatial representations of a robot environment in an unsupervised way. The learning is based on the fact that high-level concepts, such as the robot position, which vary on a slower timescale, can be found in a fast-varying input signal, like distance sensors. It is also assumed that the input is low-dimensional, providing limited information from the environment. The proposed architecture is composed of three layers, where the first layer, called the reservoir, is a fixed randomly generated recurrent neural network, which projects the input into a high-dimensional, dynamic space. The second layer is trained with Slow Feature Analysis (SFA), generating instantaneous slowly-varying signals from the reservoir states. Using Independent Component Analysis (ICA), the third layer implements sparse coding on the SFA output. The architecture, called RC-SFA, benefits from the short-term memory of the reservoir and the unsupervised learning mechanisms of SFA and ICA. We show that, using a limited number of noisy short-range distance sensors, mobile robots are able to learn to self-localize in simulation as well as in real environments. It is not only the current sensor reading which is needed for predicting the robot position, but also a history of the input stream. We compare the RC-SFA model with a time-delayed model using only SFA and ICA, and show that the reservoir is essential for the temporal processing of the input stream. Results also show that the SFA and ICA layers show activation patterns which resemble, respectively, the firing of grid cells and hippocampal place cells found in the brain of rodents.
    BibTeX:
    			
    			
                            @misc{AntoneloSchrauwen-2010,
                              author       = {Antonelo, Eric and Schrauwen, Benjamin},
                              title        = {Learning slow features with reservoir networks for biologically-inspired robot localization},
                              year         = {2010},
                              url2         = {https://pdfs.semanticscholar.org/cde9/a21dea103a41c23e4bc9e6a8e84b251c97d0.pdf}
                            }
    			
    			
    					
    Antonelo, E. & Schrauwen, B. 2012 Learning slow features with reservoir computing for biologically-inspired robot localization Neural Networks , 25(1), 178-190.
    Publ. Elsevier BV.
     
    article
    Abstract: This work proposes a hierarchical biologically-inspired architecture for learning sensor-based spatial representations of a robot environment in an unsupervised way. The first layer is comprised of a fixed randomly generated recurrent neural network, the reservoir, which projects the input into a high-dimensional, dynamic space. The second layer learns instantaneous slowly-varying signals from the reservoir states using Slow Feature Analysis (SFA), whereas the third layer learns a sparse coding on the SFA layer using Independent Component Analysis (ICA). While the SFA layer generates non-localized activations in space, the ICA layer presents high place selectivity, forming a localized spatial activation, characteristic of place cells found in the hippocampus area of the rodent's brain. We show that, using a limited number of noisy short-range distance sensors as input, the proposed system learns a spatial representation of the environment which can be used to predict the actual location of simulated and real robots, without the use of odometry. The results confirm that the reservoir layer is essential for learning spatial representations from low-dimensional input such as distance sensors. The main reason is that the reservoir state reflects the recent history of the input stream. Thus, this fading memory is essential for detecting locations, mainly when locations are ambiguous and characterized by similar sensor readings.
    BibTeX:
    			
    			
                            @article{AntoneloSchrauwen-2012,
                              author       = {Eric Antonelo and Benjamin Schrauwen},
                              title        = {Learning slow features with reservoir computing for biologically-inspired robot localization},
                              journal      = {Neural Networks},
                              publisher    = {Elsevier {BV}},
                              year         = {2012},
                              volume       = {25},
                              number       = {1},
                              pages        = {178--190},
    			  url          = {http://www.sciencedirect.com/science/article/pii/S089360801100222X},
                              doi          = {http://doi.org/10.1016/j.neunet.2011.08.004}
                            }
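
    The RC-SFA architecture described in the Antonelo & Schrauwen entries above (a fixed random reservoir, followed by a linear SFA layer and an ICA layer) can be illustrated with a small NumPy sketch. This is not the authors' code: the toy input, the reservoir parameters (100 units, leak rate 0.3, spectral radius 0.9), the generalized-eigenvalue formulation of linear SFA, and the use of scikit-learn's FastICA are all assumptions made here purely for illustration.

        import numpy as np
        from scipy.linalg import eigh
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)

        # Toy input: T steps of an 8-dimensional, slowly drifting "distance sensor" signal.
        T, n_in, n_res = 2000, 8, 100
        u = rng.standard_normal((T, n_in)).cumsum(axis=0) * 0.01

        # Layer 1: fixed, randomly generated leaky-integrator reservoir (echo state network).
        W_in = 0.1 * rng.standard_normal((n_res, n_in))
        W = rng.standard_normal((n_res, n_res))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius to 0.9
        x = np.zeros((T, n_res))
        for t in range(1, T):
            x[t] = 0.7 * x[t - 1] + 0.3 * np.tanh(W @ x[t - 1] + W_in @ u[t])

        # Layer 2: linear SFA on the reservoir states, solved as a generalized eigenvalue
        # problem between the covariance of the temporal derivative and the covariance of
        # the centered states; the smallest eigenvalues correspond to the slowest features.
        x_c = x - x.mean(axis=0)
        dx = np.diff(x_c, axis=0)
        C = np.cov(x_c, rowvar=False) + 1e-8 * np.eye(n_res)  # small ridge for stability
        eigvals, eigvecs = eigh(np.cov(dx, rowvar=False), C)
        sfa_out = x_c @ eigvecs[:, :16]        # keep the 16 slowest features

        # Layer 3: ICA on the SFA outputs, yielding sparse, place-cell-like units.
        ica_out = FastICA(n_components=8, random_state=0).fit_transform(sfa_out)
        print(ica_out.shape)                   # (2000, 8)

    In the publications above, the input comes from a simulated or real robot's distance sensors rather than a synthetic random walk, and considerably longer sensor streams are used for training.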
    			
    			
    					
    Antonelo, E.; Schrauwen, B. & Stroobandt, D. 2008 Reservoir computing and slow feature analysis for autonomous map learning and self-localization of small mobile robots .
     
    misc
    Abstract: Small mobile robots must be able to self-localize in their environment in order to accomplish tasks. Biological systems (e.g., rats) have efficient and robust localization abilities provided by the so-called place cells, which are found in the hippocampus of rodents and primates (these cells encode locations of the animal’s environment). This work seeks to model these place cells by employing three (biologically plausible) techniques: Reservoir Computing (RC), Slow Feature Analysis (SFA), and Independent Component Analysis (ICA). The proposed architecture is composed of three layers, where the bottom layer is a dynamic reservoir of recurrent nodes with fixed weights. The upper layers (SFA and ICA) provide a self-organized formation of place cells, learned in an unsupervised way. Experiments show that a simulated mobile robot with 17 noisy short-range distance sensors is able to self-localize in its environment with the proposed architecture, forming a spatial representation which is dependent on the robot direction.
    BibTeX:
    			
    			
                            @misc{AntoneloSchrauwenEtAl-2008,
                              author       = {Antonelo, Eric and Schrauwen, Benjamin and Stroobandt, Dirk},
                              title        = {Reservoir computing and slow feature analysis for autonomous map learning and self-localization of small mobile robots},
                              year         = {2008},
                              url2         = {http://snn.elis.ugent.be/sites/snn/files/icra2009_eric.pdf}
                            }
    			
    			
    					
    Ar, I. & Akgul, Y.S. 2013 Action recognition using random forest prediction with combined pose-based and motion-based features 2013 8th International Conference on Electrical and Electronics Engineering (ELECO) , 315-319.
    Publ. Institute of Electrical and Electronics Engineers (IEEE).
     
    inproceedings
    Abstract: In this paper, we propose a novel human action recognition system that uses random forest prediction with statistically combined pose-based and motion-based features. Given a set of training and test image sequences (videos), we first adopt recent techniques that extract low-level features: motion and pose features. Motion-based features, which represent motion patterns in the consecutive images, are formed by 3D Haar-like features. Pose-based features are obtained by the calculation of scale invariant contour-based features. Then using statistical methods, we combine these low-level features to a novel compact representation which describes the global motion and the global pose information in the whole image sequence. Finally, Random Forest classification is employed to recognize actions in the test sequences by using this novel representation. Our experimental results on KTH and Weizmann datasets have shown that the combination of pose-based and motion-based features increased the system recognition accuracy. The proposed system also achieved classification rates comparable to the state-of-the-art approaches.
    BibTeX:
    			
    			
                            @inproceedings{ArAkgul-2013,
                              author       = {Ar, Ilktan and Akgul, Yusuf Sinan},
                              title        = {Action recognition using random forest prediction with combined pose-based and motion-based features},
                              booktitle    = {2013 8\textsuperscript{th} International Conference on Electrical and Electronics Engineering ({ELECO})},
                              publisher    = {Institute of Electrical and Electronics Engineers ({IEEE})},
                              year         = {2013},
                              pages        = {315--319},
    			  url          = {http://ieeexplore.ieee.org/document/6713852/},
                              url2         = {https://www.researchgate.net/profile/Ilktan_Ar/publication/261423172_Action_recognition_using_random_forest_prediction_with_combined_pose-based_and_motion-based_features/links/00b7d5341188ca0e5d000000.pdf},
                              doi          = {http://doi.org/10.1109/eleco.2013.6713852}
                            }
    			
    			
    					
    Aung, T. & Sein, M.M. 2016 Analysing the effect of disaster International Conference on Genetic and Evolutionary Computing , 238-246.
     
    inproceedings
    Abstract: Analysing the damaged area is a critical task for recovery and reconstruction of an urban area after a disaster. The proposed method is developed to detect the damaged areas after the disaster using satellite images. Most countries are exposed to a number of natural hazards such as tsunamis, cyclones and landslides. It is necessary to estimate the destroyed areas using change detection techniques. In this approach, the pre- and post-disaster satellite images are used to detect the damaged areas. The main focus of the paper is to develop an approach that estimates the destroyed areas by combining the Morphological Building Index (MBI) and Slow Feature Analysis (SFA). The system outputs the change map for the damaged area. The results indicate that the proposed approach is encouraging for automatic detection of damaged buildings and that it is a time-saving method for monitoring buildings after a disaster has happened.
    BibTeX:
    			
    			
                            @inproceedings{AungSein-2016,
                              author       = {Aung, Thida and Sein, Myint Myint},
                              title        = {Analysing the effect of disaster},
                              booktitle    = {International Conference on Genetic and Evolutionary Computing},
                              year         = {2016},
                              pages        = {238--246},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-319-48490-7_28},
                              doi          = {http://doi.org/10.1007/978-3-319-48490-7_28}
                            }
    			
    			
    					
    Bellec, G.; Galtier, M.; Brette, R. & Yger, P. 2016 Slow feature analysis with spiking neurons and its application to audio stimuli Journal of computational neuroscience , 40(3), 317-329.
    Publ. Springer.
     
    article
    Abstract: Extracting invariant features in an unsupervised manner is crucial to perform complex computation such as object recognition, analyzing music or understanding speech. While various algorithms have been proposed to perform such a task, Slow Feature Analysis (SFA) uses time as a means of detecting those invariants, extracting the slowly time-varying components in the input signals. In this work, we address the question of how such an algorithm can be implemented by neurons, and apply it in the context of audio stimuli. We propose a projected gradient implementation of SFA that can be adapted to a Hebbian like learning rule dealing with biologically plausible neuron models. Furthermore, we show that a Spike-Timing Dependent Plasticity learning rule, shaped as a smoothed second derivative, implements SFA for spiking neurons. The theory is supported by numerical simulations, and to illustrate a simple use of SFA, we have applied it to auditory signals. We show that a single SFA neuron can learn to extract the tempo in sound recordings.
    BibTeX:
    			
    			
                            @article{BellecGaltierEtAl-2016,
                              author       = {Bellec, Guillaume and Galtier, Mathieu and Brette, Romain and Yger, Pierre},
                              title        = {Slow feature analysis with spiking neurons and its application to audio stimuli},
                              journal      = {Journal of computational neuroscience},
                              publisher    = {Springer},
                              year         = {2016},
                              volume       = {40},
                              number       = {3},
                              pages        = {317--329},
    			  url          = {http://link.springer.com/content/pdf/10.1007%2Fs10827-016-0599-3.pdf},
                              doi          = {http://doi.org/10.1007/s10827-016-0599-3}
                            }
    			
    			
    					
    Bengio, Y.; Courville, A. & Vincent, P. 2013 Representation learning: a review and new perspectives IEEE transactions on pattern analysis and machine intelligence , 35(8), 1798-1828.
    Publ. IEEE.
     
    article
    Abstract: The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and joint training of deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep architectures. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
    BibTeX:
    			
    			
                            @article{BengioCourvilleEtAl-2013,
                              author       = {Bengio, Yoshua and Courville, Aaron and Vincent, Pascal},
                              title        = {Representation learning: a review and new perspectives},
                              journal      = {IEEE transactions on pattern analysis and machine intelligence},
                              publisher    = {IEEE},
                              year         = {2013},
                              volume       = {35},
                              number       = {8},
                              pages        = {1798--1828},
                              url2         = {http://mlsurveys.s3.amazonaws.com/110.pdf}
                            }
    			
    			
    					
    Bergstra, J. 2011 Incorporating complex cells into neural networks for pattern classification Département d'informatique et de recherche opérationnelle, Faculté des arts et des sciences, Université de Montréal .
     
    phdthesis
    Abstract: Computational neuroscientists have hypothesized that the visual system from the retina to at least primary visual cortex is continuously fitting a latent variable probability model to its stream of perceptions. It is not known exactly which probability model, nor exactly how the fitting takes place, but known algorithms for fitting such models require conditional estimates of the latent variables. This gives us a strong hint as to why the visual system might be fitting such a model; in the right kind of model those conditional estimates can also serve as excellent features for analyzing the semantic content of images perceived. The work presented here uses image classification performance (accurate discrimination between common classes of objects) as a basis for comparing visual system models, and algorithms for fitting those models as probability densities to images. This dissertation (a) finds that models based on visual area V1's complex cells generalize better from labeled training examples than conventional neural networks whose hidden units are more like V1's simple cells, (b) presents novel interpretations for complex-cell-based visual system models as probability distributions and novel algorithms for fitting them to data, and (c) demonstrates that these models form better features for image classification after they are first trained as probability models. Visual system models based on complex cells achieve some of the best results to date on the CIFAR-10 image classification benchmark, and samples from their probability distributions indicate that they have learnt to capture important aspects of natural images. Two auxiliary technical innovations that made this work possible are also described: a random search algorithm for selecting hyper-parameters, and an optimizing compiler for matrix-valued mathematical expressions which can target both CPU and GPU devices.
    BibTeX:
    			
    			
                            @phdthesis{Bergstra-2011,
                              author       = {Bergstra, James},
                              title        = {Incorporating complex cells into neural networks for pattern classification},
                              school       = {D{\'{e}}partement d'informatique et de recherche op{\'{e}}rationnelle, Facult{\'{e}} des arts et des sciences, Universit{\'{e}} de Montr{\'{e}}al},
                              year         = {2011},
    			  url          = {http://www-etud.iro.umontreal.ca/~bergstrj/publications/11_These.pdf}
                            }
    			
    			
    					
    Bergstra, J.S. & Bengio, Y. 2009 Slow, decorrelated features for pretraining complex cell-like networks Advances in neural information processing systems , 99-107.
     
    inproceedings
    BibTeX:
    			
    			
                            @inproceedings{BergstraBengio-2009,
                              author       = {Bergstra, James S and Bengio, Yoshua},
                              title        = {Slow, decorrelated features for pretraining complex cell-like networks},
                              booktitle    = {Advances in neural information processing systems},
                              year         = {2009},
                              pages        = {99--107}
                            }
    			
    			
    					
    Berkes, P. 2005 Pattern recognition with slow feature analysis. Cognitive Sciences EPrint Archive (CogPrints) , 4104.
     
    misc
    BibTeX:
    			
    			
                            @misc{Berkes-2005a,
                              author       = {Pietro Berkes},
                              title        = {Pattern recognition with slow feature analysis.},
                              year         = {2005},
                              volume       = {4104},
                              howpublished = {Cognitive Sciences EPrint Archive (CogPrints)},
    			  url          = {http://cogprints.org/4104/}
                            }
    			
    			
    					
    Berkes, P. 2005 Temporal slowness as an unsupervised learning principle: self-organization of complex-cell receptive fields and application to pattern recognition. PhD thesis, Institute for Biology, Humboldt University Berlin, D-10099 Berlin, Germany .
     
    phdthesis
    BibTeX:
    			
    			
                            @phdthesis{Berkes-2005c,
                              author       = {Pietro Berkes},
                              title        = {Temporal slowness as an unsupervised learning principle: self-organization of complex-cell receptive fields and application to pattern recognition.},
                              school       = {Institute for Biology},
                              year         = {2005}
                            }
    			
    			
    					
    Berkes, P. & Wiskott, L. 2002 Applying slow feature analysis to image sequences yields a rich repertoire of complex cell properties. Proc. Intl. Conf. on Artificial Neural Networks (ICANN'02) , Lecture Notes in Computer Science , 81-86.
    Ed. Dorronsoro, J.R.
    Publ. Springer.
     
    inproceedings
    Abstract: We apply Slow Feature Analysis (SFA) to image sequences generated from natural images using a range of spatial transformations. An analysis of the resulting receptive fields shows that they have a rich spectrum of invariances and share many properties with complex and hypercomplex cells of the primary visual cortex. Furthermore, the dependence of the solutions on the statistics of the transformations is investigated.
    BibTeX:
    			
    			
                            @inproceedings{BerkesWiskott-2002,
                              author       = {Pietro Berkes and Laurenz Wiskott},
                              title        = {Applying slow feature analysis to image sequences yields a rich repertoire of complex cell properties.},
                              booktitle    = {Proc.\ Intl.\ Conf.\ on Artificial Neural Networks (ICANN'02)},
                              publisher    = {Springer},
                              year         = {2002},
                              pages        = {81--86},
    			  url          = {http://link.springer.com/chapter/10.1007%2F3-540-46084-5_14},
                              url2         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/BerkesWiskott-2002-ProcICANN-SFAComplexCells-Preprint.pdf},
                              doi          = {http://doi.org/10.1007/3-540-46084-5_14}
                            }
    			
    			
    					
    Berkes, P. & Wiskott, L. 2003 Slow feature analysis yields a rich repertoire of complex-cells properties. Proc. 29th Göttingen Neurobiology Conference, Göttingen, Germany , 602-603.
    Eds. Elsner, N. & Zimmermann, H.
    Publ. Georg Thieme Verlag, Stuttgart.
     
    inproceedings
    BibTeX:
    			
    			
                            @inproceedings{BerkesWiskott-2003b,
                              author       = {Pietro Berkes and Laurenz Wiskott},
                              title        = {Slow feature analysis yields a rich repertoire of complex-cells properties.},
                              booktitle    = {Proc.\ 29\textsuperscript{th} G{\"o}ttingen Neurobiology Conference, G\"ottingen, Germany},
                              publisher    = {Georg Thieme Verlag},
                              year         = {2003},
                              pages        = {602--603}
                            }
    			
    			
    					
    Berkes, P. & Wiskott, L. 2003 Slow feature analysis yields a rich repertoire of complex-cell properties. Cognitive Sciences EPrint Archive (CogPrints) , 2804.
     
    misc
    BibTeX:
    			
    			
                            @misc{BerkesWiskott-2003a,
                              author       = {Pietro Berkes and Laurenz Wiskott},
                              title        = {Slow feature analysis yields a rich repertoire of complex-cell properties.},
                              year         = {2003},
                              volume       = {2804},
                              howpublished = {Cognitive Sciences EPrint Archive (CogPrints)},
    			  url          = {http://cogprints.org/2804/}
                            }
    			
    			
    					
    Berkes, P. & Wiskott, L. 2004 Slow feature analysis yields a rich repertoire of complex-cells properties. Proc. Early Cognitive Vision Workshop, May 28 - Jun 1, Isle Of Skye, Scotland .
     
    inproceedings
    BibTeX:
    			
    			
                            @inproceedings{BerkesWiskott-2004,
                              author       = {Pietro Berkes and Laurenz Wiskott},
                              title        = {Slow feature analysis yields a rich repertoire of complex-cells properties.},
                              booktitle    = {Proc.\ Early Cognitive Vision Workshop, May 28 -- Jun 1, Isle Of Skye, Scotland},
                              year         = {2004}
                            }
    			
    			
    					
    Berkes, P. & Wiskott, L. 2005 Slow feature analysis yields a rich repertoire of complex cell properties. Journal of Vision , 5(6), 579-602.
     
    article
    Abstract: In this study, we investigate temporal slowness as a learning principle for receptive fields using slow feature analysis, a new algorithm to determine functions that extract slowly varying signals from the input data. We find a good qualitative and quantitative match between the set of learned functions trained on image sequences and the population of complex cells in the primary visual cortex (V1). The functions show many properties found also experimentally in complex cells, such as direction selectivity, non-orthogonal inhibition, end-inhibition, and side-inhibition. Our results demonstrate that a single unsupervised learning principle can account for such a rich repertoire of receptive field properties.
    BibTeX:
    			
    			
                            @article{BerkesWiskott-2005c,
                              author       = {Pietro Berkes and Laurenz Wiskott},
                              title        = {Slow feature analysis yields a rich repertoire of complex cell properties.},
                              journal      = {Journal of Vision},
                              year         = {2005},
                              volume       = {5},
                              number       = {6},
                              pages        = {579--602},
    			  url          = {http://journalofvision.org/5/6/9/},
                              url2         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/BerkesWiskott-2005c-JoV-SFAComplexCells.pdf},
                              doi          = {http://doi.org/10.1167/5.6.9}
                            }
    			
    			
    					
    Bertram, D. 2012 Untersuchungen zur Varianzreduktion beschleunigungsbasierter 3D-Gestendaten Master thesis, Faculty of Computer Science and Engineering Science, Cologne University of Applied Science .
     
    mastersthesis
    Abstract: Several previous works have dealt with the classification of 3D gesture data, and the methods used share two commonalities: none of them achieves user-independent results as good as in the user-dependent case, and none achieves 100% recognition. Using the example of a gesture recognition system based on Slow Feature Analysis (SFA) implemented on an iPhone, this thesis investigated which differences between user-dependent and user-independent recognition can be determined and how their variance can be reduced. Two person-disjoint data sets were used for the analysis. Various influencing factors were identified and converted into invariances by means of operators. These operators make it possible to increase the recognition probability in both the user-dependent and the user-independent case. Three invariances were created: rotation invariance through a rotation of the data set using quaternions, gesture segmentation through the suppression of rest phases, and a decay phase to compensate for aborted movements. SFA proved to be a robust and reliable method which, owing to its analytical properties, determines correctly classifiable feature vectors even for rotated gestures. The different methods increased the recognition probability in the user-independent case, and converting influencing factors into invariances considerably improved usability for an 'untrained' user. Furthermore, it was found that considering the data sets in the image and frequency domains leads to different misclassifications, and that they thus carry different information for the SFA.
    BibTeX:
    			
    			
                            @mastersthesis{Bertram-2012,
                              author       = {Bertram, Daniel},
                              title        = {Untersuchungen zur {V}arianzreduktion beschleunigungsbasierter {3D-G}estendaten},
                              school       = {Master thesis, Faculty of Computer Science and Engineering Science, Cologne University of Applied Science},
                              year         = {2012},
                              url2         = {http://www.gm.fh-koeln.de/~konen/research/PaperPDF/MT-Bertram_final-2012.pdf}
                            }
    			
    			
    					
    Bethge, M.; Gerwinn, S. & Macke, J.H. 2007 Unsupervised learning of a steerable basis for invariant image representations Electronic Imaging 2007 , 64920C-64920C.
     
    inproceedings
    Abstract: There are two aspects to unsupervised learning of invariant representations of images: First, we can reduce the dimensionality of the representation by finding an optimal trade-off between temporal stability and informativeness. We show that the answer to this optimization problem is generally not unique so that there is still considerable freedom in choosing a suitable basis. Which of the many optimal representations should be selected? Here, we focus on this second aspect, and seek to find representations that are invariant under geometrical transformations occurring in sequences of natural images. We utilize ideas of ‘steerability’ and Lie groups, which have been developed in the context of filter design. In particular, we show how an anti-symmetric version of canonical correlation analysis can be used to learn a full-rank image basis which is steerable with respect to rotations. We provide a geometric interpretation of this algorithm by showing that it finds the two-dimensional eigensubspaces of the average bivector. For data which exhibits a variety of transformations, we develop a bivector clustering algorithm, which we use to learn a basis of generalized quadrature pairs (i.e. ‘complex cells’) from sequences of natural images.
    BibTeX:
    			
    			
                            @inproceedings{BethgeGerwinnEtAl-2007,
                              author       = {Bethge, Matthias and Gerwinn, Sebastian and Macke, Jakob H},
                              title        = {Unsupervised learning of a steerable basis for invariant image representations},
                              booktitle    = {Electronic Imaging 2007},
                              year         = {2007},
                              pages        = {64920C--64920C},
    			  url          = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=1298483},
                              url2         = {https://www.researchgate.net/profile/Matthias_Bethge/publication/41781729_Unsupervised_learning_of_a_steerable_basis_for_invariant_image_representations/links/0fcfd509bdb291078a000000.pdf},
                              doi          = {http://doi.org/10.1117/12.711119}
                            }
    			
    			
    					
    Blaschke, T. 2005 Independent component analysis and slow feature analysis: relations and combination. PhD thesis, Institute for Physics, Humboldt University Berlin, D-10099 Berlin, Germany .
     
    phdthesis
    Abstract: Within this thesis, we focus on the relation between independent component analysis (ICA) and slow feature analysis (SFA). To allow a comparison between both methods we introduce CuBICA2, an ICA algorithm based on second-order statistics only, i.e. cross-correlations. In contrast to algorithms based on higher-order statistics not only instantaneous cross-correlations but also time-delayed cross correlations are considered for minimization. CuBICA2 requires signal components with auto-correlation like in SFA, and has the ability to separate source signal components that have a Gaussian distribution. Furthermore, we derive an alternative formulation of the SFA objective function and compare it with that of CuBICA2. In the case of a linear mixture the two methods are equivalent if a single time delay is taken into account. The comparison can not be extended to the case of several time delays. For ICA a straightforward extension can be derived, but a similar extension to SFA yields an objective function that can not be interpreted in the sense of SFA. However, a useful extension in the sense of SFA to more than one time delay can be derived. This extended SFA reveals the close connection between the slowness objective of SFA and temporal predictability. Furthermore, we combine CuBICA2 and SFA. The result can be interpreted from two perspectives. From the ICA point of view the combination leads to an algorithm that solves the nonlinear blind source separation problem. From the SFA point of view the combination of ICA and SFA is an extension to SFA in terms of statistical independence. Standard SFA extracts slowly varying signal components that are uncorrelated meaning they are statistically independent up to second-order. The integration of ICA leads to signal components that are more or less statistically independent.
    BibTeX:
    			
    			
                            @phdthesis{Blaschke-2005,
                              author       = {Tobias Blaschke},
                              title        = {Independent component analysis and slow feature analysis: relations and combination.},
                              school       = {Institute for Physics},
                              year         = {2005},
    			  url          = {http://edoc.hu-berlin.de/docviews/abstract.php?lang=ger&id=25458},
                              url2         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/Blaschke-2005-Dissertation.pdf}
                            }
    			
    			
    					
    Blaschke, T.; Berkes, P. & Wiskott, L. 2006 What is the relationship between slow feature analysis and independent component analysis? Neural Computation , 18(10), 2495-2508.
     
    article
    Abstract: We present an analytical comparison between linear slow feature analysis and second-order independent component analysis, and show that in the case of one time delay the two approaches are equivalent. We also consider the case of several time delays and discuss two possible extensions of slow feature analysis.
    BibTeX:
    			
    			
                            @article{BlaschkeBerkesEtAl-2006,
                              author       = {T. Blaschke and P. Berkes and L. Wiskott},
                              title        = {What is the relationship between slow feature analysis and independent component analysis?},
                              journal      = {Neural Computation},
                              year         = {2006},
                              volume       = {18},
                              number       = {10},
                              pages        = {2495--2508},
    			  url          = {http://www.mitpressjournals.org/doi/10.1162/neco.2006.18.10.2495},
                              url2         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/BlaschkeBerkesEtAl-2006-NeurComp-SFAvsICA.pdf},
                              doi          = {http://doi.org/10.1162/neco.2006.18.10.2495}
                            }
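
    For readers who want the equation behind the phrase "linear slow feature analysis" in the abstract above, the standard formulation (following Wiskott & Sejnowski, 2002; the shorthand notation here is mine, not taken from the paper) reads:

        % Linear SFA: find weights w_j so that y_j(t) = w_j^T x(t) varies as slowly
        % as possible, subject to zero mean, unit variance and decorrelation.
        \begin{align*}
          \min_{w_j}\quad & \Delta(y_j) \;=\; \langle \dot{y}_j^2 \rangle_t \\
          \text{s.t.}\quad & \langle y_j \rangle_t = 0, \qquad
                             \langle y_j^2 \rangle_t = 1, \qquad
                             \langle y_j\, y_{j'} \rangle_t = 0 \;\; \text{for } j' < j.
        \end{align*}

    With the discrete-time derivative y_j(t+1) - y_j(t) and unit variance, minimizing the squared derivative amounts to maximizing the one-step autocorrelation <y_j(t) y_j(t+1)>, which is the essential reason why, as the abstract states, linear SFA and second-order ICA coincide when a single time delay is used.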
    			
    			
    					
    Blaschke, T. & Wiskott, L. 2005 Nonlinear blind source separation by integrating independent component analysis and slow feature analysis. Proc. Advances in Neural Information Processing Systems 17 (NIPS'04) , 177-184.
    Eds. Saul, L.K.; Weiss, Y. & Bottou, L.
    Publ. The MIT Press.
     
    inproceedings
    BibTeX:
    			
    			
                            @inproceedings{BlaschkeWiskott-2005,
                              author       = {T. Blaschke and L. Wiskott},
                              title        = {Nonlinear blind source separation by integrating independent component analysis and slow feature analysis.},
                              booktitle    = {Proc.\ Advances in Neural Information Processing Systems 17 (NIPS'04)},
                              publisher    = {The MIT Press},
                              year         = {2005},
                              pages        = {177--184}
                            }
    			
    			
    					
    Blaschke, T. & Wiskott, L. 2004 Independent slow feature analysis and nonlinear blind source separation. Proc. of the 5th Int. Conf. on Independent Component Analysis and Blind Signal Separation (ICA'04), Granada, Spain , Lecture Notes in Computer Science .
    Publ. Springer.
     
    inproceedings
    Abstract: We present independent slow feature analysis as a new method for nonlinear blind source separation. It circumvents the indeterminacy of nonlinear independent component analysis by combining the objectives of statistical independence and temporal slowness. The principle of temporal slowness is adopted from slow feature analysis, an unsupervised method to extract slowly varying features from a given observed vectorial signal. The performance of the algorithm is demonstrated on nonlinearly mixed speech data.
    BibTeX:
    			
    			
                            @inproceedings{BlaschkeWiskott-2004b,
                              author       = {T. Blaschke and L. Wiskott},
                              title        = {Independent slow feature analysis and nonlinear blind source separation.},
                              booktitle    = {Proc. of the 5\textsuperscript{th} Int. Conf. on Independent Component Analysis and Blind Signal Separation (ICA'04), Granada, Spain},
                              publisher    = {Springer},
                              year         = {2004},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-540-30110-3_94},
                              url2         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/BlaschkeWiskott-2004b-ProcICA-ISFA-Preprint.pdf},
                              doi          = {http://doi.org/10.1007/978-3-540-30110-3_94}
                            }
    			
    			
    					
    Blaschke, T.; Zito, T. & Wiskott, L. 2007 Independent slow feature analysis and nonlinear blind source separation. Neural Computation , 19(4), 994-1021.
     
    article
    Abstract: In the linear case statistical independence is a sufficient criterion for performing blind source separation. In the nonlinear case, however, it leaves an ambiguity in the solutions that has to be resolved by additional criteria. Here we argue that temporal slowness complements statistical independence well and that a combination of the two leads to unique solutions of the nonlinear blind source separation problem. The algorithm we present is a combination of second-order Independent Component Analysis and Slow Feature Analysis and is referred to as Independent Slow Feature Analysis. Its performance is demonstrated on nonlinearly mixed music data. We conclude that slowness is indeed a useful complement to statistical independence but that time-delayed second-order moments are only a weak measure of statistical independence.
    BibTeX:
    			
    			
                            @article{BlaschkeZitoEtAl-2007,
                              author       = {Tobias Blaschke and Tiziano Zito and Laurenz Wiskott},
                              title        = {Independent slow feature analysis and nonlinear blind source separation.},
                              journal      = {Neural Computation},
                              year         = {2007},
                              volume       = {19},
                              number       = {4},
                              pages        = {994--1021},
    			  url          = {http://neco.mitpress.org/cgi/content/abstract/19/4/994},
                              url2         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/BlaschkeZitoEtAl-2007-NeurComp-ISFA.pdf},
                              doi          = {http://doi.org/10.1162/neco.2007.19.4.994}
                            }
    			
    			
    					
    Böhmer, W. 2012 Robot navigation using reinforcement learning and slow feature analysis Technische Universität Berlin, Fakultät für Elektrotechnik und Informatik, e-print arXiv:1205.0986 .
     
    mastersthesis
    Abstract: The application of reinforcement learning algorithms to real life problems always bears the challenge of filtering the environmental state out of raw sensor readings. While most approaches use heuristics, biology suggests that there must exist an unsupervised method to construct such filters automatically. Besides the extraction of environmental states, the filters have to represent them in a fashion that supports modern reinforcement learning algorithms. Many popular algorithms use a linear architecture, so one should aim at filters that have good approximation properties in combination with linear functions. This thesis proposes the unsupervised method slow feature analysis (SFA) for this task. Presented with a random sequence of sensor readings, SFA learns a set of filters. With growing model complexity and training examples, the filters converge towards trigonometric polynomial functions. These are known to possess excellent approximation capabilities and should therefore support the reinforcement algorithms well. We evaluate this claim on a robot. The task is to learn a navigational control in a simple environment using the least-squares policy iteration (LSPI) algorithm. The only accessible sensor is a head-mounted video camera, but without meaningful filtering, video images are not suited as LSPI input. We will show that filters learned by SFA, based on a random walk video of the robot, allow the learned control to navigate successfully in ca. 80% of the test trials.
    BibTeX:
    			
    			
                            @mastersthesis{Boehmer-2012,
                              author       = {Wendelin B{\"{o}}hmer},
                              title        = {Robot navigation using reinforcement learning and slow feature analysis},
                              school       = {Technische Universit{\"{a}}t Berlin, Fakult{\"{a}}t f{\"{u}}r Elektrotechnik und Informatik},
                              year         = {2012},
                              howpublished = {e-print arXiv:1205.0986},
                              url2         = {https://pdfs.semanticscholar.org/75e5/278d2fc2259dee117edf2aef48189ee1ed68.pdf}
                            }
    			
    			
    					
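    The thesis' central claim is that, with growing model complexity, SFA features approach trigonometric polynomials, which combine well with linear value-function approximation. As a hypothetical, self-contained illustration of the second half of that claim (not code from the thesis), a smooth value-like function of a latent variable is fit almost exactly by linear regression on a handful of cosine basis functions:

        import numpy as np

        rng = np.random.default_rng(0)
        s = rng.uniform(0.0, 1.0, size=500)          # latent positions in [0, 1]
        v = np.exp(-5.0 * (s - 0.3) ** 2)            # a smooth "value function" of s

        K = 8                                        # number of cosine basis functions
        phi = np.cos(np.pi * np.outer(s, np.arange(K)))   # (500, K) feature matrix

        w, *_ = np.linalg.lstsq(phi, v, rcond=None)  # linear fit on the features
        print("max abs fit error:", np.abs(phi @ w - v).max())

    The same linear machinery is what LSPI uses internally, which is why a feature space close to a trigonometric basis is attractive for it.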
    Böhmer, W. 2017 Representation and generalization in autonomous reinforcement learning Technische Universität Berlin .
     
    phdthesis
    BibTeX:
    			
    			
                            @phdthesis{Boehmer-2017,
                              author       = {Wendelin B\"ohmer},
                              title        = {Representation and generalization in autonomous reinforcement learning},
                              school       = {Technische Universit\"at Berlin},
                              year         = {2017},
    			  url          = {https://depositonce.tu-berlin.de/handle/11303/6150},
                              doi          = {http://doi.org/10.14279/depositonce-5715}
                            }
    			
    			
    					
    Böhmer, W.; Grünewälder, S.; Nickisch, H. & Obermayer, K. 2011 Regularized sparse kernel slow feature analysis Joint European Conference on Machine Learning and Knowledge Discovery in Databases , 235-248.
    Publ. Springer Nature.
     
    inproceedings
    Abstract: This paper develops a kernelized slow feature analysis (SFA) algorithm. SFA is an unsupervised learning method to extract features which encode latent variables from time series. Generative relationships are usually complex, and current algorithms are either not powerful enough or tend to over-fit. We make use of the kernel trick in combination with sparsification to provide a powerful function class for large data sets. Sparsity is achieved by a novel matching pursuit approach that can be applied to other tasks as well. For small but complex data sets, however, the kernel SFA approach leads to over-fitting and numerical instabilities. To enforce a stable solution, we introduce regularization to the SFA objective. Versatility and performance of our method are demonstrated on audio and video data sets.
    BibTeX:
    			
    			
                            @inproceedings{BoehmerGruenewaelderEtAl-2011,
                              author       = {B{\"o}hmer, Wendelin and Gr{\"u}new{\"a}lder, Steffen and Nickisch, Hannes and Obermayer, Klaus},
                              title        = {Regularized sparse kernel slow feature analysis},
                              booktitle    = {Joint European Conference on Machine Learning and Knowledge Discovery in Databases},
                              publisher    = {Springer Nature},
                              year         = {2011},
                              pages        = {235--248},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-642-23780-5_25},
                              url2         = {http://hannes.nickisch.org/papers/conferences/boehmer11regSKSFA.pdf},
                              doi          = {http://doi.org/10.1007/978-3-642-23780-5_25}
                            }
    			
    			
    					
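    A toy version of kernelized SFA can be written with the empirical kernel map, using a ridge term as a stand-in for the regularization discussed above. The sketch below is an assumption-laden illustration, not the authors' algorithm, which additionally sparsifies the support set with a matching-pursuit scheme:

        import numpy as np

        def rbf_kernel(a, b, gamma=1.0):
            # Squared-Euclidean RBF kernel matrix between row sets a and b.
            d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def kernel_sfa(x, n_components=2, gamma=1.0, ridge=1e-3):
            """Toy kernel SFA via the empirical kernel map (no sparsification).

            Every sample acts as a support vector; the ridge term stabilizes
            the whitening step.  Returns the slowest nonlinear output signals.
            """
            K = rbf_kernel(x, x, gamma)
            K = K - K.mean(axis=0)                 # zero-mean features
            cov = K.T @ K / len(K) + ridge * np.eye(len(K))
            eigval, eigvec = np.linalg.eigh(cov)
            S = eigvec / np.sqrt(eigval)           # regularized (approximate) whitening
            z = K @ S
            dz = np.diff(z, axis=0)
            dval, dvec = np.linalg.eigh(np.cov(dz, rowvar=False))
            return z @ dvec[:, :n_components]      # slowest nonlinear features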
    Böhmer, W.; Grünewälder, S.; Nickisch, H. & Obermayer, K. 2012 Generating feature spaces for linear algorithms with regularized sparse kernel slow feature analysis Machine Learning , 89(1-2), 67-86.
    Publ. Springer US.
     
    article
    Abstract: Without non-linear basis functions many problems cannot be solved by linear algorithms. This article proposes a method to automatically construct such basis functions with slow feature analysis (SFA). Non-linear optimization of this unsupervised learning method generates an orthogonal basis on the unknown latent space for a given time series. In contrast to methods like PCA, SFA is thus well suited for techniques that make direct use of the latent space. Real-world time series can be complex, and current SFA algorithms are either not powerful enough or tend to over-fit. We make use of the kernel trick in combination with sparsification to develop a kernelized SFA algorithm which provides a powerful function class for large data sets. Sparsity is achieved by a novel matching pursuit approach that can be applied to other tasks as well. For small data sets, however, the kernel SFA approach leads to over-fitting and numerical instabilities. To enforce a stable solution, we introduce regularization to the SFA objective. We hypothesize that our algorithm generates a feature space that resembles a Fourier basis in the unknown space of latent variables underlying a given real-world time series. We evaluate this hypothesis on a vowel classification task in comparison to sparse kernel PCA. Our results show excellent classification accuracy and demonstrate the superiority of kernel SFA over kernel PCA in encoding latent variables.
    BibTeX:
    			
    			
                            @article{BoehmerGruenewaelderEtAl-2012,
                              author       = {B{\"o}hmer, Wendelin and Gr{\"u}new{\"a}lder, Steffen and Nickisch, Hannes and Obermayer, Klaus},
                              title        = {Generating feature spaces for linear algorithms with regularized sparse kernel slow feature analysis},
                              journal      = {Machine Learning},
                              publisher    = {Springer US},
                              year         = {2012},
                              volume       = {89},
                              number       = {1-2},
                              pages        = {67--86},
    			  url          = {http://link.springer.com/article/10.1007%2Fs10994-012-5300-0},
                              doi          = {http://doi.org/10.1007/s10994-012-5300-0}
                            }
    			
    			
    					
    Böhmer, W.; Grünewälder, S. & Obermayer, K. 2009 State extraction by dimensionality reduction .
     
    misc
    Abstract: Real-world data (e.g. video images) are usually high dimensional and present information that is highly mixed and noise-afflicted. Most applications therefore require a dimensionality reduction or information filtering beforehand. Such filters have to present the desired information in an applicable fashion and have to make sure that little information is lost. Several methods exist that learn these filters from the statistics of presented training data (e.g. PCA, CCA, PLS or SFA [1]). We used the unsupervised method of slow feature analysis (SFA) to extract the position of a robot from the images of its head-mounted camera. SFA assumes a hidden low dimensional state x that changes slowly over time (in our case the position of a camera). The observed data (images recorded by this camera) is generated by an unknown bijective render function f(x). Given unlimited training data and an unrestricted function class, one can derive the output of the optimal filter for some representative cases analytically. For these cases it has been proven that the optimal outputs are trigonometric basis functions in the domain of x [2]. This result is independent of the render function f(x), which is effectively inverted by the learned filter. To capture the nonlinear correlations in video images, we constructed a kernelized SFA algorithm analogous to kernel PCA [3], which outperformed its linear counterpart considerably. In order to deal with the huge number of training samples needed to capture the statistics of real images, we employed a sparse kernel matrix approximation method first introduced by Csató and Opper [4]. The resulting feature space (an approximation of the space of trigonometric polynomials) is particularly well suited to approximate continuous functions with a linear model, e.g. value functions in reinforcement learning. We demonstrated this by learning the robot's control in a simple navigation task. However, trigonometric polynomials are global functions and therefore the filters need support on their complete domain. In light of the huge computational demand one would rather like to extract the robot's position in a sparse feature space that consists of localized basis functions, e.g. Gaussian bells. These are only active in a small region of their domain and therefore can be individually expressed by a sparse kernel expansion, i.e. with a small number of support vectors. In future work, we plan to construct optimization problems based on sparseness techniques that produce such basis functions and use convex optimization algorithms to solve them [5].
    BibTeX:
    			
    			
                            @misc{BoehmerGruenewaelderEtAl-2009,
                              author       = {B{\"o}hmer, Wendelin and Gr{\"u}new{\"a}lder, Steffen and Obermayer, Klaus},
                              title        = {State extraction by dimensionality reduction},
                              year         = {2009},
                              url2         = {https://www.researchgate.net/profile/Wendelin_Boehmer/publication/267799911_State_Extraction_by_Dimensionality_Reduction/links/54cb68360cf2240c27e7d56b.pdf}
                            }
    			
    			
    					
    Böhmer, W.; Grünewälder, S.; Shen, Y.; Musial, M. & Obermayer, K. 2013 Construction of approximation spaces for reinforcement learning. Journal of Machine Learning Research , 14(1), 2067-2118.
     
    article
    Abstract: Linear reinforcement learning (RL) algorithms like least-squares temporal difference learning (LSTD) require basis functions that span approximation spaces of potential value functions. This article investigates methods to construct these bases from samples. We hypothesize that an ideal approximation space should encode diffusion distances and that slow feature analysis (SFA) constructs such spaces. To validate our hypothesis we provide theoretical statements about the LSTD value approximation error and induced metric of approximation spaces constructed by SFA and the state-of-the-art methods Krylov bases and proto-value functions (PVF). In particular, we prove that SFA minimizes the average (over all tasks in the same environment) bound on the above approximation error. Compared to other methods, SFA is very sensitive to sampling and can sometimes fail to encode the whole state space. We derive a novel importance sampling modification to compensate for this effect. Finally, the LSTD and least squares policy iteration (LSPI) performance of approximation spaces constructed by Krylov bases, PVF, SFA and PCA is compared in benchmark tasks and a visual robot navigation experiment (both in a realistic simulation and with a robot). The results support our hypothesis and suggest that (i) SFA provides subspace-invariant features for MDPs with self-adjoint transition operators, which allows strong guarantees on the approximation error, (ii) the modified SFA algorithm is best suited for LSPI in both discrete and continuous state spaces and (iii) approximation spaces encoding diffusion distances facilitate LSPI performance.
    BibTeX:
    			
    			
                            @article{BoehmerGruenewaelderEtAl-2013,
                              author       = {B{\"o}hmer, Wendelin and Gr{\"u}new{\"a}lder, Steffen and Shen, Yun and Musial, Marek and Obermayer, Klaus},
                              title        = {Construction of approximation spaces for reinforcement learning.},
                              journal      = {Journal of Machine Learning Research},
                              year         = {2013},
                              volume       = {14},
                              number       = {1},
                              pages        = {2067--2118},
                              url2         = {https://pdfs.semanticscholar.org/fc2f/fb9daf9f0dd07e755d7ad5633907efb0a4b7.pdf}
                            }
    			
    			
    					
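    For readers less familiar with the linear RL side of this work: LSTD on a fixed feature basis reduces to solving a single linear system. The sketch below is a generic LSTD implementation (with a small ridge term for numerical safety), not code from the paper; the feature matrices would contain, for example, SFA outputs evaluated at the sampled states:

        import numpy as np

        def lstd(phi, phi_next, rewards, gamma=0.95, ridge=1e-6):
            """Least-squares temporal difference learning on a fixed basis.

            phi, phi_next : (T, k) feature matrices for states s_t and s_{t+1}
            rewards       : (T,) observed rewards
            Returns weights w such that phi @ w approximates the value function.
            """
            A = phi.T @ (phi - gamma * phi_next) + ridge * np.eye(phi.shape[1])
            b = phi.T @ rewards
            return np.linalg.solve(A, b)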
    Böhmer, W.; Guo, R. & Obermayer, K. 2016 Non-deterministic policy improvement stabilizes approximated reinforcement learning e-print arXiv:1612.07548 .
     
    misc
    Abstract: This paper investigates a type of instability that is linked to the greedy policy improvement in approximated reinforcement learning. We show empirically that non-deterministic policy improvement can stabilize methods like LSPI by controlling the improvements’ stochasticity. Additionally we show that a suitable representation of the value function also stabilizes the solution to some degree. The presented approach is simple and should also be easily transferable to more sophisticated algorithms like deep reinforcement learning.
    BibTeX:
    			
    			
                            @misc{BoehmerGuoEtAl-2016,
                              author       = {B{\"{o}}hmer, Wendelin and Guo, Rong and Obermayer, Klaus},
                              title        = {Non-deterministic policy improvement stabilizes approximated reinforcement learning},
                              year         = {2016},
                              howpublished = {e-print arXiv:1612.07548},
    			  url          = {https://arxiv.org/pdf/1612.07548.pdf}
                            }
    			
    			
    					
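    One common way to realize a non-deterministic improvement step is a Boltzmann (softmax) policy over the approximated action values, where a temperature parameter controls the stochasticity and the greedy policy is recovered as the temperature goes to zero. The sketch below illustrates this generic idea; it is not necessarily the exact parameterization used in the paper:

        import numpy as np

        def softmax_policy(q_values, temperature=1.0):
            """Non-deterministic policy improvement step (generic sketch).

            Instead of the greedy argmax over action values, actions are drawn
            from a Boltzmann distribution; temperature -> 0 recovers greediness.
            """
            z = q_values / temperature
            z = z - z.max()                  # numerical stability
            p = np.exp(z)
            return p / p.sum()

        # Example: action probabilities for one state with three actions.
        print(softmax_policy(np.array([1.0, 1.2, 0.8]), temperature=0.5))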
    Böhmer, W.; Springenberg, J.T.; Boedecker, J.; Riedmiller, M. & Obermayer, K. 2015 Autonomous learning of state representations for control: an emerging field aims to autonomously learn state representations for reinforcement learning agents from their real-world sensor observations KI-Künstliche Intelligenz , 29(4), 353-362.
    Publ. Springer.
     
    article
    Abstract: This article reviews an emerging field that aims for autonomous reinforcement learning (RL) directly on sensor-observations. Straightforward end-to-end RL has recently shown remarkable success, but relies on large amounts of samples. As this is not feasible in robotics, we review two approaches to learn intermediate state representations from previous experiences: deep auto-encoders and slow-feature analysis. We analyze theoretical properties of the representations and point to potential improvements.
    BibTeX:
    			
    			
                            @article{BoehmerSpringenbergEtAl-2015,
                              author       = {B{\"{o}}hmer, Wendelin and Springenberg, Jost Tobias and Boedecker, Joschka and Riedmiller, Martin and Obermayer, Klaus},
                              title        = {Autonomous learning of state representations for control: an emerging field aims to autonomously learn state representations for reinforcement learning agents from their real-world sensor observations},
                              journal      = {KI-K{\"{u}}nstliche Intelligenz},
                              publisher    = {Springer},
                              year         = {2015},
                              volume       = {29},
                              number       = {4},
                              pages        = {353--362},
    			  url          = {http://link.springer.com/content/pdf/10.1007%2Fs13218-015-0356-1.pdf},
                              url2         = {https://pdfs.semanticscholar.org/68e8/7a10b40f6ba5fd2ac8c2262eab0373bd2ed4.pdf},
                              doi          = {http://doi.org/10.1007/s13218-015-0356-1}
                            }
    			
    			
    					
    Bray, A. & Martinez, D. 2002 Kernel-based extraction of slow features: complex cells learn disparity and translation invariance from natural images NIPS. Vol. 15. .
     
    inproceedings
    Abstract: In Slow Feature Analysis (SFA [1]), it has been demonstrated that high-order invariant properties can be extracted by projecting inputs into a nonlinear space and computing the slowest changing features in this space; this has been proposed as a simple general model for learning nonlinear invariances in the visual system. However, this method is highly constrained by the curse of dimensionality which limits it to simple theoretical simulations. This paper demonstrates that by using a different but closely-related objective function for extracting slowly varying features ([2, 3]), and then exploiting the kernel trick, this curse can be avoided. Using this new method we show that both the complex cell properties of translation invariance and disparity coding can be learnt simultaneously from natural images when complex cells are driven by simple cells also learnt from the image.
    BibTeX:
    			
    			
                            @inproceedings{BrayMartinez-2002,
                              author       = {Bray, Alistair and Martinez, Dominique},
                              title        = {Kernel-based extraction of slow features: complex cells learn disparity and translation invariance from natural images},
                              booktitle    = {NIPS. Vol. 15.},
                              year         = {2002},
    			  url          = {https://papers.nips.cc/paper/2209-kernel-based-extraction-of-slow-features-complex-cells-learn-disparity-and-translation-invariance-from-natural-images.pdf},
                              url2         = {https://pdfs.semanticscholar.org/bb25/dcb75cc5c0261d0b1e07cf0231ad1f1524eb.pdf}
                            }
    			
    			
    					
    Bray, A. & Martinez, D. 2003 Complex cells learn disparity and translation invariance from natural images Advances in Neural Information Processing Systems 15: Proceedings of the 2002 Conference , 15, 269.
     
    inproceedings
    Abstract: In Slow Feature Analysis (SFA [1]), it has been demonstrated that high-order invariant properties can be extracted by projecting inputs into a nonlinear space and computing the slowest changing features in this space; this has been proposed as a simple general ...
    BibTeX:
    			
    			
                            @inproceedings{BrayMartinez-2003,
                              author       = {Bray, Alistair and Martinez, Dominique},
                              title        = {Complex cells learn disparity and translation invariance from natural images},
                              booktitle    = {Advances in Neural Information Processing Systems 15: Proceedings of the 2002 Conference},
                              year         = {2003},
                              volume       = {15},
                              pages        = {269}
                            }
    			
    			
    					
    Cai, Y. & Wang, G. 2010 AR parameters-based nonlinear blind source extraction Applied Mechanics and Materials , 20, 1129-1135.
     
    inproceedings
    Abstract: In nonlinear blind source separation (BSS) independence is not sufficient to recover the original source signal and additional criteria are needed to sufficiently constrain the optimization problem. Here we introduce autoregressive (AR) parameters as criteria and, combined with an expansion space, develop a new method, which leads to a unique solution of the nonlinear BSS problem. The proposed method is based on two key assumptions. One is that a source signal’s AR parameters can be roughly estimated before operation, and the other is that the expansion space, such as a kernel feature space, should be chosen rich enough to approximate the nonlinearity. This method can extract the desired source signal as a unique solution with the help of this signal’s AR parameters, i.e. it extracts one signal at a time. Thus it is also referred to as nonlinear blind source extraction (BSE). Its performance is demonstrated on nonlinearly mixed speech data.
    BibTeX:
    			
    			
                            @inproceedings{CaiWang-2010,
                              author       = {Cai, Ying and Wang, Gang},
                              title        = {{AR} parameters-based nonlinear blind source extraction},
                              booktitle    = {Applied Mechanics and Materials},
                              year         = {2010},
                              volume       = {20},
                              pages        = {1129--1135},
    			  url          = {https://www.scientific.net/AMM.20-23.1129.pdf},
                              url2         = {http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.698.850&rep=rep1&type=pdf},
                              doi          = {http://doi.org/10.4028/www.scientific.net/amm.20-23.1129}
                            }
    			
    			
    					
    Cao, K.; Bednarz, B.; Smith, L.S.; Foo, T.K.F. & Patwardhan, K.A. 2015 Respiration induced fiducial motion tracking in ultrasound using an extended SFA approach SPIE Medical Imaging , 94190S-94190S.
     
    inproceedings
    Abstract: Radiation therapy (RT) plays an essential role in the management of cancers. The precision of the treatment delivery process in chest and abdominal cancers is often impeded by respiration induced tumor positional variations, which are accounted for by using larger therapeutic margins around the tumor volume leading to sub-optimal treatment deliveries and risk to healthy tissue. Real-time tracking of tumor motion during RT will help reduce unnecessary margin area and benefit cancer patients by allowing the treatment volume to closely match the positional variation of the tumor volume over time. In this work, we propose a fast approach which enables transferring the pre-estimated target (e.g. tumor) motion extracted from ultrasound (US) image sequences in the training stage (e.g. before RT) to online data in real-time (e.g. acquired during RT). The method is based on extracting feature points of the target object, exploiting a low-dimensional description of the feature motion through slow feature analysis, and finding the most similar image frame from the training data for estimating the current/online object location. The approach is evaluated on two 2D + time and one 3D + time US acquisitions. The locations of six annotated fiducials are used for designing experiments and validating tracking accuracy. The average fiducial distance between the expert's annotation and the location extracted from our indexed training frame is 1.9±0.5mm. Adding a fast template matching procedure within a small search range reduces the distance to 1.4±0.4mm. The tracking time per frame is on the order of milliseconds, which is below the frame acquisition time.
    BibTeX:
    			
    			
                            @inproceedings{CaoBednarzEtAl-2015,
                              author       = {Cao, Kunlin and Bednarz, Bryan and Smith, LS and Foo, Thomas KF and Patwardhan, Kedar A},
                              title        = {Respiration induced fiducial motion tracking in ultrasound using an extended {SFA} approach},
                              booktitle    = {SPIE Medical Imaging},
                              year         = {2015},
                              pages        = {94190S--94190S},
    			  url          = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=2209821},
                              doi          = {http://doi.org/10.1117/12.2082591}
                            }
    			
    			
    					
    Carlin, M.A. & Elhilali, M. 2013 Sustained firing of model central auditory neurons yields a discriminative spectro-temporal representation for natural sounds PLOS Comput Biol , 9(3), e1002982.
    Publ. Public Library of Science.
     
    article
    Abstract: The processing characteristics of neurons in the central auditory system are directly shaped by and reflect the statistics of natural acoustic environments, but the principles that govern the relationship between natural sound ensembles and observed responses in neurophysiological studies remain unclear. In particular, accumulating evidence suggests the presence of a code based on sustained neural firing rates, where central auditory neurons exhibit strong, persistent responses to their preferred stimuli. Such a strategy can indicate the presence of ongoing sounds, is involved in parsing complex auditory scenes, and may play a role in matching neural dynamics to varying time scales in acoustic signals. In this paper, we describe a computational framework for exploring the influence of a code based on sustained firing rates on the shape of the spectro-temporal receptive field (STRF), a linear kernel that maps a spectro-temporal acoustic stimulus to the instantaneous firing rate of a central auditory neuron. We demonstrate the emergence of richly structured STRFs that capture the structure of natural sounds over a wide range of timescales, and show how the emergent ensembles resemble those commonly reported in physiological studies. Furthermore, we compare ensembles that optimize a sustained firing code with one that optimizes a sparse code, another widely considered coding strategy, and suggest how the resulting population responses are not mutually exclusive. Finally, we demonstrate how the emergent ensembles contour the high-energy spectro-temporal modulations of natural sounds, forming a discriminative representation that captures the full range of modulation statistics that characterize natural sound ensembles. These findings have direct implications for our understanding of how sensory systems encode the informative components of natural stimuli and potentially facilitate multi-sensory integration.
    BibTeX:
    			
    			
                            @article{CarlinElhilali-2013,
                              author       = {Carlin, Michael A and Elhilali, Mounya},
                              title        = {Sustained firing of model central auditory neurons yields a discriminative spectro-temporal representation for natural sounds},
                              journal      = {PLOS Comput Biol},
                              publisher    = {Public Library of Science},
                              year         = {2013},
                              volume       = {9},
                              number       = {3},
                              pages        = {e1002982},
    			  url          = {http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1002982}
                            }
    			
    			
    					
    Celikkanat, H. & Kalkan, S. 2014 Using slowness principle for feature selection: relevant feature analysis Signal Processing and Communications Applications Conference (SIU), 2014 22nd , 1540-1543.
     
    inproceedings
    Abstract: We propose a novel relevant feature selection technique which makes use of the slowness principle. The slowness principle holds that physical entities in real life are subject to slow and continuous changes. Therefore, to make sense of the world, highly erratic and fast-changing signals coming to our sensors must be processed in order to extract slow and more meaningful, high-level representations of the world. This principle has been successfully utilized in previous work of Wiskott and Sejnowski, in order to implement a biologically plausible vision architecture, which allows for robust object recognition. In this work, we propose that the same principle can be extended to distinguish relevant features in the classification of a high-dimensional space. We compare our initial results with the state-of-the-art ReliefF feature selection method, as well as a variant of Principal Component Analysis that has been modified for feature selection. To the best of our knowledge, this is the first application of the slowness principle for the sake of relevant feature selection or classification.
    BibTeX:
    			
    			
                            @inproceedings{CelikkanatKalkan-2014,
                              author       = {Celikkanat, Hande and Kalkan, Sinan},
                              title        = {Using slowness principle for feature selection: relevant feature analysis},
                              booktitle    = {Signal Processing and Communications Applications Conference (SIU), 2014 22\textsuperscript{nd}},
                              year         = {2014},
                              pages        = {1540--1543},
    			  url          = {http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6830535},
                              url2         = {http://www.kovan.ceng.metu.edu.tr/~sinan/publications/Celikkanat_SIU2014.pdf},
                              doi          = {http://doi.org/10.1109/siu.2014.6830535}
                            }
    			
    			
    					
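    The generic slowness criterion behind such a relevance measure is the SFA delta value: the mean squared temporal difference of a feature divided by its variance. A minimal sketch for ranking raw features by this score (an illustration, not necessarily the authors' exact measure; the variable name data in the usage comment is a placeholder):

        import numpy as np

        def slowness_scores(x):
            """Per-feature slowness scores for feature ranking (sketch).

            x has shape (T, d); for each feature the score is the mean squared
            temporal difference divided by its variance (the SFA delta value).
            Lower scores mean slower features.
            """
            x = x - x.mean(axis=0)
            delta = (np.diff(x, axis=0) ** 2).mean(axis=0)
            return delta / (x.var(axis=0) + 1e-12)   # epsilon guards constant features

        # Rank features from slowest to fastest, e.g.:
        # ranking = np.argsort(slowness_scores(data))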
    Celikkanat, H.; Sahin, E. & Kalkan, S. 2013 Recurrent slow feature analysis for developing object permanence in robots IROS 2013 Workshop on Neuroscience and Robotics, Tokyo, Japan .
     
    inproceedings
    Abstract: In this work, we propose a biologically inspired framework for developing object permanence in robots. In particular, we build upon a previous work on a slowness principle-based visual model (Wiskott and Sejnowski, 2002), which was shown to be adept at tracking salient changes in the environment, while seamlessly “understanding” external causes, and self-emerging structures that resemble the human visual system. We propose an extension to this architecture with a prefrontal cortex-inspired recurrent loop that enables a simple short term memory, allowing the previously reactive system to retain information through time. We argue that object permanence in humans develops in a similar manner, that is, on top of a previously matured object concept. Furthermore, we show that the resulting system displays the very behaviors which are thought to be cornerstones of object permanence understanding in humans. Specifically, the system is able to retain knowledge of a hidden object’s velocity, as well as identity, through (finite) occluded periods.
    BibTeX:
    			
    			
                            @inproceedings{CelikkanatSahinEtAl-2013,
                              author       = {Celikkanat, Hande and Sahin, Erol and Kalkan, Sinan},
                              title        = {Recurrent slow feature analysis for developing object permanence in robots},
                              booktitle    = {IROS 2013 Workshop on Neuroscience and Robotics, Tokyo, Japan},
                              year         = {2013},
                              url2         = {https://www.researchgate.net/profile/Hande_Celikkanat/publication/266143491_Recurrent_Slow_Feature_Analysis_for_Developing_Object_Permanence_in_Robots/links/54414fd20cf2a6a049a57142.pdf}
                            }
    			
    			
    					
    Chen, Z.; Haykin, S.; Eggermont, J.J. & Becker, S. 2007 Correlation-based neural learning and machine learning Correlative Learning: A Basis for Brain and Adaptive Systems , 129-217.
    Publ. Wiley Online Library.
     
    incollection
    BibTeX:
    			
    			
                            @incollection{ChenHaykinEtAl-2007,
                              author       = {Chen, Zhe and Haykin, Simon and Eggermont, Jos J and Becker, Suzanna},
                              title        = {Correlation-based neural learning and machine learning},
                              booktitle    = {Correlative Learning: A Basis for Brain and Adaptive Systems},
                              publisher    = {Wiley Online Library},
                              year         = {2007},
                              pages        = {129--217}
                            }
    			
    			
    					
    Chong, Y.S. & Tay, Y.H. 2015 Modeling representation of videos for anomaly detection using deep learning: a review e-print arXiv:1505.00523 .
     
    misc
    Abstract: This review article surveys the current progress made toward video-based anomaly detection. We address the most fundamental aspect of video anomaly detection, that is, video feature representation. Much research work has been done in finding the right representation to perform anomaly detection in video streams accurately with an acceptable false alarm rate. However, this is very challenging due to large variations in environment and human movement, and high space-time complexity due to the huge dimensionality of video data. The weakly supervised nature of deep learning algorithms can help in learning representations from the video data itself instead of manually designing the right feature for specific scenes. In this paper, we would like to review the existing methods of modeling video representations using deep learning techniques for the task of anomaly detection and action recognition.
    BibTeX:
    			
    			
                            @misc{ChongTay-2015,
                              author       = {Chong, Yong Shean and Tay, Yong Haur},
                              title        = {Modeling representation of videos for anomaly detection using deep learning: a review},
                              year         = {2015},
                              howpublished = {e-print arXiv:1505.00523},
    			  url          = {https://arxiv.org/abs/1505.00523}
                            }
    			
    			
    					
    Chu, J.; Liang, H.; Tong, Z. & Lu, W. 2017 Slow Feature Analysis for Mitotic Event Recognition. KSII Transactions on Internet & Information Systems , 11(3).
     
    article
    BibTeX:
    			
    			
                            @article{ChuLiangEtAl-2017,
                              author       = {Chu, Jinghui and Liang, Hailan and Tong, Zheng and Lu, Wei},
                              title        = {Slow Feature Analysis for Mitotic Event Recognition.},
                              journal      = {KSII Transactions on Internet \& Information Systems},
                              year         = {2017},
                              volume       = {11},
                              number       = {3},
                              doi          = {http://doi.org/10.3837/tiis.2017.03.023}
                            }
    			
    			
    					
    Creutzig, F. 2008 Sufficient encoding of dynamical systems Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I .
     
    phdthesis
    BibTeX:
    			
    			
                            @phdthesis{Creutzig-2008,
                              author       = {Creutzig, Felix},
                              title        = {Sufficient encoding of dynamical systems},
                              school       = {Humboldt-Universit{\"a}t zu Berlin, Mathematisch-Naturwissenschaftliche Fakult{\"a}t I},
                              year         = {2008},
    			  url          = {https://www.deutsche-digitale-bibliothek.de/binary/DY7X2BAULHMZZCWPBLZDZZGSKECELL2E/full/1.pdf},
                              url2         = {https://pdfs.semanticscholar.org/e991/09c9c6b076b9f4995cbba51165cf097f14c5.pdf}
                            }
    			
    			
    					
    Creutzig, F. & Sprekeler, H. 2008 Predictive coding and the slowness principle: an information-theoretic approach. Neural Computation , 20(4), 1026-1041.
    Publ. MIT Press - Journals.
     
    article
    Abstract: Understanding the guiding principles of sensory coding strategies is a main goal in computational neuroscience. Among others, the principles of predictive coding and slowness appear to capture aspects of sensory processing. Predictive coding postulates that sensory systems are adapted to the structure of their input signals such that information about future inputs is encoded. Slow feature analysis (SFA) is a method for extracting slowly varying components from quickly varying input signals, thereby learning temporally invariant features. Here, we use the information bottleneck method to state an information-theoretic objective function for temporally local predictive coding. We then show that the linear case of SFA can be interpreted as a variant of predictive coding that maximizes the mutual information between the current output of the system and the input signal in the next time step. This demonstrates that the slowness principle and predictive coding are intimately related.
    BibTeX:
    			
    			
                            @article{CreutzigSprekeler-2008,
                              author       = {Creutzig, Felix and Sprekeler, Henning},
                              title        = {Predictive coding and the slowness principle: an information-theoretic approach.},
                              journal      = {Neural Computation},
                              publisher    = {{MIT} Press - Journals},
                              year         = {2008},
                              volume       = {20},
                              number       = {4},
                              pages        = {1026--1041},
    			  url          = {http://www.mitpressjournals.org/doi/10.1162/neco.2008.01-07-455},
                              doi          = {http://doi.org/10.1162/neco.2008.01-07-455}
                            }
    			
    			
    					
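    The linear SFA case analyzed here has a closed-form solution as a generalized eigenvalue problem between the covariance of the temporal derivative and the covariance of the signal itself. A minimal sketch, assuming SciPy is available (not code from the paper):

        import numpy as np
        from scipy.linalg import eigh

        def linear_sfa_geig(x, n_components=2):
            """Linear SFA as a generalized eigenvalue problem (sketch).

            Solves  C_dot w = lambda C w,  where C is the covariance of the
            zero-mean signal and C_dot the covariance of its temporal derivative.
            Eigenvectors with the smallest lambda give the slowest linear features.
            """
            x = x - x.mean(axis=0)
            C = np.cov(x, rowvar=False)
            C_dot = np.cov(np.diff(x, axis=0), rowvar=False)
            lam, W = eigh(C_dot, C)          # ascending generalized eigenvalues
            return x @ W[:, :n_components]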
    Cunningham, J.P. & Ghahramani, Z. 2014 Linear dimensionality reduction: survey, insights, and generalizations e-print arXiv:1406.0873v1 .
     
    misc
    Abstract: Linear dimensionality reduction methods are a cornerstone of analyzing high dimensional data, due to their simple geometric interpretations and typically attractive computational properties. These methods capture many data features of interest, such as covariance, dynamical structure, correlation between data sets, input-output relationships, and margin between data classes. Methods have been developed with a variety of names and motivations in many fields, and perhaps as a result the deeper connections between all these methods have not been understood. Here we unify methods from this disparate literature as optimization programs over matrix manifolds. We discuss principal component analysis, factor analysis, linear multidimensional scaling, Fisher's linear discriminant analysis, canonical correlations analysis, maximum autocorrelation factors, slow feature analysis, undercomplete independent component analysis, linear regression, and more. This optimization framework helps elucidate some rarely discussed shortcomings of well-known methods, such as the suboptimality of certain eigenvector solutions. Modern techniques for optimization over matrix manifolds enable a generic linear dimensionality reduction solver, which accepts as input data and an objective to be optimized, and returns, as output, an optimal low-dimensional projection of the data. This optimization framework further allows rapid development of novel variants of classical methods, which we demonstrate here by creating an orthogonal-projection canonical correlations analysis. More broadly, we suggest that our generic linear dimensionality reduction solver can move linear dimensionality reduction toward becoming a blackbox, objective-agnostic numerical technology.
    BibTeX:
    			
    			
                            @misc{CunninghamGhahramani-2014,
                              author       = {Cunningham, John P and Ghahramani, Zoubin},
                              title        = {Linear dimensionality reduction: survey, insights, and generalizations},
                              year         = {2014},
                              howpublished = {e-print arXiv:1406.0873v1},
    			  url          = {https://arxiv.org/abs/1406.0873v1}
                            }
    			
    			
    					
    Cunningham, J.P. & Ghahramani, Z. 2015 Linear dimensionality reduction: survey, insights, and generalizations Journal of Machine Learning Research , 16, 2859-2900.
     
    article
    Abstract: Linear dimensionality reduction methods are a cornerstone of analyzing high dimensional data, due to their simple geometric interpretations and typically attractive computational properties. These methods capture many data features of interest, such as covariance, dynamical structure, correlation between data sets, input-output relationships, and margin between data classes. Methods have been developed with a variety of names and motivations in many fields, and perhaps as a result the connections between all these methods have not been highlighted. Here we survey methods from this disparate literature as optimization programs over matrix manifolds. We discuss principal component analysis, factor analysis, linear multidimensional scaling, Fisher's linear discriminant analysis, canonical correlations analysis, maximum autocorrelation factors, slow feature analysis, sufficient dimensionality reduction, undercomplete independent component analysis, linear regression, distance metric learning, and more. This optimization framework gives insight to some rarely discussed shortcomings of well-known methods, such as the suboptimality of certain eigenvector solutions. Modern techniques for optimization over matrix manifolds enable a generic linear dimensionality reduction solver, which accepts as input data and an objective to be optimized, and returns, as output, an optimal low-dimensional projection of the data. This simple optimization framework further allows straightforward generalizations and novel variants of classical methods, which we demonstrate here by creating an orthogonal-projection canonical correlations analysis. More broadly, this survey and generic solver suggest that linear dimensionality reduction can move toward becoming a blackbox, objective-agnostic numerical technology.
    BibTeX:
    			
    			
                            @article{CunninghamGhahramani-2015,
                              author       = {Cunningham, John P and Ghahramani, Zoubin},
                              title        = {Linear dimensionality reduction: survey, insights, and generalizations},
                              journal      = {Journal of Machine Learning Research},
                              year         = {2015},
                              volume       = {16},
                              pages        = {2859--2900},
    			  url          = {http://www.jmlr.org/papers/volume16/cunningham15a/cunningham15a.pdf}
                            }
    			
    			
    					
    Dähne, S. 2010 Self-organization of V1 complex-cells based on slow feature analysis and retinal waves. Bernstein Center for Computational Neuroscience, Berlin Institute of Technology .
     
    mastersthesis
    Abstract: The developing visual system of many mammalian species is partially structured and organized even before the onset of vision. Spontaneous neural activity, which spreads in waves across the retina, has been suggested to play a major role in these prenatal structuring processes. Recently, it has been shown that when employing an efficient coding strategy, such as sparse coding, these retinal activity patterns lead to basis functions that resemble optimal stimuli of simple cells in primary visual cortex (V1). Here I present the results of applying a coding strategy that optimizes for temporal slowness, namely Slow Feature Analysis (SFA), to a biologically plausible model of retinal waves. Previously, SFA has been successfully applied in modeling parts of the visual system, most notably in reproducing a rich set of complex-cell features by training SFA with natural image sequences. In this work, I was able to obtain units that share a number of properties with cortical complex-cells by training with simulated retinal waves. The results support the idea that retinal waves share relevant temporal and spatial properties with natural visual input. Hence, retinal waves seem suitable training stimuli to learn invariances and thereby shape the developing early visual system so that it is best prepared for coding input from the natural world.
    BibTeX:
    			
    			
                            @mastersthesis{Daehne-2010,
                              author       = {D\"ahne, Sven},
                              title        = {Self-organization of {V1} complex-cells based on slow feature analysis and retinal waves.},
                              school       = {Bernstein Center for Computational Neuroscience, Berlin Institute of Technology},
                              year         = {2010},
                              url2         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/Dahne-2010-MasterThesis-SFARetinalWaves.pdf}
                            }
    			
    			
    					
    Dähne, S.; Höhne, J.; Schreuder, M. & Tangermann, M. 2011 Slow feature analysis - a tool for extraction of discriminating event-related potentials in brain-computer interfaces Artificial Neural Networks and Machine Learning - ICANN 2011 , Lecture Notes in Computer Science , 6791, 36-43.
    Eds. Honkela, T.; Duch, W.; Girolami, M. & Kaski, S.
    Publ. Springer Berlin Heidelberg.
     
    inproceedings
    Abstract: The unsupervised signal decomposition method Slow Feature Analysis (SFA) is applied as a preprocessing tool in the context of EEG based Brain-Computer Interfaces (BCI). Classification results based on a SFA decomposition are compared to classification results obtained on Principal Component Analysis (PCA) decomposition and to those obtained on raw EEG channels. Both PCA and SFA improve classification to a large extent compared to using no signal decomposition and require between one third and half of the maximal number of components to do so. The two methods extract different information from the raw data and therefore lead to different classification results. Choosing between PCA and SFA based on classification of calibration data leads to a larger improvement in classification performance compared to using one of the two methods alone. Results are based on a large data set (n=31 subjects) of two studies using auditory Event Related Potentials for spelling applications.
    BibTeX:
    			
    			
                            @inproceedings{DaehneHoehneEtAl-2011,
                              author       = {D{\"a}hne, Sven and H{\"o}hne, Johannes and Schreuder, Martijn and Tangermann, Michael},
                              title        = {Slow feature analysis - a tool for extraction of discriminating event-related potentials in brain-computer interfaces},
                              booktitle    = {Artificial Neural Networks and Machine Learning -- ICANN 2011},
                              publisher    = {Springer Berlin Heidelberg},
                              year         = {2011},
                              volume       = {6791},
                              pages        = {36--43},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-642-21735-7_5},
                              doi          = {http://doi.org/10.1007/978-3-642-21735-7_5}
                            }
    			
    			
    					
    Dähne, S.; Müller, K.-R. & Tangermann, M. 2011 Slow feature analysis as a potential preprocessing tool in BCI International Journal of Bioelectromagnetism , 13(2), 100-101.
     
    article
    Abstract: Here we present initial results of the unsupervised preprocessing method Slow Feature Analysis (SFA) for a BCI data set. It is the first time SFA is applied to EEG. SFA optimizes the signal representation with respect to temporal slowness. Its objective as well as its computational properties render it a possibly useful candidate for the preprocessing of BCI EEG data in order to detect task relevant components as well as components that represent artifacts or non-stationarities of the background brain activity or external sources.
    BibTeX:
    			
    			
                            @article{DaehneMuellerEtAl-2011,
                              author       = {D{\"a}hne, S. and M{\"u}ller, K.-R. and Tangermann, M.},
                              title        = {Slow feature analysis as a potential preprocessing tool in {BCI}},
                              journal      = {International Journal of Bioelectromagnetism},
                              year         = {2011},
                              volume       = {13},
                              number       = {2},
                              pages        = {100--101},
    			  url          = {http://www.ijbem.org/volume13/number2/2011_v13_no2_100-101.pdf},
                              url2         = {http://www.tobi-project.org/sites/default/files/public/Publications/TOBI-122.pdf}
                            }
    			
    			
    					
    Dähne, S.; Wilbert, N. & Wiskott, L. 2010 Learning complex cell units from simulated prenatal retinal waves using slow feature analysis. Interdisciplinary College 2010 .
    Eds. Porzel, R.; Sebanz, N. & Spitzer, M.
     
    inproceedings
    BibTeX:
    			
    			
                            @inproceedings{DaehneWilbertEtAl-2010b,
                              author       = {D{\"a}hne, Sven and Wilbert, Niko and Wiskott, Laurenz},
                              title        = {Learning complex cell units from simulated prenatal retinal waves using slow feature analysis.},
                              booktitle    = {Interdisciplinary College 2010},
                              year         = {2010}
                            }
    			
    			
    					
    Dähne, S.; Wilbert, N. & Wiskott, L. 2009 Learning complex cell units from simulated prenatal retinal waves with slow feature analysis. Proc. 18th Annual Computational Neuroscience Meeting (CNS'09), July 18-23, Berlin, Germany .
     
    inproceedings
    Abstract: Many properties of the developing visual system are structured and organized before the onset of vision. Spontaneous neural activity, which spreads in waves across the retina, has been suggested to play a major role in these prenatal structuring processes [1]. Recently, it has been shown that when employing an efficient coding strategy, such as sparse coding, these retinal activity patterns lead to basis functions that resemble optimal stimuli of simple cells in V1 [2]. Here we present the results of applying a coding strategy that optimizes for temporal slowness, namely Slow Feature Analysis (SFA) [3], to a biologically plausible model of retinal waves [4] (see figure 1). We also tested other wave-like inputs (sinusoidal waves, moving Gauss blobs) that allow for an analytical understanding of the results. Previously, SFA has been successfully applied in modeling parts of the visual system, most notably in reproducing a rich set of complex cell features by training SFA with natural image sequences [5]. In this work, we were able to obtain complex-cell like receptive fields in all input conditions, as displayed in figure 2. [Figure] Figure 1. Retinal wave training sequence. Snapshots of an image sequence that was generated by the retinal wave model described in [1] and used as input to SFA. A white square in the top left corner of the first image indicates the receptive field size. [Figure] Figure 2. A sample of optimal stimuli of quadratic functions found by SFA, after training with different inputs. Training sequences derived from natural images and pink noise images result in optimal stimuli (A and B, respectively) that exhibit complex cell properties as expected (compare [2]). Training with discretized moving Gaussian blobs and the retinal wave model results in optimal stimuli (C and D, respectively) that are similar to those in (A) and (B). All units show phase invariance similar to complex cells. Our results support the idea that retinal waves share relevant temporal and spatial properties with natural images. Hence, retinal waves seem suitable training stimuli to learn invariances and thereby shape the developing early visual system so that it is best prepared for coding input from the natural world. References 1. Wong ROL: Retinal waves and visual system development. Annu. Rev. Neurosci 1999, 22:28-47. 2. Albert MV, Schnabel A, Field DJ: Innate visual learning through spontaneous activity patterns. PLoS Comput Biol 2008., 4. 3. Wiskott L, Sejnowski TJ: Slow feature analysis: unsupervised learning of invariances. Neural Computation 2002, 14:715-770. 4. Godfrey KB, Swindale NV: Retinal wave behavior through activity-dependent refractory periods. PLoS Comput Biol 2007, 3:2408-2420. 5. Berkes P, Wiskott L: Slow feature analysis yields a rich repertoire of complex cell properties. J. Vision 2005, 5:579-602.
    BibTeX:
    			
    			
                            @inproceedings{DaehneWilbertEtAl-2009a,
                              author       = {Sven D{\"a}hne and Niko Wilbert and Laurenz Wiskott},
                              title        = {Learning complex cell units from simulated prenatal retinal waves with slow feature analysis.},
                              booktitle    = {Proc.\ 18\textsuperscript{th} Annual Computational Neuroscience Meeting (CNS'09), July 18--23, Berlin, Germany},
                              year         = {2009},
    			  url          = {http://www.biomedcentral.com/1471-2202/10/S1/P129},
                              url2         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/DahneWilbertEtAl-2009a-ProcCNSBerlin-Abstract-SFARetinalWaves.pdf},
                              url3         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/DahneWilbertEtAl-2009a-ProcCNSBerlin-Poster-SFARetinalWaves.pdf},
                              doi          = {http://doi.org/10.1186/1471-2202-10-S1-P129}
                            }
    			
    			
    					
    Dähne, S.; Wilbert, N. & Wiskott, L. 2009 Learning complex cell units from simulated prenatal retinal waves with slow feature analysis. Proc. 6'th International PhD Symposium Berlin Brain Days, Dec 9-11, Berlin, Germany .
     
    inproceedings
    BibTeX:
    			
    			
                            @inproceedings{DaehneWilbertEtAl-2009b,
                              author       = {Sven D{\"a}hne and Niko Wilbert and Laurenz Wiskott},
                              title        = {Learning complex cell units from simulated prenatal retinal waves with slow feature analysis.},
                              booktitle    = {Proc.\ 6'th International PhD Symposium Berlin Brain Days, Dec 9--11, Berlin, Germany},
                              year         = {2009}
                            }
    			
    			
    					
    Dähne, S.; Wilbert, N. & Wiskott, L. 2010 Self-organization of V1 complex cells based on slow feature analysis and retinal waves. Frontiers in Computational Neuroscience, Proc. Bernstein Conference on Computational Neuroscience, Sep 27-Oct 1, Berlin, Germany , 4.
    Publ. Frontiers Media SA.
     
    inproceedings
    Abstract: The structure of the early visual system, most notably simple and complex cells in primary visual cortex (V1), is believed to be very well adapted to the statistical regularities present in its natural input (Field 1994). In fact, a number of theoretical studies have shown that some of these structural properties are optimal with respect to certain coding objectives such as sparseness (Olshausen & Field 1996), information maximization (Bell & Sejnowski 1997), or slowness (Berkes & Wiskott 2005). These studies have also demonstrated how simple and complex cells can emerge in the process of optimizing such a coding objective by training on natural images (or natural image sequences). However, some elements of the well-adapted structure of the visual system are already present prior to the onset of vision and can thus not have been learned from natural visual input. Spontaneous neural activity, which spreads in waves across the retina, has been suggested to play a major role in these prenatal structuring processes (Wong 1999). Here we present the results of applying a coding objective that optimizes for temporal slowness, namely Slow Feature Analysis (SFA) (Wiskott & Sejnowski 2002), to a biologically plausible model of retinal waves (Godfrey & Swindale 2007). After training with retinal wave image sequences, the resulting SFA units are subjected to sinusoidal test stimuli in order to characterize their response properties in a similar fashion as is common practice in physiological experiments. We find that the SFA units reproduce a number of features reminiscent of cortical complex cells, including receptive fields with elongated and spatially segregated ON and OFF regions, several types of orientation tuning, frequency tuning, and very low F0/F1 values, which is indicative of a largely invariant response with respect to the phase (or position) of an input grating (figure 1). Further analysis of the SFA units reveals that the algorithm achieves the phase invariance by construction of quadrature filter pairs, which is in line with classical models of complex cells. Our results support the idea that retinal waves share relevant spatial and temporal properties with natural images. Hence, retinal waves seem suitable training stimuli to learn invariances and thereby shape the developing early visual system so that it is best prepared for coding input from the natural world.
    BibTeX:
    			
    			
                            @inproceedings{DaehneWilbertEtAl-2010a,
                              author       = {S. D{\"a}hne and N. Wilbert and L. Wiskott},
                              title        = {Self-organization of {V1} complex cells based on slow feature analysis and retinal waves.},
                              booktitle    = {Proc.\ Bernstein Conference on Computational Neuroscience, Sep 27--Oct 1, Berlin, Germany},
                              journal      = {Frontiers in Computational Neuroscience},
                              publisher    = {Frontiers Media {SA}},
                              year         = {2010},
                              volume       = {4},
    			  url          = {http://www.frontiersin.org/10.3389/conf.fncom.2010.51.00090/event_abstract},
                              doi          = {http://doi.org/10.3389/conf.fncom.2010.51.00090}
                            }
    			
    			
    					
    Dähne, S.; Wilbert, N. & Wiskott, L. 2014 Slow feature analysis on retinal waves leads to V1 complex cells. PLoS Comput Biol , 10(5), e1003564.
    Publ. Public Library of Science.
     
    article
    Abstract: The developing visual system of many mammalian species is partially structured and organized even before the onset of vision. Spontaneous neural activity, which spreads in waves across the retina, has been suggested to play a major role in these prenatal structuring processes. Recently, it has been shown that when employing an efficient coding strategy, such as sparse coding, these retinal activity patterns lead to basis functions that resemble optimal stimuli of simple cells in primary visual cortex (V1). Here we present the results of applying a coding strategy that optimizes for temporal slowness, namely Slow Feature Analysis (SFA), to a biologically plausible model of retinal waves. Previously, SFA has been successfully applied to model parts of the visual system, most notably in reproducing a rich set of complex-cell features by training SFA with quasi-natural image sequences. In the present work, we obtain SFA units that share a number of properties with cortical complex-cells by training on simulated retinal waves. The emergence of two distinct properties of the SFA units (phase invariance and orientation tuning) is thoroughly investigated via control experiments and mathematical analysis of the input-output functions found by SFA. The results support the idea that retinal waves share relevant temporal and spatial properties with natural visual input. Hence, retinal waves seem suitable training stimuli to learn invariances and thereby shape the developing early visual system such that it is best prepared for coding input from the natural world.
    BibTeX:
    			
    			
                            @article{DaehneWilbertEtAl-2014,
                              author       = {Sven D{\"{a}}hne and Niko Wilbert and Laurenz Wiskott},
                              title        = {Slow feature analysis on retinal waves leads to {V1} complex cells.},
                              journal      = {PLoS Comput Biol},
                              publisher    = {Public Library of Science},
                              year         = {2014},
                              volume       = {10},
                              number       = {5},
                              pages        = {e1003564},
    			  url          = {http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003564},
                              url2         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/DahneWilbertEtAl-2014-PLoSCompBiol-RetinalWaves.pdf},
                              doi          = {http://doi.org/10.1371/journal.pcbi.1003564}
                            }
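
    To make the algorithm that recurs throughout these entries concrete, the following minimal NumPy sketch implements plain linear SFA as described in the abstracts above: center and whiten the input signal, then keep the directions along which the finite-difference derivative has the least variance. This is illustrative code under those assumptions, not code from any of the cited papers; the variable names and the toy signal are made up.

        import numpy as np

        def linear_sfa(x, n_slow=2):
            """Plain linear SFA on a time series x of shape (T, d).
            Returns the n_slow slowest output signals and the projection matrix W."""
            x = x - x.mean(axis=0)                            # zero mean
            evals, evecs = np.linalg.eigh(np.cov(x, rowvar=False))
            keep = evals > 1e-10                              # drop near-singular directions
            W_white = evecs[:, keep] / np.sqrt(evals[keep])   # whitening matrix
            z = x @ W_white                                   # unit-variance, decorrelated
            dz = np.diff(z, axis=0)                           # finite-difference "derivative"
            d_evals, d_evecs = np.linalg.eigh(np.cov(dz, rowvar=False))
            W = W_white @ d_evecs[:, :n_slow]                 # smallest eigenvalues = slowest
            return x @ W, W

        # Toy usage: a slow sine mixed with a fast one is separated again.
        rng = np.random.default_rng(0)
        t = np.linspace(0, 2 * np.pi, 1000)
        sources = np.column_stack([np.sin(t), np.sin(11 * t)])
        mixed = sources @ rng.normal(size=(2, 2))
        slow, W = linear_sfa(mixed, n_slow=1)                 # ~ proportional to sin(t)

    The hierarchical and nonlinear variants used in the studies listed here build on exactly this linear core, applied repeatedly to expanded or patch-wise inputs.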
    			
    			
    					
    Dawood, F. & Loo, C.K. 2014 Autonomous motion primitive segmentation of actions for incremental imitative learning of humanoid Robotic Intelligence In Informationally Structured Space (RiiSS), 2014 IEEE Symposium on , 1-8.
     
    inproceedings
    Abstract: During imitation learning or learning by demonstration/observation, a crucial element of conception involves segmenting the continuous flow of motion into simpler units – motion primitives – by identifying the boundaries of an action. Secondly, in a realistic environment the robot must be able to learn the observed motion patterns incrementally in a stable adaptive manner. In this paper, we propose an on-line and unsupervised motion segmentation method allowing the robot to learn actions by observing the patterns performed by another partner through Incremental Slow Feature Analysis. The segmentation model directly operates on the images acquired from the robot's vision sensor (camera) without requiring any kinematic model of the demonstrator. After segmentation, the spatio-temporal motion sequences are learned incrementally through a Topological Gaussian Adaptive Resonance Hidden Markov Model. The learning model dynamically generates the topological structure in a self-organizing and self-stabilizing manner.
    BibTeX:
    			
    			
                            @inproceedings{DawoodLoo-2014,
                              author       = {Dawood, Farhan and Loo, Chu Kiong},
                              title        = {Autonomous motion primitive segmentation of actions for incremental imitative learning of humanoid},
                              booktitle    = {Robotic Intelligence In Informationally Structured Space (RiiSS), 2014 IEEE Symposium on},
                              year         = {2014},
                              pages        = {1--8},
    			  url          = {http://ieeexplore.ieee.org/document/7009169/},
                              doi          = {http://doi.org/10.1109/riiss.2014.7009169}
                            }
    			
    			
    					
    Dawood, F. & Loo, C.K. 2016 Incremental episodic segmentation and imitative learning of humanoid robot through self-exploration Neurocomputing , 173, 1471-1484.
    Publ. Elsevier.
     
    article
    Abstract: Imitation learning through self-exploration is an essential mechanism in developing sensorimotor skills for human infants as well as for robots. We assume that a primitive sense of self is the prerequisite for successful social interaction rather than an outcome of it. During imitation learning, a crucial element of conception involves segmenting the continuous flow of motion into simpler units – motion primitives – by identifying the boundaries of an action. Secondly, in a realistic environment the robot must be able to learn the observed motion patterns incrementally in a stable adaptive manner without corrupting previously learned information. In this paper, we propose an on-line and unsupervised motion segmentation method allowing the robot to imitate and perform actions by observing the motion patterns performed by another partner through Incremental Kernel Slow Feature Analysis. The segmentation model directly operates on the images acquired from the robot's vision sensor (camera) without requiring any kinematic model of the demonstrator. After segmentation, the spatio-temporal motion sequences are learned incrementally through a Topological Gaussian Adaptive Resonance Hidden Markov Model. The learning model dynamically generates the topological structure in a self-organizing and self-stabilizing manner. Each node represents the encoded motion element (i.e. joint angles). The complete architecture was evaluated by simulation experiments performed on a DARwIn-OP humanoid robot.
    BibTeX:
    			
    			
                            @article{DawoodLoo-2016a,
                              author       = {Farhan Dawood and Chu Kiong Loo},
                              title        = {Incremental episodic segmentation and imitative learning of humanoid robot through self-exploration},
                              journal      = {Neurocomputing},
                              publisher    = {Elsevier},
                              year         = {2016},
                              volume       = {173},
                              pages        = {1471--1484},
    			  url          = {http://www.sciencedirect.com/science/article/pii/S0925231215013296},
                              url2         = {https://pdfs.semanticscholar.org/3d66/f7015da9a127023ca240aa51672f7d0dbbe3.pdf},
                              doi          = {http://doi.org/10.1016/j.neucom.2015.09.021}
                            }
    			
    			
    					
    Dawood, F. & Loo, C.K. 2016 View-invariant visuomotor processing in computational mirror neuron system for humanoid PloS one , 11(3), e0152003.
    Publ. Public Library of Science.
     
    article
    Abstract: Mirror neurons are visuo-motor neurons found in primates and thought to be significant for imitation learning. The proposition that mirror neurons result from associative learning while the neonate observes his own actions has received noteworthy empirical support. Self-exploration is regarded as a procedure by which infants become perceptually observant to their own body and engage in a perceptual communication with themselves. We assume that a crude sense of self is the prerequisite for social interaction. However, the contribution of mirror neurons in encoding the perspective from which the motor acts of others are seen has not been addressed in relation to humanoid robots. In this paper we present a computational model for the development of a mirror neuron system for humanoids based on the hypothesis that infants acquire an MNS by sensorimotor associative learning through self-exploration capable of sustaining early imitation skills. The purpose of our proposed model is to take into account the view-dependency of neurons as a probable outcome of the associative connectivity between motor and visual information. In our experiment, a humanoid robot stands in front of a mirror (represented through self-image using a camera) in order to obtain the associative relationship between his own motor generated actions and his own visual body-image. In the learning process the network first forms a mapping from each motor representation onto the visual representation from the self-exploratory perspective. Afterwards, the representation of the motor commands is learned to be associated with all possible visual perspectives. The complete architecture was evaluated by simulation experiments performed on a DARwIn-OP humanoid robot.
    BibTeX:
    			
    			
                            @article{DawoodLoo-2016b,
                              author       = {Dawood, Farhan and Loo, Chu Kiong},
                              title        = {View-invariant visuomotor processing in computational mirror neuron system for humanoid},
                              journal      = {PloS one},
                              publisher    = {Public Library of Science},
                              year         = {2016},
                              volume       = {11},
                              number       = {3},
                              pages        = {e0152003},
    			  url          = {http://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0152003&type=printable},
                              doi          = {http://doi.org/10.1371/journal.pone.0152003}
                            }
    			
    			
    					
    De Luca, V.; Grabner, H.; Petrusca, L.; Salomir, R.; Székely, G. & Tanner, C. 2011 Keep breathing! common motion helps multi-modal mapping International Conference on Medical Image Computing and Computer-Assisted Intervention , 597-604.
    Publ. Springer Nature.
     
    inproceedings
    Abstract: We propose an unconventional approach for transferring information between multi-modal images. It exploits the temporal commonality of multi-modal images acquired from the same organ during free-breathing. Strikingly, there is no need for the two modalities to capture the same region. The method is based on extracting a low-dimensional description of the image sequences, selecting the common cause signal (breathing) for both modalities and finding the most similar sub-sequences for predicting image feature location. The approach was evaluated for 3 volunteers on sequences of 2D MRI and 2D US images of the liver acquired at different locations. Simultaneous acquisition of these images allowed for quantitative evaluation (predicted versus ground truth MRI feature locations). The best performance was achieved with signal extraction by slow feature analysis, resulting in an average error of 2.6 mm (4.2 mm) for sequences acquired at the same (a different) time.
    BibTeX:
    			
    			
                            @inproceedings{DeLucaGrabnerEtAl-2011,
                              author       = {De Luca, Valeria and Grabner, H and Petrusca, Lorena and Salomir, Rares and Sz{\'e}kely, G{\'a}bor and Tanner, Christine},
                              title        = {Keep breathing! common motion helps multi-modal mapping},
                              booktitle    = {International Conference on Medical Image Computing and Computer-Assisted Intervention},
                              publisher    = {Springer Nature},
                              year         = {2011},
                              pages        = {597--604},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-642-23623-5_75},
                              url2         = {https://pdfs.semanticscholar.org/2bd3/b2e9269f9930a662710fe077f0e44e853f73.pdf},
                              doi          = {http://doi.org/10.1007/978-3-642-23623-5_75}
                            }
    			
    			
    					
    Dean, T. 2006 Learning invariant features using inertial priors Annals of Mathematics and Artificial Intelligence , 47(3), 223-250.
    Publ. Springer.
     
    article
    Abstract: We address the technical challenges involved in combining key features from several theories of the visual cortex in a single coherent model. The resulting model is a hierarchical Bayesian network factored into modular component networks embedding variable-order Markov models. Each component network has an associated receptive field corresponding to components residing in the level directly below it in the hierarchy. The variable-order Markov models account for features that are invariant to naturally occurring transformations in their inputs. These invariant features give rise to increasingly stable, persistent representations as we ascend the hierarchy. The receptive fields of proximate components on the same level overlap to restore selectivity that might otherwise be lost to invariance.
    BibTeX:
    			
    			
                            @article{Dean-2006,
                              author       = {Dean, Thomas},
                              title        = {Learning invariant features using inertial priors},
                              journal      = {Annals of Mathematics and Artificial Intelligence},
                              publisher    = {Springer},
                              year         = {2006},
                              volume       = {47},
                              number       = {3},
                              pages        = {223--250},
    			  url          = {http://link.springer.com/article/10.1007%2Fs10472-006-9039-9},
                              url2         = {http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.88.8672&rep=rep1&type=pdf},
                              doi          = {http://doi.org/10.1007/s10472-006-9039-9}
                            }
    			
    			
    					
    Deimel, R. 2009 Contextual slow feature extraction framework University of Vienna, Austrian Research Institute for Artificial Intelligence (TR-2009-06).
     
    techreport
    Abstract: The paper presents an agent-based framework for investigating a class of learning algorithms that exploit temporal correlation in sensor signals. They are referred to as Slow Feature Extraction (SFE) methods, such as Slow Feature Analysis (SFA) (Wiskott and Sejnowski 2002) or spike-timing dependent neural plasticity (Körding and König 2001). The framework provides the notion of a Context within the agent, that can be utilized to suppress or affirm certain Slow Features when analysing sensor data with SFE methods. The paper presents several possible modifications to a basic slowness criterion as used by the Slow Feature Analysis algorithm. Simulations with a contextualized version of SFA (cSFA) show increased robustness of feature extraction in the face of different action patterns. The framework is further shown to naturally provide a hierarchical organisation of SFE methods and for the formal description of multisensory settings, useful for investigating Correlative Learning.
    BibTeX:
    			
    			
                            @techreport{Deimel-2009,
                              author       = {Deimel, Raphael},
                              title        = {Contextual slow feature extraction framework},
                              school       = {University of Vienna, Austrian Research Institute for Artificial Intelligence},
                              year         = {2009},
                              number       = {TR-2009-06},
                              url2         = {http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.727.6036&rep=rep1&type=pdf}
                            }
    			
    			
    					
    Deng, X.; Tian, X. & Hu, X. 2012 Nonlinear process fault diagnosis based on slow feature analysis Proceedings of the 10th World Congress on Intelligent Control and Automation , 3152-3156.
    Publ. Institute of Electrical and Electronics Engineers (IEEE).
     
    inproceedings
    Abstract: Invariant features of temporally varying signals are very useful for process monitoring. A novel nonlinear process fault diagnosis method is proposed in this paper based on slow feature analysis (SFA) which is a new invariant learning method. In the proposed method, input-output transformation functions are optimized to extract the nonlinear slowly varying components as invariant features. Based on feature variables, two monitoring statistics are constructed for fault detection and their confidence limits are computed by kernel density estimation. Simulation using a continuous stirred tank reactor (CSTR) system shows that the proposed method outperforms the traditional PCA and KPCA method.
    BibTeX:
    			
    			
                            @inproceedings{DengTianEtAl-2012,
                              author       = {Xiaogang Deng and Xuemin Tian and Xiangyang Hu},
                              title        = {Nonlinear process fault diagnosis based on slow feature analysis},
                              booktitle    = {Proceedings of the 10\textsuperscript{th} World Congress on Intelligent Control and Automation},
                              publisher    = {Institute of Electrical and Electronics Engineers ({IEEE})},
                              year         = {2012},
                              pages        = {3152--3156},
    			  url          = {http://ieeexplore.ieee.org/document/6358414/},
                              doi          = {http://doi.org/10.1109/wcica.2012.6358414}
                            }
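
    The monitoring scheme sketched in the abstract above (slow features as invariant coordinates, monitoring statistics with confidence limits from kernel density estimation) can be illustrated generically as follows. This is a simplified stand-in, not the exact pair of statistics used by Deng et al.; it reuses the linear_sfa helper from the sketch further above, and the function names, the single Hotelling-style statistic, and the 99% limit are assumptions of this illustration.

        import numpy as np
        from scipy.stats import gaussian_kde

        # Reuses linear_sfa(...) from the earlier SFA sketch in this list.

        def fit_sfa_monitor(x_train, n_slow=3, alpha=0.99):
            """Fit SFA on normal-operation data and set a KDE-based control limit
            on a Hotelling-style statistic of the slow features (illustrative only)."""
            mu = x_train.mean(axis=0)
            y, W = linear_sfa(x_train, n_slow=n_slow)
            var = y.var(axis=0)
            t2 = np.sum(y ** 2 / var, axis=1)                 # statistic on training data
            kde = gaussian_kde(t2)                            # density of the statistic
            limit = np.percentile(kde.resample(20000).ravel(), 100 * alpha)
            return {"mu": mu, "W": W, "var": var, "limit": limit}

        def sfa_monitor(x_new, model):
            """Return a boolean array flagging samples that exceed the control limit."""
            y = (x_new - model["mu"]) @ model["W"]
            t2 = np.sum(y ** 2 / model["var"], axis=1)
            return t2 > model["limit"]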
    			
    			
    					
    Elkins, A.C.; Sun, Y.; Zafeiriou, S. & Pantic, M. 2013 The face of an imposter: computer vision for deception detection research in progress .
    Publ. IEEE Computer Society.
     
    misc
    Abstract: Using video analyzed from a novel deception experiment, this paper introduces computer vision research in progress that addresses two critical components of computational modeling of deceptive behavior: 1) individual nonverbal behavior differences, and 2) deceptive ground truth. The video interviews analyzed for this research were of participants recruited as potential hooligans (extreme sports fans) who lied about support for their rival team. From these participants, we will process and extract features representing their faces that will be submitted to slow feature analysis. From this analysis we will identify each person’s unique facial expression and behaviors, and look for systemic variation between truth and deception.
    BibTeX:
    			
    			
                            @misc{ElkinsSunEtAl-2013,
                              author       = {Elkins, Aaron C and Sun, Yijia and Zafeiriou, Stefanos and Pantic, Maja},
                              title        = {The face of an imposter: computer vision for deception detection research in progress},
                              publisher    = {IEEE Computer Society},
                              year         = {2013},
                              url2         = {https://pdfs.semanticscholar.org/25f7/8f4e5c63236f7948801105352c9539e21dae.pdf}
                            }
    			
    			
    					
    Escalante, A. & Wiskott, L. 2010 Gender and age estimation from synthetic face images with hierarchical slow feature analysis. International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU'10), June 28-July 2, Dortmund .
    Eds. Hüllermeier, E. & Kruse, R.
     
    inproceedings
    Abstract: Our ability to recognize the gender and estimate the age of people around us is crucial for our social development and interactions. In this paper, we investigate how to use Slow Feature Analysis (SFA) to estimate gender and age from synthetic face images. SFA is a versatile unsupervised learning algorithm that extracts slowly varying features from a multidimensional signal. To process very high-dimensional data, such as images, SFA can be applied hierarchically. The key idea here is to construct the training signal such that the parameters of interest, namely gender and age, vary slowly. This makes the labelling of the data implicit in the training signal and permits the use of the unsupervised algorithm in a hierarchical fashion. A simple supervised step at the very end is then sufficient to extract gender and age with high reliability. Gender was estimated with a very high accuracy, and age had an RMSE of 3.8 years for test images.
    BibTeX:
    			
    			
                            @inproceedings{EscalanteWiskott-2010,
                              author       = {Alberto Escalante and Laurenz Wiskott},
                              title        = {Gender and age estimation from synthetic face images with hierarchical slow feature analysis.},
                              booktitle    = {International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU'10), June 28--July 2, Dortmund},
                              year         = {2010},
    			  url          = {http://www.springerlink.com/content/r031104qv7228r35},
                              url2         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/EscalanteWiskott-2010-IPMU-AgeGenderEstimation-Preprint.pdf}
                            }
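
    The key idea stated in the abstract above, constructing the training sequence so that the label of interest itself varies slowly and adding only a trivial supervised step at the end, can be demonstrated on synthetic data. The sketch below replaces the hierarchical image network with a handful of noisy numeric cues and reuses the linear_sfa helper from the earlier sketch; the data, noise level, and variable names are assumptions of this illustration.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        age = rng.uniform(16, 77, size=2000)                       # hidden label
        cues = np.column_stack([age + 5.0 * rng.standard_normal(2000)
                                for _ in range(30)])               # noisy observations

        order = np.argsort(age)                  # sort so the label varies slowly
        slow, W = linear_sfa(cues[order], n_slow=1)

        reg = LinearRegression().fit(slow, age[order])             # simple supervised step
        mu = cues.mean(axis=0)
        age_hat = reg.predict((cues - mu) @ W)                     # estimates for all samples
        rmse = np.sqrt(np.mean((age_hat - age) ** 2))              # small on this toy data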
    			
    			
    					
    Escalante, A. & Wiskott, L. 2011 Heuristic evaluation of expansions for non-linear hierarchical slow feature analysis. Proc. The 10th Intl. Conf. on Machine Learning and Applications (ICMLA'11), Dec. 18-21, Honolulu, Hawaii , 133-138.
    Publ. IEEE Computer Society, Los Alamitos, CA, USA.
     
    inproceedings
    Abstract: Slow Feature Analysis (SFA) is a feature extraction algorithm based on the slowness principle with applications to both supervised and unsupervised learning. When implemented hierarchically, it allows for efficient processing of high-dimensional data, such as images. Expansion plays a crucial role in the implementation of non-linear SFA. In this paper, a fast heuristic method for the evaluation of expansions is proposed, consisting of tests on seven problems and two metrics. Several expansions with different complexities are evaluated. It is shown that the method allows predictions of the performance of SFA on a concrete data set, and the use of normalized expansions is justified. The proposed method is useful for the design of powerful expansions that allow the extraction of complex high-level features and provide better generalization.
    BibTeX:
    			
    			
                            @inproceedings{EscalanteWiskott-2011,
                              author       = {Alberto Escalante and Laurenz Wiskott},
                              title        = {Heuristic evaluation of expansions for non-linear hierarchical slow feature analysis.},
                              booktitle    = {Proc.\ The 10\textsuperscript{th} Intl.\ Conf.\ on Machine Learning and Applications (ICMLA'11), Dec.\ 18-21, Honolulu, Hawaii},
                              publisher    = {IEEE Computer Society},
                              year         = {2011},
                              pages        = {133--138},
    			  url          = {http://ieeexplore.ieee.org/document/6146957/},
                              url2         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/EscalanteWiskott-2011-ICMLA-Expansions-Preprint.pdf},
                              doi          = {http://doi.org/10.1109/ICMLA.2011.72}
                            }
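
    Non-linear SFA, whose expansion step this paper evaluates, is simply linear SFA applied to a nonlinearly expanded signal. The sketch below uses a plain quadratic expansion on the classic two-channel toy example from the SFA literature, in which the slow source sin(t) is only extractable after expansion; it reuses the linear_sfa helper from the earlier sketch, and the choice of PolynomialFeatures and of the toy signal are assumptions of this illustration, not the expansions compared in the paper.

        import numpy as np
        from sklearn.preprocessing import PolynomialFeatures

        def quadratic_sfa(x, n_slow=1):
            """Expand with monomials up to degree 2, then run linear SFA
            (reuses linear_sfa from the earlier sketch)."""
            expanded = PolynomialFeatures(degree=2, include_bias=False).fit_transform(x)
            return linear_sfa(expanded, n_slow=n_slow)

        # Classic toy problem: x1 = sin(t) + cos(11t)^2, x2 = cos(11t).
        # The slow source sin(t) equals x1 - x2^2, i.e. it is linear only
        # in the quadratically expanded space.
        t = np.linspace(0, 4 * np.pi, 4000)
        x = np.column_stack([np.sin(t) + np.cos(11 * t) ** 2, np.cos(11 * t)])
        slow, _ = quadratic_sfa(x, n_slow=1)      # ~ proportional to sin(t)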
    			
    			
    					
    Escalante-B., A.N. & Wiskott, L. 2012 Slow feature analysis: perspectives for technical applications of a versatile learning algorithm Künstliche Intelligenz [Artificial Intelligence] , 26(4), 341-348.
    Publ. Springer Nature.
     
    article
    Abstract: Slow Feature Analysis (SFA) is an unsupervised learning algorithm based on the slowness principle and has originally been developed to learn invariances in a model of the primate visual system. Although developed for computational neuroscience, SFA has turned out to be a versatile algorithm also for technical applications since it can be used for feature extraction, dimensionality reduction, and invariance learning. With minor adaptations SFA can also be applied to supervised learning problems such as classification and regression. In this work, we review several illustrative examples of possible applications including the estimation of driving forces, nonlinear blind source separation, traffic sign recognition, and face processing.
    BibTeX:
    			
    			
                            @article{Escalante-B.Wiskott-2012a,
                              author       = {Alberto N. Escalante-B. and Laurenz Wiskott},
                              title        = {Slow feature analysis: perspectives for technical applications of a versatile learning algorithm},
                              journal      = {K{\"u}nstliche Intelligenz [Artificial Intelligence]},
                              publisher    = {Springer Nature},
                              year         = {2012},
                              volume       = {26},
                              number       = {4},
                              pages        = {341--348},
    			  url          = {http://www.springerlink.com/content/vk3738325250162k/},
                              url2         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/EscalanteWiskott-2012a-KI-SFATechnicalApplications-Preprint.pdf},
                              doi          = {http://doi.org/10.1007/s13218-012-0190-7}
                            }
    			
    			
    					
    Escalante-B., A.N. & Wiskott, L. 2012 How to solve classification and regression problems on real data with slow feature analysis Poster at the 21st Machine Learning Summer School, Aug 27-Sep 7, Kyoto University, Japan .
     
    misc
    BibTeX:
    			
    			
                            @misc{Escalante-B.Wiskott-2012b,
                              author       = {Alberto N. Escalante-B. and Laurenz Wiskott},
                              title        = {How to solve classification and regression problems on real data with slow feature analysis},
                              year         = {2012},
                              howpublished = {Poster at the 21\textsuperscript{st} Machine Learning Summer School, Aug 27 -- Sep 7, Kyoto University, Japan}
                            }
    			
    			
    					
    Escalante-B., A.N. & Wiskott, L. 2013 How to solve classification and regression problems on high-dimensional data with a supervised extension of slow feature analysis Journal of Machine Learning Research , 14, 3683-3719.
     
    article
    Abstract: Supervised learning from high-dimensional data, e.g., multimedia data, is a challenging task. We propose an extension of slow feature analysis (SFA) for supervised dimensionality reduction called graph-based SFA (GSFA). The algorithm extracts a label-predictive low-dimensional set of features that can be post-processed by typical supervised algorithms to generate the final label or class estimation. GSFA is trained with a so-called training graph, in which the vertices are the samples and the edges represent similarities of the corresponding labels. A new weighted SFA optimization problem is introduced, generalizing the notion of slowness from sequences of samples to such training graphs. We show that GSFA computes an optimal solution to this problem in the considered function space, and propose several types of training graphs. For classification, the most straightforward graph yields features equivalent to those of (nonlinear) Fisher discriminant analysis. Emphasis is on regression, where four different graphs were evaluated experimentally with a subproblem of face detection on photographs. The method proposed is promising particularly when linear models are insufficient, as well as when feature selection is difficult.
    BibTeX:
    			
    			
                            @article{Escalante-B.Wiskott-2013b,
                              author       = {Alberto N. Escalante-B. and Laurenz Wiskott},
                              title        = {How to solve classification and regression problems on high-dimensional data with a supervised extension of slow feature analysis},
                              journal      = {Journal of Machine Learning Research},
                              year         = {2013},
                              volume       = {14},
                              pages        = {3683--3719},
    			  url          = {http://jmlr.org/papers/v14/escalante13a.html}
                            }
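
    The training-graph idea described in the abstract above generalizes slowness from consecutive samples to arbitrary weighted pairs of samples. The sketch below shows that generalization in its simplest form, with uniform node weights, which is a simplification of the full GSFA formulation; the function name and the hypothetical edge list are assumptions of this illustration, and the code is not the reference implementation.

        import numpy as np

        def gsfa(x, edges, n_slow=1):
            """Graph-based SFA sketch with uniform node weights.
            x     -- samples, shape (N, d)
            edges -- iterable of (i, j, w): sample indices plus an edge weight,
                     chosen large when the two samples' labels are similar."""
            x = x - x.mean(axis=0)
            evals, evecs = np.linalg.eigh(np.cov(x, rowvar=False))
            keep = evals > 1e-10
            W_white = evecs[:, keep] / np.sqrt(evals[keep])
            z = x @ W_white
            # Edge-weighted difference covariance replaces the time derivative.
            dcov = np.zeros((z.shape[1], z.shape[1]))
            total = 0.0
            for i, j, w in edges:
                d = z[i] - z[j]
                dcov += w * np.outer(d, d)
                total += w
            d_evals, d_evecs = np.linalg.eigh(dcov / total)
            W = W_white @ d_evecs[:, :n_slow]
            return x @ W, W

        # Standard SFA is the special case of a linear chain over time:
        #     edges = [(t, t + 1, 1.0) for t in range(len(x) - 1)]
        # A simple classification graph instead connects all pairs of samples
        # that share the same class label.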
    			
    			
    					
    Escalante-B., A.N. & Wiskott, L. 2015 Theoretical analysis of the optimal free responses of graph-based SFA for the design of training graphs e-print arXiv:1509.08329 .
     
    misc
    Abstract: Slow feature analysis (SFA) is an unsupervised learning algorithm that extracts slowly varying features from a time series. Graph-based SFA (GSFA) is a supervised extension that can solve regression problems if followed by a post-processing regression algorithm. A training graph specifies arbitrary connections between the training samples. The connections in current graphs, however, only depend on the rank of the involved labels. Exploiting the exact label values makes further improvements in estimation accuracy possible. In this article, we propose the exact label learning (ELL) method to create a graph that codes the desired label explicitly, so that GSFA is able to extract a normalized version of it directly. The ELL method is used for three tasks: (1) We estimate gender from artificial images of human faces (regression) and show the advantage of coding additional labels, particularly skin color. (2) We analyze two existing graphs for regression. (3) We extract compact discriminative features to classify traffic sign images. When the number of output features is limited, a higher classification rate is obtained compared to a graph equivalent to nonlinear Fisher discriminant analysis. The method is versatile, directly supports multiple labels, and provides higher accuracy compared to current graphs for the problems considered.
    BibTeX:
    			
    			
                            @misc{Escalante-B.Wiskott-2015,
                              author       = {Alberto N. Escalante-B. and Laurenz Wiskott},
                              title        = {Theoretical analysis of the optimal free responses of graph-based {SFA} for the design of training graphs},
                              year         = {2015},
                              howpublished = {e-print arXiv:1509.08329},
    			  url          = {http://arxiv.org/abs/1509.08329}
                            }
    			
    			
    					
    Escalante-B., A.N. & Wiskott, L. 2016 Theoretical analysis of the optimal free responses of graph-based SFA for the design of training graphs. Journal of Machine Learning Research , 17(157), 1-36.
     
    article
    Abstract: Slow feature analysis (SFA) is an unsupervised learning algorithm that extracts slowly varying features from a multi-dimensional time series. Graph-based SFA (GSFA) is an extension to SFA for supervised learning that can be used to successfully solve regression problems if combined with a simple supervised post-processing step on a small number of slow features. The objective function of GSFA minimizes the squared output differences between pairs of samples specified by the edges of a structure called training graph. The edges of current training graphs, however, are derived only from the relative order of the labels. Exploiting the exact numerical value of the labels enables further improvements in label estimation accuracy. In this article, we propose the exact label learning (ELL) method to create a more precise training graph that encodes the desired labels explicitly and allows GSFA to extract a normalized version of them directly (i.e., without supervised post-processing). The ELL method is used for three tasks: (1) We estimate gender from artificial images of human faces (regression) and show the advantage of coding additional labels, particularly skin color. (2) We analyze two existing graphs for regression. (3) We extract compact discriminative features to classify traffic sign images. When the number of output features is limited, such compact features provide a higher classification rate compared to a graph that generates features equivalent to those of nonlinear Fisher discriminant analysis. The method is versatile, directly supports multiple labels, and provides higher accuracy compared to current graphs for the problems considered.
    BibTeX:
    			
    			
                            @article{Escalante-B.Wiskott-2016b,
                              author       = {Alberto N. Escalante-B. and Laurenz Wiskott},
                              title        = {Theoretical analysis of the optimal free responses of graph-based {SFA} for the design of training graphs.},
                              journal      = {Journal of Machine Learning Research},
                              year         = {2016},
                              volume       = {17},
                              number       = {157},
                              pages        = {1--36},
    			  url          = {http://jmlr.org/papers/v17/15-311.html}
                            }
    			
    			
    					
    Escalante-B., A.-N. & Wiskott, L. 2013 How to solve classification and regression problems on high-dimensional data with a supervised extension of slow feature analysis Cognitive Sciences EPrint Archive (CogPrints) , 8966.
     
    misc
    BibTeX:
    			
    			
                            @misc{Escalante-B.Wiskott-2013a,
                              author       = {Alberto-N. Escalante-B. and Laurenz Wiskott},
                              title        = {How to solve classification and regression problems on high-dimensional data with a supervised extension of slow feature analysis},
                              year         = {2013},
                              volume       = {8966},
                              howpublished = {Cognitive Sciences EPrint Archive (CogPrints)},
    			  url          = {http://cogprints.org/8966/}
                            }
    			
    			
    					
    Fan, K.; Wang, P.C. & Zhuang, S. 2018 Human fall detection using slow feature analysis Multimedia Tools and Applications , 1-28.
     
    article
    BibTeX:
    			
    			
                            @article{FanWangEtAl-2018,
                              author       = {Kaibo Fan and Ping Chuan Wang and Shuo Zhuang},
                              title        = {Human fall detection using slow feature analysis},
                              journal      = {Multimedia Tools and Applications},
                              year         = {2018},
                              pages        = {1-28},
                              doi          = {http://doi.org/10.1007/s11042-018-5638-9}
                            }
    			
    			
    					
    Fang, J.; Rüther, N.; Bellebaum, C.; Wiskott, L. & Cheng, S. 2018 The Interaction between Semantic Representation and Episodic Memory Neural Computation , 30, 293-332.
     
    article
    BibTeX:
    			
    			
                            @article{FangRuetherEtAl-2018,
                              author       = {Jing Fang and Naima R{\"u}ther and Christian Bellebaum and Laurenz Wiskott and Sen Cheng},
                              title        = {The Interaction between Semantic Representation and Episodic Memory},
                              journal      = {Neural Computation},
                              year         = {2018},
                              volume       = {30},
                              pages        = {293-332},
                              doi          = {http://doi.org/10.1162/neco_a_01044}
                            }
    			
    			
    					
    Feichtenhofer, C.; Pinz, A. & Wildes, R.P. 2014 Bags of spacetime energies for dynamic scene recognition Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition , 2681-2688.
     
    inproceedings
    Abstract: This paper presents a unified bag of visual word (BoW) framework for dynamic scene recognition. The approach builds on primitive features that uniformly capture spatial and temporal orientation structure of the imagery (e.g., video), as extracted via application of a bank of spatiotemporally oriented filters. Various feature encoding techniques are investigated to abstract the primitives to an intermediate representation that is best suited to dynamic scene representation. Further, a novel approach to adaptive pooling of the encoded features is presented that captures spatial layout of the scene even while being robust to situations where camera motion and scene dynamics are confounded. The resulting overall approach has been evaluated on two standard, publically available dynamic scene datasets. The results show that in comparison to a representative set of alternatives, the proposed approach outperforms the previous state-of-the-art in classification accuracy by 10%.
    BibTeX:
    			
    			
                            @inproceedings{FeichtenhoferPinzEtAl-2014,
                              author       = {Feichtenhofer, Christoph and Pinz, Axel and Wildes, Richard P},
                              title        = {Bags of spacetime energies for dynamic scene recognition},
                              booktitle    = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
                              year         = {2014},
                              pages        = {2681--2688},
    			  url          = {http://ieeexplore.ieee.org/document/6909739/?arnumber=6909739},
                              url2         = {https://pdfs.semanticscholar.org/ea4e/3d9a4bfe3c57a3cf9ebca7313e91a8b2bd5c.pdf},
                              doi          = {http://doi.org/10.1109/cvpr.2014.343}
                            }
    			
    			
    					
    Fischer, M.J. 2016 Predictable components in global speleothem δ18O Quaternary Science Reviews , 131, 380-392.
    Publ. Elsevier.
     
    article
    Abstract: The earth's ice–ocean–atmosphere system is made up of subsystems which have different dynamics and which evolve at different timescales. Examples include the slow dynamics of ice sheet growth and melting, the tropical response to precessional cycles (∼21,000 years), and the fast dynamics of Dansgaard–Oeschger cycles (∼1500 years). Since dynamical systems evolve along characteristic trajectories, they are, to some extent, predictable. Further, it should be possible to decompose any dynamical system that is made up of subsystems with discrete dynamics and characteristic timescales, into time series which capture those discrete components. This study reviews five methods which can potentially achieve this, including: Optimal Persistence Analysis (OPA), Slow Feature Analysis (SFA), Principal Trend Analysis (PTA), Average Predictability Time Decomposition (APTD) and Forecastable Components Analysis (ForeCA). These methods produce sets of components that are in some way predictable, such that each component is more predictable than the next component, but each method uses a different measure of predictability. The five methods are applied to a global dataset of speleothem δ18O spanning the period 22–0 ka BP. The two leading predictable components are a monotonic trend, and a low-frequency oscillation with a periodicity of ∼21,000 years. The methods ForeCA and PTA cleanly separate these two components from higher-frequency signals. The third predictable component consists predominantly of a peak which ramps up during Heinrich Stadial 1, and falls thereafter. Furthermore, predictable components analysis can be used not only to investigate the predictability within a field, but can be extended to exploring the predictability between fields, such as between the northern hemisphere field and the southern hemisphere field. Predictable components analysis allows a better insight into the dynamical components of climate fields, and hence should be a useful tool for improving the interpretation of paleo-isotope records and other climate proxies.
    BibTeX:
    			
    			
                            @article{Fischer-2016,
                              author       = {Matt J. Fischer},
                              title        = {Predictable components in global speleothem $\delta^{18}${O}},
                              journal      = {Quaternary Science Reviews},
                              publisher    = {Elsevier},
                              year         = {2016},
                              volume       = {131},
                              pages        = {380--392},
    			  url          = {http://www.sciencedirect.com/science/article/pii/S0277379115001316},
                              doi          = {http://doi.org/10.1016/j.quascirev.2015.03.024}
                            }
    			
    			
    					
    Franzius, M. 2008 Slowness and sparseness for unsupervised learning of spatial and object codes from naturalistic data. Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I.
     
    phdthesis
    Abstract: This thesis introduces a hierarchical model for unsupervised learning from naturalistic video sequences. The model is based on the principles of slowness and sparseness. Different approaches and implementations for these principles are discussed. A variety of neuron classes in the hippocampal formation of rodents and primates codes for different aspects of space surrounding the animal, including place cells, head direction cells, spatial view cells and grid cells. In the main part of this thesis, video sequences from a virtual reality environment are used for training the hierarchical model. The behavior of most known hippocampal neuron types coding for space are reproduced by this model. The type of representations generated by the model is mostly determined by the movement statistics of the simulated animal. The model approach is not limited to spatial coding. An application of the model to invariant object recognition is described, where artificial clusters of spheres or rendered fish are presented to the model. The resulting representations allow a simple extraction of the identity of the object presented as well as of its position and viewing angle.
    BibTeX:
    			
    			
                            @phdthesis{Franzius-2008,
                              author       = {Mathias Franzius},
                              title        = {Slowness and sparseness for unsupervised learning of spatial and object codes from naturalistic data.},
                              school       = {Humboldt-Universit{\"{a}}t zu Berlin, Mathematisch-Naturwissenschaftliche Fakult{\"{a}}t I},
                              year         = {2008},
    			  url          = {http://edoc.hu-berlin.de/docviews/abstract.php?id=29124}
                            }
    			
    			
    					
    Franzius, M.; Sprekeler, H. & Wiskott, L. 2007 Slowness and sparseness lead to place-, head direction-, and spatial-view cells. Proc. 3rd Annual Computational Cognitive Neuroscience Conference, Nov 1-2, San Diego, USA , III-8.
    Eds. Becker, S. & others
     
    inproceedings
    Abstract: We present a model for the self-organized formation of place cells, head-direction cells, and spatial-view cells in the hippocampal formation based on unsupervised learning on quasi-natural visual stimuli. The model comprises a hierarchy of Slow Feature Analysis (SFA) nodes, which were recently shown to reproduce many properties of complex cells in the early visual system [1]. The system extracts a distributed grid-like representation of position and orientation, which is transcoded into a localized place-field, head-direction, or view representation, by sparse coding. The type of cells that develops depends solely on the relevant input statistics, i.e., the movement pattern of the simulated animal. The numerical simulations are complemented by a mathematical analysis that allows us to accurately predict the output of the top SFA layer.
    BibTeX:
    			
    			
                            @inproceedings{FranziusSprekelerEtAl-2007e,
                              author       = {Mathias Franzius and Henning Sprekeler and Laurenz Wiskott},
                              title        = {Slowness and sparseness lead to place-, head direction-, and spatial-view cells.},
                              booktitle    = {Proc.\ 3\textsuperscript{rd} Annual Computational Cognitive Neuroscience Conference, Nov 1--2, San Diego, USA},
                              year         = {2007},
                              pages        = {III-8},
    			  url          = {http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.0030166},
                              doi          = {http://doi.org/10.1371/journal.pcbi.0030166}
                            }
    			
    			
    					
    Franzius, M.; Sprekeler, H. & Wiskott, L. 2007 Unsupervised learning of visually driven place cells in the hippocampus. Kognitionsforschung 2007, Beiträge zur 8. Jahrestagung der Gesellschaft für Kognitionswissenschaft (KogWis'07), Mar 19-21, Saarbrücken, Germany , 60.
    Eds. Frings, C.; Mecklinger, A.; Opitz, B.; Pospeschill, M.; Wentura, D. & Zimmer, H. D.
    Publ. Shaker Verlag, Aachen.
     
    inproceedings
    BibTeX:
    			
    			
                            @inproceedings{FranziusSprekelerEtAl-2007a,
                              author       = {M. Franzius and H. Sprekeler and L. Wiskott},
                              title        = {Unsupervised learning of visually driven place cells in the hippocampus.},
                              booktitle    = {Kognitionsforschung 2007, Beitr{\"a}ge zur 8. Jahrestagung der Gesellschaft f\"ur Kognitionswissenschaft (KogWis'07), Mar 19-21, Saarbr\"ucken, Germany},
                              publisher    = {Shaker Verlag},
                              year         = {2007},
                              pages        = {60}
                            }
    			
    			
    					
    Franzius, M.; Sprekeler, H. & Wiskott, L. 2006 Slowness leads to place cells. Proc. Berlin Neuroscience Forum, Jun 8-10, Bad Liebenwalde, Germany , 42.
    Publ. Max-Delbrück-Centrum für Molekulare Medizin (MDC), Berlin.
     
    inproceedings
    BibTeX:
    			
    			
                            @inproceedings{FranziusSprekelerEtAl-2006a,
                              author       = {M. Franzius and H. Sprekeler and L. Wiskott},
                              title        = {Slowness leads to place cells.},
                              booktitle    = {Proc.\ Berlin Neuroscience Forum, Jun 8--10, Bad Liebenwalde, Germany},
                              publisher    = {Max-Delbr\"uck-Centrum f\"ur Molekulare Medizin (MDC)},
                              year         = {2006},
                              pages        = {42}
                            }
    			
    			
    					
    Franzius, M.; Sprekeler, H. & Wiskott, L. 2006 Slowness leads to place cells. Proc. 2nd Bernstein Symposium for Computational Neuroscience, Oct 1-3, Berlin, Germany , 45.
    Publ. Bernstein Center for Computational Neuroscience (BCCN) Berlin.
     
    inproceedings
    BibTeX:
    			
    			
                            @inproceedings{FranziusSprekelerEtAl-2006b,
                              author       = {M. Franzius and H. Sprekeler and L. Wiskott},
                              title        = {Slowness leads to place cells.},
                              booktitle    = {Proc.\ 2\textsuperscript{nd} Bernstein Symposium for Computational Neuroscience, Oct 1--3, Berlin, Germany},
                              publisher    = {Bernstein Center for Computational Neuroscience (BCCN) Berlin},
                              year         = {2006},
                              pages        = {45}
                            }
    			
    			
    					
    Franzius, M.; Sprekeler, H. & Wiskott, L. 2006 Slowness leads to place cells. Proc. 15th Annual Computational Neuroscience Meeting (CNS'06), Jul 16-20, Edinburgh, Scotland .
     
    inproceedings
    BibTeX:
    			
    			
                            @inproceedings{FranziusSprekelerEtAl-2006c,
                              author       = {M. Franzius and H. Sprekeler and L. Wiskott},
                              title        = {Slowness leads to place cells.},
                              booktitle    = {Proc.\ 15\textsuperscript{th} Annual Computational Neuroscience Meeting (CNS'06), Jul 16--20, Edinburgh, Scotland},
                              year         = {2006}
                            }
    			
    			
    					
    Franzius, M.; Sprekeler, H. & Wiskott, L. 2007 Unsupervised learning of place cells and head direction cells with slow feature analysis. Proc. 7th Göttingen Meeting of the German Neuroscience Society, Mar 29 - Apr 1, Göttingen, Germany , TS19-1C.
     
    inproceedings
    BibTeX:
    			
    			
                            @inproceedings{FranziusSprekelerEtAl-2007b,
                              author       = {M. Franzius and H. Sprekeler and L. Wiskott},
                              title        = {Unsupervised learning of place cells and head direction cells with slow feature analysis.},
                              booktitle    = {Proc.\ 7\textsuperscript{th} G\"ottingen Meeting of the German Neuroscience Society, Mar 29 -- Apr 1, G\"ottingen, Germany},
                              year         = {2007},
                              pages        = {TS19--1C}
                            }
    			
    			
    					
    Franzius, M.; Sprekeler, H. & Wiskott, L. 2007 Learning of place cells, head-direction cells, and spatial-view cells with slow feature analysis on quasi-natural videos. Cognitive Sciences EPrint Archive (CogPrints) , 5492.
     
    misc
    BibTeX:
    			
    			
                            @misc{FranziusSprekelerEtAl-2007c,
                              author       = {Mathias Franzius and Henning Sprekeler and Laurenz Wiskott},
                              title        = {Learning of place cells, head-direction cells, and spatial-view cells with slow feature analysis on quasi-natural videos.},
                              year         = {2007},
                              volume       = {5492},
                              howpublished = {Cognitive Sciences EPrint Archive (CogPrints)},
    			  url          = {http://cogprints.org/5492/}
                            }
    			
    			
    					
    Franzius, M.; Sprekeler, H. & Wiskott, L. 2007 Slowness and sparseness lead to place, head-direction, and spatial-view cells. PLoS Computational Biology , 3(8), e166.
     
    article
    Abstract: We present a model for the self-organized formation of place cells, head-direction cells, and spatial-view cells in the hippocampal formation based on unsupervised learning on quasi-natural visual stimuli. The model comprises a hierarchy of Slow Feature Analysis (SFA) nodes, which were recently shown to reproduce many properties of complex cells in the early visual system [1]. The system extracts a distributed grid-like representation of position and orientation, which is transcoded into a localized place-field, head-direction, or view representation, by sparse coding. The type of cells that develops depends solely on the relevant input statistics, i.e., the movement pattern of the simulated animal. The numerical simulations are complemented by a mathematical analysis that allows us to accurately predict the output of the top SFA layer.
    BibTeX:
    			
    			
                            @article{FranziusSprekelerEtAl-2007d,
                              author       = {Mathias Franzius and Henning Sprekeler and Laurenz Wiskott},
                              title        = {Slowness and sparseness lead to place, head-direction, and spatial-view cells.},
                              journal      = {PLoS Computational Biology},
                              year         = {2007},
                              volume       = {3},
                              number       = {8},
                              pages        = {e166},
    			  url          = {http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.0030166},
                              url2         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/FranziusSprekelerEtAl-2007d-PLoSCompBiol-PlaceCells.pdf},
                              doi          = {http://doi.org/10.1371/journal.pcbi.0030166}
                            }
    			
    			
    					
    Franzius, M.; Vollgraf, R. & Wiskott, L. 2006 From grids to places. Cognitive Sciences EPrint Archive (CogPrints) , 5101.
     
    misc
    Abstract: Hafting et al. (2005) described grid cells in the dorsocaudal region of the medial entorhinal cortex (dMEC). These cells show a strikingly regular grid-like firing-pattern as a function of the position of a rat in an enclosure. Since the dMEC projects to the hippocampal areas containing the well-known place cells, the question arises whether and how the localized responses of the latter can emerge based on the output of grid cells. Here, we show that, starting with simulated grid-cells, a simple linear transformation maximizing sparseness leads to a localized representation similar to place fields.
    BibTeX:
    			
    			
                            @misc{FranziusVollgrafEtAl-2006,
                              author       = {Mathias Franzius and Roland Vollgraf and Laurenz Wiskott},
                              title        = {From grids to places.},
                              year         = {2006},
                              volume       = {5101},
                              howpublished = {Cognitive Sciences EPrint Archive (CogPrints)},
    			  url          = {http://cogprints.org/5101/}
                            }
    			
    			
    					
    Franzius, M.; Vollgraf, R. & Wiskott, L. 2007 From grids to places. Journal of Computational Neuroscience , 22(3), 297-299.
     
    article
    Abstract: Hafting et al. (2005) described grid cells in the dorsocaudal region of the medial entorhinal cortex (dMEC). These cells show a strikingly regular grid-like firing-pattern as a function of the position of a rat in an enclosure. Since the dMEC projects to the hippocampal areas containing the well-known place cells, the question arises whether and how the localized responses of the latter can emerge based on the output of grid cells. Here, we show that, starting with simulated grid-cells, a simple linear transformation maximizing sparseness leads to a localized representation similar to place fields.
    BibTeX:
    			
    			
                            @article{FranziusVollgrafEtAl-2007-SFA,
                              author       = {Mathias Franzius and Roland Vollgraf and Laurenz Wiskott},
                              title        = {From grids to places.},
                              journal      = {Journal of Computational Neuroscience},
                              year         = {2007},
                              volume       = {22},
                              number       = {3},
                              pages        = {297--299},
    			  url          = {http://www.springerlink.com/content/r6lj66670057871q/},
                              url2         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/FranziusVollgrafEtAl-2007-JCompNeurosci-GridsToPlaces-Preprint.pdf},
                              doi          = {http://doi.org/10.1007/s10827-006-0013-7}
                            }
    			
    			
    					
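    The two "From grids to places" entries above describe a simple linear transformation that maximizes sparseness and turns simulated grid-cell responses into localized, place-field-like responses. The sketch below illustrates that general idea only and is not the authors' code: it generates idealized grid-cell rate maps as sums of three plane waves and then applies FastICA from scikit-learn, used here merely as a convenient stand-in for a sparseness-maximizing linear transform; the population size, grid spacings, and the kurtosis-based sparseness check are illustrative assumptions.

        # Illustrative only: idealized grid cells plus a sparseness-seeking linear
        # transform (FastICA as a stand-in), loosely following the abstract above.
        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)

        # Sampled positions in a unit square (a crude stand-in for a rat trajectory).
        pos = rng.uniform(0.0, 1.0, size=(5000, 2))

        def grid_cell(pos, spacing, phase, angle):
            """Idealized grid-cell rate map: sum of three cosine gratings at 60 degree offsets."""
            rate = np.zeros(len(pos))
            for k in range(3):
                theta = angle + k * np.pi / 3.0
                wave = (4 * np.pi / (np.sqrt(3) * spacing)) * np.array([np.cos(theta), np.sin(theta)])
                rate += np.cos((pos - phase) @ wave)
            return rate

        # A small population of grid cells with varying spacing, phase, and orientation.
        grids = np.column_stack([
            grid_cell(pos,
                      spacing=rng.uniform(0.3, 0.7),
                      phase=rng.uniform(0.0, 1.0, size=2),
                      angle=rng.uniform(0.0, np.pi / 3.0))
            for _ in range(40)
        ])

        # Linear transform chosen to make the outputs sparse / non-Gaussian.
        ica = FastICA(n_components=20, random_state=0, max_iter=1000)
        units = ica.fit_transform(grids)           # rows: positions, columns: learned units

        def excess_kurtosis(x):
            z = (x - x.mean(axis=0)) / x.std(axis=0)
            return (z ** 4).mean(axis=0) - 3.0     # crude sparseness index per column

        print("mean kurtosis, grid inputs :", excess_kurtosis(grids).mean())
        print("mean kurtosis, ICA outputs :", excess_kurtosis(units).mean())

    A more localized, place-like code should show up as a markedly higher kurtosis of the transformed responses than of the grid-like inputs; the two print statements give a crude check of that.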
    Franzius, M. & Wersing, H. 2010 Learning invariant visual shape representations from physics Artificial Neural Networks - ICANN 2010 , Lecture Notes in Computer Science , 6354, 298-302.
    Eds. Diamantaras, K.; Duch, W. & Iliadis, L.
    Publ. Springer Berlin Heidelberg.
     
    inproceedings
    Abstract: 3D shape determines an object’s physical properties to a large degree. In this article, we introduce an autonomous learning system for categorizing 3D shape of simulated objects from single views. The system extends an unsupervised bottom-up learning architecture based on the slowness principle with top-down information derived from the physical behavior of objects. The unsupervised bottom-up learning leads to pose invariant representations. Shape specificity is then integrated as top-down information from the movement trajectories of the objects. As a result, the system can categorize 3D object shape from a single static object view without supervised postprocessing.
    BibTeX:
    			
    			
                            @inproceedings{FranziusWersing-2010,
                              author       = {Franzius, Mathias and Wersing, Heiko},
                              title        = {Learning invariant visual shape representations from physics},
                              booktitle    = {Artificial Neural Networks -- ICANN 2010},
                              publisher    = {Springer Berlin Heidelberg},
                              year         = {2010},
                              volume       = {6354},
                              pages        = {298--302},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-642-15825-4_38},
                              url2         = {https://pdfs.semanticscholar.org/bb8c/c7906f1d16fc8806fd21408992a68a307dbf.pdf},
                              doi          = {http://doi.org/10.1007/978-3-642-15825-4_38}
                            }
    			
    			
    					
    Franzius, M. & Wersing, H. 2014 Robot with vision-based 3D shape recognition .
    Publ. Google Patents.
     
    misc
    BibTeX:
    			
    			
                            @misc{FranziusWersing-2014,
                              author       = {Franzius, Mathias and Wersing, Heiko},
                              title        = {Robot with vision-based {3D} shape recognition},
                              publisher    = {Google Patents},
                              year         = {2014}
                            }
    			
    			
    					
    Franzius, M.; Wilbert, N. & Wiskott, L. 2008 Invariant object recognition with slow feature analysis. Proc. 18th Intl. Conf. on Artificial Neural Networks (ICANN'08), Prague , Lecture Notes in Computer Science , 5163, 961-970.
    Eds. Kurková, V.; Neruda, R. & Koutník, J.
    Publ. Springer.
     
    inproceedings
    Abstract: Primates are very good at recognizing objects independently of viewing angle or retinal position and outperform existing computer vision systems by far. But invariant object recognition is only one prerequisite for successful interaction with the environment. An animal also needs to assess an object’s position and relative rotational angle. We propose here a model that is able to extract object identity, position, and rotation angles, where each code is independent of all others. We demonstrate the model behavior on complex three-dimensional objects under translation and in-depth rotation on homogeneous backgrounds. A similar model has previously been shown to extract hippocampal spatial codes from quasi-natural videos. The rigorous mathematical analysis of this earlier application carries over to the scenario of invariant object recognition.
    BibTeX:
    			
    			
                            @inproceedings{FranziusWilbertEtAl-2008b,
                              author       = {Mathias Franzius and Niko Wilbert and Laurenz Wiskott},
                              title        = {Invariant object recognition with slow feature analysis.},
                              booktitle    = {Proc.\ 18\textsuperscript{th} Intl.\ Conf.\ on Artificial Neural Networks (ICANN'08), Prague},
                              publisher    = {Springer},
                              year         = {2008},
                              volume       = {5163},
                              pages        = {961--970},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-540-87536-9_98},
                              url2         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/FranziusWilbertEtAl-2008b-ProcICANN-SFAInvariances2D.pdf},
                              doi          = {http://doi.org/10.1007/978-3-540-87536-9_98}
                            }
    			
    			
    					
    Franzius, M.; Wilbert, N. & Wiskott, L. 2007 Unsupervised learning of invariant 3D-object representations with slow feature analysis. Proc. 3rd Bernstein Symposium for Computational Neuroscience, Sep 24-27, Göttingen, Germany , 105.
    Publ. Bernstein Center for Computational Neuroscience (BCCN) Göttingen.
     
    inproceedings
    BibTeX:
    			
    			
                            @inproceedings{FranziusWilbertEtAl-2007,
                              author       = {Mathias Franzius and Niko Wilbert and Laurenz Wiskott},
                              title        = {Unsupervised learning of invariant 3{D}-object representations with slow feature analysis.},
                              booktitle    = {Proc.\ 3\textsuperscript{rd} Bernstein Symposium for Computational Neuroscience, Sep 24--27, G\"ottingen, Germany},
                              publisher    = {Bernstein Center for Computational Neuroscience (BCCN) G\"ottingen},
                              year         = {2007},
                              pages        = {105}
                            }
    			
    			
    					
    Franzius, M.; Wilbert, N. & Wiskott, L. 2008 Unsupervised learning of invariant 3D-object and pose representations with slow feature analysis. Proc. Federation of European Neuroscience Societies Forum (FENS'08), Jul 12-16, Geneva, Switzerland .
     
    inproceedings
    BibTeX:
    			
    			
                            @inproceedings{FranziusWilbertEtAl-2008a,
                              author       = {Mathias Franzius and Niko Wilbert and Laurenz Wiskott},
                              title        = {Unsupervised learning of invariant 3{D}-object and pose representations with slow feature analysis.},
                              booktitle    = {Proc.\ Federation of European Neuroscience Societies Forum (FENS'08), Jul 12--16, Geneva, Switzerland},
                              year         = {2008}
                            }
    			
    			
    					
    Franzius, M.; Wilbert, N. & Wiskott, L. 2011 Invariant object recognition and pose estimation with slow feature analysis. Neural Computation , 23(9), 2289-2323.
    Publ. MIT Press - Journals.
     
    article
    Abstract: Primates are very good at recognizing objects independent of viewing angle or retinal position, and they outperform existing computer vision systems by far. But invariant object recognition is only one prerequisite for successful interaction with the environment. An animal also needs to assess an object's position and relative rotational angle. We propose here a model that is able to extract object identity, position, and rotation angles. We demonstrate the model behavior on complex three-dimensional objects under translation and rotation in depth on a homogeneous background. A similar model has previously been shown to extract hippocampal spatial codes from quasi-natural videos. The framework for mathematical analysis of this earlier application carries over to the scenario of invariant object recognition. Thus, the simulation results can be explained analytically even for the complex high-dimensional data we employed.
    BibTeX:
    			
    			
                            @article{FranziusWilbertEtAl-2011,
                              author       = {Franzius, Mathias and Wilbert, Niko and Wiskott, Laurenz},
                              title        = {Invariant object recognition and pose estimation with slow feature analysis.},
                              journal      = {Neural Computation},
                              publisher    = {{MIT} Press - Journals},
                              year         = {2011},
                              volume       = {23},
                              number       = {9},
                              pages        = {2289--2323},
    			  url          = {http://www.mitpressjournals.org/doi/10.1162/NECO_a_00171},
                              url2         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/FranziusWilbertEtAl-2011-NeurComp.pdf},
                              doi          = {http://doi.org/10.1162/NECO_a_00171}
                            }
    			
    			
    					
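    The hierarchical models in the Franzius, Wilbert & Wiskott entries above stack SFA over local receptive fields, with a nonlinear (quadratic) expansion before each linear SFA step. The sketch below is a minimal two-layer toy version of such a stack, assuming the open-source MDP toolkit (mdp.nodes.SFANode, mdp.nodes.QuadraticExpansionNode); the 16-pixel input, the receptive-field split, and the layer sizes are invented for illustration and do not reproduce the architecture or stimuli used in the papers.

        # Minimal two-layer SFA hierarchy, assuming the MDP toolkit is installed
        # (pip install MDP). Sizes and the toy stimulus are illustrative only.
        import numpy as np
        import mdp

        rng = np.random.default_rng(1)

        # Toy input: a 16-pixel "retina" watching a slowly drifting bump plus noise.
        T, D = 4000, 16
        t = np.arange(T)
        center = 8.0 + 5.0 * np.sin(2 * np.pi * t / 800.0)   # slow latent position
        pix = np.arange(D)
        frames = np.exp(-0.5 * (pix[None, :] - center[:, None]) ** 2)
        frames += 0.05 * rng.standard_normal(frames.shape)

        def sfa_layer(x, n_out):
            """Quadratic expansion followed by linear SFA (the standard SFA recipe)."""
            expanded = mdp.nodes.QuadraticExpansionNode()(x)
            node = mdp.nodes.SFANode(output_dim=n_out)
            node.train(expanded)
            node.stop_training()
            return node.execute(expanded)

        # Layer 1: separate SFA nodes on two 8-pixel receptive fields.
        left = sfa_layer(frames[:, :8], n_out=4)
        right = sfa_layer(frames[:, 8:], n_out=4)

        # Layer 2: SFA on the concatenated layer-1 outputs.
        top = sfa_layer(np.hstack([left, right]), n_out=2)

        # The slowest top-layer feature should covary with the slow latent position.
        corr = np.corrcoef(top[:, 0], center)[0, 1]
        print("correlation of slowest feature with latent position:", round(abs(corr), 3))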
    Fritsch, J.; Kühnl, T. & Kummert, F. 2014 Monocular road terrain detection by combining visual and spatial information IEEE Transactions on Intelligent Transportation Systems , 15(4), 1586-1596.
    Publ. IEEE.
     
    article
    Abstract: For future driver assistance systems and autonomous vehicles, the road course, i.e., the width and shape of the driving path, is an important source of information. In this paper, we introduce a new hierarchical two-stage approach for learning the spatial layout of road scenes. In the first stage, base classifiers analyze the local visual properties of patches extracted from monocular camera images and provide metric confidence maps. We use classifiers for road appearance, boundary appearance, and lane-marking appearance. The core of the proposed approach is the computation of SPatial RAY (SPRAY) features from each metric confidence map in the second stage. A boosting classifier selecting discriminative SPRAY features can be trained for different types of road terrain and allows capturing the local visual properties together with their spatial layout in the scene. In this paper, the extraction of road area and ego-lane on inner-city video streams is demonstrated. In particular, the detection of the ego-lane is a challenging semantic segmentation task showing the power of SPRAY features, because on a local appearance level, the ego-lane is not distinguishable from other lanes. We have evaluated our approach operating at 20 Hz on a graphics processing unit on a publicly available data set, demonstrating the performance on a variety of road types and weather conditions.
    BibTeX:
    			
    			
                            @article{FritschKuehnlEtAl-2014,
                              author       = {Fritsch, Jannik and K{\"{u}}hnl, Tobias and Kummert, Franz},
                              title        = {Monocular road terrain detection by combining visual and spatial information},
                              journal      = {IEEE Transactions on Intelligent Transportation Systems},
                              publisher    = {IEEE},
                              year         = {2014},
                              volume       = {15},
                              number       = {4},
                              pages        = {1586--1596},
    			  url          = {http://ieeexplore.ieee.org/document/6766705/},
                              url2         = {https://pdfs.semanticscholar.org/73b1/10df4809d0a015f90fa6e7a7dce351bcc52e.pdf},
                              doi          = {http://doi.org/10.1109/tits.2014.2303899}
                            }
    			
    			
    					
    Gao, J. & Ye, M. 2010 Comparison of SFA and ICA Advanced Computational Intelligence (IWACI), 2010 Third International Workshop on , 62-65.
     
    inproceedings
    Abstract: Recently, a new method, slow feature analysis (SFA), which can extract slowly varying features of temporally varying signals, has been explored. The SFA method is an extension of independent component analysis (ICA), which has been used to separate blind source signals. In this article, we present a simple and efficient SFA-based method to separate blind signals according to their different degrees of smoothness. The performance of the proposed method is higher than that of the conventional ICA method. Simulations illustrate the good performance of the proposed method.
    BibTeX:
    			
    			
                            @inproceedings{GaoYe-2010,
                              author       = {Gao, Jianbin and Ye, Mao},
                              title        = {Comparison of {SFA} and {ICA}},
                              booktitle    = {Advanced Computational Intelligence (IWACI), 2010 Third International Workshop on},
                              year         = {2010},
                              pages        = {62--65},
    			  url          = {http://ieeexplore.ieee.org/document/5585205/},
                              doi          = {http://doi.org/10.1109/iwaci.2010.5585205}
                            }
    			
    			
    					
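    The entry above compares SFA with ICA as blind-source-separation methods that differ in the criterion used to unmix the signals (slowness versus statistical independence). As a self-contained sketch of the SFA side, written from the usual textbook formulation rather than the authors' implementation, the code below implements linear SFA in NumPy (whiten the data, then take the directions in which the temporal difference has least variance) and uses it to pull a slow sinusoid out of a random two-channel mixture; the toy signals and mixing matrix are invented for illustration.

        # Linear SFA in plain NumPy: whiten, then minimize the variance of the
        # temporal difference. Toy blind-source-separation demo; all signals invented.
        import numpy as np

        rng = np.random.default_rng(2)

        def linear_sfa(x, n_out):
            """Return the n_out slowest output signals of linear SFA applied to x (T x D)."""
            x = x - x.mean(axis=0)
            # Whitening via eigendecomposition of the covariance matrix.
            cov = np.cov(x, rowvar=False)
            evals, evecs = np.linalg.eigh(cov)
            whitener = evecs / np.sqrt(evals)          # columns scaled to unit variance
            z = x @ whitener
            # Slow directions: smallest-eigenvalue directions of the difference covariance.
            dz = np.diff(z, axis=0)
            dcov = np.cov(dz, rowvar=False)
            devals, devecs = np.linalg.eigh(dcov)      # eigenvalues in ascending order
            return z @ devecs[:, :n_out]

        # Two sources with different degrees of smoothness: a slow sine and fast noise.
        T = 3000
        t = np.arange(T)
        slow = np.sin(2 * np.pi * t / 500.0)
        fast = rng.standard_normal(T)
        sources = np.column_stack([slow, fast])

        # Random invertible mixing, then unmixing by slowness.
        mixing = rng.standard_normal((2, 2))
        mixed = sources @ mixing.T
        recovered = linear_sfa(mixed, n_out=2)

        # The first recovered component should align with the slow source (up to sign).
        print("|corr(slowest output, slow source)| =",
              round(abs(np.corrcoef(recovered[:, 0], slow)[0, 1]), 3))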
    Gao, J. & Zhao, C. 2018 Distributed Bayesian network with slow feature analysis for fault diagnosis 2018 33rd Youth Academic Annual Conference of Chinese Association of Automation (YAC) , 1100-1105.
     
    inproceedings
    BibTeX:
    			
    			
                            @inproceedings{GaoZhao-2018a,
                              author       = {Gao, Jie and Zhao, Chunhui},
                              title        = {Distributed Bayesian network with slow feature analysis for fault diagnosis},
                              booktitle    = {2018 33rd Youth Academic Annual Conference of Chinese Association of Automation (YAC)},
                              year         = {2018},
                              pages        = {1100--1105},
                              doi          = {http://doi.org/10.1109/yac.2018.8406535}
                            }
    			
    			
    					
    Gao, J.-B.; Li, J.-P. & Xia, Q. 2008 Slowly feature analysis of Gabor feature for face recognition 2008 International Conference on Apperceiving Computing and Intelligence Analysis , 177-180.
    Publ. Institute of Electrical and Electronics Engineers (IEEE).
     
    inproceedings
    Abstract: Obtaining invariant representations of time-varying signals is one of the major problems in object recognition. Recently, a new method, slow feature analysis (SFA), which can extract invariant features of temporally varying signals, has been explored; it is an extension of independent component analysis (ICA), which has been used for extracting facial features. The technique of SFA can be extended to the field of face recognition easily. Gabor-feature face images exhibit strong characteristics of spatial locality, scale, and orientation selectivity. These images can produce pronounced local features that are most suitable for face recognition. SFA further reduces redundancy and represents slowly varying features explicitly. These slowly varying features are most useful for subsequent pattern discrimination and associative recall. Making use of this method, in this paper we propose a new face recognition algorithm based on Gabor face features and slowly varying feature analysis. Results indicate that our algorithm is effective and competitive.
    BibTeX:
    			
    			
                            @inproceedings{GaoLiEtAl-2008,
                              author       = {Jian-Bin Gao and Jian-Ping Li and Qi Xia},
                              title        = {Slowly feature analysis of {G}abor feature for face recognition},
                              booktitle    = {2008 International Conference on Apperceiving Computing and Intelligence Analysis},
                              publisher    = {Institute of Electrical and Electronics Engineers ({IEEE})},
                              year         = {2008},
                              pages        = {177--180},
    			  url          = {http://ieeexplore.ieee.org/document/4769999/},
                              doi          = {http://doi.org/10.1109/ICACIA.2008.4769999}
                            }
    			
    			
    					
    Gao, X.; Li, H.; Wang, Y.; Chen, T.; Zuo, X. & Zhong, L. 2018 Fault Detection in Managed Pressure Drilling Using Slow Feature Analysis IEEE Access , 6, 34262-34271.
    Publ. Institute of Electrical and Electronics Engineers.
     
    article
    BibTeX:
    			
    			
                            @article{GaoLiEtAl-2018,
                              author       = {Xiaoyong Gao and Hao Li and Yuhong Wang and Tao Chen and Xin Zuo and Lei Zhong},
                              title        = {Fault Detection in Managed Pressure Drilling Using Slow Feature Analysis},
                              journal      = {IEEE Access},
                              publisher    = {Institute of Electrical and Electronics Engineers},
                              year         = {2018},
                              volume       = {6},
                              pages        = {34262--34271},
                              doi          = {http://doi.org/10.1109/access.2018.2846295}
                            }
    			
    			
    					
    Gao, X.; Shang, C.; Yang, F. & Huang, D. 2015 Detecting and isolating plant-wide oscillations via slow feature analysis 2015 American Control Conference (ACC) , 906-911.
     
    inproceedings
    Abstract: This paper aims at detecting and isolating multiple sources of oscillations in control loops via slow feature analysis. The control loops in the process industries are usually coupled, and therefore disturbances can propagate to downstream process variables through energy or material flows and thus plant-wide disturbances arise. A significant portion of disturbances are oscillatory, and the root causes may be poor controller design or equipment faults such as valve stiction. It is important to find out locations of these oscillation sources so that further root cause diagnosis is possible. A new technique termed as slow feature analysis (SFA) is applied to detect plant-wide oscillations and isolate the sources at the loop level. SFA can recover slowly varying source signals from observed data. Since most oscillations in the process industries have low oscillatory frequencies, SFA is a very powerful tool to recover oscillation sources from observed process data. Two projection-based indices, CCI and CSI, are derived to investigate how the control loops are affected by the oscillations and isolate oscillation sources at the loop level. A simulation case study is presented to demonstrate the effectiveness of the proposed method.
    BibTeX:
    			
    			
                            @inproceedings{GaoShangEtAl-2015,
                              author       = {Gao, Xinqing and Shang, Chao and Yang, Fan and Huang, Dexian},
                              title        = {Detecting and isolating plant-wide oscillations via slow feature analysis},
                              booktitle    = {2015 American Control Conference (ACC)},
                              year         = {2015},
                              pages        = {906--911},
    			  url          = {http://ieeexplore.ieee.org/document/7170849/},
                              doi          = {http://doi.org/10.1109/acc.2015.7170849}
                            }
    			
    			
    					
    Ghosh, R.; Siyi, T.; Rasouli, M.; Thakor, N.V. & Kukreja, S.L. 2016 Pose-invariant object recognition for event-based vision with slow-ELM International Conference on Artificial Neural Networks , 455-462.
     
    inproceedings
    Abstract: Neuromorphic image sensors produce activity-driven spiking output at every pixel. These low-power consuming imagers which encode visual change information in the form of spikes help reduce computational overhead and realize complex real-time systems; object recognition and pose-estimation to name a few. However, there exists a lack of algorithms in event-based vision aimed towards capturing invariance to transformations. In this work, we propose a methodology for recognizing objects invariant to their pose with the Dynamic Vision Sensor (DVS). A novel slow-ELM architecture is proposed which combines the effectiveness of Extreme Learning Machines and Slow Feature Analysis. The system, tested on an Intel Core i5-4590 CPU, can perform 10,000 classifications per second and achieves 1% classification error for 8 objects with views accumulated over 90° of 2D pose.
    BibTeX:
    			
    			
                            @inproceedings{GhoshSiyiEtAl-2016,
                              author       = {Ghosh, Rohan and Siyi, Tang and Rasouli, Mahdi and Thakor, Nitish V and Kukreja, Sunil L},
                              title        = {Pose-invariant object recognition for event-based vision with slow-{ELM}},
                              booktitle    = {International Conference on Artificial Neural Networks},
                              year         = {2016},
                              pages        = {455--462},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-319-44781-0_54},
                              doi          = {http://doi.org/10.1007/978-3-319-44781-0_54}
                            }
    			
    			
    					
    Goerg, G.M. 2013 Forecastable component analysis (ForeCA) e-print arXiv:1205.4591v3 .
     
    misc
    Abstract: I introduce Forecastable Component Analysis (ForeCA), a novel dimension reduction technique for temporally dependent signals. Based on a new forecastability measure, ForeCA finds an optimal transformation to separate a multivariate time series into a forecastable and an orthogonal white noise space. I present a converging algorithm with a fast eigenvector solution. Applications to financial and macro-economic time series show that ForeCA can successfully discover informative structure, which can be used for forecasting as well as classification. The R package ForeCA accompanies this work and is publicly available on CRAN.
    BibTeX:
    			
    			
                            @misc{Goerg-2013,
                              author       = {Goerg, Georg M},
                              title        = {Forecastable component analysis ({ForeCA})},
                              year         = {2013},
                              howpublished = {e-print arXiv:1205.4591v3},
    			  url          = {https://arxiv.org/abs/1205.4591v3}
                            }
    			
    			
    					
    Goerg, G.M. 2013 Forecastable component analysis. ICML (2) , 64-72.
     
    inproceedings
    Abstract: I introduce Forecastable Component Analysis (ForeCA), a novel dimension reduction technique for temporally dependent signals. Based on a new forecastability measure, ForeCA finds an optimal transformation to separate a multivariate time series into a forecastable and an orthogonal white noise space. I present a converging algorithm with a fast eigenvector solution. Applications to financial and macro-economic time series show that ForeCA can successfully discover informative structure, which can be used for forecasting as well as classification. The R package ForeCA accompanies this work and is publicly available on CRAN.
    BibTeX:
    			
    			
                            @inproceedings{Goerg-2013a,
                              author       = {Goerg, Georg M},
                              title        = {Forecastable component analysis.},
                              booktitle    = {ICML (2)},
                              year         = {2013},
                              pages        = {64--72},
    			  url          = {https://pdfs.semanticscholar.org/5be4/060e644b3fa1a6ac967e0f186b9fa3497899.pdf}
                            }
    			
    			
    					
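    Both ForeCA entries above hinge on a scalar forecastability measure for a univariate series. One common way to formalize that intuition, sketched below purely as an assumed simplification (consult the paper or the ForeCA R package for the authoritative definition), is one minus the normalized entropy of the estimated spectral density: white noise scores near zero, a pure sinusoid near one.

        # Hedged sketch of a spectral-entropy "forecastability" score for a series.
        # Assumed, simplified reading of the idea; see the ForeCA paper / R package
        # for the exact definition used there.
        import numpy as np

        def forecastability(x):
            """Return 1 - normalized spectral entropy of x, in [0, 1] (about 0 for white noise)."""
            x = np.asarray(x, dtype=float)
            x = x - x.mean()
            spec = np.abs(np.fft.rfft(x))[1:] ** 2      # periodogram, positive frequencies
            p = spec / spec.sum()                       # normalize to a probability vector
            nonzero = p[p > 0]                          # avoid log(0)
            entropy = -(nonzero * np.log(nonzero)).sum()
            return 1.0 - entropy / np.log(len(p))       # a flat spectrum has maximal entropy

        rng = np.random.default_rng(3)
        t = np.arange(2048)
        print("white noise:", round(forecastability(rng.standard_normal(t.size)), 3))
        print("pure sine  :", round(forecastability(np.sin(2 * np.pi * t / 64.0)), 3))
        print("noisy sine :", round(forecastability(np.sin(2 * np.pi * t / 64.0)
                                                    + rng.standard_normal(t.size)), 3))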
    Grathwohl, W. & Wilson, A. 2016 Disentangling space and time in video with hierarchical variational auto-encoders e-print arXiv:1612.04440 .
     
    misc
    Abstract: There are many forms of feature information present in video data. Principle among them are object identity information which is largely static across multiple video frames, and object pose and style information which continuously transforms from frame to frame. Most existing models confound these two types of representation by mapping them to a shared feature space. In this paper we propose a probabilistic approach for learning separable representations of object identity and pose information using unsupervised video data. Our approach leverages a deep generative model with a factored prior distribution that encodes properties of temporal invariances in the hidden feature set. Learning is achieved via variational inference. We present results of learning identity and pose information on a dataset of moving characters as well as a dataset of rotating 3D objects. Our experimental results demonstrate our model’s success in factoring its representation, and demonstrate that the model achieves improved performance in transfer learning tasks.
    BibTeX:
    			
    			
                            @misc{GrathwohlWilson-2016,
                              author       = {Grathwohl, Will and Wilson, Aaron},
                              title        = {Disentangling space and time in video with hierarchical variational auto-encoders},
                              year         = {2016},
                              howpublished = {e-print arXiv:1612.04440},
    			  url          = {https://arxiv.org/pdf/1612.04440.pdf}
                            }
    			
    			
    					
    Gregor, K. & LeCun, Y. 2010 Emergence of complex-like cells in a temporal product network with local receptive fields e-print arXiv:1006.0448 .
     
    misc
    Abstract: We introduce a new neural architecture and an unsupervised algorithm for learning invariant representations from temporal sequence of images. The system uses two groups of complex cells whose outputs are combined multiplicatively: one that represents the content of the image, constrained to be constant over several consecutive frames, and one that represents the precise location of features, which is allowed to vary over time but constrained to be sparse. The architecture uses an encoder to extract features, and a decoder to reconstruct the input from the features. The method was applied to patches extracted from consecutive movie frames and produces orientation and frequency selective units analogous to the complex cells in V1. An extension of the method is proposed to train a network composed of units with local receptive field spread over a large image of arbitrary size. A layer of complex cells, subject to sparsity constraints, pool feature units over overlapping local neighborhoods, which causes the feature units to organize themselves into pinwheel patterns of orientation-selective receptive fields, similar to those observed in the mammalian visual cortex. A feed-forward encoder efficiently computes the feature representation of full images.
    BibTeX:
    			
    			
                            @misc{GregorLeCun-2010,
                              author       = {Gregor, Karol and LeCun, Yann},
                              title        = {Emergence of complex-like cells in a temporal product network with local receptive fields},
                              year         = {2010},
                              howpublished = {e-print arXiv:1006.0448},
    			  url          = {https://arxiv.org/abs/1006.0448},
                              url2         = {https://pdfs.semanticscholar.org/31f0/4f8f83365fabf7ba9c9be1179c0da6815128.pdf}
                            }
    			
    			
    					
    Grünewälder, S. 2009 Application of statistical estimation theory, adaptive sensory systems and time series processing to reinforcement learning Fakultät IV-Elektrotechnik und Informatik, Technische Universität Berlin .
     
    phdthesis
    BibTeX:
    			
    			
                            @phdthesis{Gruenewaelder-2009,
                              author       = {Gr{\"u}new{\"a}lder, Steffen},
                              title        = {Application of statistical estimation theory, adaptive sensory systems and time series processing to reinforcement learning},
                              school       = {Fakult{\"{a}}t IV-Elektrotechnik und Informatik, Technische Universit{\"{a}}t Berlin},
                              year         = {2009},
    			  url          = {https://www.deutsche-digitale-bibliothek.de/binary/6Z7DL3XYO7BSFNSVAOKVOQE45HNN23J7/full/1.pdf}
                            }
    			
    			
    					
    Gu, S.; Liu, Y.; Zhang, N. & Du, D. 2015 Fault detection approach based on weighted principal component analysis applied to continuous stirred tank reactor The Open Mechanical Engineering Journal , 9, 966-972.
     
    article
    Abstract: Fault detection approaches based on principal component analysis (PCA) may not perform well when the process is time-varying, because time variation can have an unfavorable influence on feature extraction. To solve this problem, a modified PCA considering variance maximization is proposed, referred to as weighted PCA (WPCA). WPCA can obtain the slow-feature information of observed data in a time-varying system. The monitoring statistical indices are based on the WPCA model, and their confidence limits are computed by kernel density estimation (KDE). A simulation example on a continuous stirred tank reactor (CSTR) shows that the proposed method achieves better performance than the conventional PCA model in terms of both fault detection rate and fault detection time.
    BibTeX:
    			
    			
                            @article{GuLiuEtAl-2015,
                              author       = {Gu, Shanmao and Liu, Yunlong and Zhang, Ni and Du, De},
                              title        = {Fault detection approach based on weighted principal component analysis applied to continuous stirred tank reactor},
                              journal      = {The Open Mechanical Engineering Journal},
                              year         = {2015},
                              volume       = {9},
                              pages        = {966--972},
    			  url          = {https://benthamopen.com/contents/pdf/TOMEJ/TOMEJ-9-966.pdf},
                              doi          = {http://doi.org/10.2174/1874155x01509010966}
                            }
    			
    			
    					
    Gu, X.; Liu, C. & Wang, S. 2013 Supervised slow feature analysis for face recognition Biometric Recognition , Lecture Notes in Computer Science , 8232, 178-184.
    Eds. Sun, Z.; Shan, S.; Yang, G.; Zhou, J.; Wang, Y. & Yin, Y.
    Publ. Springer International Publishing.
     
    incollection
    Abstract: Slow feature analysis (SFA) is a new method based on the slowness principle and extracts slowly varying signals out of the input data. However, traditional SFA cannot be directly performed on those dataset without an obvious temporal structure. In this paper, a novel supervised slow feature analysis (SSFA) is proposed, which constructs pseudo-time series by taking advantage of the consensus information. Extensive experiments on AR and PIE face databases demonstrate superiority of our proposed method.
    BibTeX:
    			
    			
                            @incollection{GuLiuEtAl-2013,
                              author       = {Gu, Xingjian and Liu, Chuancai and Wang, Sheng},
                              title        = {Supervised slow feature analysis for face recognition},
                              booktitle    = {Biometric Recognition},
                              publisher    = {Springer International Publishing},
                              year         = {2013},
                              volume       = {8232},
                              pages        = {178--184},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-319-02961-0_22},
                              doi          = {http://doi.org/10.1007/978-3-319-02961-0_22}
                            }
    			
    			
    					
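    The supervised SFA entry above replaces genuine temporal neighbors with pseudo-time series built from samples that share a class label. The sketch below is a deliberately simplified reading of that idea, not the published SSFA algorithm: within-class difference vectors stand in for temporal derivatives, and a generalized eigenproblem finds projections in which those differences are small relative to the overall variance. The toy data, the consecutive-pair pairing rule, and all parameters are assumptions made for illustration.

        # Hedged sketch: label-driven "slowness" for feature extraction. Within-class
        # sample differences stand in for temporal derivatives; this simplification
        # is assumed here and is not the published SSFA algorithm.
        import numpy as np
        from scipy.linalg import eigh

        def supervised_slow_features(X, y, n_out):
            """Project X (N x d) so that within-class differences have minimal variance."""
            X = X - X.mean(axis=0)
            diffs = []
            for label in np.unique(y):
                Xc = X[y == label]
                diffs.append(np.diff(Xc, axis=0))   # pseudo-time series: consecutive class members
            dX = np.vstack(diffs)
            A = dX.T @ dX / len(dX)                 # within-class "slowness" scatter
            B = X.T @ X / len(X)                    # total covariance (unit-variance constraint)
            evals, evecs = eigh(A, B)               # generalized eigenproblem, ascending eigenvalues
            W = evecs[:, :n_out]                    # smallest eigenvalues = most invariant directions
            return X @ W, W

        # Toy data: class identity on one axis, a shared high-variance nuisance on the other.
        rng = np.random.default_rng(4)
        n = 200
        labels = np.repeat([0, 1], n)
        class_axis = np.where(labels[:, None] == 0, -1.0, 1.0) + 0.1 * rng.standard_normal((2 * n, 1))
        nuisance = 5.0 * rng.standard_normal((2 * n, 1))
        X = np.hstack([class_axis, nuisance])

        Z, W = supervised_slow_features(X, labels, n_out=1)
        gap = abs(Z[labels == 0].mean() - Z[labels == 1].mean()) / Z.std()
        print("between-class separation of the extracted feature:", round(gap, 2))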
    Gu, X.; Liu, C. & Wang, S. 2015 Adaptive unsupervised slow feature analysis for feature extraction Journal of Electronic Imaging , 24(2), 023021-023021.
    Publ. International Society for Optics and Photonics.
     
    article
    Abstract: Slow feature analysis (SFA) extracts slowly varying features out of the input data and has been successfully applied on pattern recognition. However, SFA heavily relies on the constructed time series when SFA is applied on databases that neither have obvious temporal structure nor have label information. Traditional SFA constructs time series based on k-nearest neighborhood (k-NN) criterion. Specifically, the time series set constructed by k-NN criterion is likely to include noisy time series or lose suitable time series because the parameter k is difficult to determine. To overcome these problems, a method called adaptive unsupervised slow feature analysis (AUSFA) is proposed. First, AUSFA designs an adaptive criterion to generate time series for characterizing submanifold. The constructed time series have two properties: (1) two points of time series lie on the same submanifold and (2) the submanifold of the time series is smooth. Second, AUSFA seeks projections that simultaneously minimize the slowness scatter and maximize the fastness scatter to extract slow discriminant features. Extensive experimental results on three benchmark face databases demonstrate the effectiveness of our proposed method.
    BibTeX:
    			
    			
                            @article{GuLiuEtAl-2015a,
                              author       = {Gu, Xingjian and Liu, Chuancai and Wang, Sheng},
                              title        = {Adaptive unsupervised slow feature analysis for feature extraction},
                              journal      = {Journal of Electronic Imaging},
                              publisher    = {International Society for Optics and Photonics},
                              year         = {2015},
                              volume       = {24},
                              number       = {2},
                              pages        = {023021--023021},
    			  url          = {http://electronicimaging.spiedigitallibrary.org/article.aspx?articleid=2213384},
                              url2         = {https://www.researchgate.net/profile/Chuancai_Liu3/publication/277594831_Adaptive_unsupervised_slow_feature_analysis_for_feature_extraction/links/5571056108aef8e8dc632db5.pdf},
                              doi          = {http://doi.org/10.1117/1.jei.24.2.023021}
                            }
    			
    			
    					
    Gu, X.; Liu, C.; Wang, S. & Zhao, C. 2015 Feature extraction using adaptive slow feature discriminant analysis Neurocomputing , 154, 139-148.
     
    article
    Abstract: Slow feature discriminant analysis (SFDA) is an attractive biologically inspired learning method to extract discriminant features for classification. However, SFDA heavily relies on the constructed time series. For discriminant analysis, SFDA cannot make full use of discriminant power for classification, because the type of data distribution is unknown. To address those problems, we propose a new feature extraction method called adaptive slow feature discriminant analysis (ASFDA) in this paper. First, we design a new adaptive criterion to generate within-class time series. The time series have two properties: (1) a pair of time series lies on the same sub-manifold, (2) the sub-manifold of a pair of time series is smooth. Second, ASFDA seeks projections to minimize within-class temporal variation and maximize between-class temporal variation simultaneously based on maximum margin criterion. ASFDA provides an adaptive parameter to balance between-class temporal variation and within-class temporal variation to obtain an optimal discriminant subspace. Experimental results on three benchmark face databases demonstrate that our proposed ASFDA is superior to some state-of-the-art methods.
    BibTeX:
    			
    			
                            @article{GuLiuEtAl-2015b,
                              author       = {Xingjian Gu and Chuancai Liu and Sheng Wang and Cairong Zhao},
                              title        = {Feature extraction using adaptive slow feature discriminant analysis},
                              journal      = {Neurocomputing},
                              year         = {2015},
                              volume       = {154},
                              pages        = {139--148},
    			  url          = {http://www.sciencedirect.com/science/article/pii/S0925231214016749},
                              url2         = {https://www.researchgate.net/profile/Chuancai_Liu3/publication/272102390_Feature_extraction_using_adaptive_slow_feature_discriminant_analysis/links/55701db508aec226830ac10f.pdf},
                              doi          = {http://doi.org/10.1016/j.neucom.2014.12.010}
                            }
    			
    			
    					
    Gu, X.; Liu, C.; Wang, S.; Zhao, C. & Wu, S. 2015 Uncorrelated slow feature discriminant analysis using globality preserving projections for feature extraction Neurocomputing , 168, 488-499.
     
    article
    Abstract: Slow Feature Discriminant Analysis (SFDA) is a supervised feature extraction method for classification inspired by biological mechanism. However, SFDA only considers the local geometrical structure information of data and ignores the global geometrical structure information. Furthermore, previous works have demonstrated that uncorrelated features of minimum redundancy are effective for classification. In this paper, a novel method called uncorrelated slow feature discriminant analysis using globality preserving projections (USFDA-GP) is proposed for feature extraction and recognition. In USFDA-GP, two kinds of global information are imposed to the objective function of conventional SFDA for respecting some more global geometric structures. We also provide an analytical solution by simple eigenvalue decomposition to the optimal model instead of previous iterative method. Experimental results on Extended YaleB, CMU PIE and LFW-a face databases demonstrate the effectiveness of our proposed method.
    BibTeX:
    			
    			
                            @article{GuLiuEtAl-2015c,
                              author       = {Xingjian Gu and Chuancai Liu and Sheng Wang and Cairong Zhao and Songsong Wu},
                              title        = {Uncorrelated slow feature discriminant analysis using globality preserving projections for feature extraction},
                              journal      = {Neurocomputing},
                              year         = {2015},
                              volume       = {168},
                              pages        = {488--499},
    			  url          = {http://www.sciencedirect.com/science/article/pii/S0925231215007778},
                              doi          = {http://doi.org/10.1016/j.neucom.2015.05.079}
                            }
    			
    			
    					
    Gu, X.; Liu, C. & Yang, Z. 2014 Dimensionality reduction based on supervised slow feature analysis for face recognition International Journal of Signal Processing, Image Processing and Pattern Recognition , 7(1).
    Publ. Citeseer.
     
    article
    Abstract: Slow feature analysis (SFA) is motivated by a biological model and extracts slowly varying features from a quickly varying input signal. However, traditional slow feature analysis is an unsupervised method for extracting slow or invariant features and cannot be directly applied to data sets without an obvious temporal structure, e.g., face databases. In this paper, we propose a supervised slow feature analysis (SSFA) for dimensionality reduction in face recognition. First, a new criterion is developed to construct a pseudo-time series for data sets without an obvious temporal structure. Then, the first-order derivative at each point in the pseudo-time series is computed in the form of vectors. Finally, we construct the objective function of SSFA, which keeps the second moment of the first-order derivatives as small as possible in the embedding space. SSFA is able to extract the invariant features for each class and preserve the local structure in the embedding space simultaneously. Experimental results on the Yale, ORL, AR, and FERET face databases show the effectiveness of the proposed algorithm.
    BibTeX:
    			
    			
                            @article{GuLiuEtAl-2014,
                              author       = {Gu, Xingjian and Liu, Chuancai and Yang, Zhangjing},
                              title        = {Dimensionality reduction based on supervised slow feature analysis for face recognition},
                              journal      = {International Journal of Signal Processing, Image Processing and Pattern Recognition},
                              publisher    = {Citeseer},
                              year         = {2014},
                              volume       = {7},
                              number       = {1},
    			  url          = {http://www.sersc.org/journals/IJSIP/vol7_no1/2.pdf},
                              doi          = {http://doi.org/10.14257/ijsip.2014.7.1.02}
                            }
    			
    			
    					
    Gu, X.; Shu, X.; Ren, S. & Xu, H. 2018 Two Dimensional Slow Feature Discriminant Analysis via L2,1 Norm Minimization for Feature Extraction. KSII Transactions on Internet & Information Systems , 12(7).
     
    article
    BibTeX:
    			
    			
                            @article{GuShuEtAl-2018,
                              author       = {Gu, Xingjian and Shu, Xiangbo and Ren, Shougang and Xu, Huanliang},
                              title        = {Two Dimensional Slow Feature Discriminant Analysis via L2,1 Norm Minimization for Feature Extraction.},
                              journal      = {KSII Transactions on Internet \& Information Systems},
                              year         = {2018},
                              volume       = {12},
                              number       = {7},
                              doi          = {http://doi.org/10.3837/tiis.2018.07.012}
                            }
    			
    			
    					
    Guo, F.; Shang, C.; Huang, B.; Wang, K.; Yang, F. & Huang, D. 2016 Monitoring of operating point and process dynamics via probabilistic slow feature analysis Chemometrics and Intelligent Laboratory Systems , 151, 115-125.
    Publ. Elsevier.
     
    article
    Abstract: Traditional multivariate statistical process monitoring (MSPM) approaches aim at detecting deviations from the routine operating condition. However, if the process remains well controlled by feedback controllers in spite of some deviations, alarms triggered in this context are no longer necessary. In this regard, slow feature analysis (SFA) has been recently applied to MSPM tasks by Shang et al. (2015), which allows for separate distributions of both nominal operating points and dynamic behaviors. Since a poor control performance is always characterized by dynamics anomalies, one can discriminate nominal operating deviations with acceptable control performance from real faults that deserve more attention, according to the temporal dynamics of processes. In this work, we propose a new process monitoring scheme based upon probabilistic SFA (PSFA). Compared to deterministic SFA, its probabilistic extension takes the measurement noise into consideration and allows for missing data imputation conveniently, which is beneficial for process monitoring. Apart from generic T2 and SPE metrics for monitoring the operating point, a novel S2 statistic is considered for exclusively monitoring temporal behaviors of processes. Two case studies are provided to show the efficacy of the proposed monitoring approach.
    BibTeX:
    			
    			
                            @article{GuoShangEtAl-2016,
                              author       = {Guo, Feihong and Shang, Chao and Huang, Biao and Wang, Kangcheng and Yang, Fan and Huang, Dexian},
                              title        = {Monitoring of operating point and process dynamics via probabilistic slow feature analysis},
                              journal      = {Chemometrics and Intelligent Laboratory Systems},
                              publisher    = {Elsevier},
                              year         = {2016},
                              volume       = {151},
                              pages        = {115--125},
    			  url          = {http://www.sciencedirect.com/science/article/pii/S0169743915003329},
                              doi          = {http://doi.org/10.1016/j.chemolab.2015.12.017}
                            }
    			
    			
    					
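    The Guo et al. entry above monitors both the operating point and the process dynamics with statistics computed from slow features. The sketch below illustrates that kind of index in a simplified, assumed form (ordinary sample covariances instead of the probabilistic SFA model of the paper): a Hotelling-type T2 on the features for the operating point and an analogous S2 on their first differences for the dynamics, with control limits taken as empirical quantiles of an in-control training period.

        # Hedged sketch of SFA-style monitoring indices: a Hotelling-type T^2 on the
        # slow features and an analogous S^2 on their first differences. Simplified
        # stand-in for the probabilistic-SFA statistics described in the paper.
        import numpy as np

        def monitoring_stats(features):
            """Per-sample T^2 and S^2 for a (T x k) matrix of slow features."""
            y = features - features.mean(axis=0)
            t2 = np.einsum('ij,jk,ik->i', y, np.linalg.inv(np.cov(y, rowvar=False)), y)
            dy = np.diff(y, axis=0)
            s2 = np.einsum('ij,jk,ik->i', dy, np.linalg.inv(np.cov(dy, rowvar=False)), dy)
            return t2, s2

        rng = np.random.default_rng(5)
        T, k = 2000, 3
        # Toy "slow features": low-pass filtered noise, standing in for in-control data.
        raw = rng.standard_normal((T, k))
        smooth = np.apply_along_axis(lambda c: np.convolve(c, np.ones(25) / 25.0, mode='same'), 0, raw)

        t2, s2 = monitoring_stats(smooth)
        # Empirical 99% control limits from the in-control period; a later sample whose
        # T^2 exceeds its limit indicates an operating-point shift, while a violated S^2
        # limit points at anomalous dynamics.
        print("T^2 99% limit:", round(float(np.quantile(t2, 0.99)), 2),
              " S^2 99% limit:", round(float(np.quantile(s2, 0.99)), 2))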
    Ha Quang, M. & Wiskott, L. 2011 Slow feature analysis and decorrelation filtering for separating correlated sources Proc. 13th International Conference on Computer Vision (ICCV), Nov 6-13, Barcelona, Spain , 866-873.
    Publ. Institute of Electrical and Electronics Engineers (IEEE).
     
    inproceedings
    Abstract: We generalize the method of Slow Feature Analysis for vector-valued functions of multivariables and apply it to the problem of blind source separation, in particular image separation. For the linear case, exact mathematical analysis is given, which shows in particular that the sources are perfectly separated by SFA if and only if they and their first order derivatives are uncorrelated. When the sources are correlated, we apply the following technique called decorrelation filtering: use a linear filter to decorrelate the sources and their derivatives, then apply the separating matrix obtained on the filtered sources to the original sources. We show that if the filtered sources are perfectly separated by this matrix, then so are the original sources. We show how to numerically obtain such a decorrelation filter by solving a nonlinear optimization problem. This technique can also be applied to other linear separation methods, whose output signals are decorrelated, such as ICA.
    BibTeX:
    			
    			
                            @inproceedings{HaQuangWiskott-2011,
                              author       = {Ha Quang, Minh and Wiskott, L.},
                              title        = {Slow feature analysis and decorrelation filtering for separating correlated sources},
                              booktitle    = {Proc.\ 13\textsuperscript{th} International Conference on Computer Vision (ICCV), Nov 6-13, Barcelona, Spain},
                              publisher    = {Institute of Electrical and Electronics Engineers ({IEEE})},
                              year         = {2011},
                              pages        = {866--873},
    			  url          = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=6126327&abstractAccess=no&userType=inst},
                              doi          = {http://doi.org/10.1109/ICCV.2011.6126327}
                            }
    			
    			
    					
    Ha Quang, M. & Wiskott, L. 2013 Multivariate slow feature analysis and decorrelation filtering for blind source separation IEEE Trans. on Image Processing , 22(7), 2737-2750.
     
    article
    Abstract: We generalize the method of Slow Feature Analysis (SFA) for vector-valued functions of several variables and apply it to the problem of blind source separation, in particular to image separation. It is generally necessary to use multivariate SFA instead of univariate SFA for separating multi-dimensional signals. For the linear case, an exact mathematical analysis is given, which shows in particular that the sources are perfectly separated by SFA if and only if they and their first order derivatives are uncorrelated. When the sources are correlated, we apply the following technique called decorrelation filtering: use a linear filter to decorrelate the sources and their derivatives in the given mixture, then apply the unmixing matrix obtained on the filtered mixtures to the original mixtures. If the filtered sources are perfectly separated by this matrix, so are the original sources. A decorrelation filter can be numerically obtained by solving a nonlinear optimization problem. This technique can also be applied to other linear separation methods, whose output signals are decorrelated, such as ICA. When there are more mixtures than sources, one can determine the actual number of sources by using a regularized version of SFA with decorrelation filtering. Extensive numerical experiments using SFA and ICA with decorrelation filtering, supported by mathematical analysis, demonstrate the potential of our methods for solving problems involving blind source separation.
    BibTeX:
    			
    			
                            @article{HaQuangWiskott-2013,
                              author       = {Ha Quang, Minh and Wiskott, L.},
                              title        = {Multivariate slow feature analysis and decorrelation filtering for blind source separation},
                              journal      = {IEEE Trans.\ on Image Processing},
                              year         = {2013},
                              volume       = {22},
                              number       = {7},
                              pages        = {2737--2750},
    			  url          = {http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6497610},
                              doi          = {http://doi.org/10.1109/TIP.2013.2257808}
                            }
    			
    			
    					
    Hansard, M. & Horaud, R. 2010 Complex cells and the representation of local image-structure Research Report, INRIA (RR-7485, inria-00546779).
     
    techreport
    Abstract: The receptive fields of simple cells in the visual cortex can be understood as linear filters. These filters can be modelled by Gabor functions, or by Gaussian derivatives. Gabor functions can also be combined in an ‘energy model’ of the complex cell response. This paper proposes an alternative model of the complex cell, based on Gaussian derivatives. It is most important to account for the insensitivity of the complex response to small shifts of the image. The new model uses a linear combination of the first few derivative filters, at a single position, to approximate the first derivative filter, at a series of adjacent positions. The maximum response, over all positions, gives a signal that is insensitive to small shifts of the image. This model, unlike previous approaches, is based on the scale space theory of visual processing. In particular, the complex cell is built from filters that respond to the 2-d differential structure of the image. The computational aspects of the new model are studied in one and two dimensions, using the steerability of the Gaussian derivatives. The response of the model to basic images, such as edges and gratings, is derived formally. The response to natural images is also evaluated, using statistical measures of shift insensitivity. The relevance of the new model to the cortical image representation is discussed.
    BibTeX:
    			
    			
                            @techreport{HansardHoraud-2010,
                              author       = {Miles Hansard and Radu Horaud},
                              title        = {Complex cells and the representation of local image-structure},
                              school       = {INRIA},
                              year         = {2010},
                              number       = {RR-7485, inria-00546779},
    			  url          = {https://hal.inria.fr/inria-00546779/file/RR-7485.pdf},
                              url2         = {https://pdfs.semanticscholar.org/3838/031546a0a61a1600b5e3b8316413e13ae7a6.pdf}
                            }
    			
    			
    					
    Hansard, M. & Horaud, R. 2011 A differential model of the complex cell Neural Computation , 23(9), 2324-2357.
    Publ. MIT Press - Journals.
     
    article
    Abstract: The receptive fields of simple cells in the visual cortex can be understood as linear filters. These filters can be modelled by Gabor functions, or by Gaussian derivatives. Gabor functions can also be combined in an ‘energy model’ of the complex cell response. This paper proposes an alternative model of the complex cell, based on Gaussian derivatives. It is most important to account for the insensitivity of the complex response to small shifts of the image. The new model uses a linear combination of the first few derivative filters, at a single position, to approximate the first derivative filter, at a series of adjacent positions. The maximum response, over all positions, gives a signal that is insensitive to small shifts of the image. This model, unlike previous approaches, is based on the scale space theory of visual processing. In particular, the complex cell is built from filters that respond to the 2-D differential structure of the image. The computational aspects of the new model are studied in one and two dimensions, using the steerability of the Gaussian derivatives. The response of the model to basic images, such as edges and gratings, is derived formally. The response to natural images is also evaluated, using statistical measures of shift insensitivity. The neural implementation and predictions of the model are discussed.
    BibTeX:
    			
    			
                            @article{HansardHoraud-2011,
                              author       = {Hansard, Miles and Horaud, Radu},
                              title        = {A differential model of the complex cell},
                              journal      = {Neural Computation},
                              publisher    = {{MIT} Press - Journals},
                              year         = {2011},
                              volume       = {23},
                              number       = {9},
                              pages        = {2324--2357},
    			  url          = {http://www.mitpressjournals.org/doi/10.1162/NECO_a_00163},
                              url2         = {http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.225.7084&rep=rep1&type=pdf},
                              doi          = {http://doi.org/10.1162/neco_a_00163}
                            }
    			
    			
    					
    Harrison, M.; Geman, S. & Bienenstock, E. 2004 Using statistics of natural images to facilitate automatic receptive field analysis Technical report, Division of Applied Mathematics, Brown University, Providence, USA (APPTS Report #04-2).
    Publ. Citeseer.
     
    techreport
    BibTeX:
    			
    			
                            @techreport{HarrisonGemanEtAl-2004,
                              author       = {Harrison, Matthew and Geman, Stuart and Bienenstock, Elie},
                              title        = {Using statistics of natural images to facilitate automatic receptive field analysis},
                              publisher    = {Citeseer},
                              school       = {Division of Applied Mathematics, Brown University, Providence, USA},
                              year         = {2004},
                              number       = {APPTS Report \#04-2},
    			  url          = {http://www.dam.brown.edu/ptg/REPORTS/04-2.pdf},
                              url2         = {http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.58.807&rep=rep1&type=pdf}
                            }
    			
    			
    					
    Hashimoto, W. 2003 Quadratic forms in natural images Network: Computation in Neural Systems , 14(4), 765-788.
    Publ. Taylor & Francis.
     
    article
    Abstract: Several studies have succeeded in correlating natural image statistics with receptive field properties of neurons in the primary visual cortex. If we determine the parameters of linear transformations that make their output values as independent as possible when input data are natural images, we obtain parameter values that correspond to simple cell characteristics. It was also proved that, by making output values as temporally coherent as possible, simple cell characteristics also emerge. However, complex cell properties have not been fully explained by previous studies of natural image statistics. In this study, we examine whether we could reproduce complex cell properties by determining the parameters of two-layer networks that make their outputs as independent and sparse as possible or as temporally coherent as possible. Input–output functions of two-layer networks correspond to quadratic forms and they form a class of functions that includes complex cell responses and many other functions. Therefore, we employed two-layer networks as a framework for discussing complex cell properties as in previous studies. By maximizing the independence and sparseness of output values of two-layer networks without considering the temporal structure of input images, squared responses of simple cells are obtained and complex cell properties are not reproduced. On the other hand, by maximizing the temporal coherence of output, we obtain complex cell properties among other kinds of input–output functions. In previous studies, the measure of temporal coherence was the squared difference between the responses to two consecutive input images. We obtain two-layer networks that minimize this measure and show that some of them exhibit properties of complex cells but not clearly. We propose the sparseness of difference between responses to two consecutive inputs as an alternative measure of temporal coherence. We formulate an algorithm to maximize the sparseness of difference and show that complex cell properties emerge more clearly.
    BibTeX:
    			
    			
                            @article{Hashimoto-2003,
                              author       = {Wakako Hashimoto},
                              title        = {Quadratic forms in natural images},
                              journal      = {Network: Computation in Neural Systems},
                              publisher    = {Taylor \& Francis},
                              year         = {2003},
                              volume       = {14},
                              number       = {4},
                              pages        = {765--788},
    			  url          = {http://www.tandfonline.com/doi/abs/10.1088/0954-898X_14_4_308},
                              doi          = {http://doi.org/10.1088/0954-898x/14/4/308}
                            }
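
    For orientation, the objects compared in this abstract can be written compactly (notation mine, not the paper's): a two-layer network corresponds to a quadratic form in the input x(t), and the two temporal-coherence measures act on its output y(t),

        y(t) = \mathbf{x}(t)^{\top} H\, \mathbf{x}(t) + \mathbf{f}^{\top} \mathbf{x}(t) + c,
        \qquad d(t) = y(t+1) - y(t),
        \qquad \Delta = \bigl\langle d(t)^{2} \bigr\rangle_{t} \quad \text{(classical measure, to be minimized)} .

    The proposed alternative maximizes a sparseness measure of the difference signal d(t); excess kurtosis, \langle d^{4} \rangle_t / \langle d^{2} \rangle_t^{2} - 3, is given here only as one common such measure, not necessarily the one used in the paper.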
    			
    			
    					
    He, Z.; Li, X.; Zhang, Z.; Zhang, Y.; Xiao, J. & Zhou, X. 2016 Structure-aware slow feature analysis for age estimation IEEE Signal Processing Letters , 23(12), 1702-1706.
    Publ. IEEE.
     
    article
    Abstract: As an important and challenging problem in computer vision, face age estimation is typically cast as a classification or regression problem over a set of face samples. However, most existing efforts to age estimation usually cope with the face samples individually, which do not take full advantage of the temporal structure and contextual structure of the face samples. In this letter, we propose an age estimation approach named structure-aware slow feature analysis, which is capable of effectively capturing the structure of human faces in the aspects of time-related smoothness for progressive age variation as well as face-related attribute constraints for face age consistency. As a result, we present an iterative optimization scheme to effectively learn the slowly varying feature transformation. Experimental results demonstrate the effectiveness of our approach on the Morph dataset.
    BibTeX:
    			
    			
                            @article{HeLiEtAl-2016,
                              author       = {He, Zhouzhou and Li, Xi and Zhang, Zhongfei and Zhang, Yaqing and Xiao, Jun and Zhou, Xue},
                              title        = {Structure-aware slow feature analysis for age estimation},
                              journal      = {IEEE Signal Processing Letters},
                              publisher    = {IEEE},
                              year         = {2016},
                              volume       = {23},
                              number       = {12},
                              pages        = {1702--1706},
    			  url          = {http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7549096},
                              doi          = {http://doi.org/10.1109/lsp.2016.2602538}
                            }
    			
    			
    					
    Hein, K. 2009 Lernende Klassifikation beschleunigungsbasierter 3D-Gesten des Wii-Controllers Projektbericht, University of Applied Sciences Cologne, Gummersbach .
     
    misc
    BibTeX:
    			
    			
                            @misc{Hein-2009,
                              author       = {Hein, Kristine},
                              title        = {Lernende {K}lassifikation beschleunigungsbasierter {3D}-{G}esten des {W}ii-{C}ontrollers},
                              year         = {2009},
                              howpublished = {Projektbericht, University of Applied Sciences Cologne, Gummersbach},
    			  url          = {http://maanvs03.gm.fh-koeln.de/webpub/CIOPReports.d/Hein10b.d/Klassifikation3D.pdf}
                            }
    			
    			
    					
    Hein, K. 2010 Klassifizierung von beschleunigungsbasierten 3D-Gesten des Wii-Controllers Fachhochschule Köln, Campus Gummersbach, Fakultät für Informatik und Ingenieurwissenschaften.
     
    mastersthesis
    Abstract: This thesis investigates Slow Feature Analysis (SFA) with regard to its suitability for gesture recognition based on device-based 3D acceleration data from the Wii controller. Previous approaches to classifying acceleration-based 3D gestures frequently use HMM-based methods; the start and end of a gesture are usually detected either with a rest-position filter or by means of external markers added when the gestures are recorded. Here a different approach, gesture recognition with Slow Feature Analysis (SFA), is presented, examined in detail and compared with other established methods. SFA is a learning algorithm that finds the most slowly varying signals in a rapidly changing input signal and, based on them, learns to infer the gesture classes. SFA has already been applied successfully to the recognition of handwritten digits and, in this approach to gesture recognition, it achieves results comparable to other established classification methods. For segmenting a gesture signal, i.e. determining the start and end point of a gesture, several SFA-based variants and an alternative rule-based approach were examined; these yielded results comparable to an established dynamic segmentation method.
    BibTeX:
    			
    			
                            @mastersthesis{Hein-2010,
                              author       = {Kristine Hein},
                              title        = {{K}lassifizierung von beschleunigungsbasierten 3{D-G}esten des {W}ii-{C}ontrollers},
                              school       = {Fachhochschule K{\"{o}}ln, Campus Gummersbach, Fakult{\"{a}}t f{\"{u}}r Informatik und Ingenieurwissenschaften},
                              year         = {2010},
    			  url          = {https://epb.bibl.th-koeln.de/files/236/Masterthesis_final.pdf}
                            }
    			
    			
    					
    Hein, K. 2011 Gestenerkennung mit der SFA: Klassifizierung von beschleunigungsbasierten 3D-Gesten des Wii-Controllers Köln, Fachhochsch., Masterarbeit, 2010.
     
    mastersthesis
    Abstract: This thesis investigates Slow Feature Analysis (SFA) with regard to its suitability for gesture recognition based on device-based 3D acceleration data from the Wii controller. Previous approaches to classifying acceleration-based 3D gestures frequently use HMM-based methods; the start and end of a gesture are usually detected either with a rest-position filter or by means of external markers added when the gestures are recorded. Here a different approach, gesture recognition with Slow Feature Analysis (SFA), is presented, examined in detail and compared with other established methods. SFA is a learning algorithm that finds the most slowly varying signals in a rapidly changing input signal and, based on them, learns to infer the gesture classes. SFA has already been applied successfully to the recognition of handwritten digits and, in this approach to gesture recognition, it achieves results comparable to other established classification methods. For segmenting a gesture signal, i.e. determining the start and end point of a gesture, several SFA-based variants and an alternative rule-based approach were examined; these yielded results comparable to an established dynamic segmentation method.
    BibTeX:
    			
    			
                            @mastersthesis{Hein-2011,
                              author       = {Hein, Kristine},
                              title        = {Gestenerkennung mit der {SFA}: {K}lassifizierung von beschleunigungsbasierten {3D}-{G}esten des {W}ii-{C}ontrollers},
                              school       = {K{\"o}ln, Fachhochsch., Masterarbeit, 2010},
                              year         = {2011}
                            }
    			
    			
    					
    Hinze, C. 2012 Optimale Stimuli in einem hierarchischen SFA-Netzwerk. Klinik für Neurologie der Charité Berlin, Institut für theoretische Biologie der Humboldt Universität Berlin.
     
    phdthesis
    BibTeX:
    			
    			
                            @phdthesis{Hinze-2012,
                              author       = {Christian Hinze},
                              title        = {Optimale {S}timuli in einem hierarchischen {SFA}-{N}etzwerk.},
                              school       = {Klinik f{\"{u}}r Neurologie der Charit{\'{e}} Berlin, Institut f{\"{u}}r theoretische Biologie der Humboldt Universit{\"{a}}t Berlin},
                              year         = {2012},
                              url2         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/Hinze-2012-Doktorarbeit.pdf}
                            }
    			
    			
    					
    Hinze, C.; Wilbert, N. & Wiskott, L. 2009 Visualization of higher-level receptive fields in a hierarchical model of the visual system. Proc. 18th Annual Computational Neuroscience Meeting (CNS'09), July 18-23, Berlin, Germany .
     
    inproceedings
    Abstract: Early visual receptive fields have been measured extensively and are fairly well mapped. Receptive fields in higher areas, on the other hand, are very difficult to characterize, because it is not clear what they are tuned to and which stimuli to use to study them. Early visual receptive fields have been reproduced by computational models. Slow feature analysis (SFA), for instance, is an algorithm that finds functions that extract most slowly varying features from a multi-dimensional input sequence [1]. Applied to quasi-natural image sequences, i.e. image sequences derived from natural images by translation, rotation and zoom, SFA yields many properties of complex cells in V1 [2]. A hierarchical network of SFA units learns invariant object representations much like in IT [3]. These successes suggest that units of intermediate layers in the network might share properties with cells in V2 or V4. The goal of this project is therefore to develop techniques to visualize and characterize such units to understand how cells in V2/V4 might work. This is nontrivial because the units are highly nonlinear. The algorithm is gradient-based and applied in a cascade within the network. We start with a natural image patch as an input, which then gets optimized by gradient ascent to maximize the output of one particular unit. Figure 1 shows such optimal stimuli for units in the first (a, b) and the second layer (c, d). The latter can be associated with cells in V2/V4. We plan to extend this to higher layers and larger receptive fields and will also develop techniques to visualize the invariances of the units, i.e. those variations to the input that have little effect on the unit's output. The long-term goal is to provide a good stimulus set for characterizing cells in V2/V4. [Figure] Figure 1. Optimal stimuli of units in the first layer (a, b) and the second layer (c, d) of a hierarchical SFA network optimized for slowness and trained with quasi-natural image sequences. References 1. Wiskott L, Sejnowski TJ: Slow feature analysis: Unsupervised learning of invariances. Neural Computation 2002, 14:715-770. 2. Berkes P, Wiskott L: Slow feature analysis yields a rich repertoire of complex cell properties. J Vision 2005, 5:579-602. 3. Franzius M, Wilbert N, Wiskott L: Invariant object recognition with slow feature analysis. Proc 18th Int'l Conf on Artificial Neural Networks 2008, 961-970.
    BibTeX:
    			
    			
                            @inproceedings{HinzeWilbertEtAl-2009,
                              author       = {Christian Hinze and Niko Wilbert and Laurenz Wiskott},
                              title        = {Visualization of higher-level receptive fields in a hierarchical model of the visual system.},
                              booktitle    = {Proc.\ 18\textsuperscript{th} Annual Computational Neuroscience Meeting (CNS'09), July 18--23, Berlin, Germany},
                              year         = {2009},
    			  url          = {http://www.biomedcentral.com/1471-2202/10/S1/P158},
                              url3         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/HinzeWilbertEtAl-2009-ProcCNSBerlin-Poster-HigherRFs.pdf},
                              doi          = {http://doi.org/10.1186/1471-2202-10-S1-P158}
                            }
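
    A minimal sketch of the gradient-based visualization idea described above, assuming only that a trained unit is available as a callable mapping an image patch to a scalar response. The function name, the parameter values and the finite-difference gradient are my own choices, not the authors' implementation; they merely keep the sketch independent of the network internals.

        import numpy as np

        def optimal_stimulus(unit, x0, steps=200, lr=0.1, eps=1e-4):
            """Gradient ascent on unit(x), starting from a natural image patch x0."""
            x = np.array(x0, dtype=float)
            for _ in range(steps):
                grad = np.zeros_like(x)
                for i in range(x.size):                  # finite-difference gradient estimate
                    dx = np.zeros_like(x)
                    dx.flat[i] = eps
                    grad.flat[i] = (unit(x + dx) - unit(x - dx)) / (2 * eps)
                x = x + lr * grad                        # move toward a larger unit response
                x = x / max(np.linalg.norm(x), 1e-12)    # keep the stimulus norm bounded
            return x

    Repeating this from different natural-image starting points yields candidate optimal stimuli of the kind shown in the poster's Figure 1.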
    			
    			
    					
    Höfer, S. 2009 Analyse des Laufverhaltens von humanoiden Robotern mit der Slow Feature Analysis Studienarbeit, Humboldt-Universität zu Berlin, Institut für Informatik .
     
    misc
    Abstract: Slow Feature Analysis (SFA) is a learning method that extracts slowly varying components from a multi-dimensional input signal. It finds an input-output function that computes from the input signal several uncorrelated components ordered by their slowness. This input-output function can be computed offline and yields the slowest components, which are the optimal solution within a restricted family of functions. With a suitable choice of training data, the input-output function usually delivers near-optimal results on unseen test data as well, so the method can also be used online. SFA is applied to acceleration sensor data from a series of humanoid robots in order to obtain information about the robot's current state. The main goal is to extract, for a given walking pattern, a component that can serve the robot as an indicator when it is about to lose its balance and fall over. First, different ways of applying SFA are evaluated, trained and tested on the same data. The components found are presented and interpreted, among them several candidates suitable for fall detection. The input-output functions learned by SFA are then also applied to unseen test data from the same robot and from other models of the same robot family. The analysis shows that, depending on the training data, the learned parameters are in part robust enough to generalize and to serve as a fall detector on other robots of the same construction.
    BibTeX:
    			
    			
                            @misc{Hoefer-2009,
                              author       = {H{\"o}fer, Sebastian},
                              title        = {Analyse des {L}aufverhaltens von humanoiden {R}obotern mit der {S}low {F}eature {A}nalysis},
                              year         = {2009},
                              howpublished = {Studienarbeit, Humboldt-Universit{\"{a}}t zu Berlin, Institut f{\"{u}}r Informatik},
                              url2         = {http://www.neurorobotik.de/downloads/publications/2009%20Ho%CC%88fer%20-%20Analyse%20des%20Laufverhaltens%20mit%20der%20Slowe%20Feature%20Analysis.pdf}
                            }
    			
    			
    					
    Höfer, S. 2010 Anwendungen der Slow Feature Analysis in der humanoiden Robotik Diploma Thesis, Humboldt University of Berlin.
    , Germany  
    mastersthesis
    Abstract: This thesis deals with the Slow Feature Analysis (SFA), an unsupervised learning method stemming from the domain of theoretical biology, and its application in humanoid robotics. SFA is an algorithm that extracts abstract semantic features from a vectorial input signal. It is investigated which features SFA learns from proprioceptive, non-visual sensory data from a humanoid robot, and how the extracted features can be used for the robot’s self-perception and control. The thesis is divided into a theoretical and a practical part. The theoretical part describes the SFA algorithm and methods for the analysis of its results. Moreover, extensions and its relation to other methods are presented. The execution step of the SFA algorithm is reformulated in terms of an artificial neural model and optimised with respect to this model. In the practical part, two applications of SFA for a humanoid robot platform are presented. First, SFA is employed for the detection of static postures performed by the robot, as well as for dimensionality reduction of the sensory state space. The obtained results are compared to other methods for dimensionality reduction. Secondly, SFA is applied to a dynamic motion, more precisely, to a biped gait pattern. It is shown that SFA can be used within the sensorimotor loop that generates the gait pattern, while at the same time increasing the reactivity of the gait pattern without profound loss in stability.
    BibTeX:
    			
    			
                            @mastersthesis{Hoefer-2010,
                              author       = {Sebastian H{\"{o}}fer},
                              title        = {Anwendungen der slow feature analysis in der humanoiden robotik},
                              booktitle    = {Diploma Thesis},
                              school       = {Humboldt University of Berlin},
                              year         = {2010},
                              url2         = {http://www.sebastianhoefer.de/publications/Hoefer-SFA-Thesis.pdf}
                            }
    			
    			
    					
    Höfer, S. & Hild, M. 2010 Using slow feature analysis to improve the reactivity of a humanoid robot's sensorimotor gait pattern Proceedings of the International Conference on Fuzzy Computation and 2nd International Conference on Neural Computation , 212-219.
    Publ. Scitepress, Valencia, Spain.
     
    inproceedings
    Abstract: This paper presents an approach for increasing the reactivity of a humanoid robot’s gait, incorporating Slow Feature Analysis (SFA), an unsupervised learning algorithm issuing from the domain of theoretical biology. The main objective of this work is to find a means to detect disturbances in the gait pattern at an early stage without losing stability. Another goal is to investigate the general potential of SFA for using it within sensorimotor loops which to our knowledge has not been considered until now. The application of SFA within sensorimotor loops is motivated by pointing out its relation to second-order Volterra filters. Our experiments show that the overall reactivity of the gait pattern increases without any profound loss in stability, and that SFA appears to be suitable for the usage even at such levels of sensorimotor control that are directly involved into motor activity regulation.
    BibTeX:
    			
    			
                            @inproceedings{HoeferHild-2010,
                              author       = {Sebastian H{\"{o}}fer and Manfred Hild},
                              title        = {Using slow feature analysis to improve the reactivity of a humanoid robot's sensorimotor gait pattern},
                              booktitle    = {Proceedings of the International Conference on Fuzzy Computation and 2\textsuperscript{nd} International Conference on Neural Computation},
                              publisher    = {Scitepress},
                              year         = {2010},
                              pages        = {212--219},
    			  url          = {http://www.scitepress.org/DigitalLibrary/Link.aspx?doi=10.5220/0003082102120219},
                              url2         = {http://www.sebastianhoefer.de/publications/Hoefer_Hild-SFA_Sensorimotor_cr.pdf},
                              doi          = {http://doi.org/10.5220/0003082102120219}
                            }
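
    The relation to second-order Volterra filters mentioned in the abstract can be made explicit with a small sketch (my notation, not the authors'): SFA with quadratic expansion, applied to a time-delay embedding x(t-1), ..., x(t-N) of a scalar sensor signal, computes an input-output function of exactly the second-order Volterra form

        y(t) = h_0 + \sum_{i=1}^{N} h_1(i)\, x(t-i) + \sum_{i=1}^{N} \sum_{j=1}^{N} h_2(i,j)\, x(t-i)\, x(t-j),

    with the kernels h_0, h_1(i), h_2(i,j) fixed by the SFA optimization (slowest, unit-variance, mutually decorrelated outputs).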
    			
    			
    					
    Höfer, S.; Hild, M. & Kubisch, M. 2010 Using slow feature analysis to extract behavioural manifolds related to humanoid robot postures Tenth International Conference on Epigenetic Robotics , 43-50.
    , Örenas, Sweden  
    inproceedings
    Abstract: This paper demonstrates how Slow Feature Analysis (SFA), an unsupervised learning algorithm stemming from the domain of theoretical biology, can be used to extract behavioural manifolds related to a humanoid robot's body postures. On one hand, we show that SFA detects abstract semantic features, encoding high-level behaviours, which can be used for representation making and the classification of the robot's posture; on the other hand we propose a method for analysing the obtained SFA components in terms of the manifold that contains the robot's sensory states belonging to the detected postures. This allows further characterisation of the SFA results as well as a possible means for directed exploration of the sensory state space.
    BibTeX:
    			
    			
                            @inproceedings{HoeferHildEtAl-2010,
                              author       = {Sebastian H{\"{o}}fer and Manfred Hild and Matthias Kubisch},
                              title        = {Using slow feature analysis to extract behavioural manifolds related to humanoid robot postures},
                              booktitle    = {Tenth International Conference on Epigenetic Robotics},
                              year         = {2010},
                              pages        = {43--50},
                              url2         = {http://www.sebastianhoefer.de/publications/Hoefer_et_al-SFA_Behavioural_Manifolds.pdf}
                            }
    			
    			
    					
    Höfer, S.; Spranger, M. & Hild, M. 2012 Posture recognition based on slow feature analysis Language Grounding in Robots , 111-130.
    Eds. Steels, L. & Hild, M.
    Publ. Springer.
     
    incollection
    Abstract: Basic postures such as sit, stand and lie are ubiquitous in human interaction. In order to build robots that aid and support humans in their daily life, we need to understand how posture categories can be learned and recognized. This paper presents an unsupervised learning approach to posture recognition for a biped humanoid robot. The approach is based on Slow Feature Analysis (SFA), a biologically inspired algorithm for extracting slowly changing signals from signals varying on a fast time scale. Two experiments are carried out: First, we consider the problem of recognizing static postures in a multimodal sensory stream which consists of visual and proprioceptive stimuli. Secondly, we show how to extract a low-dimensional representation of the sensory state space which is suitable for posture recognition in a more complex setting. We point out that the beneficial performance of SFA in this task can be related to the fact that SFA computes manifolds which are used in robotics to model invariants in motion and behavior. Based on this insight, we also propose a method for using SFA components for guided exploration of the state space.
    BibTeX:
    			
    			
                            @incollection{HoeferSprangerEtAl-2012,
                              author       = {Sebastian H{\"{o}}fer and Michael Spranger and Manfred Hild},
                              title        = {Posture recognition based on slow feature analysis},
                              booktitle    = {Language Grounding in Robots},
                              publisher    = {Springer},
                              year         = {2012},
                              pages        = {111--130},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-1-4614-3064-3_6},
                              doi          = {http://doi.org/10.1007/978-1-4614-3064-3_6}
                            }
    			
    			
    					
    Hu, X.; Hu, S.; Huang, Y.; Zhang, H. & Wu, H. 2016 Video anomaly detection using deep incremental slow feature analysis network IET Computer Vision .
    Publ. IET.
     
    article
    BibTeX:
    			
    			
                            @article{HuHuEtAl-2016,
                              author       = {Hu, Xing and Hu, Shiqiang and Huang, Yingping and Zhang, Huanlong and Wu, Hanbing},
                              title        = {Video anomaly detection using deep incremental slow feature analysis network},
                              journal      = {IET Computer Vision},
                              publisher    = {IET},
                              year         = {2016},
    			  url          = {https://mr.crossref.org/iPage?doi=10.1049%2Fiet-cvi.2015.0271},
                              doi          = {http://doi.org/10.1049/iet-cvi.2015.0271}
                            }
    			
    			
    					
    Huang, X.; Zhu, T.; Zhang, L. & Tang, Y. 2014 A novel building change index for automatic building change detection from high-resolution remote sensing imagery Remote Sensing Letters , 5(8), 713-722.
    Publ. Taylor & Francis.
     
    article
    Abstract: In pace with rapid urbanization, urban areas in many countries are undergoing huge changes. The large spectral variance and spatial heterogeneity within the ‘buildings’ land cover class, as well as the similar spectral properties between buildings and other urban structures, make building change detection a challenging problem. In this work, we propose a set of novel building change indices (BCIs) by combining morphological building index (MBI) and slow feature analysis (SFA) for building change detection from high-resolution imagery. MBI is a recently developed automatic building detector for high-resolution imagery, which is able to highlight building components but simultaneously suppress other urban structures. SFA is an unsupervised learning algorithm that can discriminate the changed components from the unchanged ones for multitemporal images. By effectively integrating the information from MBI and SFA, the building change components can be automatically generated. Experiments conducted on the QuickBird 2002–2005 data-set are used to validate the effectiveness of the proposed building change detection framework.
    BibTeX:
    			
    			
                            @article{HuangZhuEtAl-2014,
                              author       = {Huang, Xin and Zhu, Tingting and Zhang, Liangpei and Tang, Yuqi},
                              title        = {A novel building change index for automatic building change detection from high-resolution remote sensing imagery},
                              journal      = {Remote Sensing Letters},
                              publisher    = {Taylor \& Francis},
                              year         = {2014},
                              volume       = {5},
                              number       = {8},
                              pages        = {713--722},
    			  url          = {http://www.tandfonline.com/doi/abs/10.1080/2150704X.2014.963732},
                              url2         = {http://www.lmars.whu.edu.cn/prof_web/zhangliangpei/rs/publication/A%20novel%20building%20change%20index%20for%20automatic%20building%20change%20detection%20from%20high-resolution%20remote%20sensing%20imagery.pdf},
                              doi          = {http://doi.org/10.1080/2150704x.2014.963732}
                            }
    			
    			
    					
    Huang, Y.; Zhao, J.; Liu, Y.; Luo, S.; Zou, Q. & Tian, M. 2011 Nonlinear dimensionality reduction using a temporal coherence principle Information Sciences , 181(16), 3284-3307.
    Publ. Elsevier BV.
     
    article
    Abstract: Temporal coherence principle is an attractive biologically inspired learning rule to extract slowly varying features from quickly varying input data. In this paper we develop a new Nonlinear Neighborhood Preserving (NNP) technique, by utilizing the temporal coherence principle to find an optimal low dimensional representation from the original high dimensional data. NNP is based on a nonlinear expansion of the original input data, such as polynomials of a given degree. It can be solved by the eigenvalue problem without using gradient descent and is guaranteed to find the global optimum. NNP can be viewed as a nonlinear dimensionality reduction framework which takes into consideration both time series and data sets without an obvious temporal structure. According to different situations, we introduce three algorithms of NNP, named NNP-1, NNP-2, and NNP-3. The objective function of NNP-1 is equal to Slow Feature Analysis (SFA), and it works well for time series such as image sequences. NNP-2 artificially constructs time series consisting of neighboring points for data sets without a clear temporal structure such as image data. NNP-3 is proposed for classification tasks, which can minimize the distances of neighboring points in the embedding space and ensure that the remaining points are as far apart as possible simultaneously. Furthermore, the kernel extension of NNP is also discussed in this paper. The proposed algorithms work very well on some image sequences and image data sets compared to other methods. Meanwhile, we perform the classification task on the MNIST handwritten digit database using the supervised NNP algorithms. The experimental results demonstrate that NNP is an effective technique for nonlinear dimensionality reduction tasks.
    BibTeX:
    			
    			
                            @article{HuangZhaoEtAl-2011,
                              author       = {YaPing Huang and JiaLi Zhao and YunHui Liu and SiWei Luo and Qi Zou and Mei Tian},
                              title        = {Nonlinear dimensionality reduction using a temporal coherence principle},
                              journal      = {Information Sciences},
                              publisher    = {Elsevier {BV}},
                              year         = {2011},
                              volume       = {181},
                              number       = {16},
                              pages        = {3284--3307},
    			  url          = {http://www.sciencedirect.com/science/article/pii/S0020025511001691},
                              doi          = {http://doi.org/10.1016/j.ins.2011.04.001}
                            }
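
    The statement that the objective can be solved by an eigenvalue problem without gradient descent is easy to illustrate. The following generic linear slowness solver is my own sketch, not the authors' NNP code; it would be applied after whatever nonlinear expansion and neighbourhood construction a given NNP variant prescribes.

        import numpy as np
        from scipy.linalg import eigh

        def slow_components(z, n_out=2):
            """z: (T, d) expanded signal; returns the n_out unit-variance projections
            with the smallest mean squared temporal difference."""
            z = z - z.mean(axis=0)
            dz = np.diff(z, axis=0)              # discrete temporal derivative
            A = dz.T @ dz / len(dz)              # covariance of the derivatives
            B = z.T @ z / len(z)                 # data covariance (unit-variance constraint)
            vals, vecs = eigh(A, B)              # generalized eigenproblem A v = lambda B v
            return z @ vecs[:, :n_out]           # slowest directions come first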
    			
    			
    					
    Huang, Y.; Zhao, J.; Tian, M.; Zou, Q. & Luo, S. 2009 Slow feature discriminant analysis and its application on handwritten digit recognition International Joint Conference on Neural Networks, 2009 (IJCNN 2009) , 1294-1297.
     
    inproceedings
    Abstract: Slow feature analysis (SFA) is an unsupervised algorithm that extracts slowly varying features from time series and has been applied successfully to pattern recognition. Based on SFA, this paper develops a new algorithm, slow feature discriminant analysis (SFDA), which can maximize the temporal variation of between-class time series and minimize the temporal variation of within-class time series simultaneously. Due to the added discrimination power, the performance on pattern recognition is improved compared to SFA. The experimental results on the MNIST handwritten digit database also show that the proposed algorithm is particularly attractive.
    BibTeX:
    			
    			
                            @inproceedings{HuangZhaoEtAl-2009,
                              author       = {Yaping Huang and Jiali Zhao and Mei Tian and Qi Zou and Siwei Luo},
                              title        = {Slow feature discriminant analysis and its application on handwritten digit recognition},
                              booktitle    = {Conference on Neural Networks, 2009. IJCNN 2009. International Joint},
                              year         = {2009},
                              pages        = {1294--1297},
    			  url          = {http://ieeexplore.ieee.org/document/5178596/},
                              doi          = {http://doi.org/10.1109/IJCNN.2009.5178596}
                            }
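
    One plausible formalization of the criterion described in the abstract (the symbols and the ratio form are my reading, not the paper's notation): with \Delta\mathbf{x}_w denoting temporal differences within a class sequence and \Delta\mathbf{x}_b differences between sequences of different classes, a projection \mathbf{w} is sought that maximizes

        J(\mathbf{w}) \;=\; \frac{\mathbf{w}^{\top} \bigl\langle \Delta\mathbf{x}_{b}\, \Delta\mathbf{x}_{b}^{\top} \bigr\rangle\, \mathbf{w}}
                                 {\mathbf{w}^{\top} \bigl\langle \Delta\mathbf{x}_{w}\, \Delta\mathbf{x}_{w}^{\top} \bigr\rangle\, \mathbf{w}},

    which, as in Fisher's discriminant, leads to a generalized eigenvalue problem, here with covariances of temporal differences in place of the usual scatter matrices.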
    			
    			
    					
    Hyvärinen, A.; Hurri, J. & Hoyer, P.O. 2009 Natural image statistics: a probabilistic approach to early computational vision. , 39.
    Publ. Springer Science & Business Media.
     
    book
    Abstract: This book is both an introductory textbook and a research monograph on modelling the statistical structure of natural images. In very simple terms, “natural images” are photographs of the typical environment where we live. In this book, their statistical structure is described using a number of statistical models whose parameters are estimated from image samples. Our main motivation for exploring natural image statistics is computational modelling of biological visual systems. A theoretical framework which is gaining more and more support considers the properties of the visual system to be reflections of the statistical structure of natural images, because of evolutionary adaptation processes. Another motivation for natural image statistics research is in computer science and engineering, where it helps in development of better image processing and computer vision methods. While research on natural image statistics has been growing rapidly since the mid-1990’s, no attempt has been made to cover the field in a single book, providing a unified view of the different models and approaches. This book attempts to do just that. Furthermore, our aim is to provide an accessible introduction to the field for students in related disciplines. However, not all aspects of such a large field of study can be completely covered in a single book, so we have had to make some choices. Basically, we concentrate on the neural modelling approaches at the expense of engineering applications. Furthermore, those topics on which the authors themselves have been doing research are, inevitably, given more emphasis.
    BibTeX:
    			
    			
                            @book{HyvaerinenHurriEtAl-2009,
                              author       = {Hyv{\"a}rinen, Aapo and Hurri, Jarmo and Hoyer, Patrick O},
                              title        = {Natural image statistics: a probabilistic approach to early computational vision.},
                              publisher    = {Springer Science \& Business Media},
                              year         = {2009},
                              volume       = {39},
                              url2         = {http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.207.5015&rep=rep1&type=pdf}
                            }
    			
    			
    					
    Ilin, A.; Valpola, H. & Oja, E. 2006 Exploratory analysis of climate data using source separation methods Neural Networks , 19(2), 155-167.
    Publ. Elsevier.
     
    article
    Abstract: We present an example of exploratory data analysis of climate measurements using a recently developed denoising source separation (DSS) framework. We analyzed a combined dataset containing daily measurements of three variables: surface temperature, sea level pressure and precipitation around the globe, for a period of 56 years. Components exhibiting slow temporal behavior were extracted using DSS with linear denoising. The first component, most prominent in the interannual time scale, captured the well-known El Nino-Southern Oscillation (ENSO) phenomenon and the second component was close to the derivative of the first one. The slow components extracted in a wider frequency range were further rotated using a frequency-based separation criterion implemented by DSS with nonlinear denoising. The rotated sources give a meaningful representation of the slow climate variability as a combination of trends, interannual oscillations, the annual cycle and slowly changing seasonal variations. Again, components related to the ENSO phenomenon emerge very clearly among the found sources.
    BibTeX:
    			
    			
                            @article{IlinValpolaEtAl-2006,
                              author       = {Ilin, Alexander and Valpola, Harri and Oja, Erkki},
                              title        = {Exploratory analysis of climate data using source separation methods},
                              journal      = {Neural Networks},
                              publisher    = {Elsevier},
                              year         = {2006},
                              volume       = {19},
                              number       = {2},
                              pages        = {155--167},
    			  url          = {http://www.sciencedirect.com/science/article/pii/S0893608006000086},
                              url2         = {https://pdfs.semanticscholar.org/6547/94fc04980944f3cdad6097d9ccde203bded0.pdf},
                              doi          = {http://doi.org/10.1016/j.neunet.2006.01.011}
                            }
    			
    			
    					
    Jayaraman, D. & Grauman, K. 2015 Slow and steady feature analysis: higher order temporal coherence in video e-print arXiv:1506.04714 .
     
    misc
    Abstract: How can unlabeled video augment visual learning? Existing methods perform "slow" feature analysis, encouraging the representations of temporally close frames to exhibit only small differences. While this standard approach captures the fact that high-level visual signals change slowly over time, it fails to capture *how* the visual content changes. We propose to generalize slow feature analysis to "steady" feature analysis. The key idea is to impose a prior that higher order derivatives in the learned feature space must be small. To this end, we train a convolutional neural network with a regularizer on tuples of sequential frames from unlabeled video. It encourages feature changes over time to be smooth, i.e., similar to the most recent changes. Using five diverse datasets, including unlabeled YouTube and KITTI videos, we demonstrate our method's impact on object, scene, and action recognition tasks. We further show that our features learned from unlabeled video can even surpass a standard heavily supervised pretraining approach.
    BibTeX:
    			
    			
                            @misc{JayaramanGrauman-2015,
                              author       = {Jayaraman, Dinesh and Grauman, Kristen},
                              title        = {Slow and steady feature analysis: higher order temporal coherence in video},
                              year         = {2015},
                              howpublished = {e-print arXiv:1506.04714},
    			  url          = {http://arxiv.org/abs/1506.04714}
                            }
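
    The core "slow and steady" idea is small enough to sketch in a few lines. The paper trains a convolutional network with a regularizer on tuples of sequential frames; the toy penalty below (names and the exact form are mine, not the authors') only illustrates the principle: for features f1, f2, f3 of three sequential frames, slowness penalizes the first temporal differences, steadiness additionally penalizes the change of that change.

        import numpy as np

        def slow_and_steady_penalty(f1, f2, f3, lam=1.0):
            """Toy per-tuple penalty: first- plus second-order temporal coherence."""
            d1 = f2 - f1                                 # change from frame 1 to 2
            d2 = f3 - f2                                 # change from frame 2 to 3
            slow = np.sum(d1 ** 2) + np.sum(d2 ** 2)     # keep feature changes small
            steady = np.sum((d2 - d1) ** 2)              # keep consecutive changes similar
            return slow + lam * steady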
    			
    			
    					
    Jayaraman, D. & Grauman, K. 2016 Slow and steady feature analysis: higher order temporal coherence in video 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , 3852-3861.
     
    inproceedings
    Abstract: How can unlabeled video augment visual learning? Existing methods perform "slow" feature analysis, encouraging the representations of temporally close frames to exhibit only small differences. While this standard approach captures the fact that high-level visual signals change slowly over time, it fails to capture how the visual content changes. We propose to generalize slow feature analysis to "steady" feature analysis. The key idea is to impose a prior that higher order derivatives in the learned feature space must be small. To this end, we train a convolutional neural network with a regularizer on tuples of sequential frames from unlabeled video. It encourages feature changes over time to be smooth, i.e., similar to the most recent changes. Using five diverse datasets, including unlabeled YouTube and KITTI videos, we demonstrate our method's impact on object, scene, and action recognition tasks. We further show that our features learned from unlabeled video can even surpass a standard heavily supervised pretraining approach.
    BibTeX:
    			
    			
                            @inproceedings{JayaramanGrauman-2016,
                              author       = {D. Jayaraman and K. Grauman},
                              title        = {Slow and steady feature analysis: higher order temporal coherence in video},
                              booktitle    = {2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
                              year         = {2016},
                              pages        = {3852--3861},
    			  url          = {http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Jayaraman_Slow_and_Steady_CVPR_2016_paper.pdf},
                              doi          = {http://doi.org/10.1109/CVPR.2016.418}
                            }
    			
    			
    					
    Jonschkowski, R. & Brock, O. 2013 Learning task-specific state representations by maximizing slowness and predictability 6th international workshop on Evolutionary and Reinforcement Learning for Autonomous Robot Systems (ERLARS) .
     
    inproceedings
    Abstract: The success of reinforcement learning in robotic tasks is highly dependent on the state representation – a mapping from high dimensional sensory observations of the robot to states that can be used for reinforcement learning. Even though many methods have been proposed to learn state representations, it remains an important open problem. Identifying the characteristics existing methods are optimizing to find good state representations, combining them, and adding new characteristics will lead to a more robust method for state representation learning. We define a new characteristic – predictability – and combine it with slowness. We implement these characteristics in a neural network and show that this approach can find good state representations from visual input in simulated robotic tasks.
    BibTeX:
    			
    			
                            @inproceedings{JonschkowskiBrock-2013,
                              author       = {Jonschkowski, Rico and Brock, Oliver},
                              title        = {Learning task-specific state representations by maximizing slowness and predictability},
                              booktitle    = {6\textsuperscript{th} international workshop on Evolutionary and Reinforcement Learning for Autonomous Robot Systems (ERLARS)},
                              year         = {2013},
                              url2         = {https://pdfs.semanticscholar.org/6b9c/9aafca671898f0ec29e1c7a9d1799a51d41b.pdf}
                            }
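
    A rough sketch of the two characteristics named in the abstract, for a learned state sequence s_1, ..., s_T. All names are hypothetical and `predict` stands in for whatever forward model is learned alongside the representation; this is not the authors' implementation.

        import numpy as np

        def slowness_term(states):
            # states: array of shape (T, d); penalize large state changes
            return np.mean(np.sum(np.diff(states, axis=0) ** 2, axis=1))

        def predictability_term(states, predict):
            # predict: hypothetical model mapping a state to the predicted next state
            preds = np.array([predict(s) for s in states[:-1]])
            return np.mean(np.sum((states[1:] - preds) ** 2, axis=1))

        def representation_objective(states, predict, weight=1.0):
            return slowness_term(states) + weight * predictability_term(states, predict)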
    			
    			
    					
    Jonschkowski, R. & Brock, O. 2014 State representation learning in robotics: using prior knowledge about physical interaction Robotics: Science and Systems (RSS) .
     
    inproceedings
    Abstract: State representations critically affect the effectiveness of learning in robots. In this paper, we propose a robotics-specific approach to learning such state representations. Robots accomplish tasks by interacting with the physical world. Physics in turn imposes structure on both the changes in the world and on the way robots can effect these changes. Using prior knowledge about interacting with the physical world, robots can learn state representations that are consistent with physics. We identify five robotic priors and explain how they can be used for representation learning. We demonstrate the effectiveness of this approach in a simulated slot car racing task and a simulated navigation task with distracting moving objects. We show that our method extracts task-relevant state representations from high-dimensional observations, even in the presence of task-irrelevant distractions. We also show that the state representations learned by our method greatly improve generalization in reinforcement learning.
    BibTeX:
    			
    			
                            @inproceedings{JonschkowskiBrock-2014,
                              author       = {Jonschkowski, Rico and Brock, Oliver},
                              title        = {State representation learning in robotics: using prior knowledge about physical interaction},
                              booktitle    = {Robotics: Science and Systems (RSS)},
                              year         = {2014},
    			  url          = {http://www.roboticsproceedings.org/rss10/p19.pdf},
                              doi          = {http://doi.org/10.15607/rss.2014.x.019}
                            }
    			
    			
    					
    Jonschkowski, R.; Höfer, S. & Brock, O. 2015 Patterns for learning with side information e-print arXiv:1511.06429 .
     
    misc
    Abstract: Supervised, semi-supervised, and unsupervised learning estimate a function given input/output samples. Generalization of the learned function to unseen data can be improved by incorporating side information into learning. Side information are data that are neither from the input space nor from the output space of the function, but include useful information for learning it. In this paper we show that learning with side information subsumes a variety of related approaches, e.g. multi-task learning, multi-view learning and learning using privileged information. Our main contributions are (i) a new perspective that connects these previously isolated approaches, (ii) insights about how these methods incorporate different types of prior knowledge, and hence implement different patterns, (iii) facilitating the application of these methods in novel tasks, as well as (iv) a systematic experimental evaluation of these patterns in two supervised learning tasks.
    BibTeX:
    			
    			
                            @misc{JonschkowskiHoeferEtAl-2015,
                              author       = {{Jonschkowski}, R. and {H{\"o}fer}, S. and {Brock}, O.},
                              title        = {Patterns for learning with side information},
                              year         = {2015},
                              howpublished = {e-print arXiv:1511.06429},
    			  url          = {http://arxiv.org/abs/1511.06429}
                            }
    			
    			
    					
    Kamal, S.; Supriya, M.H. & Pillai, P.R.S. 2011 Blind source separation of nonlinearly mixed ocean acoustic signals using slow feature analysis OCEANS 2011 IEEE - Spain , 1-7.
    Publ. Institute of Electrical and Electronics Engineers (IEEE).
     
    inproceedings
    Abstract: The ocean acoustic environment is astoundingly complex, consisting of numerous noise sources like ships, offshore oil rigs, marine life, shore waves and acoustic cavitations, featuring varying sound speed profiles, multi-path interferences, as well as other hydrodynamic phenomena. Irrespective of the type of the receiver system, whether active or passive, the signals picked up by the hydrophones are disturbed by these inherent anomalies of the propagating medium and pose a prime challenge to extracting useful information from the chaotic mixtures of received signals. Blind Source Separation (BSS), an engineering paradigm which attempts to mimic the human cognitive capability of selectively extracting an interesting process amidst several similar competing processes, can be considered as a viable solution to the problem. In this paper, the effectiveness of the Slow Feature Analysis (SFA) algorithm (Laurenz Wiskott et al.), a biologically motivated technique based on the concept of temporal slowness to extract invariant features from multivariate time series, for solving the problem of nonlinear BSS is investigated. A testing framework for underwater acoustic signal separation has been developed in Python with the aid of the Modular toolkit for Data Processing (MDP), a stack of general purpose machine learning algorithms.
    BibTeX:
    			
    			
                            @inproceedings{KamalSupriyaEtAl-2011,
                              author       = {Suraj Kamal and M. H. Supriya and P. R. Saseendran Pillai},
                              title        = {Blind source separation of nonlinearly mixed ocean acoustic signals using slow feature analysis},
                              booktitle    = {{OCEANS} 2011 {IEEE} - Spain},
                              publisher    = {Institute of Electrical and Electronics Engineers ({IEEE})},
                              year         = {2011},
                              pages        = {1--7},
    			  url          = {http://ieeexplore.ieee.org/document/6003620/},
                              doi          = {http://doi.org/10.1109/oceans-spain.2011.6003620}
                            }
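
    Since the abstract explicitly mentions a Python framework built on the Modular toolkit for Data Processing (MDP), the following minimal sketch illustrates that kind of pipeline. The two sources and the nonlinear mixture are invented for illustration; only the MDP calls (Flow, PolynomialExpansionNode, SFANode) reflect the actual library API.

        import numpy as np
        import mdp

        t = np.linspace(0, 2 * np.pi, 5000)
        s1 = np.sin(2 * t)                        # slow source
        s2 = np.sin(47 * t)                       # fast source
        x = np.column_stack([s1 + 0.3 * s2 ** 2,  # observed nonlinear mixtures
                             s2 + 0.3 * s1 * s2])

        flow = mdp.Flow([mdp.nodes.PolynomialExpansionNode(2),  # quadratic expansion
                         mdp.nodes.SFANode(output_dim=1)])      # keep the slowest feature
        flow.train(x)                             # trains the SFA node on the expanded data
        y = flow.execute(x)

        # the slowest extracted feature should correlate strongly with the slow source
        print(abs(np.corrcoef(y[:, 0], s1)[0, 1]))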
    			
    			
    					
    Karhunen, J.; Hao, T. & Ylipaavalniemi, J. 2012 A canonical correlation analysis based method for improving BSS of two related data sets International Conference on Latent Variable Analysis and Signal Separation , 91-98.
    Publ. Springer Nature.
     
    inproceedings
    Abstract: We consider an extension of ICA and BSS for separating mutually dependent and independent components from two related data sets. We propose a new method which first uses canonical correlation analysis for detecting subspaces of independent and dependent components. Different ICA and BSS methods can after this be used for final separation of these components. Our method has a sound theoretical basis, and it is straightforward to implement and computationally not demanding. Experimental results on synthetic and real-world fMRI data sets demonstrate its good performance.
    BibTeX:
    			
    			
                            @inproceedings{KarhunenHaoEtAl-2012,
                              author       = {Karhunen, Juha and Hao, Tele and Ylipaavalniemi, Jarkko},
                              title        = {A canonical correlation analysis based method for improving {BSS} of two related data sets},
                              booktitle    = {International Conference on Latent Variable Analysis and Signal Separation},
                              publisher    = {Springer Nature},
                              year         = {2012},
                              pages        = {91--98},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-642-28551-6_12},
                              url2         = {https://pdfs.semanticscholar.org/f7e4/65da1276d9c7bd5d3b8b844947119692f453.pdf},
                              doi          = {http://doi.org/10.1007/978-3-642-28551-6_12}
                            }
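
    The first stage of the proposed method, detecting dependent versus independent subspaces with CCA, can be sketched as follows (my own minimal code, not the authors'). Canonical correlations close to one indicate directions shared by the two related data sets; the remaining directions can then be handed to any ICA or BSS method for final separation.

        import numpy as np

        def canonical_correlations(X, Y):
            """X: (T, p) and Y: (T, q) observations from two related data sets."""
            X = X - X.mean(axis=0)
            Y = Y - Y.mean(axis=0)
            Qx, _ = np.linalg.qr(X)              # orthonormal basis of the column space of X
            Qy, _ = np.linalg.qr(Y)              # orthonormal basis of the column space of Y
            # the singular values of Qx^T Qy are the canonical correlations
            return np.linalg.svd(Qx.T @ Qy, compute_uv=False)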
    			
    			
    					
    Karhunen, J.; Hao, T. & Ylipaavalniemi, J. 2012 A generalized canonical correlation analysis based method for blind source separation from related data sets The 2012 International Joint Conference on Neural Networks (IJCNN) , 1-9.
    Publ. Institute of Electrical and Electronics Engineers (IEEE).
     
    inproceedings
    Abstract: In this paper, we consider an extension of independent component analysis (ICA) and blind source separation (BSS) techniques to several related data sets. The goal is to separate mutually dependent and independent components or source signals from these data sets. This problem is important in practice, because such data sets are common in real-world applications. We propose a new method which first uses a generalization of standard canonical correlation analysis (CCA) for detecting subspaces of independent and dependent components. Any ICA or BSS method can after this be used for final separation of these components. The proposed method performs well for synthetic data sets for which the assumed data model holds, and provides interesting and meaningful results for real-world functional magnetic resonance imaging (fMRI) data. The method is straightforward to implement and computationally not too demanding. The proposed method improves clearly the separation results of several well-known ICA and BSS methods compared with the situation in which generalized CCA is not used.
    BibTeX:
    			
    			
                            @inproceedings{KarhunenHaoEtAl-2012a,
                              author       = {Karhunen, Juha and Hao, Tele and Ylipaavalniemi, Jarkko},
                              title        = {A generalized canonical correlation analysis based method for blind source separation from related data sets},
                              booktitle    = {The 2012 International Joint Conference on Neural Networks ({IJCNN})},
                              publisher    = {Institute of Electrical and Electronics Engineers ({IEEE})},
                              year         = {2012},
                              pages        = {1--9},
    			  url          = {http://ieeexplore.ieee.org/document/6252708/},
                              url2         = {https://pdfs.semanticscholar.org/d82b/f16c2ad8e4663fae13e964144bc9ac745de4.pdf},
                              doi          = {http://doi.org/10.1109/ijcnn.2012.6252708}
                            }
    			
    			
    					
    Karhunen, J.; Hao, T. & Ylipaavalniemi, J. 2013 Finding dependent and independent components from related data sets: a generalized canonical correlation analysis based method Neurocomputing , 113, 153-167.
    Publ. Elsevier.
     
    article
    BibTeX:
    			
    			
                            @article{KarhunenHaoEtAl-2013,
                              author       = {Karhunen, Juha and Hao, Tele and Ylipaavalniemi, Jarkko},
                              title        = {Finding dependent and independent components from related data sets: a generalized canonical correlation analysis based method},
                              journal      = {Neurocomputing},
                              publisher    = {Elsevier},
                              year         = {2013},
                              volume       = {113},
                              pages        = {153--167},
                              url2         = {http://research.ics.aalto.fi/publications/bibdb2012/public_pdfs/Final_NEUCOM_Aug2013.pdf}
                            }
    			
    			
    					
    Kim, J.; Jeong, S.; Yu, Z. & Lee, M. 2013 Multiple timescale recurrent neural network with slow feature analysis for efficient motion recognition Neural Information Processing , Lecture Notes in Computer Science , 8227, 323-330.
    Eds. Lee, M.; Hirose, A.; Hou, Z.-G. & Kil, R.
    Publ. Springer Berlin Heidelberg.
     
    incollection
Abstract: Multiple Timescale Recurrent Neural Network (MTRNN) model is a useful tool to learn and regenerate various kinds of action. In this paper, we use MTRNN as a dynamic model to analyze different human motions. Prediction error from dynamic model is used to classify different human actions. However, it is difficult to fully cover the human actions depending on the speed using dynamic model. In order to overcome the limitation of dynamic model, we considered Slow Feature analysis (SFA) which is used to extract the unique slow features from human actions data. In order to make input training data, we obtain 3 kinds of human actions by using KINECT. 3 dimensional slow feature data is extracted by using SFA and those SFA feature data are used as the input of MTRNN for classification. The experiment results show that our proposed model performs better than the traditional model.
    BibTeX:
    			
    			
                            @incollection{KimJeongEtAl-2013,
                              author       = {Kim, Jihun and Jeong, Sungmoon and Yu, Zhibin and Lee, Minho},
                              title        = {Multiple timescale recurrent neural network with slow feature analysis for efficient motion recognition},
                              booktitle    = {Neural Information Processing},
                              publisher    = {Springer Berlin Heidelberg},
                              year         = {2013},
                              volume       = {8227},
                              pages        = {323--330},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-642-42042-9_41},
                              doi          = {http://doi.org/10.1007/978-3-642-42042-9_41}
                            }
    			
    			
    					
    Klampfl, S. 2007 Training of readouts with SFA .
     
    article
    BibTeX:
    			
    			
                            @article{Klampfl-2007,
                              author       = {Klampfl, Stefan},
                              title        = {Training of readouts with {SFA}},
                              year         = {2007}
                            }
    			
    			
    					
    Klampfl, S. & Maass, W. 2009 Replacing supervised classification learning by slow feature analysis in spiking neural networks Proc. of NIPS 2009: Advances in Neural Information Processing Systems , 22, 988-996.
    Publ. MIT Press.
     
    inproceedings
    Abstract: Many models for computations in recurrent networks of neurons assume that the network state moves from some initial state to some fixed point attractor or limit cycle that represents the output of the computation. However experimental data show that in response to a sensory stimulus the network state moves from its initial state through a trajectory of network states and eventually returns to the initial state, without reaching an attractor or limit cycle in between. This type of network response, where salient information about external stimuli is encoded in characteristic trajectories of continuously varying network states, raises the question how a neural system could compute with such code, and arrive for example at a temporally stable classification of the external stimulus. We show that a known unsupervised learning algorithm, Slow Feature Analysis (SFA), could be an important ingredient for extracting stable information from these network trajectories. In fact, if sensory stimuli are more often followed by another stimulus from the same class than by a stimulus from another class, SFA approaches the classification capability of Fisher’s Linear Discriminant (FLD), a powerful algorithm for supervised learning. We apply this principle to simulated cortical microcircuits, and show that it enables readout neurons to learn discrimination of spoken digits and detection of repeating firing patterns within a stream of spike trains with the same firing statistics, without requiring any supervision for learning.
    BibTeX:
    			
    			
                            @inproceedings{KlampflMaass-2009,
                              author       = {S. Klampfl and W. Maass},
                              title        = {Replacing supervised classification learning by slow feature analysis in spiking neural networks},
                              booktitle    = {Proc. of NIPS 2009: Advances in Neural Information Processing Systems},
                              publisher    = {MIT Press},
                              year         = {2009},
                              volume       = {22},
                              pages        = {988--996},
                              url2         = {http://www.igi.tugraz.at/psfiles/192.pdf}
                            }
    			
    			
    					
    Klampfl, S. & Maass, W. 2010 A theoretical basis for emergent pattern discrimination in neural systems through slow feature extraction Neural Computation , 22(12), 2979-3035.
    Publ. MIT Press - Journals.
     
    article
    Abstract: Neurons in the brain are able to detect and discriminate salient spatiotemporal patterns in the firing activity of presynaptic neurons. It is open how they can learn to achieve this, especially without the help of a supervisor. We show that a well-known unsupervised learning algorithm for linear neurons, slow feature analysis (SFA), is able to acquire the discrimination capability of one of the best algorithms for supervised linear discrimination learning, the Fisher linear discriminant (FLD), given suitable input statistics. We demonstrate the power of this principle by showing that it enables readout neurons from simulated cortical microcircuits to learn without any supervision to discriminate between spoken digits and to detect repeated firing patterns that are embedded into a stream of noise spike trains with the same firing statistics. Both these computer simulations and our theoretical analysis show that slow feature extraction enables neurons to extract and collect information that is spread out over a trajectory of firing states that lasts several hundred ms. In addition, it enables neurons to learn without supervision to keep track of time (relative to a stimulus onset, or the initiation of a motor response). Hence, these results elucidate how the brain could compute with trajectories of firing states rather than only with fixed point attractors. It also provides a theoretical basis for understanding recent experimental results on the emergence of view- and position-invariant classification of visual objects in inferior temporal cortex.
    BibTeX:
    			
    			
                            @article{KlampflMaass-2010,
                              author       = {Stefan Klampfl and Wolfgang Maass},
                              title        = {A theoretical basis for emergent pattern discrimination in neural systems through slow feature extraction},
                              journal      = {Neural Computation},
                              publisher    = {{MIT} Press - Journals},
                              year         = {2010},
                              volume       = {22},
                              number       = {12},
                              pages        = {2979--3035},
    			  url          = {http://www.mitpressjournals.org/doi/10.1162/NECO_a_00050},
                              url2         = {http://ai2-s2-pdfs.s3.amazonaws.com/7825/c75661c597ff6846153ab2f3aa330903512a.pdf},
                              doi          = {http://doi.org/10.1162/NECO_a_00050}
                            }
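
    A toy numerical check (my own construction, not the authors' spiking-network experiments) of the central claim above: when consecutive samples usually come from the same class, the weight vector of the slowest linear SFA feature aligns with the Fisher linear discriminant direction.

        import numpy as np

        rng = np.random.default_rng(1)
        mu = {0: np.array([0.0, 0.0, 0.0]), 1: np.array([3.0, 1.0, 0.0])}
        labels = [0]
        for _ in range(20000):                       # stay in the same class w.p. 0.95
            labels.append(labels[-1] if rng.random() < 0.95 else 1 - labels[-1])
        labels = np.array(labels)
        x = np.array([mu[c] + rng.standard_normal(3) for c in labels])

        def sfa_direction(x):                        # weight of the slowest linear SFA feature
            xc = x - x.mean(axis=0)
            d, e = np.linalg.eigh(np.cov(xc.T))
            z = (xc @ e) / np.sqrt(d)                # whiten
            dd, ee = np.linalg.eigh(np.cov(np.diff(z, axis=0).T))
            w = (e / np.sqrt(d)) @ ee[:, 0]          # slowest direction in input space
            return w / np.linalg.norm(w)

        def fld_direction(x, labels):                # Fisher linear discriminant
            x0, x1 = x[labels == 0], x[labels == 1]
            sw = np.cov(x0.T) + np.cov(x1.T)         # within-class scatter (up to scale)
            w = np.linalg.solve(sw, x1.mean(axis=0) - x0.mean(axis=0))
            return w / np.linalg.norm(w)

        print("|cos(SFA, FLD)| =", abs(sfa_direction(x) @ fld_direction(x, labels)))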
    			
    			
    					
Koch, P. 2013 Efficient tuning in supervised machine learning Leiden Institute of Advanced Computer Science (LIACS), Faculty of Science, Leiden University .
     
    phdthesis
Abstract: The tuning of learning algorithm parameters has become more and more important during the last years. With the fast growth of computational power and available memory databases have grown dramatically. This is very challenging for the tuning of parameters arising in machine learning, since the training can become very time-consuming for large datasets. For this reason efficient tuning methods are required, which are able to improve the predictions of the learning algorithms. In this thesis we incorporate model-assisted optimization techniques, for performing efficient optimization on noisy datasets with very limited budgets. Under this umbrella we also combine learning algorithms with methods for feature construction and selection. We propose to integrate a variety of elements into the learning process. E.g., can tuning be helpful in learning tasks like time series regression using state-of-the-art machine learning algorithms? Are statistical methods capable to reduce noise effects? Can surrogate models like Kriging learn a reasonable mapping of the parameter landscape to the quality measures, or are they deteriorated by disturbing factors? Summarizing all these parts, we analyze if superior learning algorithms can be created, with a special focus on efficient runtimes. Besides the advantages of systematic tuning approaches, we also highlight possible obstacles and issues of tuning. Different tuning methods are compared and the impact of their features are exposed. It is a goal of this work to give users insights into applying state-of-the-art learning algorithms profitably in practice.
    BibTeX:
    			
    			
                            @phdthesis{Koch-2013,
                              author       = {Koch, Patrick},
                              title        = {Efficient tuning in supervised machine learning},
                              school       = {Leiden Institute of Advanced Computer Science (LIACS), Faculty of Science, Leiden University},
                              year         = {2013},
    			  url          = {https://openaccess.leidenuniv.nl/handle/1887/22055},
                              url2         = {http://delta.cs.cinvestav.mx/~ccoello/EMOO/thesis-koch.pdf.gz}
                            }
    			
    			
    					
    Koch, P.; Konen, W. & Hein, K. 2010 Gesture recognition on few training data using slow feature analysis and parametric bootstrap The 2010 International Joint Conference on Neural Networks (IJCNN) , 1-8.
    Publ. Institute of Electrical and Electronics Engineers (IEEE).
     
    inproceedings
    Abstract: Slow Feature Analysis (SFA) has been established as a robust and versatile technique from the neurosciences to learn slowly varying functions from quickly changing signals. Recently, the method has been also applied to classification tasks. Here we apply SFA for the first time to a time series classification problem originating from gesture recognition. The gestures used in our experiments are based on acceleration signals of the Bluetooth Wiimote controller (Nintendo). We show that SFA achieves results comparable to the well-known Random Forest predictor in shorter computation time, given a sufficient number of training patterns. However - and this is a novelty to SFA classification - we discovered that SFA requires the number of training patterns to be strictly greater than the dimension of the nonlinear function space. If too few patterns are available, we find that the model constructed by SFA severely overfits and leads to high test set errors. We analyze the reasons for overfitting and present a new solution based on parametric bootstrap to overcome this problem.
    BibTeX:
    			
    			
                            @inproceedings{KochKonenEtAl-2010,
                              author       = {Patrick Koch and Wolfgang Konen and Kristine Hein},
                              title        = {Gesture recognition on few training data using slow feature analysis and parametric bootstrap},
                              booktitle    = {The 2010 International Joint Conference on Neural Networks ({IJCNN})},
                              publisher    = {Institute of Electrical and Electronics Engineers ({IEEE})},
                              year         = {2010},
                              pages        = {1--8},
    			  url          = {http://ieeexplore.ieee.org/document/5596842/},
                              url2         = {https://pdfs.semanticscholar.org/333e/c30ee5d3c6735bc35512f59d1ec5c9d93e48.pdf},
                              doi          = {http://doi.org/10.1109/IJCNN.2010.5596842}
                            }
    			
    			
    					
Kompella, V.R. 2014 Slowness learning for curiosity-driven agents Università della Svizzera italiana .
     
    phdthesis
Abstract: In the absence of external guidance, how can a robot learn to map the many raw pixels of high-dimensional visual inputs to useful action sequences? I study methods that achieve this by making robots self-motivated (curious) to continually build compact representations of sensory inputs that encode different aspects of the changing environment. Previous curiosity-based agents acquired skills by associating intrinsic rewards with world model improvements, and used reinforcement learning (RL) to learn how to get these intrinsic rewards. But unlike in previous implementations, I consider streams of high-dimensional visual inputs, where the world model is a set of compact low-dimensional representations of the high-dimensional inputs. To learn these representations, I use the slowness learning principle, which states that the underlying causes of the changing sensory inputs vary on a much slower time scale than the observed sensory inputs. The representations learned through the slowness learning principle are called slow features (SFs). Slow features have been shown to be useful for RL, since they capture the underlying transition process by extracting spatio-temporal regularities in the raw sensory inputs. However, existing techniques that learn slow features are not readily applicable to curiosity-driven online learning agents, as they estimate computationally expensive covariance matrices from the data via batch processing. The first contribution called the incremental SFA (IncSFA), is a low-complexity, online algorithm that extracts slow features without storing any input data or estimating costly covariance matrices, thereby making it suitable to be used for several online learning applications. However, IncSFA gradually forgets previously learned representations whenever the statistics of the input change. In open-ended online learning, it becomes essential to store learned representations to avoid re-learning previously learned inputs. The second contribution is an online active modular IncSFA algorithm called the curiosity-driven modular incremental slow feature analysis (Curious Dr. MISFA). Curious Dr. MISFA addresses the forgetting problem faced by IncSFA and learns expert slow feature abstractions in order from least to most costly, with theoretical guarantees. The third contribution uses the Curious Dr. MISFA algorithm in a continual curiosity-driven skill acquisition framework that enables robots to acquire, store, and re-use both abstractions and skills in an online and continual manner. I provide (a) a formal analysis of the working of the proposed algorithms; (b) compare them to the existing methods; and (c) use the iCub humanoid robot to demonstrate their application in real-world environments. These contributions together demonstrate that the online implementations of slowness learning make it suitable for an open-ended curiosity-driven RL agent to acquire a repertoire of skills that map the many raw pixels of high-dimensional images to multiple sets of action sequences.
    BibTeX:
    			
    			
                            @phdthesis{Kompella-2014,
                              author       = {Kompella, Varun Raj},
                              title        = {Slowness learning for curiosity-driven agents},
                              school       = {Universit{\`a} della Svizzera italiana},
                              year         = {2014},
    			  url          = {http://doc.rero.ch/record/234698/files/2014INFO013.pdf}
                            }
    			
    			
    					
    Kompella, V.R.; Luciw, M. & Schmidhuber, J. 2011 Incremental slow feature analysis Twenty-Second International Joint Conference on Artificial Intelligence .
     
    inproceedings
Abstract: The Slow Feature Analysis (SFA) unsupervised learning framework extracts features representing the underlying causes of the changes within a temporally coherent high-dimensional raw sensory input signal. We develop the first online version of SFA, via a combination of incremental Principal Components Analysis and Minor Components Analysis. Unlike standard batch-based SFA, online SFA adapts along with non-stationary environments, which makes it a generally useful unsupervised preprocessor for autonomous learning agents. We compare online SFA to batch SFA in several experiments and show that it indeed learns without a teacher to encode the input stream by informative slow features representing meaningful abstract environmental properties. We extend online SFA to deep networks in hierarchical fashion, and use them to successfully extract abstract object position information from high-dimensional video.
    BibTeX:
    			
    			
                            @inproceedings{KompellaLuciwEtAl-2011,
                              author       = {Kompella, Varun Raj and Luciw, Matthew and Schmidhuber, Juergen},
                              title        = {Incremental slow feature analysis},
                              booktitle    = {Twenty-Second International Joint Conference on Artificial Intelligence},
                              year         = {2011},
                              url2         = {http://ai2-s2-pdfs.s3.amazonaws.com/1aee/341fb6c82377731ad6a5004d71e2d2de62a7.pdf}
                            }
    			
    			
    					
    Kompella, V.R.; Luciw, M. & Schmidhuber, J. 2011 Incremental slow feature analysis: adaptive and episodic learning from high-dimensional input streams e-print arXiv:1112.2113 .
     
    misc
    Abstract: Slow Feature Analysis (SFA) extracts features representing the underlying causes of changes within a temporally coherent high-dimensional raw sensory input signal. Our novel incremental version of SFA (IncSFA) combines incremental Principal Components Analysis and Minor Components Analysis. Unlike standard batch-based SFA, IncSFA adapts along with non-stationary environments, is amenable to episodic training, is not corrupted by outliers, and is covariance-free. These properties make IncSFA a generally useful unsupervised preprocessor for autonomous learning agents and robots. In IncSFA, the CCIPCA and MCA updates take the form of Hebbian and anti-Hebbian updating, extending the biological plausibility of SFA. In both single node and deep network versions, IncSFA learns to encode its input streams (such as high-dimensional video) by informative slow features representing meaningful abstract environmental properties. It can handle cases where batch SFA fails.
    BibTeX:
    			
    			
                            @misc{KompellaLuciwEtAl-2011a,
                              author       = {Kompella, Varun R and Luciw, Matthew and Schmidhuber, J{\"u}rgen},
                              title        = {Incremental slow feature analysis: adaptive and episodic learning from high-dimensional input streams},
                              year         = {2011},
                              howpublished = {e-print arXiv:1112.2113},
    			  url          = {https://arxiv.org/abs/1112.2113}
                            }
    			
    			
    					
    Kompella, V.R.; Luciw, M. & Schmidhuber, J. 2012 Incremental slow feature analysis: adaptive low-complexity slow feature updating from high-dimensional input streams Neural Computation , 24(11), 2994-3024.
    Publ. MIT Press - Journals.
     
    article
    Abstract: We introduce here an incremental version of slow feature analysis (IncSFA), combining candid covariance-free incremental principal components analysis (CCIPCA) and covariance-free incremental minor components analysis (CIMCA). IncSFA's feature updating complexity is linear with respect to the input dimensionality, while batch SFA's (BSFA) updating complexity is cubic. IncSFA does not need to store, or even compute, any covariance matrices. The drawback to IncSFA is data efficiency: it does not use each data point as effectively as BSFA. But IncSFA allows SFA to be tractably applied, with just a few parameters, directly on high-dimensional input streams (e.g., visual input of an autonomous agent), while BSFA has to resort to hierarchical receptive-field-based architectures when the input dimension is too high. Further, IncSFA's updates have simple Hebbian and anti-Hebbian forms, extending the biological plausibility of SFA. Experimental results show IncSFA learns the same set of features as BSFA and can handle a few cases where BSFA fails.
    BibTeX:
    			
    			
                            @article{KompellaLuciwEtAl-2012,
                              author       = {Varun Raj Kompella and Matthew Luciw and J{\"{u}}rgen Schmidhuber},
                              title        = {Incremental slow feature analysis: adaptive low-complexity slow feature updating from high-dimensional input streams},
                              journal      = {Neural Computation},
                              publisher    = {{MIT} Press - Journals},
                              year         = {2012},
                              volume       = {24},
                              number       = {11},
                              pages        = {2994--3024},
    			  url          = {http://www.mitpressjournals.org/doi/10.1162/NECO_a_00344},
                              doi          = {http://doi.org/10.1162/NECO_a_00344}
                            }
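
    A rough NumPy sketch of only the CCIPCA stage (the covariance-free incremental PCA that IncSFA uses for online whitening); the minor-components step on the derivative signal and the paper's amnesic learning-rate schedule are omitted, and all constants are illustrative guesses rather than the authors' settings.

        import numpy as np

        rng = np.random.default_rng(2)
        dim, k, eta = 10, 3, 0.005
        mixing = rng.standard_normal((dim, dim))         # defines the input covariance
        v = 0.1 * rng.standard_normal((k, dim))          # unnormalized component estimates
        mean = np.zeros(dim)

        for _ in range(50000):
            x = mixing @ rng.standard_normal(dim)        # one sample of the input stream
            mean += eta * (x - mean)                     # incremental mean estimate
            u = x - mean
            for i in range(k):                           # CCIPCA with residual deflation
                direction = v[i] / (np.linalg.norm(v[i]) + 1e-12)
                v[i] = (1 - eta) * v[i] + eta * (u @ direction) * u
                direction = v[i] / (np.linalg.norm(v[i]) + 1e-12)
                u = u - (u @ direction) * direction      # deflate before the next component

        est = np.sort(np.linalg.norm(v, axis=1))[::-1]   # ||v_i|| estimates the eigenvalue
        ref = np.sort(np.linalg.eigvalsh(mixing @ mixing.T))[::-1][:k]
        print("incremental estimates:", np.round(est, 1))
        print("true top eigenvalues: ", np.round(ref, 1))
        # a whitened coordinate would be z_i = v[i] @ (x - mean) / np.linalg.norm(v[i])**1.5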
    			
    			
    					
    Kompella, V.R.; Luciw, M. & Schmidhuber, J. 2012 Hierarchical incremental slow feature analysis In Workshop on Deep Hierarchies in Vision .
     
    inproceedings
    BibTeX:
    			
    			
                            @inproceedings{KompellaLuciwEtAl-2012b,
                              author       = {Kompella, Varun Raj and Luciw, Matthew and Schmidhuber, J{\"u}rgen},
                              title        = {Hierarchical incremental slow feature analysis},
                              booktitle    = {In Workshop on Deep Hierarchies in Vision},
                              year         = {2012}
                            }
    			
    			
    					
    Kompella, V.R.; Luciw, M.; Stollenga, M.; Pape, L. & Schmidhuber, J. 2012 Autonomous learning of abstractions using curiosity-driven modular incremental slow feature analysis 2012 IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL) , 1-8.
    Publ. Institute of Electrical and Electronics Engineers (IEEE).
     
    inproceedings
    Abstract: To autonomously learn behaviors in complex environments, vision-based agents need to develop useful sensory abstractions from high-dimensional video. We propose a modular, curiosity-driven learning system that autonomously learns multiple abstract representations. The policy to build the library of abstractions is adapted through reinforcement learning, and the corresponding abstractions are learned through incremental slow-feature analysis (IncSFA). IncSFA learns each abstraction based on how the inputs change over time, directly from unprocessed visual data. Modularity is induced via a gating system, which also prevents abstraction duplication. The system is driven by a curiosity signal that is based on the learnability of the inputs by the current adaptive module. After the learning completes, the result is multiple slow-feature modules serving as distinct behavior-specific abstractions. Experiments with a simulated iCub humanoid robot show how the proposed method effectively learns a set of abstractions from raw un-preprocessed video, to our knowledge the first curious learning agent to demonstrate this ability.
    BibTeX:
    			
    			
                            @inproceedings{KompellaLuciwEtAl-2012a,
                              author       = {Kompella, Varun Raj and Luciw, Matthew and Stollenga, Marijn and Pape, Leo and Schmidhuber, J{\"u}rgen},
                              title        = {Autonomous learning of abstractions using curiosity-driven modular incremental slow feature analysis},
                              booktitle    = {2012 {IEEE} International Conference on Development and Learning and Epigenetic Robotics ({ICDL})},
                              publisher    = {Institute of Electrical and Electronics Engineers ({IEEE})},
                              year         = {2012},
                              pages        = {1--8},
    			  url          = {http://ieeexplore.ieee.org/document/6400829/},
                              url2         = {http://www.idsia.ch/~luciw/papers/icdl12-kompella.pdf},
                              doi          = {http://doi.org/10.1109/devlrn.2012.6400829}
                            }
    			
    			
    					
    Kompella, V.R.; Luciw, M.; Stollenga, M.F. & Schmidhuber, J. 2016 Optimal curiosity-driven modular incremental slow feature analysis Neural Computation , 28(8), 1599-1662.
    Publ. MITP.
     
    article
    Abstract: Consider a self-motivated artificial agent who is exploring a complex environment. Part of the complexity is due to the raw high-dimensional sensory input streams, which the agent needs to make sense of. Such inputs can be compactly encoded through a variety of means; one of these is slow feature analysis (SFA). Slow features encode spatiotemporal regularities, which are information-rich explanatory factors (latent variables) underlying the high-dimensional input streams. In our previous work, we have shown how slow features can be learned incrementally, while the agent explores its world, and modularly, such that different sets of features are learned for different parts of the environment (since a single set of regularities does not explain everything). In what order should the agent explore the different parts of the environment? Following Schmidhuber’s theory of artificial curiosity, the agent should always concentrate on the area where it can learn the easiest-to-learn set of features that it has not already learned. We formalize this learning problem and theoretically show that, using our model, called curiosity-driven modular incremental slow feature analysis, the agent on average will learn slow feature representations in order of increasing learning difficulty, under certain mild conditions. We provide experimental results to support the theoretical analysis.
    BibTeX:
    			
    			
                            @article{KompellaLuciwEtAl-2016,
                              author       = {Kompella, Varun Raj and Luciw, Matthew and Stollenga, Marijn Frederik and Schmidhuber, Juergen},
                              title        = {Optimal curiosity-driven modular incremental slow feature analysis},
                              journal      = {Neural Computation},
                              publisher    = {MITP},
                              year         = {2016},
                              volume       = {28},
                              number       = {8},
                              pages        = {1599--1662},
    			  url          = {http://www.mitpressjournals.org/doi/10.1162/NECO_a_00855},
                              doi          = {http://doi.org/10.1162/neco_a_00855}
                            }
    			
    			
    					
    Kompella, V.R.; Pape, L.; Masci, J.; Frank, M. & Schmidhuber, J. 2011 AutoIncSFA and vision-based developmental learning for humanoid robots 2011 11th IEEE-RAS International Conference on Humanoid Robots , 622-629.
    Publ. Institute of Electrical and Electronics Engineers (IEEE).
     
    inproceedings
Abstract: Humanoids have to deal with novel, unsupervised high-dimensional visual input streams. Our new method AutoIncSFA learns to compactly represent such complex sensory input sequences by very few meaningful features corresponding to high-level spatio-temporal abstractions, such as: a person is approaching me, or: an object was toppled. We explain the advantages of AutoIncSFA over previous related methods, and show that the compact codes greatly facilitate the task of a reinforcement learner driving the humanoid to actively explore its world like a playing baby, maximizing intrinsic curiosity reward signals for reaching states corresponding to previously unpredicted AutoIncSFA features.
    BibTeX:
    			
    			
                            @inproceedings{KompellaPapeEtAl-2011,
                              author       = {Kompella, Varun Raj and Pape, Leo and Masci, Jonathan and Frank, Mikhail and Schmidhuber, J{\"u}rgen},
                              title        = {{AutoIncSFA} and vision-based developmental learning for humanoid robots},
                              booktitle    = {2011 11\textsuperscript{th} {IEEE}-{RAS} International Conference on Humanoid Robots},
                              publisher    = {Institute of Electrical and Electronics Engineers ({IEEE})},
                              year         = {2011},
                              pages        = {622--629},
    			  url          = {http://ieeexplore.ieee.org/document/6100865/},
                              url2         = {https://pdfs.semanticscholar.org/380f/32bb5dbce7d93b607533d5870d41b25e95b4.pdf},
                              doi          = {http://doi.org/10.1109/humanoids.2011.6100865}
                            }
    			
    			
    					
    Kompella, V.R.; Stollenga, M.; Luciw, M. & Schmidhuber, J. 2017 Continual curiosity-driven skill acquisition from high-dimensional video inputs for humanoid robots Artificial Intelligence , 247, -.
    Publ. Elsevier.
     
    article
Abstract: In the absence of external guidance, how can a robot learn to map the many raw pixels of high-dimensional visual inputs to useful action sequences? We propose here Continual Curiosity driven Skill Acquisition (CCSA). CCSA makes robots intrinsically motivated to acquire, store and reuse skills. Previous curiosity-based agents acquired skills by associating intrinsic rewards with world model improvements, and used reinforcement learning to learn how to get these intrinsic rewards. CCSA also does this, but unlike previous implementations, the world model is a set of compact low-dimensional representations of the streams of high-dimensional visual information, which are learned through incremental slow feature analysis. These representations augment the robot's state space with new information about the environment. We show how this information can have a higher-level (compared to pixels) and useful interpretation, for example, if the robot has grasped a cup in its field of view or not. After learning a representation, large intrinsic rewards are given to the robot for performing actions that greatly change the feature output, which has the tendency otherwise to change slowly in time. We show empirically what these actions are (e.g., grasping the cup) and how they can be useful as skills. An acquired skill includes both the learned actions and the learned slow feature representation. Skills are stored and reused to generate new observations, enabling continual acquisition of complex skills. We present results of experiments with an iCub humanoid robot that uses CCSA to incrementally acquire skills to topple, grasp and pick-place a cup, driven by its intrinsic motivation from raw pixel vision.
    BibTeX:
    			
    			
                            @article{KompellaStollengaEtAl-2017,
                              author       = {Varun Raj Kompella and Marijn Stollenga and Matthew Luciw and Juergen Schmidhuber},
                              title        = {Continual curiosity-driven skill acquisition from high-dimensional video inputs for humanoid robots},
                              journal      = {Artificial Intelligence},
                              publisher    = {Elsevier},
                              year         = {2017},
                              volume       = {247},
                              pages        = {-},
    			  url          = {http://www.sciencedirect.com/science/article/pii/S000437021500017X},
                              doi          = {http://doi.org/10.1016/j.artint.2015.02.001}
                            }
    			
    			
    					
    Kompella, V.R.; Stollenga, M.F.; Luciw, M.D. & Schmidhuber, J. 2014 Explore to see, learn to perceive, get the actions for free: SKILLABILITY 2014 International Joint Conference on Neural Networks (IJCNN) , 2705-2712.
     
    inproceedings
    Abstract: How can a humanoid robot autonomously learn and refine multiple sensorimotor skills as a byproduct of curiosity driven exploration, upon its high-dimensional unprocessed visual input? We present SKILLABILITY, which makes this possible. It combines the recently introduced Curiosity Driven Modular Incremental Slow Feature Analysis (Curious Dr. MISFA) with the well-known options framework. Curious Dr. MISFA's objective is to acquire abstractions as quickly as possible. These abstractions map high-dimensional pixel-level vision to a low-dimensional manifold. We find that each learnable abstraction augments the robot's state space (a set of poses) with new information about the environment, for example, when the robot is grasping a cup. The abstraction is a function on an image, called a slow feature, which can effectively discretize a high-dimensional visual sequence. For example, it maps the sequence of the robot watching its arm as it moves around, grasping randomly, then grasping a cup, and moving around some more while holding the cup, into a step function having two outputs: when the cup is or is not currently grasped. The new state space includes this grasped/not grasped information. Each abstraction is coupled with an option. The reward function for the option's policy (learned through Least Squares Policy Iteration) is high for transitions that produce a large change in the step-functionlike slow features. This corresponds to finding bottleneck states, which are known good subgoals for hierarchical reinforcement learning - in the example, the subgoal corresponds to grasping the cup. The final skill includes both the learned policy and the learned abstraction. SKILLABILITY makes our iCub the first humanoid robot to learn complex skills such as to topple or grasp an object, from raw high-dimensional video input, driven purely by its intrinsic motivations.
    BibTeX:
    			
    			
                            @inproceedings{KompellaStollengaEtAl-2014,
                              author       = {Kompella, Varun R and Stollenga, Marijn F and Luciw, Matthew D and Schmidhuber, Juergen},
                              title        = {Explore to see, learn to perceive, get the actions for free: {SKILLABILITY}},
                              booktitle    = {2014 International Joint Conference on Neural Networks (IJCNN)},
                              year         = {2014},
                              pages        = {2705--2712},
    			  url          = {http://ieeexplore.ieee.org/document/6889784/?arnumber=6889784},
                              url2         = {http://people.idsia.ch/~kompella/Papers/IJCNN_14.pdf},
                              doi          = {http://doi.org/10.1109/ijcnn.2014.6889784}
                            }
    			
    			
    					
    Kompella, V.R. & Wiskott, L. 2017 Intrinsically Motivated Acquisition of Modular Slow Features for Humanoids in Continuous and Non-Stationary Environments CoRR , abs/1701.04663.
     
    article
    BibTeX:
    			
    			
                            @article{KompellaWiskott-2017,
                              author       = {Varun Raj Kompella and Laurenz Wiskott},
                              title        = {Intrinsically Motivated Acquisition of Modular Slow Features for Humanoids in Continuous and Non-Stationary Environments},
                              journal      = {CoRR},
                              year         = {2017},
                              volume       = {abs/1701.04663}
                            }
    			
    			
    					
    Konen, W. 2009 On the numeric stability of the SFA implementation sfa-tk e-print arXiv:0912.1064 .
     
    misc
Abstract: Slow feature analysis (SFA) is a method for extracting slowly varying features from a quickly varying multidimensional signal. An open source Matlab-implementation sfa-tk makes SFA easily useable. We show here that under certain circumstances, namely when the covariance matrix of the nonlinearly expanded data does not have full rank, this implementation runs into numerical instabilities. We propose a modified algorithm based on singular value decomposition (SVD) which is free of those instabilities even in the case where the rank of the matrix is only less than 10% of its size. Furthermore we show that an alternative way of handling the numerical problems is to inject a small amount of noise into the multidimensional input signal which can restore a rank-deficient covariance matrix to full rank, however at the price of modifying the original data and the need for noise parameter tuning.
    BibTeX:
    			
    			
                            @misc{Konen-2009,
                              author       = {Konen, Wolfgang},
                              title        = {On the numeric stability of the {SFA} implementation sfa-tk},
                              year         = {2009},
                              howpublished = {e-print arXiv:0912.1064},
    			  url          = {https://arxiv.org/abs/0912.1064},
                              url2         = {http://www.gm.fh-koeln.de/~konen/Publikationen/arXiv2009-SVD_SFA.pdf}
                            }
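
    A small NumPy sketch (not the sfa-tk code itself) of the two remedies discussed above for a rank-deficient covariance of the expanded data: SVD-based whitening that simply drops the null space, versus injecting a little noise to restore full rank; the tolerance and noise level are arbitrary choices for illustration.

        import numpy as np

        def svd_whiten(x, rel_tol=1e-8):
            """Whiten x, discarding directions with (numerically) zero variance."""
            xc = x - x.mean(axis=0)
            u, s, _ = np.linalg.svd(xc, full_matrices=False)
            keep = s > rel_tol * s[0]
            return u[:, keep] * np.sqrt(len(x))          # decorrelated, unit variance

        def noise_whiten(x, noise_std=1e-4, rng=np.random.default_rng(3)):
            """Alternative: add small noise so the covariance becomes full rank."""
            xc = x + noise_std * rng.standard_normal(x.shape)
            xc = xc - xc.mean(axis=0)
            d, e = np.linalg.eigh(np.cov(xc.T))
            return xc @ (e / np.sqrt(d))

        rng = np.random.default_rng(3)
        x = rng.standard_normal((1000, 4))
        x = np.c_[x, x[:, 0]]                            # last column duplicates the first
        print(svd_whiten(x).shape)                       # (1000, 4): null space dropped
        print(noise_whiten(x).shape)                     # (1000, 5): full rank, data perturbed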
    			
    			
    					
    Konen, W. 2011 Self-configuration from a machine-learning perspective e-print arXiv:1105.1951 .
     
    misc
Abstract: The goal of machine learning is to provide solutions which are trained by data or by experience coming from the environment. Many training algorithms exist and some brilliant successes were achieved. But even in structured environments for machine learning (e.g. data mining or board games), most applications beyond the level of toy problems need careful hand-tuning or human ingenuity (i.e. detection of interesting patterns) or both. We discuss several aspects how self-configuration can help to alleviate these problems. One aspect is the self-configuration by tuning of algorithms, where recent advances have been made in the area of SPO (Sequential Parameter Optimization). Another aspect is the self-configuration by pattern detection or feature construction. Forming multiple features (e.g. random boolean functions) and using algorithms (e.g. random forests) which easily digest many features can largely increase learning speed. However, a full-fledged theory of feature construction is not yet available and forms a current barrier in machine learning. We discuss several ideas for systematic inclusion of feature construction. This may lead to partly self-configuring machine learning solutions which show robustness, flexibility, and fast learning in potentially changing environments.
    BibTeX:
    			
    			
                            @misc{Konen-2011,
                              author       = {Konen, Wolfgang},
                              title        = {Self-configuration from a machine-learning perspective},
                              year         = {2011},
                              howpublished = {e-print arXiv:1105.1951},
    			  url          = {https://arxiv.org/abs/1105.1951}
                            }
    			
    			
    					
Konen, W. 2011 Der SFA-Algorithmus für Klassifikation CIOP Report, Cologne University of Applied Sciences (08/11).
     
    techreport
Abstract: This technical report summarizes the SFA algorithm for classification as it is implemented in the MATLAB package sfa-tk from version V2.6 onwards (current version V2.8).
    BibTeX:
    			
    			
                            @techreport{Konen-2011a,
                              author       = {Konen, Wolfgang},
                              title        = {Der {SFA-A}lgorithmus f{\"u}r {K}lassifikation},
                              school       = {Cologne University of Applied Sciences},
                              year         = {2011},
                              number       = {08/11},
                              url2         = {https://www.researchgate.net/profile/Wolfgang_Konen/publication/235709834_Der_SFA-Algorithmus_fur_Klassifikation/links/55f740ae08aeafc8abfd52fb.pdf}
                            }
    			
    			
    					
    Konen, W. 2012 SFA classification with few training data: improvements with parametric bootstrap .
     
    misc
Abstract: Slow Feature Analysis (SFA) is a versatile algorithm to find stable features or slow-varying signals in multidimensional data. It is capable of finding highly relevant features for classification tasks. This paper deals with the marginal training data problem which appears in SFA classification when the number of training records is too low. We derive a quantitative condition between training set size and SFA configuration parameters which allows to predict whether the marginal training data problem will occur. We analyze the reasons for the problem and propose several strategies to avoid it. Among these strategies, the parametric bootstrap approach, which augments the training data with virtual training patterns drawn from an estimated distribution, successfully solves the marginal training data problem. We report first evidence that parametric bootstrap is also beneficial for non-marginal SFA and for other machine learning algorithms like Random Forests.
    BibTeX:
    			
    			
                            @misc{Konen-2012,
                              author       = {Konen, Wolfgang},
                              title        = {{SFA} classification with few training data: improvements with parametric bootstrap},
                              year         = {2012},
                              url2         = {https://www.researchgate.net/profile/Wolfgang_Konen/publication/235709835_SFA_classification_with_few_training_data_Improvements_with_parametric_bootstrap/links/55f7416008ae07629dc357ff.pdf}
                            }
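
    A minimal sketch of the parametric-bootstrap idea described above (my own simplification, not Konen's implementation): fit a Gaussian per class to the few available training patterns and draw additional virtual patterns from it before training the classifier.

        import numpy as np

        def parametric_bootstrap(x, y, n_virtual, rng=np.random.default_rng(4)):
            """Augment (x, y) with n_virtual Gaussian samples per class."""
            xs, ys = [x], [y]
            for c in np.unique(y):
                xc = x[y == c]
                cov = np.cov(xc.T) + 1e-6 * np.eye(x.shape[1])   # regularize small classes
                xs.append(rng.multivariate_normal(xc.mean(axis=0), cov, size=n_virtual))
                ys.append(np.full(n_virtual, c))
            return np.vstack(xs), np.concatenate(ys)

        rng = np.random.default_rng(4)
        x = np.r_[rng.standard_normal((15, 6)), rng.standard_normal((15, 6)) + 2.0]
        y = np.r_[np.zeros(15), np.ones(15)]
        x_aug, y_aug = parametric_bootstrap(x, y, n_virtual=200)
        print(x.shape, "->", x_aug.shape)                        # (30, 6) -> (430, 6)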
    			
    			
    					
    Konen, W. & Koch, P. 2010 How slow is slow? SFA detects signals that are slower than the driving force Proc. 4th Int. Conf. on Bioinspired Optimization Methods and their Applications, BIOMA, Ljubljana, Slovenia .
Eds. Filipic, B. & Silc, J.
     
    inproceedings
    Abstract: Slow feature analysis (SFA) is a bioinspired method for extracting slowly varying driving forces from quickly varying nonstationary time series. We show here that it is possible for SFA to detect a component which is even slower than the driving force itself (e.g. the envelope of a modulated sine wave). It depends on circumstances like the embedding dimension, the time series predictability, or the base frequency, whether the driving force itself or a slower subcomponent is detected. Interestingly, we observe a swift phase transition from one regime to another and it is the objective of this work to quantify the influence of various parameters on this phase transition. We conclude that what is perceived as slow by SFA varies and that a more or less fast switching from one regime to another occurs, perhaps showing some similarity to human perception.
    BibTeX:
    			
    			
                            @inproceedings{KonenKoch-2010,
                              author       = {Wolfgang Konen and Patrick Koch},
                              title        = {How slow is slow? {SFA} detects signals that are slower than the driving force},
                              booktitle    = {Proc. 4\textsuperscript{th} Int. Conf. on Bioinspired Optimization Methods and their Applications, BIOMA, Ljubljana, Slovenia},
                              year         = {2010},
                              url2         = {http://www.gm.fh-koeln.de/~konen/Publikationen/BIOMA10-howslow.pdf}
                            }
    			
    			
    					
    Konen, W. & Koch, P. 2009 How slow is slow? SFA detects signals that are slower than the driving force e-print arXiv:0911.4397 .
     
    misc
Abstract: Slow feature analysis (SFA) is a method for extracting slowly varying driving forces from quickly varying nonstationary time series. We show here that it is possible for SFA to detect a component which is even slower than the driving force itself (e.g. the envelope of a modulated sine wave). It is shown that it depends on circumstances like the embedding dimension, the time series predictability, or the base frequency, whether the driving force itself or a slower subcomponent is detected. We observe a phase transition from one regime to the other and it is the purpose of this work to quantify the influence of various parameters on this phase transition. We conclude that what is perceived as slow by SFA varies and that a more or less fast switching from one regime to the other occurs, perhaps showing some similarity to human perception.
    BibTeX:
    			
    			
                            @misc{KonenKoch-2009,
                              author       = {Konen, Wolfgang and Koch, Patrick},
                              title        = {How slow is slow? {SFA} detects signals that are slower than the driving force},
                              year         = {2009},
                              howpublished = {e-print arXiv:0911.4397},
    			  url          = {https://arxiv.org/abs/0911.4397v1},
                              url2         = {http://www.gm.fh-koeln.de/~konen/Publikationen/arXiv2009-slow.pdf}
                            }
    			
    			
    					
    Konen, W. & Koch, P. 2011 The slowness principle: SFA can detect different slow components in nonstationary time series International Journal of Innovative Computing and Applications (IJICA) , 3(1), 3-10.
    Publ. Inderscience Publishers.
     
    article
    Abstract: Slow feature analysis (SFA) is a bioinspired method for extracting slowly varying driving forces from quickly varying non-stationary time series. We show here that it is possible for SFA to detect a component which is even slower than the driving force itself (e.g., the envelope of a modulated sine wave). It depends on circumstances like the embedding dimension, the time series predictability, or the base frequency, whether the driving force itself or a slower subcomponent is detected. Interestingly, we observe a swift phase transition from one regime to another and it is the objective of this work to quantify the influence of various parameters on this phase transition. We conclude that what is perceived as slow by SFA varies and that a more or less fast switching from one regime to another occurs, perhaps showing some similarity to human perception.
    BibTeX:
    			
    			
                            @article{KonenKoch-2011,
                              author       = {Wolfgang Konen and Patrick Koch},
                              title        = {The slowness principle: {SFA} can detect different slow components in nonstationary time series},
                              journal      = {International Journal of Innovative Computing and Applications (IJICA)},
                              publisher    = {Inderscience Publishers},
                              year         = {2011},
                              volume       = {3},
                              number       = {1},
                              pages        = {3--10},
    			  url          = {http://www.inderscience.com/offer.php?id=37946},
                              url2         = {http://www.gm.fh-koeln.de/~konen/Publikationen/IJICA2010-howslow.pdf},
                              doi          = {http://doi.org/10.1504/ijica.2011.037946}
                            }
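
    A quadratic-SFA sketch in the spirit of the experiments above (embedding dimension, frequencies, and thresholds are arbitrary choices, not the paper's settings); it measures how much of the slow envelope of an amplitude-modulated sine wave is captured by the slowest features extracted from a time-delay embedding of the raw signal.

        import numpy as np

        t = np.arange(0, 400, 0.05)
        envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 0.01 * t)       # slow driving force
        signal = envelope * np.sin(2 * np.pi * 1.0 * t)           # fast carrier

        m = 10                                                    # embedding dimension
        x = np.stack([signal[i:len(signal) - m + i] for i in range(m)], axis=1)

        def quadratic_expand(x):
            d = x.shape[1]
            quads = [x[:, i] * x[:, j] for i in range(d) for j in range(i, d)]
            return np.c_[x, np.stack(quads, axis=1)]

        def sfa(x, k=3):                                          # batch linear SFA
            xc = x - x.mean(axis=0)
            d, e = np.linalg.eigh(np.cov(xc.T))
            keep = d > 1e-5 * d[-1]                               # drop near-singular directions
            z = (xc @ e[:, keep]) / np.sqrt(d[keep])              # whiten
            dd, ee = np.linalg.eigh(np.cov(np.diff(z, axis=0).T)) # minimize temporal variation
            return z @ ee[:, :k]                                  # k slowest outputs

        y = sfa(quadratic_expand(x))
        a = np.c_[np.ones(len(y)), y]
        coef, *_ = np.linalg.lstsq(a, envelope[:len(y)], rcond=None)
        r2 = 1 - np.var(envelope[:len(y)] - a @ coef) / np.var(envelope[:len(y)])
        print("envelope variance explained by the 3 slowest features:", round(r2, 2))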
    			
    			
    					
    Konen, W.; Zaefferer, M.; Koch, P. & Kumar, P.J. 2014 Package ‘rSFA’ .
     
    misc
    Abstract: No abstract.
    BibTeX:
    			
    			
                            @misc{KonenZaeffererEtAl-2014,
                              author       = {Konen, Wolfgang and Zaefferer, Martin and Koch, Patrick and Kumar, Prawyn Jebakumar},
                              title        = {Package {\textquoteleft}{rSFA}{\textquoteright}},
                              year         = {2014},
    			  url          = {http://www.et.bs.ehu.es/cran/web/packages/rSFA/rSFA.pdf}
                            }
    			
    			
    					
Kramer, O. 2010 Computational intelligence and sustainable energy: case studies and applications TR-10-010, International Computer Science Institute, Berkeley University .
     
    techreport
    Abstract: Sustainability is of great importance due to increasing demands and limited resources. Many problem classes in sustainable energy systems are data mining, optimization, and control tasks. In this work we demonstrate how techniques from computational intelligence can help in solving important tasks in sustainable energy systems. We will show how statistically sound wind models can be estimated with kernel smoothing methods. Radial basis functions will be employed for wind resource visualization. Support vector machines turn out to be successful in forecasting wind energy. Monitoring of high-dimensional wind time series is possible with a self-organizing map approach. Slow driving features in wind time series can be detected with slow feature analysis. Last, we will demonstrate how a learning classifier system evolves control rules for a virtual power plant with a simple demand side management model.
    BibTeX:
    			
    			
                            @techreport{Kramer-2010,
                              author       = {Kramer, Oliver},
                              title        = {Computational intelligence and sustainable energy: case studies and applications},
                              school       = {TR-10-010, International Computer Science Institute Berkley University},
                              year         = {2010},
    			  url          = {http://www.icsi.berkeley.edu/pubs/techreports/TR-10-010.pdf}
                            }
    			
    			
    					
    Kramer, O. & Hein, T. 2011 Monitoring of multivariate wind resources with self-organizing maps and slow feature analysis 2011 IEEE Symposium on Computational Intelligence Applications In Smart Grid (CIASG) , 1-8.
    Publ. Institute of Electrical and Electronics Engineers (IEEE).
     
    inproceedings
    Abstract: Wind power is an important part of a sustainable and smart energy grid. Wind energy production datasets from hundreds of wind farms and thousands of windmills are collected, and have to be analyzed and understood. As wind is a volatile energy source, state observation has an important part to play for grid management, fault analysis and planning strategies of grid operators. We demonstrate how two approaches from unsupervised neural computation help to understand high-dimensional wind resource time series. The first approach for visualization of multivariate sequences is based on self-organizing feature maps. The output sequence allows the monitoring of the overall system state with a low-dimensional linear visualization that reflects the topological characteristics of the original wind data. We demonstrate the visualization on real-world wind resource measurements. The second approach shows how to identify the slowest feature in a multivariate wind time series, also known as driving force, with the help of slow feature analysis. Experiments, parameter analyses, and first interpretations demonstrate the capabilities of the approaches.
    BibTeX:
    			
    			
                            @inproceedings{KramerHein-2011,
                              author       = {Oliver Kramer and Tobias Hein},
                              title        = {Monitoring of multivariate wind resources with self-organizing maps and slow feature analysis},
                              booktitle    = {2011 {IEEE} Symposium on Computational Intelligence Applications In Smart Grid ({CIASG})},
                              publisher    = {Institute of Electrical and Electronics Engineers ({IEEE})},
                              year         = {2011},
                              pages        = {1--8},
    			  url          = {http://ieeexplore.ieee.org/document/5953327/},
                              doi          = {http://doi.org/10.1109/ciasg.2011.5953327}
                            }
    			
    			
    					
    Kreitmann, P. 2010 Action recognition in video .
    Publ. Citeseer.
     
    misc
Abstract: Automatic action recognition in video has a broad array of applications, from surveillance to interactive video games. Classic algorithms usually use hand-crafted descriptors such as SIFT (see [5]) or HOG (see [3]) to compute feature vectors of videos, and have achieved promising results in the past (see [7]). More recently, Quoc Le and Will Zou at the Stanford AI lab have proved that ISA features obtained from unsupervised learning achieve higher performance, while being much faster to engineer than hand-crafted features (their work is not yet published). SFA features have achieved good results in object recognition as well as position and rotation extraction from artificial video signal (see [4]). In this work, we experiment using SFA features for action recognition.
    BibTeX:
    			
    			
                            @misc{Kreitmann-2010,
                              author       = {Kreitmann, Pierre},
                              title        = {Action recognition in video},
                              publisher    = {Citeseer},
                              year         = {2010},
    			  url          = {http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.375.1808&rep=rep1&type=pdf},
                              url2         = {https://pdfs.semanticscholar.org/2a50/f15e0b7c7170b9c4de67a36944a91b6bd20c.pdf}
                            }
    			
    			
    					
Kühnl, T. 2013 Road terrain detection for advanced driver assistance systems Bielefeld, Universität Bielefeld, Diss., 2013 .
     
    phdthesis
Abstract: In recent years, automotive manufacturers have equipped their vehicles with innovative Advanced Driver Assistance Systems (ADAS) to ease driving and avoid dangerous situations, such as unintended lane departures or collisions with other road users, like vehicles and pedestrians. To this end, ADAS at the cutting edge are equipped with cameras to sense the vehicle surrounding. An important source of information for future ADAS is the road course, i.e., the future driving path of the ego-vehicle and other vehicles. Therefore, this thesis focuses on the camera-based analysis of road scenes and the detection of important types of road terrain, such as road area and ego-lane, which are necessary to draw inference about the actual road course and potential space for evasion maneuvers. For this purpose, this thesis presents a generic concept for the visual and spatial analysis of the road environment. The core of the proposed method is a hierarchical feature extraction that combines local visual appearance with its spatial layout. In this sense, a novel vision-based approach for road terrain detection that goes beyond classical lane marking detection and image segmentation approaches is presented. Thus, the approach enhances the ability to cope with noise and appearance changes because the classification decision is not only based on local visual appearance but on a combination of visual and spatial aspects. This results in a higher robustness under various visual conditions due to different asphalt appearance, illumination changes, and shadows. The approach’s generic architecture internally represents certain visual properties, such as road area, road boundary, and lane marking information by means of a visuospatial representation. In contrast to many related approaches for road terrain detection, the proposed method does not employ an explicit road course model. Instead, the method learns classifying road terrain based on a combination of visual and spatial features by using machine learning. Especially the discrimination between ego-lane and other parts of the road area is very challenging, because a distinction based on local appearance is impossible. Extensive evaluations in urban scenarios show that the proposed system functions in spatially diverse road scenes and reliably detects ego-lane and road area even in challenging situations. Those situations may comprise bad-quality or missing lane markings, curbstones delimiting the road, and occlusion of lane delimiters, e.g., caused by parked cars. Furthermore, the generic concept does not only have advantages in road terrain detection, but also in many other applications, benefitting from visual and spatial scene analysis. In order to prove this, the method is applied for pure vision-based ego-vehicle localization on the lane level. In this regard, a reliable classification allowing an inference about how many lanes exist adjacent to the ego-lane is presented on a large highway dataset. In summary, this thesis presents a generic concept for visual and spatial analysis of the road environment and is therefore a substantial contribution to the development of future ADAS. Towards this end, the general approach is geared to problem-solving for complex situations that can not be handled by state-of-the-art methods, which has been shown for inner-city road terrain detection and ego-vehicle localization.
    BibTeX:
    			
    			
                            @phdthesis{Kuehnl-2013,
                              author       = {K{\"u}hnl, Tobias},
                              title        = {Road terrain detection for advanced driver assistance systems},
                              school       = {Bielefeld, Universit{\"a}t Bielefeld, Diss., 2013},
                              year         = {2013},
                              url2         = {https://pub.uni-bielefeld.de/download/2633277/2633278}
                            }
    			
    			
    					
    Kühnl, T.; Kummert, F. & Fritsch, J. 2011 Monocular road segmentation using slow feature analysis 2011 IEEE Intelligent Vehicles Symposium (IV) , 800-806.
    Publ. Institute of Electrical and Electronics Engineers (IEEE).
     
    inproceedings
    Abstract: In this paper a novel approach for road detection with a monocular camera is introduced. We propose a two step approach, combining a patch-based segmentation with additional boundary detection. We use Slow Feature Analysis (SFA) which leads to improved appearance descriptors for road and non-road parts on patch level. From the slow features a low order feature set is formed which is used together with color and Walsh Hadamard texture features to train a patch-based GentleBoost classifier. This allows extracting areas from the image that correspond to the road with a certain confidence. Typically the border regions between road and non-road have the highest classification error rates, because the appearance is hard to distinguish on the patch level. Therefore we propose a post-processing step with a specialized classifier applied to the boundary region of the image to improve the segmentation results. In order to evaluate the quality of road segmentation we propose an application-based quality measurement applying an inverse perspective mapping on the image to obtain a Birds Eye View (BEV). The advantage of this approach is that the important distant parts and boundaries of the road in the real world, which are only a low fraction in the perspective image, can be assessed in this metric measure significantly better than on the pixel level. In addition, we estimate the driving corridor width and boundary error, because for Advanced Driver Assistant Systems (ADAS) metric information is needed. For all evaluations in different road and weather conditions, our system shows an improved performance of the two step approach compared to the basic segmentation.
    BibTeX:
    			
    			
                            @inproceedings{KuhnlKummertEtAl-2011,
                              author       = {Tobias K{\"u}hnl and Franz Kummert and Jannik Fritsch},
                              title        = {Monocular road segmentation using slow feature analysis},
                              booktitle    = {2011 {IEEE} Intelligent Vehicles Symposium ({IV})},
                              publisher    = {Institute of Electrical and Electronics Engineers ({IEEE})},
                              year         = {2011},
                              pages        = {800--806},
    			  url          = {http://ieeexplore.ieee.org/document/5940416/},
                              doi          = {http://doi.org/10.1109/ivs.2011.5940416}
                            }
    			
    			
    					
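    The Kühnl et al. (2011) entry above uses Slow Feature Analysis to obtain low-order appearance descriptors that are then fed, together with color and texture features, to a boosted classifier. The paper's own code is not part of this list; the following is only a minimal sketch of linear SFA in NumPy (centering, whitening, then an eigendecomposition of the time-derivative covariance), with toy data and variable names that are editorial assumptions rather than the authors' implementation.

        import numpy as np

        def linear_sfa(X, n_slow=2):
            """Minimal linear SFA. X has shape (T, D) with rows ordered in time.
            Returns the n_slow slowest output signals and the learned projection."""
            X = X - X.mean(axis=0)                       # 1. center
            val, vec = np.linalg.eigh(np.cov(X, rowvar=False))
            keep = val > 1e-8 * val.max()                # drop near-degenerate directions
            white = vec[:, keep] / np.sqrt(val[keep])    # 2. PCA whitening transform
            Z = X @ white
            dcov = np.cov(np.diff(Z, axis=0), rowvar=False)   # 3. derivative covariance
            dval, dvec = np.linalg.eigh(dcov)            # eigenvalues ascend: slowest first
            P = white @ dvec[:, :n_slow]                 # 4. projection to slow features
            return X @ P, P

        # Toy usage: a slow sine next to white noise comes out as the slowest feature.
        t = np.linspace(0, 2 * np.pi, 1000)
        X = np.column_stack([np.sin(t) + 0.1 * np.random.randn(1000),
                             np.random.randn(1000)])
        slow, projection = linear_sfa(X, n_slow=1)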
    Kühnl, T.; Kummert, F. & Fritsch, J. 2013 Visual ego-vehicle lane assignment using spatial ray features Intelligent Vehicles Symposium (IV), 2013 IEEE , 1101-1106.
     
    inproceedings
    Abstract: Assigning the ego-vehicle to a lane is not only beneficial for navigation but will be an essential element in future Advanced Driver Assistance Systems. This paper describes an approach for ego-lane index estimation using only a monocular camera and no additional sensing equipment like, e.g., the typically employed GPS and Inertial Measurement Unit. Key aspect of the approach are SPatial RAY (SPRAY) features which represent the spatial layout of the road in the visual scene. The proposed method perceives a variety of local visual properties of the scene by means of base classifiers operating on patches extracted from camera images. The spatial arrangement of these local visual properties are captured using SPRAY features. With a boosting classifier trained on these features the ego-lane index is obtained. The system is evaluated on low traffic density and complementary to an object-based approach suitable for heavy traffic. In the conducted experiments, the proposed approach reaches recognition rates of 93% to 97% on individual highway images without applying any kind of temporal filtering.
    BibTeX:
    			
    			
                            @inproceedings{KuehnlKummertEtAl-2013,
                              author       = {K{\"{u}}hnl, Tobias and Kummert, Franz and Fritsch, Jannik},
                              title        = {Visual ego-vehicle lane assignment using spatial ray features},
                              booktitle    = {Intelligent Vehicles Symposium (IV), 2013 IEEE},
                              year         = {2013},
                              pages        = {1101--1106},
    			  url          = {http://ieeexplore.ieee.org/document/6629613/},
                              url2         = {https://www.researchgate.net/profile/Jannik_Fritsch/publication/239949497_Visual_Ego-Vehicle_Lane_Assignment_using_Spatial_Ray_Features/links/00b7d51c401de9943a000000.pdf},
                              doi          = {http://doi.org/10.1109/ivs.2013.6629613}
                            }
    			
    			
    					
    Lefakis, L. & Fleuret, F. 2014 Dynamic programming boosting for discriminative macro-action discovery. ICML , 1548-1556.
     
    inproceedings
    BibTeX:
    			
    			
                            @inproceedings{LefakisFleuret-2014,
                              author       = {Lefakis, Leonidas and Fleuret, Fran{\c{c}}ois},
                              title        = {Dynamic programming boosting for discriminative macro-action discovery.},
                              booktitle    = {ICML},
                              year         = {2014},
                              pages        = {1548--1556},
                              url2         = {https://pdfs.semanticscholar.org/18d6/187c6222336c2c0b1e23793f72a00f9700a5.pdf}
                            }
    			
    			
    					
    Legenstein, R.; Wilbert, N. & Wiskott, L. 2010 Reinforcement learning on slow features of high-dimensional input streams. PLoS Comput Biol , 6(8), e1000894.
    Publ. Public Library of Science.
     
    article
    Abstract: Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies have identified learning mechanisms that are based on reward and punishment such that animals tend to avoid actions that lead to punishment whereas rewarded actions are reinforced. However, most algorithms for reward-based learning are only applicable if the dimensionality of the state-space is sufficiently small or its structure is sufficiently simple. Therefore, the question arises how the problem of learning on high-dimensional data is solved in the brain. In this article, we propose a biologically plausible generic two-stage learning system that can directly be applied to raw high-dimensional input streams. The system is composed of a hierarchical slow feature analysis (SFA) network for preprocessing and a simple neural network on top that is trained based on rewards. We demonstrate by computer simulations that this generic architecture is able to learn quite demanding reinforcement learning tasks on high-dimensional visual input streams in a time that is comparable to the time needed when an explicit highly informative low-dimensional state-space representation is given instead of the high-dimensional visual input. The learning speed of the proposed architecture in a task similar to the Morris water maze task is comparable to that found in experimental studies with rats. This study thus supports the hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning.
    BibTeX:
    			
    			
                            @article{LegensteinWilbertEtAl-2010,
                              author       = {Robert Legenstein and Niko Wilbert and Laurenz Wiskott},
                              title        = {Reinforcement learning on slow features of high-dimensional input streams.},
                              journal      = {PLoS Comput Biol},
                              publisher    = {Public Library of Science},
                              year         = {2010},
                              volume       = {6},
                              number       = {8},
                              pages        = {e1000894},
    			  url          = {http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000894},
                              url2         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/LegensteinWilbertEtAl-2010-PLoSCompBiol.pdf},
                              doi          = {http://doi.org/10.1371/journal.pcbi.1000894}
                            }
    			
    			
    					
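    Legenstein et al. (above) stack a reward-trained network on top of a hierarchical SFA preprocessor. Purely to illustrate the flavour of that second stage, and assuming slow features have already been extracted for every time step, a linear TD(0) value estimator on those features could look like the sketch below; it is not the architecture or learning rule used in the paper.

        import numpy as np

        def td0_value_weights(features, rewards, alpha=0.05, gamma=0.95):
            """Linear TD(0): learn w so that features[t] @ w approximates the value of state t.
            features: (T, K) slow-feature activations (assumed precomputed);
            rewards:  (T,) with rewards[t] received on the transition t -> t+1."""
            w = np.zeros(features.shape[1])
            for t in range(len(features) - 1):
                td_error = rewards[t] + gamma * (features[t + 1] @ w) - features[t] @ w
                w += alpha * td_error * features[t]
            return w

        # Toy usage with placeholder data: a sparse reward near the end of one episode.
        feats = np.random.randn(500, 8)
        rews = np.zeros(500)
        rews[-2] = 1.0
        w = td0_value_weights(feats, rews)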
    Li, D.; Zhang, Z. & Tan, T. 2017 Large-Scale Slow Feature Analysis Using Spark for Visual Object Recognition CCF Chinese Conference on Computer Vision , 132-142.
     
    inproceedings
    BibTeX:
    			
    			
                            @inproceedings{LiZhangEtAl-2017,
                              author       = {Li, Da and Zhang, Zhang and Tan, Tieniu},
                              title        = {Large-Scale Slow Feature Analysis Using Spark for Visual Object Recognition},
                              booktitle    = {CCF Chinese Conference on Computer Vision},
                              year         = {2017},
                              pages        = {132--142},
                              doi          = {http://doi.org/10.1007/978-981-10-7305-2_12}
                            }
    			
    			
    					
    Lies, J.-P.; Häfner, R.M. & Bethge, M. 2014 Slowness and sparseness have diverging effects on complex cell learning PLoS Comput Biol , 10(3), e1003468.
    Publ. Public Library of Science.
     
    article
    Abstract: Following earlier studies which showed that a sparse coding principle may explain the receptive field properties of complex cells in primary visual cortex, it has been concluded that the same properties may be equally derived from a slowness principle. In contrast to this claim, we here show that slowness and sparsity drive the representations towards substantially different receptive field properties. To do so, we present complete sets of basis functions learned with slow subspace analysis (SSA) in case of natural movies as well as translations, rotations, and scalings of natural images. SSA directly parallels independent subspace analysis (ISA) with the only difference that SSA maximizes slowness instead of sparsity. We find a large discrepancy between the filter shapes learned with SSA and ISA. We argue that SSA can be understood as a generalization of the Fourier transform where the power spectrum corresponds to the maximally slow subspace energies in SSA. Finally, we investigate the trade-off between slowness and sparseness when combined in one objective function.
    BibTeX:
    			
    			
                            @article{LiesHaefnerEtAl-2014a,
                              author       = {Lies, J{\"o}rn-Philipp and H{\"a}fner, Ralf M and Bethge, Matthias},
                              title        = {Slowness and sparseness have diverging effects on complex cell learning},
                              journal      = {PLoS Comput Biol},
                              publisher    = {Public Library of Science},
                              year         = {2014},
                              volume       = {10},
                              number       = {3},
                              pages        = {e1003468},
    			  url          = {http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003468},
                              doi          = {http://doi.org/10.1371/journal.pcbi.1003468}
                            }
    			
    			
    					
    Liwicki, S. 2014 Robust online subspace learning Imperial College .
     
    phdthesis
    Abstract: In this thesis, I aim to advance the theories of online non-linear subspace learning through the development of strategies which are both efficient and robust. The use of subspace learning methods is very popular in computer vision and they have been employed to numerous tasks. With the increasing need for real-time applications, the formulation of online (i.e. incremental and real-time) learning methods is a vibrant research field and has received much attention from the research community. A major advantage of incremental systems is that they update the hypothesis during execution, thus allowing for the incorporation of the real data seen in the testing phase. Tracking acts as an attractive and popular evaluation tool for incremental systems, and thus, the connection between online learning and adaptive tracking is seen commonly in the literature. The proposed system in this thesis facilitates learning from noisy input data, e.g. caused by occlusions, casted shadows and pose variations, that are challenging problems in general tracking frameworks. First, a fast and robust alternative to standard l2-norm principal component analysis (PCA) is introduced, which I coin Euler PCA (e-PCA). The formulation of e-PCA is based on robust, non-linear kernel PCA (KPCA) with a cosine-based kernel function that is expressed via an explicit feature space. When applied to tracking, face reconstruction and background modeling, promising results are achieved. In the second part, the problem of matching vectors of 3D rotations is explicitly targeted. A novel distance which is robust for 3D rotations is introduced, and formulated as a kernel function. The kernel leads to a new representation of 3D rotations, the full-angle quaternion (FAQ) representation. Finally, I propose 3D object recognition from point clouds, and object tracking with color values using FAQs. A domain-specific kernel function designed for visual data is then presented. KPCA with Krein space kernels is introduced, as this kernel is indefinite, and an exact incremental learning framework for the new kernel is developed. In a tracker framework, the presented online learning outperforms the competitors in nine popular and challenging video sequences. In the final part, the generalized eigenvalue problem is studied. Specifically, incremental slow feature analysis (SFA) with indefinite kernels is proposed, and applied to temporal video segmentation and tracking with change detection. As online SFA allows for drift detection, further improvements are achieved in the evaluation of the tracking task.
    BibTeX:
    			
    			
                            @phdthesis{Liwicki-2014,
                              author       = {Liwicki, Stephan},
                              title        = {Robust online subspace learning},
                              school       = {Imperial College},
                              year         = {2014},
    			  url          = {https://spiral.imperial.ac.uk/bitstream/10044/1/23234/1/Liwicki-S-2015-PhD-Thesis.pdf}
                            }
    			
    			
    					
    Liwicki, S.; Zafeiriou, S. & Pantic, M. 2013 Incremental slow feature analysis with indefinite kernel for online temporal video segmentation Computer Vision - ACCV 2012 , Lecture Notes in Computer Science , 7725, 162-176.
    Eds. Lee, K.; Matsushita, Y.; Rehg, J. & Hu, Z.
    Publ. Springer Berlin Heidelberg.
     
    incollection
    Abstract: Slow Feature Analysis (SFA) is a subspace learning method inspired by the human visual system, however, it is seldom seen in computer vision. Motivated by its application for unsupervised activity analysis, we develop SFA’s first implementation of online temporal video segmentation to detect episodes of motion changes. We utilize a domain-specific indefinite kernel which takes the data representation into account to introduce robustness. As our kernel is indefinite (i.e. defines instead of a Hilbert, a Krein space), we formulate SFA in Krein space. We propose an incremental kernel SFA framework which utilizes the special properties of our kernel. Finally, we employ our framework to online temporal video segmentation and perform qualitative and quantitative evaluation.
    BibTeX:
    			
    			
                            @incollection{LiwickiZafeiriouEtAl-2013,
                              author       = {Liwicki, Stephan and Zafeiriou, Stefanos and Pantic, Maja},
                              title        = {Incremental slow feature analysis with indefinite kernel for online temporal video segmentation},
                              booktitle    = {Computer Vision -- ACCV 2012},
                              publisher    = {Springer Berlin Heidelberg},
                              year         = {2013},
                              volume       = {7725},
                              pages        = {162--176},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-642-37444-9_13},
                              doi          = {http://doi.org/10.1007/978-3-642-37444-9_13}
                            }
    			
    			
    					
    Liwicki, S.; Zafeiriou, S. & Pantic, M. 2012 Incremental slow feature analysis with indefinite kernel for online temporal video segmentation Asian Conference on Computer Vision , 162-176.
    Publ. Springer Nature.
     
    inproceedings
    Abstract: Slow Feature Analysis (SFA) is a subspace learning method inspired by the human visual system, however, it is seldom seen in computer vision. Motivated by its application for unsupervised activity analysis, we develop SFA’s first implementation of online temporal video segmentation to detect episodes of motion changes. We utilize a domain-specific indefinite kernel which takes the data representation into account to introduce robustness. As our kernel is indefinite (i.e. defines instead of a Hilbert, a Krein space), we formulate SFA in Krein space. We propose an incremental kernel SFA framework which utilizes the special properties of our kernel. Finally, we employ our framework to online temporal video segmentation and perform qualitative and quantitative evaluation.
    BibTeX:
    			
    			
                            @inproceedings{LiwickiZafeiriouEtAl-2012,
                              author       = {Liwicki, Stephan and Zafeiriou, Stefanos and Pantic, Maja},
                              title        = {Incremental slow feature analysis with indefinite kernel for online temporal video segmentation},
                              booktitle    = {Asian Conference on Computer Vision},
                              publisher    = {Springer Nature},
                              year         = {2012},
                              pages        = {162--176},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-642-37444-9_13},
                              url2         = {https://pdfs.semanticscholar.org/41af/863c5e1d3c5758e61b94dce05d1acdaff663.pdf},
                              doi          = {http://doi.org/10.1007/978-3-642-37444-9_13}
                            }
    			
    			
    					
    Liwicki, S.; Zafeiriou, S.P. & Pantic, M. 2015 Online kernel slow feature analysis for temporal video segmentation and tracking IEEE transactions on image processing , 24(10), 2955-2970.
    Publ. IEEE.
     
    article
    Abstract: Slow feature analysis (SFA) is a dimensionality reduction technique which has been linked to how visual brain cells work. In recent years, the SFA was adopted for computer vision tasks. In this paper, we propose an exact kernel SFA (KSFA) framework for positive definite and indefinite kernels in Krein space. We then formulate an online KSFA which employs a reduced set expansion. Finally, by utilizing a special kind of kernel family, we formulate exact online KSFA for which no reduced set is required. We apply the proposed system to develop a SFA-based change detection algorithm for stream data. This framework is employed for temporal video segmentation and tracking. We test our setup on synthetic and real data streams. When combined with an online learning tracking system, the proposed change detection approach improves upon tracking setups that do not utilize change detection.
    BibTeX:
    			
    			
                            @article{LiwickiZafeiriouEtAl-2015,
                              author       = {Liwicki, Stephan and Zafeiriou, Stefanos P and Pantic, Maja},
                              title        = {Online kernel slow feature analysis for temporal video segmentation and tracking},
                              journal      = {IEEE transactions on image processing},
                              publisher    = {IEEE},
                              year         = {2015},
                              volume       = {24},
                              number       = {10},
                              pages        = {2955--2970},
    			  url          = {http://ieeexplore.ieee.org/document/7097728/},
                              url2         = {https://pdfs.semanticscholar.org/d7fc/3940b948ba25ccf9b729cf8eeea3d2541da0.pdf},
                              doi          = {http://doi.org/10.1109/TIP.2015.2428052}
                            }
    			
    			
    					
    Loo, C. & Bardia, Y. 2012 Sparse F-IncSFA for action recognition Proc. of the 2012 JSME Conf. on Robotics and Mechatronics, Hamamatsu, Japan, May 27-29 .
     
    inproceedings
    Abstract: High dimensional input streams and unsupervised learning are two important factors in the area of humanoids and processes of the actions and movements of human. Our Fast Incremental Slow Feature Analysis (F-IncSFA) can learn and extract the few significant features of the complex sensory input sequences regarding high-level spatio-temporal conceptions. In this paper, the application of the F-IncSFA and some of its structure to make a hierarchical compound network made of F-IncSFA has been described. Also the techniques developed by adding efficient sparse coding as an encoder and a preprocessing step before an application of the F-IncSFA. The efficient sparse coding can dramatically reduces the dimension of extracted features and outcome of the efficient sparse coding are quite small as compared with the size of high-dimension video obtained by humanoid or human action. It has revealed excellent and promising dimension reduction by this preprocessor.
    BibTeX:
    			
    			
                            @inproceedings{LooBardia-2012,
                              author       = {Loo, Chukiong and Bardia, Yousefi},
                              title        = {Sparse {F-IncSFA} for action recognition},
                              booktitle    = {Proc.\ of the 2012 JSME Conf.\ on Robotics and Mechatronics, Hamamatsu, Japan, May 27-29},
                              year         = {2012},
    			  url          = {http://eprints.um.edu.my/14089/1/1A1-P04.pdf}
                            }
    			
    			
    					
    Loo, C. & Bardia, Y. 2012 1A1-P04 sparse F-IncSFA for action recognition (communication robot) ロボティクス・メカトロニクス講演会講演概要集 , 2012.
    Publ. 一般社団法人日本機械学会.
     
    article
    Abstract: High dimensional input streams and unsupervised learning are two important factors in the area of humanoids and processes of the actions and movements of human. Our Fast Incremental Slow Feature Analysis (F-IncSFA) can learn and extract the few significant features of the complex sensory input sequences regarding high-level spatio-temporal conceptions. In this paper, the application of the F-IncSFA and some of its structure to make a hierarchical compound network made of F-IncSFA has been described. Also the techniques developed by adding efficient sparse coding as an encoder and a preprocessing step before an application of the F-IncSFA. The efficient sparse coding can dramatically reduces the dimension of extracted features and outcome of the efficient sparse coding are quite small as compared with the size of high-dimension video obtained by humanoid or human action. It has revealed excellent and promising dimension reduction by this preprocessor.
    BibTeX:
    			
    			
                            @article{LooBardia-2012a,
                              author       = {Loo, Chukiong and Bardia, Yousefi},
                              title        = {{1A1-P04} sparse {F-IncSFA} for action recognition (communication robot)},
                              journal      = {ロボティクス・メカトロニクス講演会講演概要集},
                              publisher    = {一般社団法人日本機械学会},
                              year         = {2012},
                              volume       = {2012}
                            }
    			
    			
    					
    Luciw, M.; Kompella, V.; Kazerounian, S. & Schmidhuber, J. 2013 An intrinsic value system for developing multiple invariant representations with incremental slowness learning Front Neurorobot , 7, 9.
     
    article
    Abstract: Curiosity Driven Modular Incremental Slow Feature Analysis (CD-MISFA) is a recently introduced model of intrinsically-motivated invariance learning. Artificial curiosity enables the orderly formation of multiple stable sensory representations to simplify the agent's complex sensory input. We discuss computational properties of the CD-MISFA model itself as well as neurophysiological analogs fulfilling similar functional roles. CD-MISFA combines 1. unsupervised representation learning through the slowness principle, 2. generation of an intrinsic reward signal through learning progress of the developing features, and 3. balancing of exploration and exploitation to maximize learning progress and quickly learn multiple feature sets for perceptual simplification. Experimental results on synthetic observations and on the iCub robot show that the intrinsic value system is essential for representation learning. Representations are typically explored and learned in order from least to most costly, as predicted by the theory of curiosity.
    BibTeX:
    			
    			
                            @article{LuciwKompellaEtAl-2013,
                              author       = {Luciw, M. and Kompella, V. and Kazerounian, S. and Schmidhuber, J.},
                              title        = {An intrinsic value system for developing multiple invariant representations with incremental slowness learning},
                              journal      = {Front Neurorobot},
                              year         = {2013},
                              volume       = {7},
                              pages        = {9},
    			  url          = {http://journal.frontiersin.org/article/10.3389/fnbot.2013.00009/full},
                              doi          = {http://doi.org/10.3389/fnbot.2013.00009}
                            }
    			
    			
    					
    Luciw, M.; Kompella, V.R. & Schmidhuber, J. 2012 Hierarchical incremental slow feature analysis Workshop on Deep Hierarchies in Vision (under CogSys2012) .
     
    misc
    Abstract: Slow feature analysis [1] (SFA) is an unsupervised learning technique that extracts features from an input stream with the objective of maintaining an informative but slowly-changing feature response over time. Due to some promising results so far [1,2], SFA has an intriguing potential for autonomous agents that learn upon raw visual streams, but in order to realize this potential it needs to be both hierarchical and adaptive. An incremental version of Slow Feature Analysis, called IncSFA, was recently introduced [2,3,4]. Here, we focus on its hierarchical extension (H-IncSFA). H-IncSFA networks are composed of multiple layers of overlapping IncSFA units, where each unit has a local receptive field. Figure 1 shows an example H-IncSFA network, based on the one specified by Franzius et al. [5].
    BibTeX:
    			
    			
                            @misc{LuciwKompellaEtAl-2012,
                              author       = {Matthew Luciw and Varun Raj Kompella and J{\"u}rgen Schmidhuber},
                              title        = {Hierarchical incremental slow feature analysis},
                              year         = {2012},
                              howpublished = {Workshop on Deep Hierarchies in Vision (under CogSys2012)},
                              url2         = {http://www.idsia.ch/~luciw/papers/dhv12-luciw.pdf}
                            }
    			
    			
    					
    Luciw, M. & Schmidhuber, J. 2012 Low complexity proto-value function learning from sensory observations with incremental slow feature analysis Artificial Neural Networks and Machine Learning - ICANN 2012 , Lecture Notes in Computer Science , 7553, 279-287.
    Eds. Villa, A.E.P.; Duch, W.; Érdi, P.; Masulli, F. & Palm, G.
    Publ. Springer Berlin Heidelberg.
     
    inproceedings
    Abstract: We show that Incremental Slow Feature Analysis (IncSFA) provides a low complexity method for learning Proto-Value Functions (PVFs). It has been shown that a small number of PVFs provide a good basis set for linear approximation of value functions in reinforcement environments. Our method learns PVFs from a high-dimensional sensory input stream, as the agent explores its world, without building a transition model, adjacency matrix, or covariance matrix. A temporal-difference based reinforcement learner improves a value function approximation upon the features, and the agent uses the value function to achieve rewards successfully. The algorithm is local in space and time, furthering the biological plausibility and applicability of PVFs.
    BibTeX:
    			
    			
                            @inproceedings{LuciwSchmidhuber-2012,
                              author       = {Luciw, Matthew and Schmidhuber, J{\"u}rgen},
                              title        = {Low complexity proto-value function learning from sensory observations with incremental slow feature analysis},
                              booktitle    = {Artificial Neural Networks and Machine Learning -- ICANN 2012},
                              publisher    = {Springer Berlin Heidelberg},
                              year         = {2012},
                              volume       = {7553},
                              pages        = {279--287},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-642-33266-1_35},
                              doi          = {http://doi.org/10.1007/978-3-642-33266-1_35}
                            }
    			
    			
    					
    Ma, K.; Tao, Q. & Wang, J. 2010 Nonlinear blind source separation using slow feature analysis with random features 20th International Conference on Pattern Recognition (ICPR) , 830-833.
    Publ. Institute of Electrical and Electronics Engineers (IEEE).
     
    inproceedings
    Abstract: We develop an algorithm RSFA to perform nonlinear blind source separation with temporal constraints. The algorithm is based on slow feature analysis using random Fourier features for shift invariant kernels, followed by a selection procedure to obtain the sought-after signals. This method not only obtains remarkable results in a short computing time, but also excellently handles situations where there are multiple types of mixtures. In kernel methods, since the problem is unsupervised, the need of multiple kernels is ubiquitous. Experiments on music excerpts illustrate the strong performance of our method.
    BibTeX:
    			
    			
                            @inproceedings{MaTaoEtAl-2010,
                              author       = {Kuijun Ma and Qing Tao and Jue Wang},
                              title        = {Nonlinear blind source separation using slow feature analysis with random features},
                              booktitle    = {20\textsuperscript{th} International Conference on Pattern Recognition (ICPR)},
                              publisher    = {Institute of Electrical and Electronics Engineers ({IEEE})},
                              year         = {2010},
                              pages        = {830--833},
    			  url          = {http://ieeexplore.ieee.org/document/5596057/},
                              doi          = {http://doi.org/10.1109/ICPR.2010.209}
                            }
    			
    			
    					
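    Ma et al. (above) replace an explicit kernel with random Fourier features for a shift-invariant (RBF) kernel and then run SFA on the expanded data. The sketch below uses the standard Rahimi–Recht feature construction followed by a generic linear SFA step; the parameter values, the toy signal, and the function names are assumptions for illustration, not the authors' RSFA code.

        import numpy as np

        def random_fourier_features(X, n_features=200, sigma=1.0, seed=0):
            """Approximate k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) by the explicit
            map z(x) = sqrt(2 / D) * cos(x @ W + b) with W drawn from N(0, sigma^-2)."""
            rng = np.random.default_rng(seed)
            W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], n_features))
            b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
            return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

        def slowest_directions(Z, n_slow=2):
            """Linear SFA on the expanded data via whitening plus an eigendecomposition."""
            Z = Z - Z.mean(axis=0)
            val, vec = np.linalg.eigh(np.cov(Z, rowvar=False))
            keep = val > 1e-8 * val.max()
            white = vec[:, keep] / np.sqrt(val[keep])
            dval, dvec = np.linalg.eigh(np.cov(np.diff(Z @ white, axis=0), rowvar=False))
            return Z @ (white @ dvec[:, :n_slow])

        # Toy usage: a slow envelope on a fast carrier, expanded nonlinearly, then made slow.
        t = np.linspace(0, 20, 2000)
        x = (np.sin(t) * np.sin(15 * t)).reshape(-1, 1)
        slow = slowest_directions(random_fourier_features(x, sigma=0.5), n_slow=1)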
    Malik, Z.K.; Hussain, A. & Wu, J. 2014 Novel biologically inspired approaches to extracting online information from temporal data Cognitive Computation , 6(3), 595-607.
    Publ. Springer.
     
    article
    Abstract: In this paper, we aim to develop novel learning approaches for extracting invariant features from time series. Specifically, we implement an existing method of solving the generalized eigenproblem and use this to firstly implement the biologically inspired technique of slow feature analysis (SFA) originally developed by Wiskott and Sejnowski (Neural Comput 14:715–770, 2002) and a rival method derived earlier by Stone (Neural Comput 8(7):1463–1492, 1996). Secondly, we investigate preprocessing the data using echo state networks (ESNs) (Lukosevicius and Jaeger in Comput Sci Rev 3(3):127–149, 2009) and show that the combination of generalized eigensolver and ESN is very powerful as a more biologically plausible implementation of SFA. Thirdly, we also investigate the effect of higher-order derivatives as a smoothing constraint and show the overall smoothness in the output signal. We demonstrate the potential of our proposed techniques, benchmarked against state-of-the-art approaches, using datasets comprising artificial, MNIST digits and hand-written character trajectories.
    BibTeX:
    			
    			
                            @article{MalikHussainEtAl-2014,
                              author       = {Malik, Zeeshan Khawar and Hussain, Amir and Wu, Jonathan},
                              title        = {Novel biologically inspired approaches to extracting online information from temporal data},
                              journal      = {Cognitive Computation},
                              publisher    = {Springer},
                              year         = {2014},
                              volume       = {6},
                              number       = {3},
                              pages        = {595--607},
    			  url          = {http://link.springer.com/article/10.1007%2Fs12559-014-9257-0},
                              doi          = {http://doi.org/10.1007/s12559-014-9257-0}
                            }
    			
    			
    					
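    Malik et al. (above) obtain SFA-style features by solving a generalized eigenproblem. A compact way to express that route, assuming plain linear SFA and leaving out the paper's echo-state-network preprocessing and higher-order smoothness constraints, is the sketch below using SciPy's symmetric-definite solver.

        import numpy as np
        from scipy.linalg import eigh

        def sfa_generalized_eig(X, n_slow=2):
            """SFA as the generalized eigenproblem A w = lambda B w, where A is the
            covariance of the time derivative and B the data covariance; the slowest
            features belong to the smallest generalized eigenvalues."""
            X = X - X.mean(axis=0)
            B = np.cov(X, rowvar=False)                     # data covariance
            A = np.cov(np.diff(X, axis=0), rowvar=False)    # derivative covariance
            eigvals, eigvecs = eigh(A, B)                   # eigenvalues returned ascending
            W = eigvecs[:, :n_slow]
            return X @ W, eigvals[:n_slow]

        # Toy usage: the slow sine is picked out from faster and noisier companions.
        t = np.linspace(0, 2 * np.pi, 1000)
        X = np.column_stack([np.sin(t), np.cos(11 * t), np.random.randn(1000)])
        slow_outputs, slowness = sfa_generalized_eig(X, n_slow=1)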
    Marcos, M.I.C. 2010 Learning sensorimotor abstractions Aalto University, School of Science and Technology, Faculty of Information and Natural Sciences .
    Publ. Universitat Politècnica de Catalunya.
     
    phdthesis
    Abstract: In order to interact with real environments, performing daily tasks, autonomous agents (as machines or robots) cannot be hard-coded. Given all the possible scenarios and, in each scenario, all the possible variations, it is impossible to take into account every single situation that the autonomous agent may encounter. Humans are able to interact with the changing world using as a guidance the sensory input perceived. Thus, autonomous agents need to be able to adapt to a changing environment. This work proposes a biologically inspired solution that allows the agent to learn representations and skills autonomously that prepare the agent for future learning tasks. The biologically inspired solution proposed here, called a cognitive architecture, follows the hierarchical architecture found in the cerebral cortex. This model permits the autonomous agent to extract useful information from the sensory input data it receives. The information is coded in abstractions, which are invariant features found within the input patterns. The cognitive architecture uses slowness as a principle for extracting features. In principle, unsupervised learning algorithms based on slowness try to find relevant and slowly changing data. This information could be useful for self evaluation. The agent tries to learn how to manipulate the sensory abstractions, by linking those to the motor ones. This allows the robot to find the mapping between the motor actions it is taking and the changes it is able to produce in the surrounding environment. Using the cognitive architecture, an example will be implemented. An agent, who knows nothing about the environment it is placed on, will be able to learn how to move towards different places in space in an efficient (not random) way. Starting from random movements and capturing the sensory input data, it is able to learn concepts such as place and distance, which permits it to learn how to move towards a target efficiently.
    BibTeX:
    			
    			
                            @phdthesis{Marcos-2010,
                              author       = {Mar{\'{i}}a Isabel Cordero Marcos},
                              title        = {Learning sensorimotor abstractions},
                              publisher    = {Universitat Polit{\`e}cnica de Catalunya},
                              school       = {Aalto University, School of Science and Technology, Faculty of Information and Natural Sciences},
                              year         = {2010},
    			  url          = {https://upcommons.upc.edu/bitstream/handle/2099.1/11435/62152.pdf}
                            }
    			
    			
    					
    Maurer, A. 2006 Unsupervised slow subspace-learning from stationary processes International Conference on Algorithmic Learning Theory , 363-377.
     
    inproceedings
    Abstract: We propose a method of unsupervised learning from stationary, vector-valued processes. A low-dimensional subspace is selected on the basis of a criterion which rewards data-variance (like PSA) and penalizes the variance of the velocity vector, thus exploiting the short-time dependencies of the process. We prove error bounds in terms of the β-mixing coefficients and consistency for absolutely regular processes. Experiments with image recognition demonstrate the algorithm's ability to learn geometrically invariant feature maps.
    BibTeX:
    			
    			
                            @inproceedings{Maurer-2006,
                              author       = {Maurer, Andreas},
                              title        = {Unsupervised slow subspace-learning from stationary processes},
                              booktitle    = {International Conference on Algorithmic Learning Theory},
                              year         = {2006},
                              pages        = {363--377},
    			  url          = {http://link.springer.com/chapter/10.1007%2F11894841_29},
                              url2         = {https://pdfs.semanticscholar.org/f761/c2bf0e993ec340d7b42c2e489ddab818553d.pdf},
                              doi          = {http://doi.org/10.1007/11894841_29}
                            }
    			
    			
    					
    Maurer, A. 2008 Unsupervised slow subspace-learning from stationary processes Theoretical Computer Science , 405(3), 237-255.
     
    article
    Abstract: We propose a method of unsupervised learning from stationary, vector-valued processes. A projection to a low-dimensional subspace is selected on the basis of an objective function which rewards data-variance and penalizes the variance of the velocity vector, thus exploiting the short-time dependencies of the process. We prove bounds on the estimation error of the objective in terms of the β-mixing coefficients of the process. It is also shown that maximizing the objective minimizes an error bound for simple classification algorithms on a generic class of learning tasks. Experiments with image recognition demonstrate the algorithm's ability to learn geometrically invariant feature maps.
    BibTeX:
    			
    			
                            @article{Maurer-2008,
                              author       = {Andreas Maurer},
                              title        = {Unsupervised slow subspace-learning from stationary processes},
                              journal      = {Theoretical Computer Science},
                              year         = {2008},
                              volume       = {405},
                              number       = {3},
                              pages        = {237--255},
    			  url          = {http://www.sciencedirect.com/science/article/pii/S0304397508004593},
                              doi          = {http://doi.org/10.1016/j.tcs.2008.06.054}
                            }
    			
    			
    					
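    Written in the notation commonly used for SFA-like objectives, the criterion sketched in the two Maurer abstracts above (reward the projected data variance, penalize the projected velocity variance) can be summarized roughly as follows; the trade-off weight λ and the discrete-derivative notation are editorial simplifications, not Maurer's exact formulation or his β-mixing analysis.

        \[
        \max_{P \in \mathbb{R}^{D \times d},\; P^{\top} P = I_d}\;
        \operatorname{tr}\!\bigl(P^{\top} C\, P\bigr) \;-\; \lambda\, \operatorname{tr}\!\bigl(P^{\top} \dot{C}\, P\bigr),
        \qquad
        C = \operatorname{Cov}(x_t), \quad \dot{C} = \operatorname{Cov}(x_{t+1} - x_t).
        \]

    The first trace keeps directions of large data variance (the PSA-like part); subtracting the second suppresses directions along which the signal changes quickly, which is the slowness part.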
    Metka, B.; Franzius, M. & Bauer-Wersing, U. 2013 Outdoor self-localization of a mobile robot using slow feature analysis Neural Information Processing , Lecture Notes in Computer Science , 8226, 249-256.
    Eds. Lee, M.; Hirose, A.; Hou, Z.-G. & Kil, R.
    Publ. Springer Berlin Heidelberg.
     
    incollection
    Abstract: We apply slow feature analysis (SFA) to the problem of self-localization with a mobile robot. A similar unsupervised hierarchical model has earlier been shown to extract a virtual rat’s position as slowly varying features by directly processing the raw, high dimensional views captured during a training run. The learned representations encode the robot’s position, are orientation invariant and similar to cells in a rodent’s hippocampus. Here, we apply the model to virtual reality data and, for the first time, to data captured by a mobile outdoor robot. We extend the model by using an omnidirectional mirror, which allows to change the perceived image statistics for improved orientation invariance. The resulting representations are used for the notoriously difficult task of outdoor localization with mean absolute localization errors below 6%.
    BibTeX:
    			
    			
                            @incollection{MetkaFranziusEtAl-2013,
                              author       = {Metka, Benjamin and Franzius, Mathias and Bauer-Wersing, Ute},
                              title        = {Outdoor self-localization of a mobile robot using slow feature analysis},
                              booktitle    = {Neural Information Processing},
                              publisher    = {Springer Berlin Heidelberg},
                              year         = {2013},
                              volume       = {8226},
                              pages        = {249--256},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-642-42054-2_32},
                              doi          = {http://doi.org/10.1007/978-3-642-42054-2_32}
                            }
    			
    			
    					
    Metka, B.; Franzius, M. & Bauer-Wersing, U. 2016 Improving robustness of slow feature analysis based localization using loop closure events International Conference on Artificial Neural Networks , 489-496.
    Publ. Springer Nature.
     
    inproceedings
    Abstract: Hierarchical Slow Feature Analysis (SFA) extracts a spatial representation of the environment by directly processing images from a training run and has been shown to enable self-localization of a mobile robot by encoding its position as slowly varying features. However, in real world outdoor scenarios other variables, like global illumination or location of dynamic objects, might vary on an equal or slower time scale than the position of the robot. To prevent encoding of said variables we propose to restructure the temporal order of training samples based on loop closures in the trajectory. Every time the robot passes by a previously visited place, former recorded images are re-inserted to increase temporal variation of environmental variables. Hence, it is a feedback signal enforcing the model to produce similar outputs due to its slowness objective. Experiments in a simulated outdoor environment demonstrate increased robustness especially for changing lighting conditions.
    BibTeX:
    			
    			
                            @inproceedings{MetkaFranziusEtAl-2016,
                              author       = {Metka, Benjamin and Franzius, Mathias and Bauer-Wersing, Ute},
                              title        = {Improving robustness of slow feature analysis based localization using loop closure events},
                              booktitle    = {International Conference on Artificial Neural Networks},
                              publisher    = {Springer Nature},
                              year         = {2016},
                              pages        = {489--496},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-319-44781-0_58},
                              doi          = {http://doi.org/10.1007/978-3-319-44781-0_58}
                            }
    			
    			
    					
    Metka, B.; Franzius, M. & Bauer-Wersing, U. 2017 Efficient navigation using slow feature gradients 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) , 1311-1316.
     
    article
    BibTeX:
    			
    			
                            @article{MetkaFranziusEtAl-2017,
                              author       = {Benjamin Metka and Mathias Franzius and Ute Bauer-Wersing},
                              title        = {Efficient navigation using slow feature gradients},
                              journal      = {2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
                              year         = {2017},
                              pages        = {1311-1316},
                              doi          = {http://doi.org/10.1109/iros.2017.8202307}
                            }
    			
    			
    					
    Miao, J.; Xu, X.; Qiu, S.; Qing, C. & Tao, D. 2015 Temporal variance analysis for action recognition IEEE Transactions on Image Processing , 24(12), 5904-5915.
     
    article
    Abstract: Slow feature analysis (SFA) extracts slowly varying signals from input data and has been used to model complex cells in the primary visual cortex (V1). It transmits information to both ventral and dorsal pathways to process appearance and motion information respectively. However, SFA only uses slowly varying features for local feature extraction, because they represent appearance information more effectively than motion information. To better utilize temporal information, we propose temporal variance analysis (TVA) as a generalization of SFA. TVA learns a linear transformation matrix which projects multi-dimensional temporal data to temporal components with temporal variance. Inspired by the function of V1, we learn receptive fields by TVA and apply convolution and pooling to extract local features. Embedded in the improved dense trajectory framework, TVA for action recognition is proposed to: 1) extract appearance and motion features from gray using slow and fast filters respectively; 2) extract additional motion features using slow filters from horizontal and vertical optical flows; and 3) separately encode extracted local features with different temporal variances and concatenate all the encoded features as final features. We evaluate the proposed TVA features on several challenging datasets and show that both slow and fast features are useful in low level feature extraction. Experimental results show that the proposed TVA features outperform conventional histogram-based features, and excellent results can be achieved by combining all TVA features.
    BibTeX:
    			
    			
                            @article{MiaoXuEtAl-2015,
                              author       = {Miao, J. and Xu, X. and Qiu, S. and Qing, C. and Tao, D.},
                              title        = {Temporal variance analysis for action recognition},
                              journal      = {IEEE Transactions on Image Processing},
                              year         = {2015},
                              volume       = {24},
                              number       = {12},
                              pages        = {5904--5915},
    			  url          = {http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7298412},
                              doi          = {http://doi.org/10.1109/tip.2015.2490551}
                            }
    			
    			
    					
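    Going only by the abstract of Miao et al. (2015) above, the core of TVA is a linear transformation whose components are ranked by temporal variance, with the slow end playing the role of appearance filters and the fast end of motion filters. A rough NumPy sketch of that split (omitting the paper's convolution, pooling, and trajectory-encoding stages) might look like this:

        import numpy as np

        def slow_and_fast_components(X, k=2):
            """Split whitened temporal data into the k slowest and k fastest components,
            ranked by the variance of their discrete time derivative."""
            X = X - X.mean(axis=0)
            val, vec = np.linalg.eigh(np.cov(X, rowvar=False))
            keep = val > 1e-8 * val.max()
            white = vec[:, keep] / np.sqrt(val[keep])
            Z = X @ white
            dval, dvec = np.linalg.eigh(np.cov(np.diff(Z, axis=0), rowvar=False))
            slow = Z @ dvec[:, :k]      # smallest derivative variance: appearance-like
            fast = Z @ dvec[:, -k:]     # largest derivative variance: motion-like
            return slow, fast

        # Toy usage on random multi-dimensional temporal data.
        slow, fast = slow_and_fast_components(np.random.randn(1000, 16), k=3)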
    Miao, J.; Xu, X.; Xing, X. & Tao, D. 2017 Manifold Regularized Slow Feature Analysis for Dynamic Texture Recognition CoRR , abs/1706.03015.
     
    article
    BibTeX:
    			
    			
                            @article{MiaoXuEtAl-2017,
                              author       = {Jie Miao and Xiangmin Xu and Xiaofen Xing and Dacheng Tao},
                              title        = {Manifold Regularized Slow Feature Analysis for Dynamic Texture Recognition},
                              journal      = {CoRR},
                              year         = {2017},
                              volume       = {abs/1706.03015}
                            }
    			
    			
    					
    Miao, J.; Xu, X.; Xing, X. & Tao, D. 2017 Manifold Regularized Slow Feature Analysis for Dynamic Texture Recognition arXiv preprint arXiv:1706.03015 .
     
    article
    BibTeX:
    			
    			
                            @article{MiaoXuEtAl-2017a,
                              author       = {Miao, Jie and Xu, Xiangmin and Xing, Xiaofen and Tao, Dacheng},
                              title        = {Manifold Regularized Slow Feature Analysis for Dynamic Texture Recognition},
                              journal      = {arXiv preprint arXiv:1706.03015},
                              year         = {2017}
                            }
    			
    			
    					
    Mikhailova, I. 2013 Energy-based state-feedback control of systems with mechanical or virtual springs 2013 IEEE International Conference on Robotics and Automation (ICRA) , 2509-14.
    Publ. IEEE, Piscataway, NJ, USA.
     
    inproceedings
    Abstract: In the last years the classical optimal control of rigid structures gives way to alternatives both in hardware (elastic structures) as in software (non-linear control). In this work we consider both alternatives together. We test the possibility to apply speed-gradient (SG) control [1] to elastic structures. The SG method has many advantages, e.g. exploitation of natural dynamics of the system and mathematically provable criteria of goal achievement. However the cost function that satisfies the requirements of the SG method may be difficult to find. In this work we propose two approaches to this problem: usage of virtual springs and usage of learning methods based on Slow Feature Analysis (SFA). A classical example of a cart-pole system and an example of a system which uses two serial springs for hopping show in simulation the viability of our approach. Proposed here combination of SG control with learning is a novel approach which opens interesting perspectives for further research on passive control.
    BibTeX:
    			
    			
                            @inproceedings{Mikhailova-2013,
                              author       = {Mikhailova, I.},
                              title        = {Energy-based state-feedback control of systems with mechanical or virtual springs},
                              booktitle    = {2013 IEEE International Conference on Robotics and Automation (ICRA)},
                              publisher    = {IEEE},
                              year         = {2013},
                              pages        = {2509--14},
    			  url          = {http://ieeexplore.ieee.org/document/6630919/},
                              doi          = {http://doi.org/10.1109/ICRA.2013.6630919}
                            }
    			
    			
    					
    Minh, H.Q. & Wiskott, L. 2013 Multivariate slow feature analysis and decorrelation filtering for blind source separation IEEE Trans Image Process , 22(7), 2737-2750.
     
    article
    Abstract: We generalize the method of Slow Feature Analysis (SFA) for vector-valued functions of several variables and apply it to the problem of blind source separation, in particular to image separation. It is generally necessary to use multivariate SFA instead of univariate SFA for separating multi-dimensional signals. For the linear case, an exact mathematical analysis is given, which shows in particular that the sources are perfectly separated by SFA if and only if they and their first-order derivatives are uncorrelated. When the sources are correlated, we apply the following technique called Decorrelation Filtering: use a linear filter to decorrelate the sources and their derivatives in the given mixture, then apply the unmixing matrix obtained on the filtered mixtures to the original mixtures. If the filtered sources are perfectly separated by this matrix, so are the original sources. A decorrelation filter can be numerically obtained by solving a nonlinear optimization problem. This technique can also be applied to other linear separation methods, whose output signals are decorrelated, such as ICA. When there are more mixtures than sources, one can determine the actual number of sources by using a regularized version of SFA with decorrelation filtering. Extensive numerical experiments using SFA and ICA with decorrelation filtering, supported by mathematical analysis, demonstrate the potential of our methods for solving problems involving blind source separation.
    BibTeX:
    			
    			
                            @article{MinhWiskott-2013,
                              author       = {Minh, H. Q. and Wiskott, L.},
                              title        = {Multivariate slow feature analysis and decorrelation filtering for blind source separation},
                              journal      = {IEEE Trans Image Process},
                              year         = {2013},
                              volume       = {22},
                              number       = {7},
                              pages        = {2737--2750},
    			  url          = {http://ieeexplore.ieee.org/document/6497610/},
                              doi          = {http://doi.org/10.1109/TIP.2013.2257808}
                            }
    			
    			
    					
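    For the linear, one-dimensional-source case analyzed by Minh & Wiskott above, SFA-based separation amounts to whitening the mixtures and diagonalizing the covariance of their time derivative. The short demonstration below covers only that special case; the sources, the mixing matrix, and the correlation check are invented for the example, and the paper's multivariate extension and decorrelation filtering are not shown.

        import numpy as np

        def sfa_unmix(mixtures):
            """Linear SFA applied to mixed signals; returns source estimates
            (up to permutation, sign, and scale), ordered slow to fast."""
            X = mixtures - mixtures.mean(axis=0)
            val, vec = np.linalg.eigh(np.cov(X, rowvar=False))
            white = vec / np.sqrt(val)                              # whitening transform
            Z = X @ white
            dval, dvec = np.linalg.eigh(np.cov(np.diff(Z, axis=0), rowvar=False))
            return Z @ dvec

        # Two temporally structured sources whose derivatives are (nearly) uncorrelated.
        t = np.linspace(0, 50, 5000)
        s1 = np.sin(0.7 * t)                                        # slow sine
        s2 = np.sign(np.sin(3.1 * t))                               # faster square-ish wave
        A = np.array([[1.0, 0.6], [0.4, 1.0]])                      # made-up mixing matrix
        estimates = sfa_unmix(np.column_stack([s1, s2]) @ A.T)

        # The slow component should match s1 and the fast one s2, up to sign.
        print(np.corrcoef(estimates[:, 0], s1)[0, 1], np.corrcoef(estimates[:, 1], s2)[0, 1])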
    Mohmmed, U.S. & Saber, H. 2009 Blind separation of nonlinear mixing signals using kernel with slow feature analysis International Journal of Video & Image Processing and Network Security (IJVIPNS) , 9(10).
     
    article
    Abstract: This paper describes a hybrid blind source separation approach (HBSSA) for nonlinear mixing model (NL-BSS). The proposed hybrid scheme combines simply the kernel-feature spaces separation technique (KTDSEP) and the principle of the slow feature analysis (SFA). The nonlinear mixed data is mapped to high dimensional feature space using kernel-based method. Then, the linear blind source separation (BSS) based on the slow feature analysis (SFA) is used to extract the most slowness vectors among the independent data vectors. The proposed scheme is based on the following four key features: 1) estimating an orthonormal bases, 2) mapping the data into the subspace using this orthonormal bases, 3) applying linear BSS on the mapping data to make the data vectors in the feature spaces are independent, 4) Applying the principle of slow feature analysis on the mapping data to select the desired signals. The SFA provides the dimension reduction according to the most independent and slowing variable signals. Moreover, the orthonormal bases estimation in the wavelet domain is introduced in this work to reduce the complexity of the KTDSEP algorithm. The motivation of using the wavelet transform, in estimating the orthonormal bases, is based on the fact that the low frequency band in the wavelet domain contains the significant power of the signal. The advantages of the proposed method are the fast estimation of the orthonormal bases and the dimension reduction of the estimating data vectors. Performed computer simulations have shown the effectiveness of the idea, even in presence of strong nonlinearities and synthetic mixture of real world data. Our extensive experiments have confirmed that the proposed procedure provides promising results.
    BibTeX:
    			
    			
                            @article{MohmmedSaber-2009,
                              author       = {Usama S. Mohmmed and Hany Saber},
                              title        = {Blind separation of nonlinear mixing signals using kernel with slow feature analysis},
                          journal      = {International Journal of Video \& Image Processing and Network Security (IJVIPNS)},
                              year         = {2009},
                              volume       = {9},
                              number       = {10},
    			  url          = {http://www.ijens.org/96310-7676%20IJVIPNS-IJENS.pdf}
                            }
    			
    			
    					
    Müller, M.G. 2016 Slow Feature Analysis with Neural Networks Master's thesis, Information and Computer Engineering, TU Graz .
     
    mastersthesis
    BibTeX:
    			
    			
                            @mastersthesis{Mueller-2016,
                              author       = {Michael G. Müller},
                              title        = {Slow Feature Analysis with Neural Networks},
                              school       = {Information and Computer Engineering, TU Graz},
                              year         = {2016}
                            }
    			
    			
    					
    Nater, F.; Grabner, H. & Van Gool, L. 2011 Unsupervised workflow discovery in industrial environments 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops) , 1912-1919.
    Publ. Institute of Electrical and Electronics Engineers (IEEE).
     
    inproceedings
    Abstract: In this work, we present an approach for the automatic discovery of workflows in industrial environments. In such cluttered scenes, one faces many challenges, which limit the use of state-of-the-art object detection and tracking methods. Instead we propose a purely data-driven method which exploits the temporal structure of the workflow. Our robust technique is free of human intervention and does not need parameter tuning. We show results on two camera views of a working cell in a car assembly line. Workflows are extracted robustly, they match well across the camera views and they conform with human annotation. Furthermore, we show a simple but efficient extension to analyze the image stream in real time. This assures a smooth running of the workflow and enables the notification of different types of unexpected scenarios.
    BibTeX:
    			
    			
                            @inproceedings{NaterGrabnerEtAl-2011a,
                              author       = {Nater, Fabian and Grabner, Helmut and Van Gool, Luc},
                              title        = {Unsupervised workflow discovery in industrial environments},
                              booktitle    = {2011 {IEEE} International Conference on Computer Vision Workshops ({ICCV} Workshops)},
                              publisher    = {Institute of Electrical and Electronics Engineers ({IEEE})},
                              year         = {2011},
                              pages        = {1912--1919},
    			  url          = {http://ieeexplore.ieee.org/document/6130482/},
                              url2         = {https://pdfs.semanticscholar.org/8f77/bf698e34535e6b589939ec018c26910c6be3.pdf},
                              doi          = {http://doi.org/10.1109/iccvw.2011.6130482}
                            }
    			
    			
    					
    Nater, F.; Grabner, H. & Van Gool, L.J. 2011 Temporal relations in videos for unsupervised activity analysis. Proceedings of the British Machine Vision Conference 2011 , 2, 8.
    Publ. British Machine Vision Association and Society for Pattern Recognition.
     
    inproceedings
    Abstract: Temporal consistency is a strong cue in continuous data streams and especially in videos. We exploit this concept and encode temporal relations between consecutive frames using discriminative slow feature analysis. Activities are automatically segmented and represented in a hierarchical coarse-to-fine structure. Simultaneously, they are modeled in a generative manner, in order to analyze unseen data. This analysis supports the detection of previously learned activities and of abnormal, novel patterns. Our technique is purely data-driven and feature-independent. Experiments validate the approach in several contexts, such as traffic flow analysis and the monitoring of human behavior. The results are competitive with the state-of-the-art in all cases.
    BibTeX:
    			
    			
                            @inproceedings{NaterGrabnerEtAl-2011,
                              author       = {Nater, Fabian and Grabner, Helmut and Van Gool, Luc J},
                              title        = {Temporal relations in videos for unsupervised activity analysis.},
                          booktitle    = {Proceedings of the British Machine Vision Conference 2011},
                              publisher    = {British Machine Vision Association and Society for Pattern Recognition},
                              year         = {2011},
                              volume       = {2},
                              pages        = {8},
    			  url          = {http://www.bmva.org/bmvc/2011/proceedings/paper21/index.html},
                              url2         = {http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.714.8738&rep=rep1&type=pdf},
                              doi          = {http://doi.org/10.5244/c.25.21}
                            }
    			
    			
    					
    Ngiam, J. & Baldassano, C. 2009 Studies in deep belief networks .
     
    misc
    Abstract: Deep networks are able to learn good representations of unlabelled data via a greedy layer-wise approach to training. One challenge arises in choosing the layer type to use, whether an autoencoder or a restricted Boltzmann machine, with or without sparsity regularization. The layer choice directly affects the type of representations learned. In this paper, we examine sparse autoencoders and characterize their behavior under different parameterizations. We also present preliminary results on quadratic layers with slowness.
    BibTeX:
    			
    			
                            @misc{NgiamBaldassano-2009,
                              author       = {Ngiam, Jiquan and Baldassano, Chris},
                              title        = {Studies in deep belief networks},
                              year         = {2009},
                              url2         = {http://cs229.stanford.edu/proj2009/NgiamBaldassano.pdf}
                            }
    			
    			
    					
    Nickisch, H. 2006 Extraction of visual features from natural video data using slow feature analysis Technische Universität Berlin, Fakultät für Elektrotechnik und Informatik .
     
    mastersthesis
    Abstract: The research project NeuRoBot aims at the unsupervised learning of a neurally inspired control architecture, under the constraints of biological plausibility and the use of a camera as the only sensor. Visual features that provide an adequate representation of the environment are indispensable for reaching the goal of collision-free navigation. Temporal coherence is a novel learning principle that is able to reproduce findings from the biology of vision. It is motivated by the observation that the "sensors" of the retina vary on considerably shorter time scales than an abstract description does. Temporal slowness analysis solves this problem by extracting slowly varying signals from rapidly varying input signals. A generalization to signals that depend nonlinearly on the inputs is possible by applying the kernel trick. The only prior knowledge used is the temporal smoothness of the extracted signals. In this diploma thesis, slowness analysis is applied to image patches from videos recorded by a robot camera and in a simulation environment. First, the slowest possible functions are determined by means of parameter exploration and cross-validation. The feature functions are then analyzed and several starting points for their interpretation are given. Owing to the very large data sets and the extensive computations, a large part of this work also deals with complexity considerations and questions of efficient computation. Edge detectors in different phases and with predominantly horizontal orientation are the most important functions emerging from the analysis. An application to concrete navigation tasks of the robot has not yet been achieved; a visual interpretation of the learned features is, however, clearly possible.
    BibTeX:
    			
    			
                            @mastersthesis{Nickisch-2006,
                              author       = {Nickisch, Hannes},
                              title        = {Extraction of visual features from natural video data using slow feature analysis},
                              school       = {Technische Universit{\"{a}}t Berlin, Fakult{\"{a}}t f{\"{u}}r Elektrotechnik und Informatik},
                              year         = {2006},
    			  url          = {http://www.kyb.mpg.de/fileadmin/user_upload/files/publications/thesis_4190[0].pdf},
                              url2         = {https://pdfs.semanticscholar.org/7cf4/4a7e2344b5b76e0debf90cdda37b98e7f2ff.pdf}
                            }
    			
    			
    					
    Nicolaou, M.A.; Zafeiriou, S. & Pantic, M. 2014 A unified framework for probabilistic component analysis Joint European Conference on Machine Learning and Knowledge Discovery in Databases , 469-484.
     
    inproceedings
    Abstract: We present a unifying framework which reduces the construction of probabilistic component analysis techniques to a mere selection of the latent neighbourhood, thus providing an elegant and principled framework for creating novel component analysis models as well as constructing probabilistic equivalents of deterministic component analysis methods. Under our framework, we unify many very popular and well-studied component analysis algorithms, such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Locality Preserving Projections (LPP) and Slow Feature Analysis (SFA), some of which have no probabilistic equivalents in literature thus far. We firstly define the Markov Random Fields (MRFs) which encapsulate the latent connectivity of the aforementioned component analysis techniques; subsequently, we show that the projection directions produced by all PCA, LDA, LPP and SFA are also produced by the Maximum Likelihood (ML) solution of a single joint probability density function, composed by selecting one of the defined MRF priors while utilising a simple observation model. Furthermore, we propose novel Expectation Maximization (EM) algorithms, exploiting the proposed joint PDF, while we generalize the proposed methodologies to arbitrary connectivities via parametrizable MRF products. Theoretical analysis and experiments on both simulated and real world data show the usefulness of the proposed framework, by deriving methods which well outperform state-of-the-art equivalents.
    BibTeX:
    			
    			
                            @inproceedings{NicolaouZafeiriouEtAl-2014,
                              author       = {Nicolaou, Mihalis A and Zafeiriou, Stefanos and Pantic, Maja},
                              title        = {A unified framework for probabilistic component analysis},
                              booktitle    = {Joint European Conference on Machine Learning and Knowledge Discovery in Databases},
                              year         = {2014},
                              pages        = {469--484},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-662-44851-9_30},
                              url2         = {https://pdfs.semanticscholar.org/73a5/54f9d2c96a7bb5455878a4361b6eda4ba683.pdf},
                              doi          = {http://doi.org/10.1007/978-3-662-44851-9_30}
                            }
    			
    			
    					
    Nishikawa, A.; Ogino, M. & Asada, M. 2011 Acquiring body representation for reinforcement learning based on slow feature analysis The 21st Annual Conference of the Japanese Neural Network Society , 3-29.
     
    inproceedings
    Abstract: The center of the spatio-temporal representation of one's own body and its surrounding space is supposed to lie in the parietal cortex of the human brain, but the mechanism by which the brain computes it is still not clearly understood, although a hierarchical representation is expected. As one such hierarchical model, this paper proposes a method which integrates multimodal information based on Slow Feature Analysis (SFA), enabling sensory data abstraction within one modality and the integration of abstracted multi-modal sensory information. To verify the proposed method, reinforcement learning of reaching behavior is applied, where the representation acquired from the visual and somatosensory information of the arm movements of a robot is utilized as the state space representation. The simulation result shows that multimodal information related to self-movement is transformed into lower-dimensional data that changes slowly, which is useful for reinforcement learning to improve its performance.
    BibTeX:
    			
    			
                            @inproceedings{NishikawaOginoEtAl-2011,
                              author       = {Akihiko Nishikawa and Masaki Ogino and Minoru Asada},
                              title        = {Acquiring body representation for reinforcement learning based on slow feature analysis},
                              booktitle    = {The 21\textsuperscript{st} Annual Conference of the Japanese Neural Network Society},
                              year         = {2011},
                              pages        = {3--29},
                              url2         = {http://www.er.ams.eng.osaka-u.ac.jp/Paper/2011/Nishikawa11b.pdf}
                            }
    			
    			
    					
    Ogino, M.; Nishikawa, A. & Asada, M. 2013 A motivation model for interaction between parent and child based on the need for relatedness Frontiers in Psychology , 4, 618.
     
    article
    Abstract: In communication between parents and children, various kinds of intrinsic and extrinsic motivations affect the emotions that encourage actions to promote more interactions. This paper presents a motivation model for the interaction between an infant and a caregiver which models relatedness, one of the most important basic psychological needs, as a variable that increases with experiences of emotion sharing. Relatedness is not only an important factor of pleasure but also a meta-factor which affects other factors such as stress and emotion mirroring. In the simulation experiment, two agents, each of which has the proposed motivation model, show emotional communication depending on the relatedness level that is similar to actual human communication. Especially, the proposed model can reproduce a finding described by the "still-face paradigm", in which an infant shows unpleasant emotion when a caregiver suddenly stops facial expressions. The proposed model is implemented in an artificial agent with a recognition system for gestures and facial expressions. The baby-like agent successfully interacts with an actual human and shows reactions comparable to the "still-face paradigm".
    BibTeX:
    			
    			
                            @article{OginoNishikawaEtAl-2013,
                              author       = {Ogino, Masaki and Nishikawa, Akihiko and Asada, Minoru},
                              title        = {A motivation model for interaction between parent and child based on the need for relatedness},
                              journal      = {Frontiers in Psychology},
                              year         = {2013},
                              volume       = {4},
                              pages        = {618},
    			  url          = {http://journal.frontiersin.org/article/10.3389/fpsyg.2013.00618/full},
                              doi          = {http://doi.org/10.3389/fpsyg.2013.00618}
                            }
    			
    			
    					
    Omori, T. 2013 Extracting latent dynamics from multi-dimensional data by probabilistic slow feature analysis Neural Information Processing , Lecture Notes in Computer Science , 8228, 108-116.
    Eds. Lee, M.; Hirose, A.; Hou, Z.-G. & Kil, R.
    Publ. Springer Berlin Heidelberg.
     
    incollection
    Abstract: Slow feature analysis (SFA) is a time-series analysis method for extracting slowly-varying latent features from multi-dimensional data. In this paper, the probabilistic version of SFA algorithms is discussed from a theoretical point of view. First, the fundamental notions of SFA algorithms are reviewed in order to show the mechanism by which the SFA extracts slowly-varying latent features. Second, recent advances in SFA algorithms are described, with emphasis on the probabilistic version of SFA. Third, a probabilistic SFA with a rigorously derived likelihood function is obtained by means of belief propagation. Using the rigorously derived likelihood function, we simultaneously extract slow features and the underlying parameters of the latent dynamics. Finally, we show using synthetic data that the probabilistic SFA with the rigorously derived likelihood function can estimate the slow features accurately even in noisy environments.
    BibTeX:
    			
    			
                            @incollection{Omori-2013,
                              author       = {Omori, Toshiaki},
                              title        = {Extracting latent dynamics from multi-dimensional data by probabilistic slow feature analysis},
                              booktitle    = {Neural Information Processing},
                              publisher    = {Springer Berlin Heidelberg},
                              year         = {2013},
                              volume       = {8228},
                              pages        = {108--116},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-642-42051-1_15},
                              doi          = {http://doi.org/10.1007/978-3-642-42051-1_15}
                            }
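    As background for the abstract above: probabilistic SFA is usually formulated as a linear Gaussian state-space model whose latent variables follow independent first-order autoregressive dynamics. A generic sketch of that model family (not a transcription of Omori's specific likelihood) is

        z_{j,t} = \lambda_j \, z_{j,t-1} + \sqrt{1-\lambda_j^2}\,\eta_{j,t}, \qquad \eta_{j,t} \sim \mathcal{N}(0,1),
        x_t = W z_t + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0,\sigma^2 I),

    with each \lambda_j close to 1 so that the latents vary slowly; in the limit of vanishing observation noise the maximum-likelihood solution of this model has been shown to coincide with classical linear SFA (cf. Turner and Sahani's probabilistic SFA).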
    			
    			
    					
    Omori, T.; Sekiguchi, T. & Okada, M. 2017 Belief Propagation for Probabilistic Slow Feature Analysis Journal of the Physical Society of Japan , 86(8), 084802.
    Publ. The Physical Society of Japan.
     
    article
    BibTeX:
    			
    			
                            @article{OmoriSekiguchiEtAl-2017,
                              author       = {Omori, Toshiaki and Sekiguchi, Tomoki and Okada, Masato},
                              title        = {Belief Propagation for Probabilistic Slow Feature Analysis},
                              journal      = {Journal of the Physical Society of Japan},
                              publisher    = {The Physical Society of Japan},
                              year         = {2017},
                              volume       = {86},
                              number       = {8},
                              pages        = {084802},
                              doi          = {http://doi.org/10.7566/jpsj.86.084802}
                            }
    			
    			
    					
    Pagel, F. 2015 Unsupervised classification and visual representation of situations in surveillance videos using slow feature analysis for situation retrieval applications SPIE/IS&T Electronic Imaging , 94070H-94070H.
     
    inproceedings
    Abstract: Today, video surveillance systems produce thousands of terabytes of data. This source of information can be very valuable, as it contains spatio-temporal information about abnormal, similar or periodic activities. However, a search for certain situations or activities in unstructured large-scale video footage can be exhausting or even pointless. Searching surveillance video footage is extremely difficult due to the apparent similarity of situations, especially for human observers. In order to keep this amount manageable and hence usable, this paper aims at clustering situations regarding their visual content as well as motion patterns. Besides standard image content descriptors like HOG, we present and investigate novel descriptors, called Franklets, which explicitly encode motion patterns for certain image regions. Slow feature analysis (SFA) will be performed for dimension reduction based on the temporal variance of the features. By reducing the dimension with SFA, a higher feature discrimination can be reached compared to standard PCA dimension reduction. The effects of dimension reduction via SFA will be investigated in this paper. Cluster results on real data from the Hamburg Harbour Anniversary 2014 will be presented with both HOG feature descriptors and Franklets. Furthermore, we could show that by using SFA an improvement over standard PCA techniques could be achieved. Finally, an application to visual clustering with self-organizing maps will be introduced.
    BibTeX:
    			
    			
                            @inproceedings{Pagel-2015,
                              author       = {Pagel, Frank},
                              title        = {Unsupervised classification and visual representation of situations in surveillance videos using slow feature analysis for situation retrieval applications},
                              booktitle    = {SPIE/IS\&T Electronic Imaging},
                              year         = {2015},
                              pages        = {94070H--94070H},
    			  url          = {http://akme-a2.iosb.fraunhofer.de/EatThisGoogleScholar/d/2015_Unsupervised%20classification%20and%20visual%20representation%20of%20situations%20in%20surveillance%20videos%20using%20slow%20feature%20analysis%20for%20situation%20retrieval%20applica.pdf},
                              doi          = {http://doi.org/10.1117/12.2076740}
                            }
    			
    			
    					
    Pan, X.; Wang, G. & Yang, P. 2018 Extracting the signal of driving force from hierarchical system by Slow Feature Analysis EGU General Assembly Conference Abstracts , 20, 3895.
     
    inproceedings
    BibTeX:
    			
    			
                            @inproceedings{PanWangEtAl-2018,
                              author       = {Pan, Xinnong and Wang, Geli and Yang, Peicai},
                              title        = {Extracting the signal of driving force from hierarchical system by Slow Feature Analysis},
                              booktitle    = {EGU General Assembly Conference Abstracts},
                              year         = {2018},
                              volume       = {20},
                              pages        = {3895}
                            }
    			
    			
    					
    Pang, C.; Wang, M.; Liu, W. & Li, B. 2016 Learning features for discriminative behavior analysis of evolutionary algorithms via slow feature analysis Proceedings of the 2016 on Genetic and Evolutionary Computation Conference Companion , 1437-1444.
    Publ. Association for Computing Machinery (ACM).
     
    inproceedings
    Abstract: Evolutionary algorithms (EAs) are a class of stochastic optimization methods, which have been shown to be powerful in solving many real-world hard problems in past decades. But until now, we have still been short of effective methods to represent and investigate their collective behaviors in various environments, which would be very useful for researchers and engineers in Evolutionary Computation to understand the algorithms better. This paper is a preliminary effort to tackle the above issue. We attempt to analyze the generation-wise collective behavior of EAs via an approach called feature learning. An unsupervised feature learning framework based on Slow Feature Analysis (SFA) is presented to extract discriminative features from the generation-wise collective behavior data of several EAs on various fitness landscapes, with the purpose of finding out whether there exist differences between the searching behavior of different EAs running on the same fitness landscape, and whether there are differences between the behavior of one algorithm running on different fitness landscapes. Besides, the relationship between the fitness landscape and the searching behavior of an EA is also studied. In the experiments, several typical EAs and classical benchmark functions with typical landscapes are selected as the study subjects. The collective behaviors of the various EAs are visualized and compared in the extracted feature space.
    BibTeX:
    			
    			
                            @inproceedings{PangWangEtAl-2016,
                              author       = {Pang, Chengshan and Wang, Mang and Liu, Weiming and Li, Bin},
                              title        = {Learning features for discriminative behavior analysis of evolutionary algorithms via slow feature analysis},
                              booktitle    = {Proceedings of the 2016 on Genetic and Evolutionary Computation Conference Companion},
                              publisher    = {Association for Computing Machinery ({ACM})},
                              year         = {2016},
                              pages        = {1437--1444},
    			  url          = {http://dl.acm.org/citation.cfm?doid=2908961.2935617},
                              doi          = {http://doi.org/10.1145/2908961.2935617}
                            }
    			
    			
    					
    Qi, X.; Li, C.; Zhao, G.; Hong, X. & Pietikäinen, M. 2015 Dynamic texture and scene classification by transferring deep image features e-print arXiv:1502.00303 .
     
    misc
    Abstract: Dynamic texture and scene classification are two fundamental problems in understanding natural video content. Extracting robust and effective features is a crucial step towards solving these problems. However, the existing approaches suffer from sensitivity to varying illumination, viewpoint changes, or even camera motion, and/or from the lack of spatial information. Inspired by the success of deep structures in image classification, we attempt to leverage a deep structure to extract features for dynamic texture and scene classification. To tackle the challenges in training a deep structure, we propose to transfer some prior knowledge from the image domain to the video domain. To be specific, we propose to apply a well-trained Convolutional Neural Network (ConvNet) as a mid-level feature extractor to extract features from each frame, and then form a representation of a video by concatenating the first and the second order statistics over the mid-level features. We term this two-level feature extraction scheme the Transferred ConvNet Feature (TCoF). Moreover, we explore two different implementations of the TCoF scheme, i.e., the spatial TCoF and the temporal TCoF, in which the mean-removed frames and the differences between two adjacent frames are used as the inputs of the ConvNet, respectively. We evaluate systematically the proposed spatial TCoF and temporal TCoF schemes on three benchmark data sets, including DynTex, YUPENN, and Maryland, and demonstrate that the proposed approach yields superior performance.
    BibTeX:
    			
    			
                            @misc{QiLiEtAl-2015,
                              author       = {Xianbiao Qi and Chun{-}Guang Li and Guoying Zhao and Xiaopeng Hong and Matti Pietik{\"{a}}inen},
                              title        = {Dynamic texture and scene classification by transferring deep image features},
                              year         = {2015},
                              howpublished = {e-print arXiv:1502.00303},
    			  url          = {https://arxiv.org/pdf/1502.00303.pdf}
                            }
    			
    			
    					
    Qi, X.; Li, C.-G.; Zhao, G.; Hong, X. & Pietikäinen, M. 2016 Dynamic texture and scene classification by transferring deep image features Neurocomputing , 171, 1230-1241.
    Publ. Elsevier.
     
    article
    Abstract: Dynamic texture and scene classification are two fundamental problems in understanding natural video content. Extracting robust and effective features is a crucial step towards solving these problems. However, the existing approaches suffer from sensitivity to varying illumination, viewpoint changes, or even camera motion, and/or from the lack of spatial information. Inspired by the success of deep structures in image classification, we attempt to leverage a deep structure to extract features for dynamic texture and scene classification. To tackle the challenges in training a deep structure, we propose to transfer some prior knowledge from the image domain to the video domain. To be more specific, we propose to apply a well-trained Convolutional Neural Network (ConvNet) as a feature extractor to extract mid-level features from each frame, and then form the video-level representation by concatenating the first and the second order statistics over the mid-level features. We term this two-level feature extraction scheme the Transferred ConvNet Feature (TCoF). Moreover, we explore two different implementations of the TCoF scheme, i.e., the spatial TCoF and the temporal TCoF. In the spatial TCoF, the mean-removed frames are used as the inputs of the ConvNet, whereas in the temporal TCoF, the differences between two adjacent frames are used as the inputs of the ConvNet. We evaluate systematically the proposed spatial TCoF and temporal TCoF schemes on three benchmark data sets, including DynTex, YUPENN, and Maryland, and demonstrate that the proposed approach yields superior performance.
    BibTeX:
    			
    			
                            @article{QiLiEtAl-2016,
                              author       = {Qi, Xianbiao and Li, Chun-Guang and Zhao, Guoying and Hong, Xiaopeng and Pietik{\"a}inen, Matti},
                              title        = {Dynamic texture and scene classification by transferring deep image features},
                              journal      = {Neurocomputing},
                              publisher    = {Elsevier},
                              year         = {2016},
                              volume       = {171},
                              pages        = {1230--1241},
    			  url          = {http://www.sciencedirect.com/science/article/pii/S0925231215010784},
                              doi          = {http://doi.org/10.1016/j.neucom.2015.07.071}
                            }
    			
    			
    					
    Qian, H.; Zhou, J.; Lu, X. & Wu, X. 2014 Human activities recognition based on poisson equation evaluation and bidirectional 2DPCA Conference on Control Automation Robotics & Vision (ICARCV), 2014 13th International , 787-792.
     
    inproceedings
    Abstract: A novel algorithm for human activities recognition based on Poisson images and bidirectional two-dimensional principal component analysis (2DPCA) is presented in this note, where the Poisson images are defined by solving the Poisson equations to re-interpret the motion accumulation image (MAI). More precisely, firstly, object detection based on the Gaussian Mixture Model (GMM) is applied to acquire the binary images including the moving human blobs; secondly, the Poisson image is defined to make the features extracted in the sequel robust to possibly incomplete human blobs; thirdly, principal component analysis (PCA), 2DPCA and bidirectional 2DPCA are applied, respectively, to extract the feature vectors; and finally, the nearest neighbour (NN) classifier is used to recognize the human activities. Simulation results on the Weizmann database confirm the recognition performance of the proposed algorithm. Comparisons in terms of classification accuracy and time consumption between the three methods show that the bidirectional 2DPCA is optimal.
    BibTeX:
    			
    			
                            @inproceedings{QianZhouEtAl-2014,
                              author       = {Qian, Huimin and Zhou, Jun and Lu, Xinbiao and Wu, Xinye},
                              title        = {Human activities recognition based on poisson equation evaluation and bidirectional {2DPCA}},
                              booktitle    = {Conference on Control Automation Robotics \& Vision (ICARCV), 2014 13\textsuperscript{th} International},
                              year         = {2014},
                              pages        = {787--792},
    			  url          = {http://ieeexplore.ieee.org/document/7064404/},
                              doi          = {http://doi.org/10.1109/icarcv.2014.7064404}
                            }
    			
    			
    					
    Rehn, E.M. 2013 On the slowness principle and learning in hierarchical temporal memory Bernstein Center for Computational Neuroscience, Berlin, Germany .
 
    mastersthesis
    Abstract: The slowness principle is believed to be one clue to how the brain solves the problem of invariant object recognition. It states that external causes for sensory activation, i.e., distal stimuli, often vary on a much slower time scale than the sensory activation itself. Slowness is thus a plausible objective when the brain learns invariant representations of its environment. Here we review two approaches to slowness learning: Slow Feature Analysis (SFA) and Hierarchical Temporal Memory (HTM), and show how Generalized SFA (GSFA) links the two. The connection between SFA, Linear Discriminant Analysis (LDA), and Locality Preserving Projections (LPP) is also investigated. Experimental work is presented which demonstrates how the local neighborhood implicit in the original SFA formulation, by the use of the temporal derivative of the input, renders SFA more efficient than LDA when applied to supervised pattern recognition, if the data has a low-dimensional manifold structure. Furthermore, a novel object recognition model, called Hierarchical Generalized Slow Feature Analysis (HGSFA), is proposed. Through the use of GSFA, the model enables a possible manifold structure in the training data to be exploited during training, and the experimental evaluation shows how this leads to greatly increased classification accuracy on the NORB object recognition dataset, compared to previously published results. Lastly, a novel gradient-based fine-tuning algorithm for HTM is proposed and evaluated. This error backpropagation can be naturally and elegantly implemented through native HTM belief propagation, and experimental results show that a two-stage training process composed of temporal unsupervised pre-training and supervised refinement is very effective. This is in line with recent findings on other deep architectures, where generative pre-training is complemented by discriminant fine-tuning.
    BibTeX:
    			
    			
                            @mastersthesis{Rehn-2013,
                              author       = {Erik M. Rehn},
                              title        = {On the slowness principle and learning in hierarchical temporal memory},
                              school       = {Bernstein Center for Computational Neuroscience},
                              year         = {2013}
                            }
    			
    			
    					
    Rehn, E.M. & Sprekeler, H. 2014 Nonlinear supervised locality preserving projections for visual pattern discrimination. ICPR , 1568-1573.
     
    inproceedings
    Abstract: Learning representations that disentangle hidden explanatory factors in data has proven beneficial for effective pattern classification. Slow feature analysis (SFA) is a nonlinear dimensionality reduction technique that provides a useful representation for classification if the training data is sequential and transitions between classes are rare. The pattern discrimination ability of SFA has been attributed to the equivalence of linear SFA and linear discriminant analysis (LDA) under certain conditions. LDA, however, is often outperformed by locality preserving projections (LPP) when the data lies on or near a low-dimensional manifold. Here, we take a unified manifold learning perspective on LPP, LDA and SFA. We suggest that the discrimination ability of SFA is better explained by its relation to LPP than to LDA, and give an example of a situation where linear SFA outperforms LDA. We then propose a novel supervised manifold learning architecture that combines hierarchical nonlinear expansions, as commonly used for SFA, with supervised LPP. It learns a nonlinear parametric data representation that explicitly takes both the class labels and the manifold structure of the data into account. As an experimental validation, we show that this approach outperforms previously proposed models on the NORB object recognition dataset.
    BibTeX:
    			
    			
                            @inproceedings{RehnSprekeler-2014,
                              author       = {Rehn, Erik M and Sprekeler, Henning},
                              title        = {Nonlinear supervised locality preserving projections for visual pattern discrimination.},
                              booktitle    = {ICPR},
                              year         = {2014},
                              pages        = {1568--1573},
    			  url          = {http://ieeexplore.ieee.org/document/6976988/},
                              url2         = {https://pdfs.semanticscholar.org/9f06/75fc265f9902d7d0f1c9460c7cf7ebbd1542.pdf},
                              doi          = {http://doi.org/10.1109/icpr.2014.278}
                            }
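    To make the relation discussed in the two Rehn entries above concrete, here is a schematic comparison (an editorial sketch of the standard formulations, not a formula taken from either paper). With X = (x_1, ..., x_T) collecting the samples as columns, linear SFA solves

        \min_w \; \frac{1}{T-1}\sum_{t=1}^{T-1}\bigl(w^\top x_{t+1} - w^\top x_t\bigr)^2
        \quad \text{s.t.} \quad \mathrm{E}[w^\top x_t] = 0, \;\; \mathrm{Var}(w^\top x_t) = 1,

    while LPP solves

        \min_w \; \sum_{i,j} A_{ij}\,\bigl(w^\top x_i - w^\top x_j\bigr)^2
        \quad \text{s.t.} \quad w^\top X D X^\top w = 1, \qquad D_{ii} = \sum_j A_{ij}.

    Choosing the neighbourhood graph A to connect temporally adjacent samples (A_{ij} = 1 iff |i - j| = 1) turns the LPP objective into the SFA objective up to constant factors; the two methods then differ mainly in the normalization constraint.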
    			
    			
    					
    Richthofer, S. & Wiskott, L. 2018 Global Navigation Using Predictable and Slow Feature Analysis in Multiroom Environments, Path Planning and Other Control Tasks CoRR , abs/1805.08565.
     
    article
    BibTeX:
    			
    			
                            @article{RichthoferWiskott-2018,
                              author       = {Stefan Richthofer and Laurenz Wiskott},
                              title        = {Global Navigation Using Predictable and Slow Feature Analysis in Multiroom Environments, Path Planning and Other Control Tasks},
                              journal      = {CoRR},
                              year         = {2018},
                              volume       = {abs/1805.08565}
                            }
    			
    			
    					
    Rubia, L.B. & Manimala, K. 2013 Slow feature analysis for recognizing prisoner’s activities to assist jail authorities International Journal of Advances in Engineering and Emerging Technology (IJAEET) , 1(1).
     
    article
    Abstract: Slow Feature Analysis (SFA) has been established as a robust and versatile technique from the neurosciences for learning slowly varying functions from quickly changing signals. The SFA framework is introduced to the problem of recognizing a prisoner's actions by incorporating supervised information into the original unsupervised SFA learning. Firstly, a large number of cuboids are collected along the motion boundaries, and local features are described with the SFA method. Each action sequence is represented by the Accumulated Squared Derivative (ASD), which is a statistical distribution of the slow features in an action sequence [1]. Descriptive statistical features are then extracted in order to reduce the dimension of the ASD feature. Finally, a one-against-all support vector machine (SVM) is trained to classify actions represented by the statistical features.
    BibTeX:
    			
    			
                            @article{RubiaManimala-2013,
                              author       = {Rubia, L Berwin and Manimala, K},
                              title        = {Slow feature analysis for recognizing prisoner{\textquoteright}s activities to assist jail authorities},
                              journal      = {International Journal of Advances in Engineering and Emerging Technology (IJAEET)},
                              year         = {2013},
                              volume       = {1},
                              number       = {1},
    			  url          = {http://erlibrary.org/papers/ijaeet/v1/i1/ERL-101227.pdf}
                            }
    			
    			
    					
    Rubia, L.B. & Manimala, K. 2017 Slow Feature Analysis for Recognizing Prisoner's Activities to Assist Jail Authorities .
     
    inproceedings
    BibTeX:
    			
    			
                            @inproceedings{RubiaManimala-2017,
                              author       = {L. Berwin Rubia and K. Manimala},
                          title        = {Slow Feature Analysis for Recognizing Prisoner{\textquoteright}s Activities to Assist Jail Authorities},
                              year         = {2017}
                            }
    			
    			
    					
    Schill 2009 Modellierung invarianter Ortserkennung mittels Slow Feature Analysis .
     
    misc
    BibTeX:
    			
    			
                            @misc{Schill-2009,
                              author       = {Schill},
                              title        = {Modellierung invarianter {O}rtserkennung mittels {S}low {F}eature {A}nalysis},
                              year         = {2009}
                            }
    			
    			
    					
    Schönfeld, F. & Wiskott, L. 2012 Spatial representation in the hippocampus Poster at the g-Node GPU Workshop, Apr 11--13, Munich, Germany .
     
    misc
    BibTeX:
    			
    			
                            @misc{SchoenfeldWiskott-2012a,
                              author       = {Fabian Sch\"onfeld and Laurenz Wiskott},
                              title        = {Spatial representation in the hippocampus},
                              year         = {2012},
                              howpublished = {Poster at the g-Node GPU Workshop, Apr 11--13, Munich, Germany}
                            }
    			
    			
    					
    Schönfeld, F. & Wiskott, L. 2012 Sensory integration of place and head-direction cells in a virtual environment Poster at the 8th FENS Forum of Neuroscience, Jul 14--18, Barcelona, Spain .
     
    misc
    BibTeX:
    			
    			
                            @misc{SchoenfeldWiskott-2012b,
                              author       = {Fabian Sch\"onfeld and Laurenz Wiskott},
                              title        = {Sensory integration of place and head-direction cells in a virtual environment},
                              year         = {2012},
                              howpublished = {Poster at the 8\textsuperscript{th} FENS Forum of Neuroscience, Jul 14--18, Barcelona, Spain}
                            }
    			
    			
    					
    Schönfeld, F. & Wiskott, L. 2013 RatLab: an easy to use tool for place code simulations Frontiers in Computational Neuroscience , 7(104).
     
    article
    Abstract: In this paper we present the RatLab toolkit, a software framework designed to set up and simulate a wide range of studies targeting the encoding of space in rats. It provides open access to our modeling approach to establish place and head direction cells within unknown environments and it offers a set of parameters to allow for the easy construction of a variety of enclosures for a virtual rat as well as controlling its movement pattern over the course of experiments. Once a spatial code is formed RatLab can be used to modify aspects of the enclosure or movement pattern and plot the effect of such modifications on the spatial representation, i.e., place and head direction cell activity. The simulation is based on a hierarchical Slow Feature Analysis (SFA) network that has been shown before to establish a spatial encoding of new environments using visual input data only. RatLab encapsulates such a network, generates the visual training data, and performs all sampling automatically—with each of these stages being further configurable by the user. RatLab was written with the intention to make our SFA model more accessible to the community and to that end features a range of elements to allow for experimentation with the model without the need for specific programming skills.
    BibTeX:
    			
    			
                            @article{SchoenfeldWiskott-2013a,
                              author       = {F. Sch\"onfeld and L. Wiskott},
                              title        = {{RatLab}: an easy to use tool for place code simulations},
                              journal      = {Frontiers in Computational Neuroscience},
                              year         = {2013},
                          volume       = {7},
                          number       = {104},
    			  url          = {http://journal.frontiersin.org/article/10.3389/fncom.2013.00104/full},
                              url2         = {http://www.ini.rub.de/PEOPLE/wiskott/Reprints/SchoenfeldWiskott-2013-FrontiersCompNeurosci-RatLab.pdf},
                              doi          = {http://doi.org/10.3389/fncom.2013.00104}
                            }
    			
    			
    					
    Schönfeld, F. & Wiskott, L. 2013 Theoretical neuroscience: finding your way into the light IGSN report , 47-49.
     
    misc
    BibTeX:
    			
    			
                            @misc{SchoenfeldWiskott-2013b,
                              author       = {Fabian Sch\"onfeld and Laurenz Wiskott},
                              title        = {Theoretical neuroscience: finding your way into the light},
                              journal      = {IGSN report},
                              year         = {2013},
                              pages        = {47--49}
                            }
    			
    			
    					
    Schönfeld, F. & Wiskott, L. 2015 Modeling place field activity with hierarchical slow feature analysis Frontiers in Computational Neuroscience , 9(51).
     
    article
    Abstract: What are the computational laws of hippocampal activity? In this paper we argue for the slowness principle as a fundamental processing paradigm behind hippocampal place cell firing. We present six different studies from the experimental literature, performed with real-life rats, that we replicated in computer simulations. Each of the chosen studies allows rodents to develop stable place fields and then examines a distinct property of the established spatial encoding: adaptation to cue relocation and removal; direction-dependent firing in the linear track and open field; and morphing and scaling the environment itself. Simulations are based on a hierarchical Slow Feature Analysis (SFA) network topped by an independent component analysis (ICA) output layer. The slowness principle is shown to account for the main findings of the presented experimental studies. The SFA network generates its responses using raw visual input only, which adds to its biological plausibility but requires experiments performed in light conditions. Future iterations of the model will thus have to incorporate additional information, such as path integration and grid cell activity, in order to be able to also replicate studies that take place during darkness.
    BibTeX:
    			
    			
                            @article{SchoenfeldWiskott-2015,
                              author       = {Fabian Sch{\"{o}}nfeld and Laurenz Wiskott},
                              title        = {Modeling place field activity with hierarchical slow feature analysis},
                          journal      = {Frontiers in Computational Neuroscience},
                              year         = {2015},
                              volume       = {9},
                              number       = {51},
    			  url          = {http://journal.frontiersin.org/article/10.3389/fncom.2015.00051/full},
                              doi          = {http://doi.org/10.3389/fncom.2015.00051}
                            }
    			
    			
    					
    Schüler, M.; Hlynsson, H.D. & Wiskott, L. 2018 Gradient-based Training of Slow Feature Analysis by Differentiable Approximate Whitening CoRR , abs/1808.08833.
     
    article
    BibTeX:
    			
    			
                            @article{SchuelerHlynssonEtAl-2018,
                              author       = {Merlin Sch{\"u}ler and Hlynur Dav{\'i}ð Hlynsson and Laurenz Wiskott},
                              title        = {Gradient-based Training of Slow Feature Analysis by Differentiable Approximate Whitening},
                              journal      = {CoRR},
                              year         = {2018},
                              volume       = {abs/1808.08833}
                            }
    			
    			
    					
    Schumann, M. 2011 Analyse und Vergleich von Algorithmen zur Bestimmung des optischen Flusses Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, Institut für Informatik .
     
    mastersthesis
    BibTeX:
    			
    			
                            @mastersthesis{Schumann-2011,
                              author       = {Schumann, Martin},
                              title        = {Analyse und {V}ergleich von {A}lgorithmen zur {B}estimmung des optischen {F}lusses},
                              school       = {Humboldt-Universit{\"{a}}t zu Berlin, Mathematisch-Naturwissenschaftliche Fakult{\"{a}}t II, Institut f{\"{u}}r Informatik},
                              year         = {2011},
                              url2         = {http://www.neurorobotics.eu/downloads/publications/2011%20Schumann%20-%20Analyse%20und%20Vergleich%20von%20Algorithmen%20zur%20Bestimmung%20des%20optischen%20Flusses.pdf}
                            }
    			
    			
    					
    Schwartz, A.D. 2009 On the pattern classification of structured data using the neocortex-inspired memory-prediction framework Technical Report, University of Southern Denmark, Faculty of Engineering, Mærsk Mc-Kinney Møller Institute, Odense, Denmark, Sigma Space Corp. (090612).
     
    mastersthesis
    Abstract: In this master thesis project, we have researched how a theoretical model of the neocortex can be implemented as a hierarchical Bayesian network. The report is based on the theoretical Memory-prediction Framework (MPF) by Hawkins & Blakeslee (2004), which was later implemented in the Hierarchical Temporal Memory (HTM) by George & Hawkins (2005). The assumption of the master thesis project is that the HTM is unable to implement fundamental concepts of the MPF and is furthermore based on methods and tools that do not scale well with complexity when they are applied to realistic and complex problems. In this thesis we have been inspired by the work of Lee & Mumford (2003) and Dean (2006) in formulating an alternative model. The resulting novel Dynamic Hierarchical Nonparametric Belief Propagation (DHNBP) framework is based on the principles of the MPF framework and is able to facilitate the representation of spatiotemporal sequences of features in a Dynamic Markov Network. The DHNBP framework is a novel extension of the Nonparametric Belief Propagation framework by Sudderth (2006) into hierarchies and time. In this report we provide algorithms for implementation; however, the DHNBP framework still has open-ended aspects that require further research.
    BibTeX:
    			
    			
                            @mastersthesis{Schwartz-2009,
                              author       = {Schwartz, Anders Due},
                          title        = {On the pattern classification of structured data using the neocortex-inspired memory-prediction framework},
                              school       = {University of Southern Denmark, Faculty of Engineering, M{\ae}rsk Mc-Kinney M{\o}ller Institute, Odense, Denmark},
                              year         = {2009},
                              number       = {090612},
    			  url          = {http://www.eing.dk/Files/Thesis.pdf},
                              url2         = {http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.454.2875&rep=rep1&type=pdf}
                            }
    			
    			
    					
    Shan, Y.; Zhang, Z. & Huang, K. 2014 Learning skeleton stream patterns with slow feature analysis for action recognition European Conference on Computer Vision , 111-121.
     
    inproceedings
    Abstract: Previous studies on MoCap action data suggest that skeleton joint streams contain sufficient intrinsic information for understanding human body actions. (A Motion Capturing (MoCap) system tracks key points that are marked with a conspicuous color or other materials, such as LED lights; the motion sequences are collected into MoCap action datasets, e.g., the 1973 [3] and CMU [4] MoCap action datasets.) With the advancement in depth sensors, e.g., Kinect, pose estimation with depth images provides more readily available realistic skeleton stream data. However, the locations of joints are always unstable due to noise. Moreover, as the estimated skeletons of different persons are not the same, the intra-class variance is large. In this paper, we first expand the coordinate stream of each joint into multi-order streams by fusing hierarchical global information to improve the stability of the joint streams. Then, Slow Feature Analysis is applied to learn the visual pattern of each joint, and the high-level information in the learnt general patterns is encoded into each skeleton to reduce the intra-variance of the skeletons. A temporal pyramid of posture word histograms is used to describe the global temporal information of an action sequence. Our approach is verified with a Support Vector Machine (SVM) classifier on the MSR Action3D dataset, and the experimental results demonstrate that our approach achieves the state-of-the-art level.
    BibTeX:
    			
    			
                            @inproceedings{ShanZhangEtAl-2014,
                              author       = {Shan, Yanhu and Zhang, Zhang and Huang, Kaiqi},
                              title        = {Learning skeleton stream patterns with slow feature analysis for action recognition},
                              booktitle    = {European Conference on Computer Vision},
                              year         = {2014},
                              pages        = {111--121},
    			  url          = {http://link.springer.com/chapter/10.1007%2F978-3-319-16199-0_8},
                              doi          = {http://doi.org/10.1007/978-3-319-16199-0_8}
                            }
    			
    			
    					
    Shan, Y.; Zhang, Z.; Wang, S.; Huang, K. & Tan, T. 2011 Surveillance event detection IRDS-CASIA at TRECVid 2011, Benchmarking Activity .
     
    inproceedings
    Abstract: This paper proposes the event detection system for TRECVid 2011 surveillance event detection. "CellToEar", "Embrace", "ObjectPut", "PeopleMeet", "PeopleSplitUp", "PersonRuns" and "Pointing" are the 7 events we detect in our system. Firstly, interest points are detected in local spatial and temporal regions, and local features are described with the SFA (slow feature analysis) method. We apply lib-SVM to classify the 7 events, and the 7 scores corresponding to the foregoing events are the original result of the local region. Post-processing is used to generate the global result and reduce false alarms.
    BibTeX:
    			
    			
                            @inproceedings{ShanZhangEtAl-2011,
                              author       = {Shan, Yanhu and Zhang, Zhang and Wang, Shiquan and Huang, Kaiqi and Tan, Tieniu},
                              title        = {Surveillance event detection},
                              booktitle    = {IRDS-CASIA at TRECVid 2011, Benchmarking Activity},
                              year         = {2011},
                              url2         = {https://pdfs.semanticscholar.org/7a32/2fb1426e67adcbd6c7c0650e2dbc418fc367.pdf}
                            }
    			
    			
    					
    Shang, C.; Huang, B.; Lu, Y.; Yang, F. & Huang, D. 2016 Dynamic modeling of gross errors via probabilistic slow feature analysis applied to a mining slurry preparation process IFAC-PapersOnLine , 49(20), 25-30.
    Publ. Elsevier.
     
    article
    BibTeX:
    			
    			
                            @article{ShangHuangEtAl-2016b,
                              author       = {Shang, Chao and Huang, Biao and Lu, Yaojie and Yang, Fan and Huang, Dexian},
                              title        = {Dynamic modeling of gross errors via probabilistic slow feature analysis applied to a mining slurry preparation process},
                              journal      = {IFAC-PapersOnLine},
                              publisher    = {Elsevier},
                              year         = {2016},
                              volume       = {49},
                              number       = {20},
                              pages        = {25--30},
                              url2         = {https://www.researchgate.net/profile/Chao_Shang2/publication/310390674_Dynamic_Modeling_of_Gross_Errors_via_Probabilistic_Slow_Feature_Analysis_Applied_to_a_Mining_Slurry_Preparation_Process/links/582dc5f908ae102f072da843.pdf}
                            }
    			
    			
    					
    Shang, C.; Huang, B.; Yang, F. & Huang, D. 2015 Probabilistic slow feature analysis-based representation learning from massive process data for soft sensor modeling AIChE Journal , 61(12), 4126-4139.
    Publ. Wiley Online Library.
     
    article