Theory of Neural Systems

 

What we do

We are an interdisciplinary research group focusing on principles of self-organization in neural systems, ranging from artificial neural networks to the hippocampus/memory system. By bringing together machine learning and computational neuroscience we explore ways of extracting representations from data that are useful for goal-directed learning.

On the machine learning side, our work is centered around reinforcement learning, where an agent learns to interact with its environment. In this context we investigate learning of representations for different kinds of data, such as visual data and graphs, by means of deep learning as well as classical unsupervised methods. Additionally, we research model-based agents that can remember their environment and are capable of planning ahead. On all these frontiers, we seek not only to improve algorithmic performance but also to develop new ways of building more interpretable, explainable, and human-friendly AI.

On the neuroscience side, our work focuses on computational modeling of brain functions concerned with encoding, storage and recall of memories. Through this we aim to understand how information is learned, represented within different types of memory and finally reconstructed from memory.

 

Representation Learning for Reinforcement Learning

Reinforcement learning has become an increasingly powerful tool with the advent of neural networks, which provide an all-in-one solution for solving complex tasks. A major downside of this approach, however, is its lack of transparency and transferability, qualities that are important if these systems are to be deployed in the real world. This is especially true for the many cases where safety or fairness play an important role. By separating the learning of useful data representations from the process of solving a task, we aim to improve both explainability and flexibility in reinforcement learning: with a good representation of the input data, the model required to solve a task can be simpler and thus easier to understand. At the same time, it becomes more feasible to transfer already established, transparent solutions to similar tasks.
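
As a toy illustration of this separation, the sketch below (PyTorch, with all names and sizes invented for the example) trains only a small linear policy head on top of a frozen, separately learned encoder; the head is small enough to inspect directly and can be retrained for a new task without touching the representation.

    import torch
    import torch.nn as nn

    obs_dim, feat_dim, n_actions = 64, 8, 4

    # representation: assumed to be learned separately (e.g. unsupervised);
    # here it is simply a frozen stand-in network
    encoder = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                            nn.Linear(32, feat_dim))
    for p in encoder.parameters():
        p.requires_grad_(False)

    # task solver: a single linear layer on top of the learned features
    policy = nn.Linear(feat_dim, n_actions)
    opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

    obs = torch.randn(128, obs_dim)                # stand-in observations
    actions = torch.randint(0, n_actions, (128,))  # stand-in supervision

    for _ in range(100):                           # behavior-cloning-style toy fit
        loss = nn.functional.cross_entropy(policy(encoder(obs)), actions)
        opt.zero_grad(); loss.backward(); opt.step()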

Our group developed the highly cited Slow Feature Analysis (SFA), which extracts slowly varying features from data. Over the years, we have extended SFA and applied it to various problem settings. More recently, we have also been working with deep representation learning methods, especially different flavors of variational autoencoders (VAEs).

 

Memory-based Reinforcement Learning

Besides the representation learning aspect outlined above, we are interested in model-based deep reinforcement learning algorithms for complex domains. In model-based reinforcement learning, the goal is to first learn the environmental transition and reward dynamics of a given problem and then use the learned dynamics to solve the problem more data-efficiently. Since stochasticity and partial observability tend to play an important role in complex and high-dimensional settings, memory techniques are commonly used to aggregate information over the course of multiple (possibly many) time steps. Furthermore, to increase robustness and model performance, sampling models have in recent years become an increasingly popular alternative to the more common expectation models.
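
As a minimal sketch of the memory component, the snippet below (PyTorch, hypothetical sizes) aggregates a stream of observations into a belief-like summary state with a recurrent network; a dynamics or policy head would then condition on this state rather than on the raw, partially observable inputs.

    import torch
    import torch.nn as nn

    obs_dim, hidden = 16, 32
    memory = nn.GRU(obs_dim, hidden, batch_first=True)

    obs_seq = torch.randn(8, 50, obs_dim)   # batch of 50-step observation streams
    outputs, _ = memory(obs_seq)            # per-step aggregated states
    belief = outputs[:, -1]                 # summary of the full history
    print(belief.shape)                     # torch.Size([8, 32])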

Our main research focus in this setting lies on improving the planning performance of learned environment models. In particular, we are interested in designing model architectures that qualify for efficient and precise planning by construction, rather than working on the planning procedure itself. To achieve this, we combine the aforementioned sampling models and memory techniques with principles from hierarchical reinforcement learning.
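
The general idea of planning with a sampling model can be sketched in a few lines: draw sampled rollouts for candidate action sequences and execute the first action of the best one. The toy dynamics and the random-shooting planner below are illustrative stand-ins, not our actual architecture.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_next_state(state, action):
        # a learned stochastic transition model would go here;
        # this toy version just adds noisy, action-dependent drift
        return state + 0.1 * action + 0.05 * rng.normal(size=state.shape)

    def reward(state):
        return -np.sum(state ** 2)          # toy objective: drive the state to 0

    def plan(state, horizon=5, n_candidates=64):
        best_seq, best_ret = None, -np.inf
        for _ in range(n_candidates):       # random shooting over action sequences
            seq = rng.uniform(-1, 1, size=(horizon, state.shape[0]))
            s, ret = state.copy(), 0.0
            for a in seq:                   # roll the sampling model forward
                s = sample_next_state(s, a)
                ret += reward(s)
            if ret > best_ret:
                best_seq, best_ret = seq, ret
        return best_seq[0]                  # execute first action, then replan

    print(plan(np.array([1.0, -0.5])))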

 

System Level Modeling of the Hippocampus

Remembering what we have done and experienced in the past is essential for defining what we are and deciding what to do in the future. However, our so-called episodic memory is far less reliable than one might think. We neglect, change, and even add things to our memories, often in ways that make them more in line with what we would generally expect or like. Thus, episodic memory seems to be largely a generative process, in which incomplete memory traces are enriched and modified by general, so-called semantic, information and expectations about the world.

On the neuroscience side of our group, we use advanced machine learning techniques to develop a model of the interaction between the episodic and semantic memory systems, mainly during storage and retrieval of episodic memories, but also for learning semantics. Our model describes the interplay between hippocampus and neocortex. We hypothesize that the hippocampus stores and retrieves selected aspects of an episode, which are necessarily incomplete, and that the neocortex plausibly fills in the missing information based on general semantic knowledge. For modeling we have used a range of generative models, from restricted Boltzmann machines (RBMs) to variational autoencoders (VAEs) and vector-quantized variational autoencoders (VQ-VAEs), as well as PixelCNN.
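
The numpy toy below illustrates the hypothesized division of labor (all data and names are invented for the example): an "episodic" store keeps only a few selected feature values of one experienced pattern, and recall fills in the rest from "semantic" statistics distilled from many experiences.

    import numpy as np

    rng = np.random.default_rng(1)
    n_classes, dim = 3, 20
    prototypes = rng.normal(size=(n_classes, dim))     # underlying regularities
    labels = rng.integers(0, n_classes, 500)
    episodes = prototypes[labels] + 0.3 * rng.normal(size=(500, dim))

    # "neocortex": semantic statistics learned from many experiences
    class_means = np.stack([episodes[labels == c].mean(axis=0)
                            for c in range(n_classes)])

    # "hippocampus": store only a few selected aspects of one episode
    episode = episodes[0]
    trace_idx = rng.choice(dim, size=5, replace=False)
    trace_val = episode[trace_idx]

    # recall: match the partial trace to semantic memory, fill in the gaps
    dists = ((class_means[:, trace_idx] - trace_val) ** 2).sum(axis=1)
    recalled = class_means[int(np.argmin(dists))].copy()  # semantic fill-in
    recalled[trace_idx] = trace_val                       # stored specifics kept
    print(np.linalg.norm(recalled - episode))             # small, but not zero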

This research is part of the interdisciplinary DFG-funded research group "Constructing scenarios of the past: A new framework in episodic memory". We interact closely with collaborators from psychology, who study the neural mechanisms via fMRI and behavioral experiments, and from philosophy, who address fundamental questions that arise within and about our framework.

Our Mission Statement

Career: Help young scientists realize their potential in science and society.
Research: Discover principles of self-organization of intelligent systems.
Education: Teach creativity through mathematical intuition.
Society: Contribute to an informed discussion about artificial intelligence.
Technology: Support technological progress by sharing our expertise.
2020

Exploration of Deep Double Descent through Ensemble Formation

We empirically decompose the generalization loss of deep neural networks into bias and variance components on an image classification task by constructing ensembles using geometric-mean averaging of the sub-net outputs, and we isolate double descent in the variance component of the loss. Our results show that small models afford ensembles that outperform single large models while requiring considerably fewer parameters and computational steps. We also find that the form of deep double descent that depends on the existence of label noise can be mitigated by ensembles of models trained on identical label noise almost as thoroughly as by ensembles of networks each trained on independently drawn noise.
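
Geometric-mean averaging of sub-net outputs amounts to averaging the log-probabilities and renormalizing; a minimal numpy sketch on dummy softmax outputs:

    import numpy as np

    def geometric_mean_ensemble(probs):
        """probs: array of shape (n_models, n_samples, n_classes)."""
        log_mean = np.mean(np.log(np.clip(probs, 1e-12, None)), axis=0)
        unnorm = np.exp(log_mean)
        return unnorm / unnorm.sum(axis=-1, keepdims=True)

    rng = np.random.default_rng(0)
    raw = rng.random((5, 3, 4))                   # 5 models, 3 samples, 4 classes
    probs = raw / raw.sum(axis=-1, keepdims=True)
    print(geometric_mean_ensemble(probs).sum(axis=-1))  # each row sums to 1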

2019

Combat Task Interference in Multi-Task Model-Based Reinforcement Learning by Using Separate Dynamics Models

In model-free multi-task reinforcement learning (RL), abundant work shows that a shared policy network can improve performance across different tasks. The rationale is that an agent can learn the similarities that all tasks have in common and thus effectively enrich the sample count for every task at hand. In model-based multi-task RL, however, we found evidence suggesting that a dynamics model can suffer from task confusion or catastrophic interference if it is trained on multiple tasks at once.
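
A minimal sketch of the corresponding remedy is to give each task its own small dynamics model, so that gradients from one task cannot overwrite another task's transition function (PyTorch, illustrative sizes; not the exact architecture from the paper):

    import torch
    import torch.nn as nn

    class PerTaskDynamics(nn.Module):
        """One small transition model per task; no shared weights to interfere."""
        def __init__(self, n_tasks, state_dim, action_dim):
            super().__init__()
            self.models = nn.ModuleList(
                nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                              nn.Linear(64, state_dim))
                for _ in range(n_tasks))

        def forward(self, task_id, state, action):
            return self.models[task_id](torch.cat([state, action], dim=-1))

    dyn = PerTaskDynamics(n_tasks=3, state_dim=8, action_dim=2)
    s, a = torch.randn(16, 8), torch.randn(16, 2)
    print(dyn(0, s, a).shape)                     # torch.Size([16, 8])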

2018

Learning gradient-based ICA by neurally estimating mutual information

Several methods for estimating the mutual information of random variables have been developed in recent years. They can prove valuable for novel approaches to learning statistically independent features. In this paper, we use one of these methods, a mutual information neural estimation (MINE) network, to present a proof of concept of how a neural network can perform linear ICA. We minimize the mutual information, as estimated by a MINE network, between the output units of a differentiable encoder network. This is done by simple alternating optimization of the two networks. The method is shown to yield solutions qualitatively equal to those of FastICA on blind source separation of noisy sources.
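
The PyTorch sketch below follows this recipe in spirit: a statistics network provides a MINE estimate of the mutual information between the two outputs of a linear encoder, and the two networks are updated in alternation (hyperparameters and the weight-normalization step that guards against collapse are illustrative choices, not those of the paper):

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    n = 1024
    sources = torch.stack([torch.rand(n), torch.sign(torch.randn(n))], dim=1)
    mixed = sources @ torch.tensor([[1.0, 0.5], [0.3, 1.0]])  # linear mixture

    encoder = nn.Linear(2, 2, bias=False)                     # unmixing matrix
    stat_net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
    opt_e = torch.optim.Adam(encoder.parameters(), lr=1e-3)
    opt_t = torch.optim.Adam(stat_net.parameters(), lr=1e-3)

    def mine_estimate(y):
        """Donsker-Varadhan lower bound on I(y1; y2) from samples."""
        joint = stat_net(y).mean()
        shuffled = torch.stack([y[:, 0], y[torch.randperm(len(y)), 1]], dim=1)
        return joint - torch.log(torch.exp(stat_net(shuffled)).mean() + 1e-8)

    for step in range(2000):
        # statistics-network step: tighten the bound, encoder held fixed
        loss_t = -mine_estimate(encoder(mixed).detach())
        opt_t.zero_grad(); loss_t.backward(); opt_t.step()

        # encoder step: reduce the estimated MI between the output units
        loss_e = mine_estimate(encoder(mixed))
        opt_e.zero_grad(); loss_e.backward(); opt_e.step()
        with torch.no_grad():   # crude scale constraint against collapse
            encoder.weight /= encoder.weight.norm(dim=1, keepdim=True)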

2018

Measuring the Data Efficiency of Deep Learning Methods

We propose a new experimental protocol and use it to benchmark the data efficiency — performance as a function of training set size — of two deep learning algorithms, convolutional neural networks (CNNs) and hierarchical information-preserving graph-based slow feature analysis (HiGSFA), for tasks in classification and transfer learning scenarios.
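
The protocol boils down to training the same method on nested subsets of growing size and recording test performance as a function of the subset size; a scikit-learn stand-in sketch (the paper benchmarks CNNs and HiGSFA instead of the simple classifier used here):

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for n in [50, 100, 200, 400, 800, len(X_tr)]:  # nested training subsets
        clf = LogisticRegression(max_iter=1000).fit(X_tr[:n], y_tr[:n])
        print(n, round(clf.score(X_te, y_te), 3))  # accuracy as a function of n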

2010

Ongoing project by Stefan Richthofer: Predictable Feature Analysis

To apply Slow Feature Analysis (SFA) to interactive scenarios, it must be able to deal with a control signal. Predictability is a crucial property of features involved in control, and this project deals with Predictable Feature Analysis (PFA): an SFA-inspired approach to extracting predictable features and leveraging them to solve continuous control tasks.
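
As a toy illustration of the notion of predictability under a control signal, the numpy sketch below scores a one-dimensional feature y_t = w·x_t by how well a linear predictor forecasts y_{t+1} from (y_t, u_t); PFA searches for projections that make this residual small, whereas this sketch only evaluates a given w and is not the actual PFA algorithm.

    import numpy as np

    def predictability(w, X, U):
        """1 minus the relative residual variance of a linear one-step prediction."""
        y = X @ w
        A = np.column_stack([y[:-1], U[:-1], np.ones(len(y) - 1)])
        coef, *_ = np.linalg.lstsq(A, y[1:], rcond=None)
        return 1.0 - (y[1:] - A @ coef).var() / y.var()

    rng = np.random.default_rng(0)
    U = rng.normal(size=500)                  # control signal
    x1 = np.zeros(500)                        # channel driven by the control
    for t in range(499):
        x1[t + 1] = 0.9 * x1[t] + 0.5 * U[t]
    X = np.column_stack([x1, rng.normal(size=500)])  # second channel: pure noise

    print(predictability(np.array([1.0, 0.0]), X, U))  # near 1: predictable
    print(predictability(np.array([0.0, 1.0]), X, U))  # near 0: unpredictable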

    2024

  • Empowering Advisors: Designing a Dashboard for University Student Guidance
    Baucks, F., & Wiskott, L.
    In P. Salden & Leschke, J. (Eds.), Learning Analytics und Künstliche Intelligenz in Studium und Lehre. Erfahrungen und Schlussfolgerungen aus einer hochschulweiten Erprobung. (accepted) Wiesbaden, Germany: Springer VS Fachmedien
  • *Best Paper Nominee* Gaining Insights into Course Difficulty Variations Using Item Response Theory
    Baucks, F., Schmucker, R., & Wiskott, L.
    In LAK24: 14th International Learning Analytics and Knowledge Conference (pp. 450–461) New York, NY, USA: Association for Computing Machinery
  • tachAId—An interactive tool supporting the design of human-centered AI solutions
    Bauroth, M., Rath-Manakidis, P., Langholf, V., Wiskott, L., & Glasmachers, T.
    Frontiers in Artificial Intelligence, 7
  • Ökolopoly: Case Study on Large Action Spaces in Reinforcement Learning
    Engelhardt, R. C., Raycheva, R., Lange, M., Wiskott, L., & Konen, W.
    In G. Nicosia, Ojha, V., La Malfa, E., La Malfa, G., Pardalos, P. M., & Umeton, R. (Eds.), Machine Learning, Optimization, and Data Science (pp. 109–123) Cham: Springer Nature Switzerland
  • Interpretable Brain-Inspired Representations Improve RL Performance on Visual Navigation Tasks
    Lange, M., Engelhardt, R. C., Konen, W., & Wiskott, L.
    In eXplainable AI approaches for Deep Reinforcement Learning
  • *Best Paper Award* Improving Reinforcement Learning Efficiency with Auxiliary Tasks in Non-visual Environments: A Comparison
    Lange, M., Krystiniak, N., Engelhardt, R. C., Konen, W., & Wiskott, L.
    In G. Nicosia, Ojha, V., La Malfa, E., La Malfa, G., Pardalos, P. M., & Umeton, R. (Eds.), Machine Learning, Optimization, and Data Science (pp. 177–191) Cham: Springer Nature Switzerland
  • ProtoP-OD: Explainable Object Detection with Prototypical Parts
    Rath-Manakidis, P., Strothmann, F., Glasmachers, T., & Wiskott, L.
    arXiv
  • Classification and Reconstruction Processes in Deep Predictive Coding Networks: Antagonists or Allies?
    Rathjens, J., & Wiskott, L.
    arXiv
    2023

  • A Tutorial on the Spectral Theory of Markov Chains
    Seabrook, E., & Wiskott, L.
    Neural Computation, 35(11), 1713–1796
  • A map of spatial navigation for neuroscience
    Parra-Barrero, E., Vijayabaskaran, S., Seabrook, E., Wiskott, L., & Cheng, S.
    Neuroscience & Biobehavioral Reviews, 152, 105200
  • Hierarchical Transformer VQ-VAE: An investigation of attentional selection in a generative model of episodic memory
    Reyhanian, S., Fayyaz, Z., & Wiskott, L.
    Bernstein Conference
  • Von der Forschung in die Praxis: Entwicklung eines Dashboards für die Studienberatung
    Baucks, F., & Wiskott, L.
    Abstract presented at 2nd Learning AID
  • Ein Dashboard für die Studienberatung: Technische Infrastruktur und Studienverlaufsplanung im Projekt KI:edu.nrw
    Baucks, F., Leschke, J., Metzger, C., & Wiskott, L.
    In Workshop Proceedings of the 21st Fachtagung Bildungstechnologien (DELFI) (pp. 185–188) Bonn: Gesellschaft für Informatik e.V.
  • *Best Paper Nominee* Mitigating Biases using an Additive Grade Point Model: Towards Trustworthy Curriculum Analytics Measures
    Baucks, F., & Wiskott, L.
    In 21. Fachtagung Bildungstechnologien (DELFI) (pp. 41–52) Bonn: Gesellschaft für Informatik e.V.
  • Tracing Changes in University Course Difficulty Using Item-Response Theory
    Baucks*, F., Schmucker*, R., & Wiskott, L.
    AAAI Workshop on AI for Education: https://ai4ed.cc/workshops/aaai2023
  • Sample-Based Rule Extraction for Explainable Reinforcement Learning
    Engelhardt, R. C., Lange, M., Wiskott, L., & Konen, W.
    In G. Nicosia, Ojha, V., La Malfa, E., La Malfa, G., Pardalos, P., Di Fatta, G., et al. (Eds.), Machine Learning, Optimization, and Data Science (pp. 330–345) Cham: Springer Nature Switzerland
  • Iterative Oblique Decision Trees Deliver Explainable RL Models
    Engelhardt, R. C., Oedingen, M., Lange, M., Wiskott, L., & Konen, W.
    Algorithms, 16(6)
  • Benchmarks for Physical Reasoning AI
    Melnik, A., Schiewer, R., Lange, M., Muresanu, A. I., Saeidi, M., Garg, A., & Ritter, H.
    Transactions on Machine Learning Research
  • Modeling the function of episodic memory in spatial learning
    Zeng, X., Diekmann, N., Wiskott, L., & Cheng, S.
    Frontiers in Psychology, 14
    2022

  • A Model of Semantic Completion in Generative Episodic Memory
    Fayyaz, Z., Altamimi, A., Zoellner, C., Klein, N., Wolf, O. T., Cheng, S., & Wiskott, L.
    Neural Computation, 34(9), 1841–1870
  • Simulating Policy Changes in Prerequisite-Free Curricula: A Supervised Data-Driven Approach
    Baucks, F., & Wiskott, L.
    In Proceedings of the 15th International Conference on Educational Data Mining (pp. 470–476) International Educational Data Mining Society
  • Latent Representation Prediction Networks
    Hlynsson, H. D., Schüler, M., Schiewer, R., Glasmachers, T., & Wiskott, L.
    International Journal of Pattern Recognition and Artificial Intelligence, 36(01), 2251002
  • Reduction of Variance-related Error through Ensembling: Deep Double Descent and Out-of-Distribution Generalization
    Rath-Manakidis, P., Hlynsson, H. D., & Wiskott, L.
    In Proceedings of the 11th International Conference on Pattern Recognition Applications and Methods (ICPRAM) (pp. 31–40) SciTePress
  • Modular Networks Prevent Catastrophic Interference in Model-Based Multi-task Reinforcement Learning
    Schiewer, R., & Wiskott, L.
    In Machine Learning, Optimization, and Data Science (pp. 299–313) Springer International Publishing
    2021

  • Context-dependent extinction learning emerging from raw sensory inputs: a reinforcement learning approach
    Walther, T., Diekmann, N., Vijayabaskaran, S., Donoso, J. R., Manahan-Vaughan, D., Wiskott, L., & Cheng, S.
    Scientific Reports, 11(1)
  • Reward prediction for representation learning and reward shaping
    Hlynsson, H. D., & Wiskott, L.
    arXiv
  • Fully Automated, Realistic License Plate Substitution in Real-Life Images
    Kacmaz, U., Melchior, J., Horn, D., Witte, A., Schoenen, S., & Houben, S.
    In Proceedings of the IEEE Intelligent Transportation Systems Conference (ITSC) (pp. 2972–2979)
  • Exploring Slow Feature Analysis for Extracting Generative Latent Factors
    Menne, M., Schüler, M., & Wiskott, L.
    In Proceedings of the 10th International Conference on Pattern Recognition Applications and Methods, SciTePress - Science and Technology Publications
  • Non-local Optimization: Imposing Structure on Optimization Problems by Relaxation
    Müller, N., & Glasmachers, T.
    In Proceedings of the 16th ACM/SIGEVO Conference on Foundations of Genetic Algorithms (FOGA′21) Association for Computing Machinery
    2020

  • Improving sensory representations using episodic memory
    Görler, R., Wiskott, L., & Cheng, S.
    Hippocampus, 30(6), 638–656
  • Z2 vortices in the ground states of classical Kitaev-Heisenberg models
    Seabrook, E., Baez, M. L., & Reuther, J.
    Physical Review B, 101(17)
  • Latent Representation Prediction Networks
    Hlynsson, H. D., Schüler, M., Schiewer, R., Glasmachers, T., & Wiskott, L.
    arXiv preprint arXiv:2009.09439
  • Singular Sturm-Liouville Problems with Zero Potential (q=0) and Singular Slow Feature Analysis
    Richthofer, S., & Wiskott, L.
    CoRR e-print arXiv:2011.04765
    2019

  • Improved graph-based SFA: information preservation complements the slowness principle
    Escalante-B., A. N., & Wiskott, L.
    Machine Learning
  • Measuring the Data Efficiency of Deep Learning Methods
    Hlynsson, H., Escalante-B., A., & Wiskott, L.
    In Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods, SciTePress - Science and Technology Publications
  • Learning Gradient-Based ICA by Neurally Estimating Mutual Information
    Hlynsson, H. D., & Wiskott, L.
    In C. Benzmüller & Stuckenschmidt, H. (Eds.), KI 2019: Advances in Artificial Intelligence (pp. 182–187) Cham: Springer International Publishing
  • Learning gradient-based ICA by neurally estimating mutual information
    Hlynsson, H. D., & Wiskott, L.
    arXiv e-print arXiv:1904.09858
  • Measuring the Data Efficiency of Deep Learning Methods
    Hlynsson, H. D., Wiskott, L., et al.
    arXiv, arXiv–1907
  • A Hippocampus Model for Online One-Shot Storage of Pattern Sequences
    Melchior, J., Bayati, M., Azizi, A., Cheng, S., & Wiskott, L.
    CoRR e-print arXiv:1905.12937
  • Dual SVM Training on a Budget
    Qaadan, S., Schüler, M., & Glasmachers, T.
    In Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods, SciTePress - Science and Technology Publications
  • Gradient-based Training of Slow Feature Analysis by Differentiable Approximate Whitening
    Schüler, M., Hlynsson, H. D., & Wiskott, L.
    In W. S. Lee & Suzuki, T. (Eds.), Proceedings of The Eleventh Asian Conference on Machine Learning (Vol. 101, pp. 316–331) Nagoya, Japan: PMLR
    2018

  • Storage fidelity for sequence memory in the hippocampal circuit
    Bayati, M., Neher, T., Melchior, J., Diba, K., Wiskott, L., & Cheng, S.
    PLOS ONE, 13(10), e0204685
  • Utilizing Slow Feature Analysis for Lipreading
    Freiwald, J., Karbasi, M., Zeiler, S., Melchior, J., Kompella, V., Wiskott, L., & Kolossa, D.
    In Speech Communication; 13th ITG-Symposium (pp. 191–195) VDE Verlag GmbH
  • The Interaction between Semantic Representation and Episodic Memory
    Fang, J., Rüther, N., Bellebaum, C., Wiskott, L., & Cheng, S.
    Neural Computation, 30(2), 293–332
  • Challenges in High-dimensional Controller Design with Evolution Strategies
    Müller, N., & Glasmachers, T.
    In Parallel Problem Solving from Nature (PPSN XVI) Springer
  • Global Navigation Using Predictable and Slow Feature Analysis in Multiroom Environments, Path Planning and Other Control Tasks
    Richthofer, S., & Wiskott, L.
    CoRR e-print arXiv:1805.08565
  • Gradient-based Training of Slow Feature Analysis by Differentiable Approximate Whitening
    Schüler, M., Hlynsson, H. D., & Wiskott, L.
    CoRR e-print arXiv:1808.08833
  • Slowness as a Proxy for Temporal Predictability: An Empirical Comparison
    Weghenkel, B., & Wiskott, L.
    Neural Computation, 30(5), 1151–1179
    2017

  • PFAx: Predictable Feature Analysis to Perform Control
    Richthofer, S., & Wiskott, L.
    CoRR e-print arXiv:1712.00634
  • Gaussian-binary restricted Boltzmann machines for modeling natural image statistics
    Melchior, J., Wang, N., & Wiskott, L.
    PLOS ONE, 12(2), 1–24
  • Generating sequences in recurrent neural networks for storing and retrieving episodic memories
    Bayati, M., Melchior, J., Wiskott, L., & Cheng, S.
    In Proc. 26th Annual Computational Neuroscience Meeting (CNS*2017): Part 2
  • Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information
    Draht, F., Zhang, S., Rayan, A., Schönfeld, F., Wiskott, L., & Manahan-Vaughan, D.
    Frontiers in Behavioral Neuroscience, 11(92)
  • Extensions of Hierarchical Slow Feature Analysis for Efficient Classification and Regression on High-Dimensional Data
    Escalante-B., A. N.
    Doctoral thesis, Ruhr University Bochum, Faculty of Electrical Engineering and Information Technology
  • Intrinsically Motivated Acquisition of Modular Slow Features for Humanoids in Continuous and Non-Stationary Environments
    Kompella, V. R., & Wiskott, L.
    arXiv preprint arXiv:1701.04663
  • Graph-based predictable feature analysis
    Weghenkel, B., Fischer, A., & Wiskott, L.
    Machine Learning, 1–22
    2016

  • Graph-based Predictable Feature Analysis
    Weghenkel, B., Fischer, A., & Wiskott, L.
    e-print arXiv:1602.00554v1
  • Theoretical analysis of the optimal free responses of graph-based SFA for the design of training graphs.
    Escalante-B., A. N., & Wiskott, L.
    Journal of Machine Learning Research, 17(157), 1–36
  • Improved graph-based SFA: Information preservation complements the slowness principle
    Escalante-B., A. N., & Wiskott, L.
    e-print arXiv:1601.03945
  • How to Center Deep Boltzmann Machines
    Melchior, J., Fischer, A., & Wiskott, L.
    Journal of Machine Learning Research, 17(99), 1–61
  • A computational model of spatial encoding in the hippocampus
    Schönfeld, F.
    Doctoral thesis, Ruhr-Universität Bochum
    2015

  • Using SFA and PFA to solve navigation tasks in multi room environments
    Richthofer, S.
    Weekly Seminar talk, Institut für Neuroinformatik, Ruhr-Universität Bochum, Apr 1st, 2015, Bochum, Germany
  • Theoretical Analysis of the Optimal Free Responses of Graph-Based SFA for the Design of Training Graphs
    Escalante-B., A. N., & Wiskott, L.
    e-print arXiv:1509.08329 (Accepted in Journal of Machine Learning Research)
  • Continual curiosity-driven skill acquisition from high-dimensional video inputs for humanoid robots
    Kompella, V. R., Stollenga, M., Luciw, M., & Schmidhuber, J.
    Artificial Intelligence
  • Memory Storage Fidelity in the Hippocampal Circuit: The Role of Subregions and Input Statistics
    Neher, T., Cheng, S., & Wiskott, L.
    PLoS Computational Biology, 11(5), e1004250
  • Predictable Feature Analysis
    Richthofer, S., & Wiskott, L.
    In 14th IEEE International Conference on Machine Learning and Applications, ICMLA 2015, Miami, FL, USA, December 9-11, 2015 (pp. 190–196)
  • Predictable Feature Analysis
    Richthofer, S., & Wiskott, L.
    In Workshop New Challenges in Neural Computation 2015 (NC2) (pp. 68–75)
  • Modeling place field activity with hierarchical slow feature analysis
    Schoenfeld, F., & Wiskott, L.
    Frontiers in Computational Neuroscience, 9(51)
    2014

  • Slow Feature Analysis on Retinal Waves Leads to V1 Complex Cells.
    Dähne, S., Wilbert, N., & Wiskott, L.
    PLoS Computational Biology, 10(5), e1003564
  • Slow Feature Analysis for Curiosity-Driven Agents, 2014 IEEE WCCI Tutorial
  • Slowness Learning for Curiosity-Driven Agents
    Kompella, V. R.
    Doctoral thesis, Università della Svizzera italiana (USI)
  • An Anti-hebbian Learning Rule to Represent Drive Motivations for Reinforcement Learning
    Kompella, V. R., Kazerounian, S., & Schmidhuber, J.
    In From Animals to Animats 13 (pp. 176–187) Springer International Publishing
  • Explore to See, Learn to Perceive, Get the Actions for Free: SKILLABILITY
    Kompella, V. R., Stollenga, M. F., Luciw, M. D., & Schmidhuber, J.
    In Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN)
  • An Extension of Slow Feature Analysis for Nonlinear Blind Source Separation
    Sprekeler, H., Zito, T., & Wiskott, L.
    Journal of Machine Learning Research, 15, 921–947
  • Modeling correlations in spontaneous activity of visual cortex with Gaussian-binary deep Boltzmann machines
    Wang, N., Jancke, D., & Wiskott, L.
    In Proc. Bernstein Conference for Computational Neuroscience, Sep 3–5, Göttingen, Germany (pp. 263–264) BFNT Göttingen
  • Modeling correlations in spontaneous activity of visual cortex with centered Gaussian-binary deep Boltzmann machines
    Wang, N., Jancke, D., & Wiskott, L.
    In Proc. International Conference of Learning Representations (ICLR′14, workshop), Apr 14–16, Banff, Alberta, Canada
  • Gaussian-binary Restricted Boltzmann Machines on Modeling Natural Image statistics
    Wang, N., Melchior, J., & Wiskott, L.
    arXiv.org e-print arXiv:1401.5900
  • Learning predictive partitions for continuous feature spaces
    Weghenkel, B., & Wiskott, L.
    In Proc. 22nd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), Apr 23-25, Bruges, Belgium (pp. 577–582)
  • Elastic Bunch Graph Matching
    Wiskott, L., Würtz, R. P., & Westphal, G.
    Scholarpedia, 9, 10587
  • Spatial representations of place cells in darkness are supported by path integration and border information
    Zhang, S., Schoenfeld, F., Wiskott, L., & Manahan-Vaughan, D.
    Frontiers in Behavioral Neuroscience, 8(222)
    2013

  • A computational model for preplay in the hippocampus
    Azizi, A. H., Wiskott, L., & Cheng, S.
    Frontiers in Computational Neuroscience, 7, 161
  • How to Solve Classification and Regression Problems on High-Dimensional Data with a Supervised Extension of Slow Feature Analysis
    Escalante-B., A. -N., & Wiskott, L.
    Cognitive Sciences EPrint Archive (CogPrints)
  • How to Solve Classification and Regression Problems on High-Dimensional Data with a Supervised Extension of Slow Feature Analysis
    Escalante-B., A. N., & Wiskott, L.
    Journal of Machine Learning Research, 14, 3683–3719
  • Deep Hierarchies in the Primate Visual Cortex: What Can We Learn For Computer Vision?
    Krüger, N., Janssen, P., Kalkan, S., Lappe, M., Leonardis, A., Piater, J., et al.
    IEEE Trans. on Pattern Analysis and Machine Intelligence, 35(8), 1847–1871
  • An intrinsic value system for developing multiple invariant representations with incremental slowness learning
    Luciw*, M., Kompella*, V., Kazerounian, S., & Schmidhuber, J.
    Frontiers in Neurorobotics, 7 (*joint first authors)
  • How to Center Binary Restricted Boltzmann Machines
    Melchior, J., Fischer, A., Wang, N., & Wiskott, L.
    arXiv.org e-print arXiv:1311.1354
  • Are memories really stored in the hippocampal CA3 region?
    Neher, T., Cheng, S., & Wiskott, L.
    BoNeuroMed
  • Are memories really stored in the hippocampal CA3 region?
    Neher, T., Cheng, S., & Wiskott, L.
    In Proc. 10th Göttinger Meeting of the German Neuroscience Society, Mar 13-16, Göttingen, Germany (p. 104)
  • Predictable Feature Analysis
    Richthofer, S., & Wiskott, L.
    CoRR e-print arXiv:1311.2503
  • RatLab: An easy to use tool for place code simulations
    Schoenfeld, F., & Wiskott, L.
    Frontiers in Computational Neuroscience, 7(104)
  • Modeling correlations in spontaneous activity of visual cortex with centered Gaussian-binary deep Boltzmann machines
    Wang, N., Jancke, D., & Wiskott, L.
    arXiv preprint arXiv:1312.6108
    2012

  • Slow Feature Analysis: Perspectives for Technical Applications of a Versatile Learning Algorithm
    Escalante-B., A. N., & Wiskott, L.
    Künstliche Intelligenz [Artificial Intelligence], 26(4), 341–348
  • Incremental slow feature analysis: Adaptive low-complexity slow feature updating from high-dimensional input streams
    Kompella, V. R., Luciw, M., & Schmidhuber, J.
    Neural Computation, 24(11), 2994–3024
  • Autonomous learning of abstractions using curiosity-driven modular incremental slow feature analysis
    Kompella, V. R., Luciw, M., Stollenga, M., Pape, L., & Schmidhuber, J.
    In 2012 IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL) (pp. 1–8) IEEE
  • Collective-reward based approach for detection of semi-transparent objects in single images
    Kompella, V. R., & Sturm, P.
    Computer Vision and Image Understanding, 116(4), 484–499
  • Hierarchical incremental slow feature analysis
    Luciw, M., Kompella, V. R., & Schmidhuber, J.
    Workshop on Deep Hierarchies in Vision
  • Predictable Feature Analysis
    Richthofer, S., Weghenkel, B., & Wiskott, L.
    In Frontiers in Computational Neuroscience
  • Sensory integration of place and head-direction cells in a virtual environment
    Schönfeld, F., & Wiskott, L.
    Poster at NeuroVisionen 8, Oct 26, 2012, Aachen, Germany
  • Sensory integration of place and head-direction cells in a virtual environment
    Schönfeld, F., & Wiskott, L.
    Poster at the 8th FENS Forum of Neuroscience, Jul 14–18, Barcelona, Spain
  • An Analysis of Gaussian-Binary Restricted Boltzmann Machines for Natural Images
    Wang, N., Melchior, J., & Wiskott, L.
    In Proc. 20th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Apr 25–27, Bruges, Belgium (pp. 287–292)
    2011

  • Slow feature analysis
    Wiskott, L., Berkes, P., Franzius, M., Sprekeler, H., & Wilbert, N.
    Scholarpedia, 6(4), 5282
  • Heuristic Evaluation of Expansions for Non-Linear Hierarchical Slow Feature Analysis.
    Escalante, A., & Wiskott, L.
    In Proc. The 10th Intl. Conf. on Machine Learning and Applications (ICMLA′11), Dec 18–21, Honolulu, Hawaii (pp. 133–138) IEEE Computer Society
  • Incremental Slow Feature Analysis.
    Kompella, V. R., Luciw, M. D., & Schmidhuber, J.
    In Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI′11) (pp. 1354–1359)
  • Autoincsfa and vision-based developmental learning for humanoid robots
    Kompella, V. R., Pape, L., Masci, J., Frank, M., & Schmidhuber, J.
    In 2011 11th IEEE-RAS International Conference on Humanoid Robots (Humanoids) (pp. 622–629) IEEE
  • Detection and avoidance of semi-transparent obstacles using a collective-reward based approach
    Kompella, V. R., & Sturm, P.
    In 2011 IEEE International Conference on Robotics and Automation (ICRA) (pp. 3469–3474) IEEE
    2010

  • 3-SAT on CUDA: Towards a massively parallel SAT solver
    Meyer, Q., Schönfeld, F., Stamminger, M., & Wanka, R.
    In 2010 International Conference on High Performance Computing & Simulation (pp. 306–313)
  • Building a Side Channel Based Disassembler
    Eisenbarth, T., Paar, C., & Weghenkel, B.
    In M. L. Gavrilova, Tan, C. J. K., & Moreno, E. D. (Eds.), Transactions on Computational Science X: Special Issue on Security in Computing, Part I (pp. 78–99) Berlin, Heidelberg: Springer Berlin Heidelberg
  • Gender and Age Estimation from Synthetic Face Images with Hierarchical Slow Feature Analysis.
    Escalante, A., & Wiskott, L.
    In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU′10), Jun 28 – Jul 2, Dortmund
    2006

  • Analytical derivation of complex cell properties from the slowness principle
    Sprekeler, H., & Wiskott, L.
    In Proc. 2nd Bernstein Symposium for Computational Neuroscience, Oct 1–3, Berlin, Germany (p. 67) Bernstein Center for Computational Neuroscience (BCCN) Berlin
  • Analytical derivation of complex cell properties from the slowness principle
    Sprekeler, H., & Wiskott, L.
    In Proc. Berlin Neuroscience Forum, Jun 8–10, Bad Liebenwalde, Germany (pp. 65–66) Berlin: Max-Delbrück-Centrum für Molekulare Medizin (MDC)
  • Analytical derivation of complex cell properties from the slowness principle
    Sprekeler, H., & Wiskott, L.
    In Proc. 15th Annual Computational Neuroscience Meeting (CNS′06), Jul 16–20, Edinburgh, Scotland

    2021

  • Interaction of Ensembling and Double Descent in Deep Neural Networks
    Rath-Manakidis, P.
    Master’s thesis, Cognitive Science, Ruhr University Bochum, Germany

A brief introduction to Slow Feature Analysis

One of the main research topics of the TNS group is Slow Feature Analysis. Slow feature analysis (SFA) is an unsupervised learning method for extracting the slowest or smoothest underlying functions or features from a time series. It can be used for dimensionality reduction, regression, and classification. In this post we first provide a code example applying SFA, to help motivate the method. We then go into more detail about the math behind the method, and finally provide links to other good resources on the material.
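
To give a flavor of what the post covers, here is a bare-bones linear SFA in plain numpy: center and whiten the signal, then take the directions in which the whitened signal varies most slowly, i.e. the smallest eigenvalues of the covariance of the temporal differences.

    import numpy as np

    def linear_sfa(X, n_features=1):
        X = X - X.mean(axis=0)
        val, vec = np.linalg.eigh(np.cov(X, rowvar=False))
        Z = X @ (vec / np.sqrt(val))       # whiten: unit variance, no correlations
        dcov = np.cov(np.diff(Z, axis=0), rowvar=False)
        _, dvec = np.linalg.eigh(dcov)     # ascending: slowest directions first
        return Z @ dvec[:, :n_features]

    # toy demo: recover a slow sine hidden in a faster mixture
    t = np.linspace(0, 2 * np.pi, 1000)
    slow, fast = np.sin(t), np.sin(23 * t)
    X = np.column_stack([slow + fast, slow - 0.5 * fast])
    y = linear_sfa(X, n_features=1)
    print(abs(np.corrcoef(y[:, 0], slow)[0, 1]))   # close to 1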

An extension to Slow Feature Analysis (xSFA)

Following our previous tutorial on Slow Feature Analysis (SFA), we now turn to xSFA, an unsupervised learning algorithm that extends the original SFA algorithm by using the slow features generated by SFA to reconstruct the individual sources of a nonlinear mixture, a process also known as blind source separation (e.g., the reconstruction of individual voices from the recording of a conversation between multiple people). In this tutorial, we provide a short example to demonstrate the capabilities of xSFA, discuss its limits, and offer some pointers on how and when to apply it. We also take a closer look at the theoretical background of xSFA to provide an intuition for the mathematics behind it.
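
The sketch below performs what is essentially the first step of xSFA: expand the mixture nonlinearly and apply SFA, so that the slowest extracted feature estimates the slowest source. Full xSFA then removes this source's influence from the expanded space and repeats for the next source; the toy mixture here is chosen so that a quadratic expansion already suffices (linear_sfa is repeated from the SFA post's sketch so that this snippet runs on its own).

    import numpy as np

    def linear_sfa(X, n_features=1):
        X = X - X.mean(axis=0)
        val, vec = np.linalg.eigh(np.cov(X, rowvar=False))
        Z = X @ (vec / np.sqrt(val))                   # whiten
        _, dvec = np.linalg.eigh(np.cov(np.diff(Z, axis=0), rowvar=False))
        return Z @ dvec[:, :n_features]                # slowest directions

    def quadratic_expansion(X):
        d = X.shape[1]
        quads = [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
        return np.column_stack([X] + quads)

    t = np.linspace(0, 4 * np.pi, 2000)
    s1, s2 = np.sin(t), np.sin(7 * t + 1.0)            # slow and fast source
    mixed = np.column_stack([s1 + s2 ** 2, s2])        # nonlinear mixture
    est = linear_sfa(quadratic_expansion(mixed), n_features=1)
    print(abs(np.corrcoef(est[:, 0], s1)[0, 1]))       # close to 1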

Modeling the hippocampus, part I: Why the hippocampus?

In this multi-part series I'd like to give an introduction to how computational neuroscience can work hand in hand with experimental neuroscience to help us understand how the mammalian brain works. As a case study we'll take a look at modeling the hippocampus, a central and essential structure in our daily dealings with reality. In part I of the series we first take a look at the hippocampus, its role in the brain, and what makes this particular structure so uniquely fascinating.

Modeling the hippocampus, part II: Hippocampal function.

In this multi-part series I'd like to give an introduction to how computational neuroscience can work hand in hand with experimental neuroscience. In part II of the series we take a look at some of the fundamental problems of understanding brain computations. To get an idea of hippocampal function, we also talk about its involvement in human memory and how we came to know about it.

Modeling the hippocampus, part III: Spatial processing in the hippocampus.

In this multi-part series I'd like to give an introduction to how computational neuroscience can work hand in hand with experimental neuroscience to understand the mammalian hippocampus. In this third part of the series we take a look at the role of the hippocampus in spatial processing in rodents, to get a better idea of the computations the hippocampus performs for our brains.

Institute for Neural Computation

Department of Mathematics

Department of Electrical Engineering

The Institut für Neuroinformatik (INI) is a central research unit of the Ruhr-Universität Bochum. We aim to understand the fundamental principles through which organisms generate behavior and cognition while linked to their environments through sensory systems and while acting in those environments through effector systems. Inspired by our insights into such natural cognitive systems, we seek new solutions to problems of information processing in artificial cognitive systems. We draw from a variety of disciplines that include experimental approaches from psychology and neurophysiology as well as theoretical approaches from physics, mathematics, electrical engineering and applied computer science, in particular machine learning, artificial intelligence, and computer vision.

Universitätsstr. 150, Building NB, Room 3/32
D-44801 Bochum, Germany

Tel: (+49) 234 32-28967
Fax: (+49) 234 32-14210