Ratlab: a toolkit for studying spatially selective neurons with Slow Feature Analysis

Image by vaun0815 on Unsplash

I recently began my PhD in neuroscience at the Ruhr-University Bochum in Germany. My supervisor is Prof. Laurenz Wiskott, who has worked on a broad range of topics spanning the gap between computational neuroscience and machine learning. One of the main methods used by the group is Slow Feature Analysis (SFA), an unsupervised technique for extracting slowly varying features from temporal data. A notable result obtained with this algorithm is that it can generate firing patterns that match the spatial encoding of neurons found in the mammalian brain. Ratlab (Fabian Schönfeld 2016 — https://gitlab.com/fabschon/ratlab) is a software package that explores this property by simulating the exploratory behavior of rats and processing the resulting sensory input with a hierarchical SFA network. I recently upgraded the software from Python 2 to Python 3: https://github.com/wiskott-lab/Ratlabv3

I’ve enjoyed using this software a lot, and I believe it’s a valuable tool for understanding SFA, particularly as applied to spatial navigation. Furthermore, the software does not require extensive programming experience, making it very accessible to those working in this area of neuroscience. Here I give a brief overview of SFA before introducing Ratlab and its main modules, and finally show some of the different results you can get with this software. I strongly recommend installing Ratlab yourself and exploring it with me as you read. I hope you enjoy learning about this fun and versatile toolkit!

Contents

  1. Intro to SFA
  2. Intro to Ratlab
  3. Simulating
  4. Training
  5. Sampling
  6. Tying it all together

Intro to SFA

SFA is motivated by the slowness principle. This states that in a multidimensional, time-dependent signal x(t), the more meaningful features typically vary on a slower time scale than the less meaningful ones. As an example, consider the fact that our visual system has evolved to provide us with behaviorally relevant information about the outside world. This information can take many forms, including the positions of objects or other animals in our surroundings, as well as their relative movements. Typically, such environmental features change on the timescale of seconds. However, the same cannot be said for the individual components that make up this stream of information, such as the signals received by single receptors in the retina (or, equivalently, a single pixel in a computer vision system). This latter type of information varies on a much faster time scale than any of the features which are relevant to animals. SFA makes use of this observation and seeks to extract meaningful information from x(t) by considering only those features which vary slowly.

For a detailed and mathematical account of how SFA does this, please see this post written by my colleague Hlynur David Hlynsson (https://www.ini.rub.de/research/blog/a_brief_introduction_to_slow_feature_analysis/). Alternatively, for readers interested in the broader scope of SFA within neuroscience, Scholarpedia catalogues a wide variety of results obtained with it (http://www.scholarpedia.org/article/Slow_feature_analysis).
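
To give a concrete feel for what SFA computes before turning to Ratlab, here is a minimal sketch of linear SFA written from scratch in NumPy (a toy implementation of my own, not code from Ratlab): whiten the signal, then keep the directions along which the temporal derivative has the least variance.

import numpy as np

def linear_sfa(x, n_features=1):
    """Return the n_features slowest linear features of a signal x with shape (T, D)."""
    x = x - x.mean(axis=0)                       # center each dimension
    # whiten: rotate and rescale so every direction has unit variance
    evals, evecs = np.linalg.eigh(np.cov(x, rowvar=False))
    z = x @ (evecs / np.sqrt(evals))
    # covariance of the temporal derivative (finite differences)
    devals, devecs = np.linalg.eigh(np.cov(np.diff(z, axis=0), rowvar=False))
    # slowest features = directions with the smallest derivative variance
    return z @ devecs[:, np.argsort(devals)[:n_features]]

# toy data: one slow sine mixed linearly with four fast noise channels
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 5000)
sources = np.column_stack([np.sin(t)] + [rng.standard_normal(t.size) for _ in range(4)])
x = sources @ rng.standard_normal((5, 5))
slow = linear_sfa(x)                             # recovers the sine up to sign and scale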

Hierarchical SFA (HSFA) networks have the benefit of avoiding the “curse of dimensionality” that can be encountered with SFA’s non-linear expansion. Networks of this form have been applied to the raw visual input of virtual rats (Franzius et al. 2007). Interestingly, the slow features generated by this network resembled the firing patterns of several spatially selective neurons found in the mammalian brain. Depending on the movement statistics of the virtual rat, these slow features took the form of place cells, head direction cells, or spatial-view cells, and also reproduced some properties of grid cells.

Ratlab allows us to explore some of these results by conducting our very own simulated experiments on a virtual rat. A variety of experimental setups and control parameters are possible in this software, which vastly increases its scope. The software uses a feedforward hierarchical SFA network, composed of 3 layers, to process the visual input of the virtual rat as it explores. In each layer, filters receive input from the previous layer and perform the SFA algorithm on this input. The receptive field sizes of these filters increase with each layer of the network, such that the third SFA layer is a single node whose effective receptive field covers the whole visual input of the virtual rat. Ratlab is capable of studying the firing of place cells and head direction cells. Since Franzius et al. (2007) showed that Independent Component Analysis (ICA) is necessary to transform the raw SFA output into place field firing, Ratlab also offers the option to include an ICA step. A sketch of Ratlab’s network architecture can be seen below.

The network architecture used by Ratlab — image from Schönfeld & Wiskott (2013)

Intro to Ratlab

The Ratlab software is written in Python, with the most recent version using Python 3 (https://github.com/wiskott-lab/Ratlabv3). In this post I will outline the wider scope of the software and demonstrate its use as a tool for understanding the spatial encoding of neural systems.

Interested readers can also look at the GitLab page of the software’s earlier version (https://gitlab.com/fabschon/ratlab), which gives some examples of the software’s syntax and functionality. Additionally, Ratlab is formally introduced in Schönfeld & Wiskott (2013).

The Ratlab toolkit is comprised of four core modules:

ratlab.py, convert.py, train.py, sample.py

The first two of these handle the simulation stage of the experiment: ratlab.py runs the simulation, and convert.py packages the generated data so that it can be used to train the HSFA network with the third module, train.py. Finally, sample.py takes the slow features generated in the training phase and samples them over the environment so that you can assess any spatial selectivity achieved. Entering the help command after any of these modules prints a detailed description of its role as well as any relevant parameters it requires.
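
For example, printing this description for the simulation module (the same pattern works for the other three modules):

python ratlab.py help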

Additionally, the software has its own GUI for setting up experiments and choosing simulation parameters, which is a good place to start if you want a quick overview of the software’s scope. Starting in the Ratlab home directory, the GUI is opened by running:

python RatLabGUI.py 

which should produce the following window:

Ratlab’s GUI — image generated with RatLabv3

At the top left are the different environment geometries available, along with fields for entering the respective dimensions and a recording limit for your simulation. At the bottom are further fields for specifying the movement statistics of the artificial rat. These two categories of information are used by ratlab.py. On the right-hand side, training parameters are specified for use in train.py.

Once these fields have been completed, click on ‘SCRIPT!’ to generate a bash script in the same directory, so that the experiment can be run from start to finish using:

./RatLabScript

Once you start the experiment, a /current_experiment folder will be created, where all files relevant to your experiment will be stored.

While the GUI is useful for becoming initially familiar with how experiments are run, Ratlab works best as a command line tool. In the following sections, I will outline the different setups and computations that can be used when running Ratlab in this way. I encourage readers to make use of the help menus for each module as they read along.

Simulating

Environments

Something I really enjoy about this software is the flexibility it offers in designing the environment. The ratlab.py module comes with 4 predefined geometries: rectangular, star-maze, T-maze and circle. The rectangular and T-mazes are shaped as the names suggest. The star-maze is constructed out of several corridors leading away from a central region (with the number of corridors specified by the user). The circle maze is only approximately circular, and takes the shape of a regular N-sided polygon (with N fixed by the user).

Additionally, the custom_maze option allows you to design your own maze manually by specifying the texture of each wall as well as the intersection points in a text file (an example custom maze is already included at tools/custom_example_world; to use it, first move it to the current_experiment folder).

Below you can see an example of each environmental setup together with the script it was generated with. In each image, an aerial view is shown above (white dots = previous positions of the rat, red arrow = current heading), with the current view of the rat shown below. Image (a) shows the rectangular geometry, which is used by default (when no additional parameters are given it has dimensions of 60(l)x40(w)x10(h)).

The different environment types allowed in Ratlab: rectangular, star_maze, t_maze, circle_maze and custom — images generated with Ratlabv3

Beyond the environment geometry, Ratlab also allows you to insert obstacles into the environment using the box and dbox commands.

Movement parameters

Once you have chosen your environment, you can decide between (i) fixing points for the virtual rat to navigate between, or (ii) letting the rat explore with fixed movement statistics.

(i) Fixed points

A benefit of this option is that you can guarantee that the virtual rat will explore certain areas of the environment. It makes use of the path command, which requires a .txt file defining the points that the virtual rat must navigate to. As an example, try running the radial_points.py file in the /tools folder and saving the output as ‘path.txt’ in the /current_experiment folder. This is a path designed for a five-arm radial maze with appropriate arm dimensions. To run the simulation with all of this specified, simply enter:

python ratlab.py star_maze 5 12 40 8 path path.txt

which should lead to the rat sequentially exploring each arm before returning to the center of the maze, as in image (a) below:

Using the ‘path’ command in ratlab.py — images generated with Ratlabv3

It’s important that the path respects the environment’s geometry, otherwise the rat can wander through the walls of the laboratory! For example, if you call the same simulation as before but use a star maze with shorter arms, you get something like image (b) above.

Lastly, the additional loop command determines what the virtual rat should do when it reaches the final node. If loop is called, the rat navigates along the shortest path from the final node back to the starting node; otherwise, it simply reappears at the first node as soon as it reaches the final one. (If you do call loop, make sure this return path also respects the environment geometry, otherwise situations like the one shown in image (c) above can arise.)

(ii) Free exploration

If instead you wish to let the rat freely explore, Ratlab offers a variety of control parameters that determine the virtual rat’s movement statistics. Here I give a brief overview of the most important ones.

arc: In order to produce irregular trajectories, the rotations of the virtual rat are generated using a Gaussian white noise term centered around the rat’s current heading. This captures realistic exploration, since small rotations (e.g. <20 deg) are much more likely than large rotations (e.g. >90 deg). However, since we don’t want to generate fully random rotations, we may further want to limit the range of angles allowed. The arc parameter does exactly this, setting the decision arc of rotations. The default value is 320 degrees. The diagram below illustrates the arc.

A schematic image of the ‘arc’ parameter in ratlab.py — image by Author

mom: This term describes an effective momentum of the rat. For large values, one can think of the virtual rat as a high-mass particle whose heading direction is highly resistant to noise; the resulting path is smooth. For low values, the rat’s heading direction fluctuates much more due to noise, and the resulting path is more jagged. The default value is 0.55.

(Note: some extreme values such as arc=0 or mom=1 will cause the rat to get stuck when it hits a wall so just watch out for this!)
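
To build some intuition for how arc and mom shape the trajectory, here is a toy random-walk sketch of my own. It is not Ratlab’s actual movement code (the noise scale, momentum rule and wall handling are simplifying assumptions), but it reproduces the qualitative difference between jagged and smooth paths:

import numpy as np
import matplotlib.pyplot as plt

def toy_walk(arc_deg, mom, steps=2000, speed=0.5, box=(60.0, 40.0), seed=0):
    """Toy 2D walk: Gaussian turns clipped to a decision arc, damped by momentum."""
    rng = np.random.default_rng(seed)
    pos = np.array(box) / 2.0                  # start in the middle of the box
    heading = rng.uniform(0.0, 2.0 * np.pi)
    half_arc = np.radians(arc_deg) / 2.0
    path = [pos.copy()]
    for _ in range(steps):
        # Gaussian rotation noise, limited to the decision arc around the current heading
        turn = np.clip(rng.normal(0.0, half_arc / 2.0), -half_arc, half_arc)
        heading += (1.0 - mom) * turn          # higher momentum -> smaller effective turns
        pos = np.clip(pos + speed * np.array([np.cos(heading), np.sin(heading)]), 0.0, box)
        path.append(pos)                       # crude wall handling via the clipping above
    return np.array(path)

# default-like settings (jagged path) vs. a smoother, straighter walk
for arc_deg, mom in [(320, 0.55), (90, 0.9)]:
    p = toy_walk(arc_deg, mom)
    plt.plot(p[:, 0], p[:, 1], lw=0.5, label=f"arc={arc_deg}, mom={mom}")
plt.legend()
plt.gca().set_aspect("equal")
plt.show()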

bias: This is a useful parameter if you want the virtual rat to have anisotropic movement statistics. If you specify a preferred direction in 2D space, the rat will move faster along this direction than along the orthogonal one. The magnitude of this bias also needs to be specified.

Experimental Parameters

Now that you have chosen your environment and movement parameters, you are almost ready to start your experiment! But, before you do this you need to decide on some final details.

limit: This controls how long the experiment lasts, and ultimately how much data the SFA network is trained on. Larger data sets can offer better convergence, but at the cost of computation time. On the other hand, a data set that is too small will likely produce a singular covariance matrix in the SFA expansion. I recommend starting off with a few thousand time steps.

image color: By default, the training images are saved in color, but Ratlab also allows you to save in greyscale or both color schemes simultaneously.

Once you have decided on these, you can begin the experiment with your selected options by including the final command: record. I recommend clearing out the current_experiment folder before you do this so that the new training images are not mixed up with those from previous experiments. An example that includes a mixture of the options discussed so far could look like this:

python ratlab.py star_maze 5 12 40 8 mom 0.5 grey limit 5000 record

The generated training data will then be stored in the folder current_experiment/sequence. If you want to check that it saved correctly, take a look in this folder and you should see a series of .png images taken from the rat’s field of view as it moved through the environment; the number of images should match the value of limit. Also, the file exp_finish.png will be saved at the end of the simulation and lets you see the full trajectory of the virtual rat.
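
As a quick sanity check, you can count the saved frames yourself (a small snippet of my own, not part of Ratlab; it assumes you run it from the Ratlab home directory):

from pathlib import Path

# count the frames written by ratlab.py and compare against the chosen limit
frames = sorted(Path("current_experiment/sequence").glob("*.png"))
print(f"{len(frames)} frames saved")           # should equal the value of limit, e.g. 5000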

As a final step, we need to reformat the data so that it’s ready to be trained by the network. We do this using:

python convert.py

There are no parameters for this step, and it shouldn’t take long until the training data is compiled into the file current_experiment/sequence_data.

Training

Now that you are ready to train your network, it’s important to think about what you hope to achieve from this specific experiment. As explained in the intro, Ratlab is capable of studying firing rates of both place cells and head direction cells. In the case of place cells (i.e. position selectivity), the additional ICA node is needed and is called with:

python train.py ICA

SFA is already partially sensitive to head direction without ICA, but including it increases the precision of this selectivity. In either case, Ratlab allows you to add the ICA node retroactively, without training the whole network again, using the add_ICA command, so you can also look at both and compare the results (which I highly recommend).

Another useful option at the training stage is to introduce noise into the network using the parameter noise. As explained in the previous section, training on a small data set can lead to redundancies in the covariance matrix. While one solution to this is to use a bigger data set, you can also try adding noise into the network as a way to reduce this redundancy.
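
As a generic illustration of why this helps (this is not Ratlab’s internal noise mechanism, just a small NumPy example of my own): redundant input dimensions make the covariance matrix singular, which breaks the whitening step of SFA, and a little noise restores full rank.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
a = np.sin(t)
x = np.column_stack([a, a, rng.standard_normal(t.size)])       # two identical dimensions
print(np.linalg.matrix_rank(np.cov(x, rowvar=False)))          # 2: singular in 3 dimensions

x_noisy = x + 1e-3 * rng.standard_normal(x.shape)
print(np.linalg.matrix_rank(np.cov(x_noisy, rowvar=False)))    # 3: full rank again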

Ratlab will update you about the training process layer by layer and when complete will save the trained network as a .tsn file in the current_experiment folder.

Sampling

Once your network is trained, you are finally ready to visualize the slow features that it has learnt. This is done by sampling the visual field of the rat over multiple positions and directions in the environment, and recording the response of the network. As with the training stage, your choice here should reflect what you are trying to study. There are two types of sampling that Ratlab can do: spatial sampling and directional sampling. As you might guess, the former is appropriate for studying spatial selectivity (place cells) and the latter for directional selectivity (head direction cells). In both cases, you must specify how many slow features you want to sample. In Ratlab this is limited to 32, and choosing a smaller number can greatly reduce computation time.

Spatial sampling: this method works by positioning the virtual rat at periodic locations in the environment (forming a grid). The user can specify how regularly these sample points occur using the period parameter. Lastly, and most importantly, the environment is sampled in separate runs with the rat facing a different direction each time. There are 8 directions (N, NE, E, SE, etc.), and the user can either choose a specific selection or use all of them. Ratlab will also automatically produce an extra set of outputs by averaging over all the directions selected by the user. Comparing the individual directions with the average output allows us to check whether the network output depends on direction. Because this should not be the case for place fields, it acts as a useful diagnostic tool for interpreting the results. The image below shows an example of spatially sampled outputs. These firing patterns are spatially localized, and the different directions show little difference from the average firing, indicating very little direction selectivity. In this case, then, Ratlab has generated place fields that describe different regions of the environment!

Spatially sampled slow features over a rectangular maze — images generated with Ratlabv3
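
If you want to go beyond eyeballing this comparison, you can compute a simple summary statistic yourself. The helper below is hypothetical (not part of Ratlab) and assumes you have already assembled the sampled firing maps of one feature into an array of shape (n_directions, height, width):

import numpy as np

def direction_dependence(maps):
    """Correlate each direction-specific firing map with the average over directions."""
    maps = np.asarray(maps, dtype=float)
    mean_map = maps.mean(axis=0).ravel()
    mean_map -= mean_map.mean()
    corrs = []
    for m in maps:
        v = m.ravel() - m.mean()
        corrs.append(float(v @ mean_map / (np.linalg.norm(v) * np.linalg.norm(mean_map) + 1e-12)))
    return np.array(corrs)      # values close to 1 for every direction suggest a place-like response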

Directional sampling: the second type of sampling works by fixing the head direction of the virtual rat and sampling the network over a grid of locations in the environment before averaging over those locations. This is done for all 360 integer angles, and the resulting plots allow you to visualize how directionally selective the network’s slow features are. Below is an image taken from Schönfeld & Wiskott 2013, showing the response of 10 slow features using directional sampling. In this case, the additional ICA layer was included and the results show a clear preference for certain directions (although some low level responses still appear at other angles).

An example of directional sample plots generated by Ratlab — taken from Schönfeld & Wiskott (2013)

Tying it all together

Now that you are familiar with each of Ratlab’s main modules, you can begin to design experiments that are tailored to your specific interests. In order to get the most out of the software, it’s well worth running a number of simulations with different environments and parameters so that you first have a feel for the experimental scope, and so that you can get a wider picture of how SFA performs in different scenarios.

A good example of this is the pair of arc and mom parameters, which control the straightness and smoothness of the rat’s movement respectively. If you run a simulation with these parameters set to their default values, you will see that the rat turns a lot and has a rather jagged path. Since SFA extracts slowly varying features, it is no surprise that with these parameter values the head direction plots show virtually no direction selectivity. Alternatively, by reducing arc and/or increasing mom, you reduce the size of the rotations at each time step, meaning that over the course of the trajectory the head direction varies more slowly. Hence, as you might expect, if we run the same simulation as before but make these changes, the head direction plots show much better direction selectivity. Both scenarios are depicted below, where the trajectory of the rat (left) and the head direction plot of the slowest SFA feature (right) are shown for both parameter specifications.

A comparison of the effect of the ‘arc’ and ‘mom’ parameters on head direction selectivity — images generated with Ratlabv3

The bias parameter can also have a clearly visible effect on the results, particularly for spatial sampling. Intuitively, one would expect a directional movement bias to reduce the symmetry of any place field firing patterns. This is exactly what happens, as shown in the image below. In this experiment, the rat explored a rectangular environment of default size, with all parameters other than bias set to their defaults. I entered a bias direction of 0 1 (i.e. the vertical axis of the environment) with a scaling factor of 1, which means movement in this direction is twice as fast as movement along the horizontal axis. The figure below shows the first 8 slow features generated. The asymmetry between the vertical and horizontal axes is clearly visible. Since the horizontal position changes more slowly than the vertical position, the generated SFA outputs are more sensitive to horizontal position, i.e. more localized horizontally.

Asymmetric place cell firing generated using the ‘bias’ parameter in ratlab.py — images generated with Ratlabv3

So far, all the results shown were obtained with a network that included the optional ICA node after the final SFA layer. Comparing these to a network with no ICA node yields interesting results in the case of spatial sampling. This can be seen in the image below, where 16 slow features have been sampled over a rectangular environment (again with default dimensions). These outputs match those found without ICA in Franzius et al. (2007), with periodic firing patterns that partly resemble the firing of grid cells in the entorhinal cortex. In that paper, the authors explained this result with theoretical arguments, demonstrating that the slow features found by SFA form a Fourier basis, with the slowness of each feature related to its frequency. This can indeed be seen in the results below, which clearly depict harmonic functions of increasing frequency.

The first 16 outputs generated with default movement statistics in a rectangular environment without ICA. Outputs are ordered by slowness and show increasing frequency — images generated with Ratlabv3
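
To compare the sampled outputs with this theoretical picture, you can plot such harmonic functions yourself. The sketch below simply draws products of cosines over the default 60x40 arena; the particular modes and their ordering are illustrative choices of mine rather than values taken from the paper:

import numpy as np
import matplotlib.pyplot as plt

Lx, Ly = 60, 40                                 # default Ratlab arena dimensions
X, Y = np.meshgrid(np.linspace(0, Lx, 120), np.linspace(0, Ly, 80))

# low-frequency cosine modes over the box, roughly ordered by increasing frequency
modes = [(1, 0), (0, 1), (1, 1), (2, 0), (2, 1), (0, 2), (3, 0), (1, 2)]
fig, axes = plt.subplots(2, 4, figsize=(12, 5))
for ax, (i, j) in zip(axes.ravel(), modes):
    ax.imshow(np.cos(i * np.pi * X / Lx) * np.cos(j * np.pi * Y / Ly), origin="lower")
    ax.set_title(f"mode ({i}, {j})")
    ax.axis("off")
plt.tight_layout()
plt.show()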

Since it would take too long to give a full account of all the things you can do with Ratlab, I encourage you to dive into the software and experiment with your own setups and parameter selections.

I hope you have enjoyed this introduction to Ratlab and that you get some use out of the software for your own studies. All the best!

References and links

  1. Franzius, M., and Wiskott, L. (2007). Slowness and sparseness lead to place-, head direction-, and spatial-view cells. PLoS Comput. Biol. 3:e166. doi: 10.1371/journal.pcbi.0030166
  2. Schönfeld, F., and Wiskott, L. (2013). Ratlab: an easy to use tool for place code simulations. Front. Comput. Neurosci. 7:104. doi: 10.3389/fncom.2013.00104
  3. Schönfeld, F., and Wiskott, L. (2015). Modeling place field activity with hierarchical slow feature analysis. Front. Comput. Neurosci. 9:51. doi: 10.3389/fncom.2015.00051
  4. http://www.scholarpedia.org/article/Slow_feature_analysis
  5. My research group’s webpage: https://www.ini.rub.de/research/groups/theory_of_neural_systems/

The Institut für Neuroinformatik (INI) is a central research unit of the Ruhr-Universität Bochum. We aim to understand the fundamental principles through which organisms generate behavior and cognition while linked to their environments through sensory systems and while acting in those environments through effector systems. Inspired by our insights into such natural cognitive systems, we seek new solutions to problems of information processing in artificial cognitive systems. We draw from a variety of disciplines that include experimental approaches from psychology and neurophysiology as well as theoretical approaches from physics, mathematics, electrical engineering and applied computer science, in particular machine learning, artificial intelligence, and computer vision.
