Lecture: Monday, 12-14 in the seminar room of the ITB,
Invalidenstr. 43.
Exercises: Thursday, 18-20 in the seminar
room.
Computer course: Friday, 18-20 in the PC-Pool, biophysics
building, Invalidenstr. 42.
Scripts and Links:
- News: The projects for the computer course can be found here.
- Backpropagation Algorithm (script in German)
- For those who are interested: the sound example of the network that learns to talk (NETtalk) and the
original publication by
Sejnowski & Rosenberg (link)
- Script for the lecture on Hebbian learning (PDF format with small paper size).
Contents
- Single Neurons:
- Weights, threshold and non-linearity
- McCulloch-Pitts unit
- Neural Networks: An overview
- Feedforward vs. recurrent networks
- Feedforward networks: Function approximators and classifiers
- Recurrent networks:
- Synchronous vs. asynchronous update
- Stable states, limit cycles
- Attractor basins
- Linear Classification (Perceptron)
- Architecture
- Linear separability
- Units without threshold
- Learning rule
- Difficult data sets and speed-up of convergence
- (Convergence proof)
- Linear regression
- Learning problem
- Error function
- Gradient descent learning rule
- Incremental learning rule
- Difficult error landscapes
- Backpropagation algorithm (script in German)
- Generalization
- Bias-Variance Dilemma
- Bias and Variance
- Cross Validation
- Control of Network Complexity
- Catastrophic Interference
- Problem
- Interleaved Training
- Dual Networks
- Hebbian Learning:
- Explicit and Implicit Weight Normalization
- Linear Stability Analysis
- Learning Principal Subspaces
- Principal Components of Natural Images
- Receptive Fields in the Visual System
- Coding Schemes:
- Compact Coding and Principal Component Analysis
- Sparse Coding
- Independent Component Analysis (ICA)
- Slow Feature Analysis
- The Slowness Principle
- Nonlinear Expansion
- The SFA Algorithm
- Receptive Fields of Complex Cells
- Competitive Learning
- Vector Quantization/Clustering
- Neural Map Formation
- Cortical Maps
- Self-organizing Feature Maps
- Recurrent Networks: Introduction
- What is computation?
- Model of the network dynamics
- Single unit with feedback
- Auto-associative memories
- Stability of a single memory pattern
- Stability of multiple patterns
- Memory capacity of the network
- Network dynamics: network flow diagrams
- Lyapunov functions
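To give a flavor of the perceptron material above, here is a minimal sketch of the classical perceptron learning rule on a toy data set. The data and all variable names are illustrative, not taken from the course material; the rule only terminates when the data are linearly separable.

```python
import numpy as np

# Perceptron sketch: a unit computes y = step(w.x + b); misclassified
# examples pull the weights towards the correct side of the hyperplane.
def perceptron_train(X, t, lr=1.0, epochs=100):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, target in zip(X, t):
            y = 1 if w @ x + b > 0 else 0
            if y != target:
                w += lr * (target - y) * x   # shift weights towards x
                b += lr * (target - y)
                errors += 1
        if errors == 0:   # converged: data linearly separated
            break
    return w, b

# Linearly separable toy data: logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, t)
preds = [1 if w @ x + b > 0 else 0 for x in X]
print(preds)  # [0, 0, 0, 1]
```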
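For the Hebbian-learning section, the following sketch illustrates implicit weight normalization with Oja's rule (my choice of example; the data and parameters are made up). Plain Hebbian learning dw = lr*y*x lets |w| grow without bound; Oja's correction term keeps |w| near 1 and drives w towards the first principal component of the input.

```python
import numpy as np

rng = np.random.default_rng(0)
# Zero-mean 2D inputs with largest variance along the (1, 1) direction.
C = np.array([[3.0, 2.0], [2.0, 3.0]])
X = rng.multivariate_normal([0, 0], C, size=5000)

w = rng.normal(size=2)
lr = 0.01
for x in X:
    y = w @ x
    w += lr * y * (x - y * w)   # Oja's rule: Hebbian term + decay -y^2 w

# Compare with the leading eigenvector of the sample covariance matrix.
eigvecs = np.linalg.eigh(np.cov(X.T))[1]
pc1 = eigvecs[:, -1]
print(abs(w @ pc1))  # close to 1: w has unit length and aligns with the first PC
```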
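Finally, the recurrent-network topics (auto-associative memory, pattern stability, Lyapunov functions) can be sketched with a tiny Hopfield-type network; patterns and sizes here are invented for illustration. Weights are set by the Hebbian outer-product rule with zero diagonal, and asynchronous sign updates descend an energy function, so stored patterns become stable fixed points.

```python
import numpy as np

N = 16
# Two orthogonal +/-1 patterns to store (toy example).
patterns = np.array([
    [1, 1, 1, 1, -1, -1, -1, -1, 1, 1, 1, 1, -1, -1, -1, -1],
    [1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1],
])
# Hebbian outer-product rule: W = (1/N) * sum_p x_p x_p^T, zero diagonal.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def recall(x, steps=5):
    x = x.copy()
    for _ in range(steps):
        for i in range(N):              # asynchronous update, unit by unit
            x[i] = 1 if W[i] @ x >= 0 else -1
    return x

# Corrupt the first pattern in 3 places and let the dynamics clean it up.
noisy = patterns[0].copy()
noisy[[0, 5, 10]] *= -1
recovered = recall(noisy)
print(np.array_equal(recovered, patterns[0]))  # True: the stored pattern is a fixed point
```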
Literature
Some of the books listed below can be found at the
Zweigbibliothek Physik in the areas ST 152 and ST 300.
-
Amit, D. J. (1989). Modeling Brain Function: The world of
Attractor Neural Networks. Cambridge University Press.
(systematic derivation of Hopfield-networks based on statistical physics)
-
Bishop, C. M. (1995). Neural Networks for Pattern
Recognition. Oxford University Press.
(systematic derivation of feed-forward networks based on Bayesian theory)
-
Hertz, J., Krogh, A., and Palmer, R. G. (1991).
Introduction to the Theory of Neural Computation. Addison-Wesley,
Redwood City, CA.
(a good English textbook, especially for physicists)
-
Kandel, E. R., Schwartz, J. H., and Jessel, T. M., editors (1995).
Neurowissenschaften: Eine Einführung.
Spektrum Akademischer Verlag, Heidelberg.
(a good introduction to neurobiology)
-
Rojas, R. (1996). Theorie der neuronalen Netze: Eine
systematische Einführung. Springer Verlag, Berlin. 4th
corrected printing.
(a good German textbook, especially for computer scientists)
-
Stanley, J. and Bak, E. (1991). Neuronale Netze:
Computersimulation biologischer Intelligenz. Systhema Verlag,
München.
(a popular scientific introduction)
Henning Sprekeler, http://itb.biologie.hu-berlin.de/~sprekeler/
Last modified: 23/01/2006 16:20