Theory of Machine Learning

Creating Autonomous Agents

A long-standing goal of reinforcement learning is to create truly autonomous systems, for example in robotics and in virtual environments. We are working towards this vision. To this end, we aim to equip deep reinforcement learning systems with additional structure, e.g., for efficient navigation and object-oriented actions.

Machine Learning Applications and Transfer

We have several ongoing projects aiming to transfer machine learning as a technology into different application areas, in academia as well as in industry. In these activities we keep an eye on problems of general interest, such as transfer learning, automated machine learning, long-term maintainability, and human factors.

Optimization

Optimization underlies most of machine learning: it is used for training models, but also for tuning hyper-parameters and for automated model selection. Complementing gradient-based methods, we pursue research on evolutionary optimization, where we are interested in algorithm design and provable performance guarantees.
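
As a minimal sketch of the kind of algorithm studied here (illustrative Python, not code from any of the projects listed below), the following implements a basic (1+1) Evolution Strategy with the classical 1/5 success rule for step-size adaptation; all names and constants are chosen for illustration only:

    # Illustrative sketch of a (1+1)-ES with the 1/5 success rule
    # (assumption: not taken from the group's codebase).
    import numpy as np

    def one_plus_one_es(f, x0, sigma=1.0, iterations=1000, seed=0):
        """Minimize f starting from x0 with a (1+1) Evolution Strategy."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        for _ in range(iterations):
            # Sample one offspring by isotropic Gaussian mutation.
            y = x + sigma * rng.standard_normal(x.shape)
            fy = f(y)
            if fy <= fx:
                # Success: accept the offspring and increase the step size.
                x, fx = y, fy
                sigma *= np.exp(0.4)
            else:
                # Failure: keep the parent and decrease the step size.
                sigma *= np.exp(-0.1)
        return x, fx

    # Example: minimize the 10-dimensional sphere function.
    if __name__ == "__main__":
        sphere = lambda z: float(np.dot(z, z))
        x_best, f_best = one_plus_one_es(sphere, np.ones(10), iterations=5000)
        print(f_best)

The two step-size factors are chosen so that the step size is stationary exactly when one out of five mutations improves the objective, which is the heuristic the 1/5 rule is named after.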

    2024

  • Volume Determination Challenges in Waste Sorting Facilities: Observations and Strategies
    Maus, T., Zengeler, N., Sänger, D., & Glasmachers, T.
    MDPI Sensors, 24(7), 2114
  • tachAId—An interactive tool supporting the design of human-centered AI solutions
    Bauroth, M., Rath-Manakidis, P., Langholf, V., Wiskott, L., & Glasmachers, T.
    Frontiers in Artificial Intelligence, 7
  • Solving a Real-World Optimization Problem Using Proximal Policy Optimization with Curriculum Learning and Reward Engineering
    Pendyala, A., Atamna, A., & Glasmachers, T.
    In Machine Learning and Knowledge Discovery in Databases. Applied Data Science Track (ECML-PKDD) (pp. 150–165) Cham: Springer Nature Switzerland
  • ContainerGym: A Real-World Reinforcement Learning Benchmark for Resource Allocation
    Pendyala, A., Dettmer, J., Glasmachers, T., & Atamna, A.
    In Machine Learning, Optimization, and Data Science (pp. 78–92) Cham: Springer Nature Switzerland
  • ProtoP-OD: Explainable Object Detection with Prototypical Parts
    Rath-Manakidis, P., Strothmann, F., Glasmachers, T., & Wiskott, L.
    arXiv

    2023

  • Leveraging Topological Maps in Deep Reinforcement Learning for Multi-Object Navigation
    Hakenes, S., & Glasmachers, T.
    arXiv

    2022

  • Global linear convergence of evolution strategies on more than smooth strongly convex functions
    Akimoto, Y., Auger, A., Glasmachers, T., & Morinaga, D.
    SIAM Journal on Optimization, 32(2), 1402–1429
  • Recipe for Fast Large-scale SVM Training: Polishing, Parallelism, and more RAM!
    Glasmachers, T.
    arXiv.org
  • Convergence Analysis of the Hessian Estimation Evolution Strategy
    Glasmachers, T., & Krause, O.
    Evolutionary Computation Journal (ECJ), 30(1), 27–50
  • Latent Representation Prediction Networks
    Hlynsson, H. D., Schüler, M., Schiewer, R., Glasmachers, T., & Wiskott, L.
    International Journal of Pattern Recognition and Artificial Intelligence, 36(01), 2251002
  • AFRNN: Stable RNN with Top Down Feedback and Antisymmetry
    Schwabe, T., Glasmachers, T., & Acosta, M.
    In Proceedings of the 14th Asian Conference on Machine Learning (ACML). To Appear

    2021

  • Improved Protein Function Prediction by Combining Clustering with Ensemble Classification
    Altartouri, H., & Glasmachers, T.
    Journal of Advances in Information Technology (JAIT)
  • Application of Reinforcement Learning to a Mining System
    Fidencio, A., Naro, D., & Glasmachers, T.
In 19th IEEE World Symposium on Applied Machine Intelligence and Informatics (SAMI 2021)
  • The (1+1)-ES Reliably Overcomes Saddle Points
    Glasmachers, T.
    arXiv.org
  • Non-local Optimization: Imposing Structure on Optimization Problems by Relaxation
    Müller, N., & Glasmachers, T.
In Proceedings of the 16th ACM/SIGEVO Conference on Foundations of Genetic Algorithms (FOGA '21) Association for Computing Machinery

    2020

  • Improving the performance of EEG decoding using anchored-STFT in conjunction with gradient norm adversarial augmentation
    Ali, O., Saif-ur-Rehman, M., Dyck, S., Glasmachers, T., Iossifidis, I., & Klaes, C.
    arXiv.org
  • A Versatile Combination of Classifiers for Protein Function Prediction
    Altartouri, H., & Glasmachers, T.
    The Twelfth International Conference on Bioinformatics, Biocomputational Systems and Biotechnologies
  • Global Convergence of the (1+1) Evolution Strategy
    Glasmachers, T.
    Evolutionary Computation Journal (ECJ), 28(1), 27–53
  • The Hessian Estimation Evolution Strategy
    Glasmachers, T., & Krause, O.
In Parallel Problem Solving from Nature (PPSN XVI) Springer
  • Latent Representation Prediction Networks
    Hlynsson, H. D., Schüler, M., Schiewer, R., Glasmachers, T., & Wiskott, L.
    arXiv preprint arXiv:2009.09439
  • Analyzing Reinforcement Learning Benchmarks with Random Weight Guessing
    Oller, D., Cuccu, G., & Glasmachers, T.
    In International Conference on Autonomous Agents and Multi-Agent Systems
  • SpikeDeep-Classifier: A deep-learning based fully automatic offline spike sorting algorithm
    Saif-ur-Rehman, M., Ali, O., Dyck, S., Lienkämper, R., Metzler, M., Parpaley, Y., et al.
    Journal of Neural Engineering
  • AI for Social Good: Unlocking the Opportunity for Positive Impact
    Tomašev, N., Cornebise, J., Hutter, F., Picciariello, A., Connelly, B., Belgrave, D. C. M., et al.
Nature Communications, 11, 2468

    2019

  • Moment Vector Encoding of Protein Sequences for Supervised Classification
    Altartouri, H., & Glasmachers, T.
    In Practical Applications of Computational Biology and Bioinformatics, 13th International Conference (pp. 25–35) Springer International Publishing
  • Challenges of Convex Quadratic Bi-objective Benchmark Problems
    Glasmachers, T.
    In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO) (pp. 559–567) ACM
  • Boosting Reinforcement Learning with Unsupervised Feature Extraction
    Hakenes, S., & Glasmachers, T.
    In Artificial Neural Networks and Machine Learning – ICANN 2019: Theoretical Neural Computation (pp. 555–566) Springer International Publishing
Vehicle Shape and Color Classification Using Convolutional Neural Network
    Nafzi, M., Brauckmann, M., & Glasmachers, T.
arXiv.org
  • Dual SVM Training on a Budget
    Qaadan, S., Schüler, M., & Glasmachers, T.
    In Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods SCITEPRESS - Science and Technology Publications
  • Modeling Macroscopic Material Behavior With Machine Learning Algorithms Trained by Micromechanical Simulations
    Reimann, D., Nidadavolu, K., ul Hassan, H., Vajragupta, N., Glasmachers, T., Junker, P., & Hartmaier, A.
    Frontiers in Materials, 6, 181
  • SpikeDeeptector: A deep-learning based method for detection of neural spiking activity
    Saif-ur-Rehman, M., Lienkämper, R., Parpaley, Y., Wellmer, J., Liu, C., Lee, B., et al.
    Journal of Neural Engineering

    2018

  • Drift Theory in Continuous Search Spaces: Expected Hitting Time of the (1+1)-ES with 1/5 Success Rule
    Akimoto, Y., Auger, A., & Glasmachers, T.
    In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO) ACM
  • Speeding Up Budgeted Dual SVM Training with Precomputed GSS
    Glasmachers, T., & Qaadan, S.
In M. M.-y-G., Vera-Rodriguez, R., Velastin, S., & Morales, A. (Eds.), The 23rd Iberoamerican Congress on Pattern Recognition
  • Speeding Up Budgeted Stochastic Gradient Descent SVM Training with Precomputed Golden Section Search
    Glasmachers, T., & Qaadan, S.
    In G. Nicosia, Pardalos, P., Giuffrida, G., Umeton, R., & Sciacca, V. (Eds.), The 4th International Conference on machine Learning, Optimization and Data science - LOD 2018
  • Large Scale Black-box Optimization by Limited-Memory Matrix Adaptation
Loshchilov, I., Glasmachers, T., & Beyer, H.-G.
    IEEE Transactions on Evolutionary Computation, 99
  • Challenges in High-dimensional Controller Design with Evolution Strategies
    Müller, N., & Glasmachers, T.
In Parallel Problem Solving from Nature (PPSN XV) Springer
  • Multi-Merge Budget Maintenance for Stochastic Gradient Descent SVM Training
    Qaadan, S., & Glasmachers, T.
    13th WiML Workshop, Co-located with NeurIPS, Montreal, QC, Canada
  • Multi-Merge Budget Maintenance for Stochastic Gradient Descent SVM Training
    Qaadan, S., & Glasmachers, T.
    arXiv.org
  • Multi-Merge Budget Maintenance for Stochastic Coordinate Ascent SVM Training
    Qaadan, S., & Glasmachers, T.
    Artificial Intelligence International Conference – A2IC 2018
  • User-Centered Development of a Pedestrian Assistance System Using End-to-End Learning
    Qureshi, H. S., Glasmachers, T., & Wiczorek, R.
    In 17th IEEE International Conference on Machine Learning and Applications (ICMLA) (pp. 808–813) IEEE

    2017

  • A Fast Incremental BSP Tree Archive for Non-dominated Points
    Glasmachers, T.
    In Evolutionary Multi-Criterion Optimization (EMO) Springer
  • Limits of End-to-End Learning
    Glasmachers, T.
    In Proceedings of the 9th Asian Conference on Machine Learning (ACML)
  • Texture attribute synthesis and transfer using feed-forward CNNs
    Irmer, T., Glasmachers, T., & Maji, S.
    In Winter Conference on Applications of Computer Vision (WACV) IEEE
  • Qualitative and Quantitative Assessment of Step Size Adaptation Rules
    Krause, O., Glasmachers, T., & Igel, C.
    In Conference on Foundations of Genetic Algorithms (FOGA) ACM

    2016

  • Fast model selection by limiting SVM training times
    Demircioğlu, A., Horn, D., Glasmachers, T., Bischl, B., & Weihs, C.
arXiv.org
  • A Unified View on Multi-class Support Vector Classification
    Doğan, Ü., Glasmachers, T., & Igel, C.
    Journal of Machine Learning Research, 17(45), 1–32
  • Finite Sum Acceleration vs. Adaptive Learning Rates for the Training of Kernel Machines on a Budget
    Glasmachers, T.
    In NIPS workshop on Optimization for Machine Learning
  • Small Stochastic Average Gradient Steps
    Glasmachers, T.
    In NIPS workshop on Optimizing the Optimizers
  • A Comparative Study on Large Scale Kernelized Support Vector Machines
    Horn, D., Demircioğlu, A., Bischl, B., Glasmachers, T., & Weihs, C.
    Advances in Data Analysis and Classification (ADAC), 1–17
  • Unbounded Population MO-CMA-ES for the Bi-Objective BBOB Test Suite
    Krause, O., Glasmachers, T., Hansen, N., & Igel, C.
    In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)
  • Multi-objective Optimization with Unbounded Solution Sets
    Krause, O., Glasmachers, T., & Igel, C.
    In NIPS workshop on Bayesian Optimization
  • Anytime Bi-Objective Optimization with a Hybrid Multi-Objective CMA-ES (HMO-CMA-ES)
    Loshchilov, I., & Glasmachers, T.
    In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)
  • Supervised Classification
    Weihs, C., & Glasmachers, T.
    In C. Weihs, Jannach, D., Vatolkin, I., & Rudolph, G. (Eds.), Music Data Analysis: Foundations and Applications

    2015

  • A CMA-ES with Multiplicative Covariance Matrix Updates
    Krause, O., & Glasmachers, T.
    In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)

    2014

  • Testing Hypotheses by Regularized Maximum Mean Discrepancy
    Danafar, S., Rancoita, P. M. V., Glasmachers, T., Whittingstall, K., & Schmidhuber, J.
    International Journal of Computer and Information Technology (IJCIT), 3(2)
  • Handling Sharp Ridges with Local Supremum Transformations
    Glasmachers, T.
    In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)
  • Optimized Approximation Sets for Low-dimensional Benchmark Pareto Fronts
    Glasmachers, T.
    In Parallel Problem Solving from Nature (PPSN) Springer
  • Start Small, Grow Big - Saving Multiobjective Function Evaluations
    Glasmachers, T., Naujoks, B., & Rudolph, G.
    In Parallel Problem Solving from Nature (PPSN) Springer
  • Natural Evolution Strategies
    Wierstra, D., Schaul, T., Glasmachers, T., Sun, Y., Peters, J., & Schmidhuber, J.
    Journal of Machine Learning Research, 15, 949–980

    2013

  • A Natural Evolution Strategy with Asynchronous Strategy Updates
    Glasmachers, T.
    In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)
  • The Planning-ahead SMO Algorithm
    Glasmachers, T.
arXiv.org
  • Accelerated Coordinate Descent with Adaptive Coordinate Frequencies
    Glasmachers, T., & Doğan, Ü.
    In Proceedings of the fifth Asian Conference on Machine Learning (ACML)
  • Approximation properties of DBNs with binary hidden units and real-valued visible units
    Krause, O., Fischer, A., Glasmachers, T., & Igel, C.
    In Proceedings of the International Conference on Machine Learning (ICML)

    2012

  • Turning Binary Large-margin Bounds into Multi-class Bounds
    Doğan, Ü., Glasmachers, T., & Igel, C.
    In ICML workshop on RKHS and kernel-based methods
  • A Note on Extending Generalization Bounds for Binary Large-margin Classifiers to Multiple Classes
    Doğan, Ü., Glasmachers, T., & Igel, C.
    In Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases (ECML-PKDD)
  • Convergence of the IGO-Flow of Isotropic Gaussian Distributions on Convex Quadratic Problems
    Glasmachers, T.
    In C. C. Coello, Cutello, V., Deb, K., Forrest, S., Nicosia, G., & Pavone, M. (Eds.), Parallel Problem Solving from Nature (PPSN) Springer
  • Kernel Representations for Evolving Continuous Functions
    Glasmachers, T., Koutník, J., & Schmidhuber, J.
    Journal of Evolutionary Intelligence, 5(3), 171–187

    2011

  • Novelty Restarts for Evolution Strategies
    Cuccu, G., Gomez, F., & Glasmachers, T.
    In Proceedings of the IEEE Congress on Evolutionary Computation (CEC) IEEE
  • Optimal Direct Policy Search
    Glasmachers, T., & Schmidhuber, J.
    In Proceedings of the 4th Conference on Artificial General Intelligence (AGI)
  • Artificial Curiosity for Autonomous Space Exploration
    Graziano, V., Glasmachers, T., Schaul, T., Pape, L., Cuccu, G., Leitner, J., & Schmidhuber, J.
Acta Futura
  • High Dimensions and Heavy Tails for Natural Evolution Strategies
    Schaul, T., Glasmachers, T., & Schmidhuber, J.
    In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)
  • Coherence Progress: A Measure of Interestingness Based on Fixed Compressors
    Schaul, T., Pape, L., Glasmachers, T., Graziano, V., & Schmidhuber, J.
    In Proceedings of the 4th Conference on Artificial General Intelligence (AGI)

    2010

  • Universal Consistency of Multi-Class Support Vector Classification
    Glasmachers, T.
    In Advances in Neural Information Processing Systems (NIPS)
  • Maximum Likelihood Model Selection for 1-Norm Soft Margin SVMs with Multiple Parameters
    Glasmachers, T., & Igel, C.
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(8), 1522–1528
  • A Natural Evolution Strategy for Multi-Objective Optimization
    Glasmachers, T., Schaul, T., & Schmidhuber, J.
    In Parallel Problem Solving from Nature (PPSN) Springer
  • Exponential Natural Evolution Strategies
    Glasmachers, T., Schaul, T., Sun, Y., Wierstra, D., & Schmidhuber, J.
    In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO)
  • Frontier Search
    Sun, Y., Glasmachers, T., Schaul, T., & Schmidhuber, J.
    In Proceedings of the 3rd Conference on Artificial General Intelligence (AGI)

    2008

  • On related violating pairs for working set selection in SMO algorithms
    Glasmachers, T.
    In M. Verleysen (Ed.), Proceedings of the 16th European Symposium on Artificial Neural Networks (ESANN) d-side publications
  • Gradient Based Optimization of Support Vector Machines
    Glasmachers, T.
    Doctoral thesis, Fakultät für Mathematik, Ruhr-Universität Bochum, Germany
  • Second-Order SMO Improves SVM Online and Active Learning
    Glasmachers, T., & Igel, C.
    Neural Computation, 20(2), 374–382
  • Uncertainty Handling in Model Selection for Support Vector Machines
    Glasmachers, T., & Igel, C.
    In G. Rudolph, Jansen, T., Lucas, S., Poloni, C., & Beume, N. (Eds.), Parallel Problem Solving from Nature (PPSN) (pp. 185–194) Springer
  • Shark
    Igel, C., Heidrich-Meisner, V., & Glasmachers, T.
    Journal of Machine Learning Research, 9, 993–996

    2007

  • Gradient-Based Optimization of Kernel-Target Alignment for Sequence Kernels Applied to Bacterial Gene Start Detection
    Igel, C., Glasmachers, T., Mersch, B., Pfeifer, N., & Meinicke, P.
    IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB), 4(2), 216–226
  • Evolutionary Optimization of Sequence Kernels for Detection of Bacterial Gene Starts
    Mersch, B., Glasmachers, T., Meinicke, P., & Igel, C.
    International Journal of Neural Systems, 17(5), 369–381

    2006

  • Degeneracy in Model Selection for SVMs with Radial Gaussian Kernel
    Glasmachers, T.
    In M. Verleysen (Ed.), Proceedings of the 14th European Symposium on Artificial Neural Networks (ESANN) d-side publications
  • Maximum-Gain Working Set Selection for Support Vector Machines
    Glasmachers, T., & Igel, C.
    Journal of Machine Learning Research, 7, 1437–1466
  • Evolutionary Optimization of Sequence Kernels for Detection of Bacterial Gene Starts
    Mersch, B., Glasmachers, T., Meinicke, P., & Igel, C.
    In Proceedings of the 16th International Conference on Artificial Neural Networks (ICANN) Springer-Verlag

    2005

  • Gradient-based Adaptation of General Gaussian Kernels
    Glasmachers, T., & Igel, C.
    Neural Computation, 17(10), 2099–2105

    2024

Entwicklung eines Reinforcement Learning Agenten für das Kartenspiel Triple Triad [Development of a reinforcement learning agent for the card game Triple Triad]
    Langenströer, R.
    Applied Computer Science, Ruhr University Bochum, Germany

    2023

  • Fast object detection for microresonators with machine learning methods
    Gerk, P.
    Applied Computer Science, Ruhr University Bochum, Germany
ML basierte Positionierung von Einblendungen in Videospielaufnahmen [ML-based positioning of overlays in video game recordings]
    Schweitzer, T.
    Applied Computer Science, Ruhr University Bochum, Germany

    2022

  • Template Matching for Generic Aiming Behavior with Deep Learning
    Bachert, M.
    Applied Computer Science, Ruhr University Bochum, Germany
Lernen einer Spielstrategie für das Kartenspiel Triple-Triad [Learning a playing strategy for the card game Triple Triad]
    Beckers, A.
    Applied Computer Science, Ruhr University Bochum, Germany
  • Procedurally generating biome-based fauna spawn points in the Snowdrop Engine
    Bunkowski, D.
    Applied Computer Science, Ruhr University Bochum, Germany
  • Short-term forecasting of COVID-19 indicators in Germany using supervised machine learning methods on spatio-temporal data
    Roller, C.
    Applied Computer Science, Ruhr University Bochum, Germany

    2021

  • A Comparison Study of Constraint Handling Techniques in Evolution Strategies
    Lazaro, R. R.
    Applied Computer Science, Ruhr University Bochum, Germany
Weiterentwicklung eines strukturierten und differenzierbaren Speichers für effizientes Reinforcement-Lernen [Further development of a structured and differentiable memory for efficient reinforcement learning]
    Weber, T.

    2020

  • Development of a controller for corridor following based on visual inputs with deep reinforcement learning
    Bachert, M.
    Applied Computer Science, Ruhr University Bochum, Germany
  • Creating Song Lyrics using Natural Language Generation - A Comparison of Machine Learning Methods
    Dettmer, J.
    Applied Computer Science, Ruhr University Bochum, Germany
Vergleich von Reinforcement Learning, Generative Adversarial Imitation Learning und Behavioral Cloning in einfachen Computerspielen [Comparison of reinforcement learning, generative adversarial imitation learning, and behavioral cloning in simple computer games]
    Helmig, D.
    Applied Computer Science, Ruhr University Bochum, Germany

    2019

Einzelbildbasierte Wegplanung in virtuellen Umgebungen [Single-image-based path planning in virtual environments]
    Akkiraz, T.
Detektion von Wolken in Satellitendaten [Detection of clouds in satellite data]
    Gergs, L.
  • Experimental evaluation of neural monocular depth estimation methods on video games
    Gondermann, N.

    2018

  • Auxiliary Unsupervised Methods for Deep Reinforcement Learning
    Hakenes, S.
  • Applying Budget Maintenance strategy on the Adaptive Coordinate Frequency-Coordinate Descent
    Pendyala, A.
  • Predicting the number of incoming groupage shipments using LSTM neural networks
    Vakavchiev, V.
    Master’s thesis, Applied Computer Science, Ruhr University Bochum, Germany

The Institut für Neuroinformatik (INI) is a central research unit of the Ruhr-Universität Bochum. We aim to understand the fundamental principles through which organisms generate behavior and cognition while linked to their environments through sensory systems and while acting in those environments through effector systems. Inspired by our insights into such natural cognitive systems, we seek new solutions to problems of information processing in artificial cognitive systems. We draw from a variety of disciplines that include experimental approaches from psychology and neurophysiology as well as theoretical approaches from physics, mathematics, electrical engineering and applied computer science, in particular machine learning, artificial intelligence, and computer vision.

Universitätsstr. 150, Building NB, Room 3/32
D-44801 Bochum, Germany

Tel: (+49) 234 32-28967
Fax: (+49) 234 32-14210