#### Jun.-Prof. Dr. Tobias Glasmachers

Universitätsstraße 150

Building NB, Room NB 3/27

# About Me

I am a junior professor for theory of machine learning at the Institut für Neuroinformatik, Ruhr-Universität Bochum, Germany. My research interests are (supervised) machine learning and optimization.

# Short CV

- 2004-2008: Ph.D. in Christian Igel's group at the Institut für Neuroinformatik in Bochum. I received my Ph.D. in 2008 from the Faculty of Mathematics, Ruhr-Universität Bochum, Germany.
- 2008-2009: Post-doc in the same group.
- 2009-2011: Post-doc in Jürgen Schmidhuber's group at IDSIA, Lugano, Switzerland.
- since 2012: Junior professor for theory of machine learning at the Institut für Neuroinformatik, Ruhr-Universität Bochum, Germany. I head the Optimization of Adaptive Systems group.

# Research

My research lies in machine learning, a modern branch of artificial intelligence research. It is an interdisciplinary field at the intersection of computer science, statistics, and optimization, with connections to the neurosciences and applications in robotics, engineering, medicine, economics, and many other disciplines. Within this wide area I focus on two aspects: supervised learning (including modern deep learning), and optimization with simple gradient-based methods and evolutionary algorithms.

## Machine Learning

Supervised learning is a learning paradigm with countless (mostly technical) applications. A learning machine (algorithm) builds a predictive model from data provided in the form of input/output pairs. This allows classification and regression problems to be solved automatically. A prime example is the classification of objects in images, a classic computer vision task. I have recently begun to address reinforcement learning problems in 3D environments, aiming at fully autonomous behavior learning for robots and computer game agents (bots). My research activities cover both theoretical and practical aspects.
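The core idea — building a predictive model from input/output pairs — can be illustrated with a minimal sketch (the data, names, and model are illustrative, not taken from any project listed here): fit a line y = a*x + b by closed-form least squares, then predict outputs for new inputs.

```python
# Toy supervised dataset: input/output pairs sampled from y = 2x + 1.
data = [(x / 50.0 - 1.0, 2.0 * (x / 50.0 - 1.0) + 1.0) for x in range(100)]

# Closed-form least-squares fit of a line y = a*x + b.
n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in data) \
    / sum((x - mean_x) ** 2 for x, _ in data)
b = mean_y - a * mean_x

def predict(x):
    """Predict the output for a previously unseen input."""
    return a * x + b
```

Classification follows the same scheme, with discrete labels in place of real-valued outputs.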

## Optimization

Gradient-based optimization methods, particularly relatively simple first-order methods like (stochastic) gradient descent and coordinate descent, are at the heart of many modern training procedures for learning machines, in particular for (possibly regularized) empirical risk minimization. This includes backpropagation-based training of (deep) neural networks, as well as convex (primal or dual) optimization, e.g., for support vector machine training.
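As a generic sketch of stochastic gradient descent for regularized empirical risk minimization (the toy problem and all names are mine): each step samples one training example, computes the gradient of that example's regularized loss, and takes a small step with a decaying step size.

```python
import random

# Minimize (1/n) * sum_i (w * x_i - y_i)^2 + lam * w^2 for a 1-D linear model.
random.seed(1)
data = [(i / 10.0, 3.0 * i / 10.0) for i in range(-10, 11)]  # y = 3x exactly
lam = 0.001                                    # regularization strength
w = 0.0
for t in range(1, 2001):
    x, y = random.choice(data)                 # sample one training example
    grad = 2.0 * (w * x - y) * x + 2.0 * lam * w  # gradient of the sampled term
    w -= (0.1 / t ** 0.5) * grad               # decaying step size
```

The same loop structure underlies backpropagation training of neural networks; only the model and the gradient computation change.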

Evolutionary algorithms (EAs) are a class of nature-inspired algorithms that mimic the process of Darwinian evolution, decomposed into the components inheritance, variation, and selection. It has been widely recognized that EAs are useful for search and optimization, in particular when derivatives are not available. Formally, they can be understood as randomized direct search heuristics, well suited to tackling black-box optimization problems. I focus on evolution strategies, a class of optimization algorithms for continuous variables, and on multi-objective optimization.
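The three components can be seen in a minimal (1+1) evolution strategy — a generic textbook sketch, not code from any project listed here: one parent, one offspring per generation created by Gaussian mutation (inheritance plus variation), and survival of the better of the two (selection), with step size adaptation in the style of the 1/5-th success rule.

```python
import random

random.seed(2)

def f(x):                                  # sphere function to be minimized
    return sum(v * v for v in x)

parent = [5.0, -3.0]
sigma = 1.0                                # mutation step size
for _ in range(500):
    # variation: Gaussian mutation of the inherited parent point
    child = [v + sigma * random.gauss(0.0, 1.0) for v in parent]
    # selection: keep the better of parent and offspring
    if f(child) <= f(parent):
        parent = child
        sigma *= 1.5                       # success: enlarge step size
    else:
        sigma *= 1.5 ** -0.25              # failure: shrink step size
```

Note that the loop never evaluates a derivative of f — only function values are compared, which is what makes such methods applicable to black-box problems.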

## Shark

I am an active developer of the Shark Machine Learning Library. Shark is an open-source, modular, and fast C++ library. Check it out!

## Asynchronous ES

An asynchronous natural evolution strategy.

## Adaptive Coordinate Frequencies Coordinate Descent

Coordinate descent with online adaptation of coordinate frequencies for fast training of linear models.
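The baseline that this method improves on can be sketched as follows; the ACF frequency-adaptation rule itself is not reproduced here (it is in the corresponding publication), and the toy quadratic is mine. Plain coordinate descent picks a coordinate uniformly at random and exactly minimizes the objective along it (a Gauss-Seidel-style update); ACF replaces the uniform choice with selection frequencies adapted online from observed progress.

```python
import random

# Coordinate descent on a convex quadratic f(w) = 0.5 * w^T A w - b^T w.
random.seed(3)
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
w = [0.0, 0.0, 0.0]
for _ in range(200):
    i = random.randrange(3)                # uniform baseline coordinate choice
    s = sum(A[i][j] * w[j] for j in range(3) if j != i)
    w[i] = (b[i] - s) / A[i][i]            # exact minimization along coordinate i
```

At the minimizer the gradient A w - b vanishes; each update zeroes one of its components.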

LASSO code, based on a modified version of liblinear.

## Hypervolume Maximization

Maximization of dominated hypervolume for multi-objective benchmark problems.
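For two objectives (minimization), the dominated hypervolume of a non-dominated set has a simple closed form: sort the points by the first objective and sum rectangle areas up to a reference point. This sketch (function name and example are mine) only illustrates the quantity being maximized, not the project's algorithm.

```python
def hypervolume_2d(points, ref):
    """Hypervolume dominated by a non-dominated 2-D point set w.r.t. ref
    (both objectives minimized; ref must be dominated by every point)."""
    pts = sorted(points)                   # ascending in the first objective,
    hv, prev_y = 0.0, ref[1]               # hence descending in the second
    for x, y in pts:
        hv += (ref[0] - x) * (prev_y - y)  # area of the new rectangle slice
        prev_y = y
    return hv
```

For example, the front {(1,3), (2,2), (3,1)} with reference point (4,4) dominates an area of 6.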

## xCMA-ES

CMA-ES with multiplicative covariance update.

## Pareto Archive

An efficient archiving algorithm for non-dominated solutions in multi-objective optimization.
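The semantics of such an archive can be fixed with a straightforward (deliberately inefficient) sketch, assuming minimization in all objectives; the project's contribution is doing this efficiently, which is not shown here.

```python
def dominates(a, b):
    """True if point a Pareto-dominates point b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def insert(archive, point):
    """Return the archive after attempting to insert a candidate point."""
    if any(dominates(p, point) for p in archive):
        return archive                     # candidate is dominated: reject it
    # keep only archived points that the new point does not dominate
    return [p for p in archive if not dominates(point, p)] + [point]

archive = []
for p in [(3, 3), (1, 4), (2, 2), (4, 1), (2, 3)]:
    archive = insert(archive, p)
```

At every step the archive holds exactly the mutually non-dominated points seen so far; the naive version costs linear time per insertion, which efficient archiving structures improve on.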

## Stochastic Gradient Optimization

Comparison of SGD, SAG, SVRG, and ADAM for training kernel machines on a budget.

## Limits of End-to-end Learning

## Duales Training nichtlinearer Support-Vektor-Maschinen mit Budget (Dual Training of Nonlinear Support Vector Machines on a Budget)

This DFG-funded research project started in October 2016.

## The Black-box Optimization Competition (BBComp)

The Black-box Optimization Competition (BBComp) is an online competition for black-box optimization in the continuous domain. It is the first competition of its kind in which the problems are truly black boxes to the participants. This allows for a comparison of black-box optimization methods that is as fair and unbiased as possible. The large problem suite and the black-box interface prevent over-fitting to narrow suites of benchmark problems.

## Support-Vektor-Maschinen für extrem große Datenmengen (Support Vector Machines for Extremely Large Data Sets)

This research project ran from November 2013 to February 2016. It was conducted in cooperation with the chair Computergestützte Statistik at the Technical University of Dortmund and was funded by the Mercator Research Center Ruhr (MERCUR).

# Teaching

| Type | Course |
| --- | --- |
| Lectures | Machine Learning: Supervised Methods |
| Seminars | Master Seminar Supervised Methods |
| Lectures | Machine Learning: Evolutionary Algorithms |
| Lectures | Machine Learning: Supervised Methods |
| Lectures | Machine Learning: Evolutionary Algorithms |

I offer Master's thesis topics in the areas of machine learning and optimization. Prerequisites:

- completion of at least one of my lectures
- programming skills and/or a solid mathematical background

Please contact me for details and for currently open topics.