Parking Space Detection: Real-Time Computer Vision
Collaborators: Daniela Horn, Sebastian Houben



Motivation

The search for a parking space in urban areas is often time-consuming and nerve-racking. Efficient car park guidance systems could support drivers in finding an available space. Video-based systems are a reasonably priced alternative to systems employing other sensor types, and their camera input can be reused for various tasks within the system.

Current systems for detecting vacant parking spaces are either very expensive due to their hardware requirements or do not provide a detailed occupancy map. While several sensor types allow individual parking spaces to be monitored, their installation and maintenance costs are relatively high. The system developed in this research group has minimal hardware requirements, which makes it inexpensive and easy to install. At the same time, our video-based approach offers flexibility regarding how the information is used and where the system is deployed.

System Overview

The system consists of one or more wide-angle lens cameras in combination with a standard desktop computer. Each camera can optionally be equipped with a microcomputer that computes metadata before sending it over the network, thus complying with possible data protection guidelines. Depending on its position, each camera can monitor up to 36 parking spaces.
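To make the privacy aspect concrete, the following is a minimal sketch of how such an on-camera microcomputer could report its results; the addresses, message format, and classifier stub are purely illustrative assumptions, not the system's actual interface. Only per-space occupancy flags leave the device, never image data.

```python
# Minimal sketch of the privacy-friendly reporting idea (hypothetical protocol
# and addresses; not the project's real interface): the on-camera microcomputer
# classifies locally and sends only occupancy flags as metadata.
import json
import socket
import time

SERVER = ("192.0.2.10", 5005)   # hypothetical backend address
CAMERA_ID = "cam-03"            # hypothetical camera identifier

def classify_spaces(frame):
    """Placeholder for the on-device per-space classifier (one bool per space)."""
    return [False] * 36          # a camera monitors up to 36 spaces

def publish_occupancy(frame):
    """Send only metadata over the network; the image never leaves the device."""
    payload = {
        "camera": CAMERA_ID,
        "timestamp": time.time(),
        "occupied": classify_spaces(frame),
    }
    with socket.create_connection(SERVER) as conn:
        conn.sendall(json.dumps(payload).encode("utf-8"))
```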

Once set up and calibrated, the system uses different image features and machine learning algorithms to determine whether or not an individual parking space is occupied. This classification runs in real time, i.e. at 5 frames per second. The resulting information on vacant parking spaces can be requested via a smartphone app.
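A minimal sketch of such a per-frame loop is shown below; the polygon-based calibration format, the OpenCV capture, and the fixed 5 fps pacing are assumptions made for illustration rather than the deployed implementation.

```python
# Sketch of the per-frame classification loop (assumed structure, not the
# deployed code): each calibrated space is a polygon in the image; every frame
# is cropped per space, classified, and the occupancy map updated at ~5 fps.
import time
import cv2
import numpy as np

def crop_space(frame, corners):
    """Cut an axis-aligned snippet around one parking-space polygon."""
    x, y, w, h = cv2.boundingRect(np.asarray(corners, dtype=np.int32))
    return frame[y:y + h, x:x + w]

def run(camera_url, spaces, classify, target_fps=5.0):
    """spaces: {space_id: corner list}; classify: snippet -> bool (occupied)."""
    cap = cv2.VideoCapture(camera_url)
    period = 1.0 / target_fps
    while cap.isOpened():
        start = time.time()
        ok, frame = cap.read()
        if not ok:
            break
        occupancy = {sid: classify(crop_space(frame, pts))
                     for sid, pts in spaces.items()}
        # 'occupancy' is what the smartphone app would ultimately query
        time.sleep(max(0.0, period - (time.time() - start)))
```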

We trained and tested a number of SVM and kNN classifiers on HOG and color features of occupied and vacant parking space snippets, e.g. (Tschentscher, 2015). After the training phase the system classified parking spaces with an accuracy of 99.8 % when temporal filtering was employed.
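The sketch below illustrates a pipeline of this kind with HOG features from scikit-image, a coarse RGB color histogram, scikit-learn classifiers, and a simple majority vote as temporal filter; the library choices and parameters are assumptions for illustration, not the exact setup of the cited papers.

```python
# Illustrative feature extraction and training (assumed parameters, not those
# of the cited papers): HOG + color histogram per snippet, SVM or kNN on top,
# and a simple majority vote over recent frames as temporal filtering.
import numpy as np
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def features(snippet_rgb):
    """HOG of the grayscale snippet plus a coarse RGB color histogram."""
    gray = snippet_rgb.mean(axis=2)
    hog_vec = hog(gray, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    hist, _ = np.histogramdd(snippet_rgb.reshape(-1, 3),
                             bins=(4, 4, 4), range=((0, 256),) * 3)
    return np.concatenate([hog_vec, hist.ravel() / hist.sum()])

def train(snippets, labels, use_knn=False):
    """snippets: equally sized RGB crops; labels: 0 = vacant, 1 = occupied."""
    X = np.stack([features(s) for s in snippets])
    clf = KNeighborsClassifier(n_neighbors=5) if use_knn else SVC(kernel="rbf", C=10.0)
    return clf.fit(X, labels)

def temporal_filter(recent_predictions, window=5):
    """Majority vote over the last few frames to suppress spurious flips."""
    votes = np.asarray(recent_predictions[-window:])
    return int(votes.mean() >= 0.5)
```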

Synthetic Data Usage

To meet the data requirements of machine learning algorithms, a simulated environment was developed that generates additional image data, both for evaluating the existing classifiers and for training a new classifier for the parking space detection task. The simulated environment allows large amounts of image data and ground-truth information to be generated automatically, while full manual control of the scene makes it possible to reconstruct special cases. In this way, varied image data was created, including different lighting and weather conditions. Some of our results are shown in the accompanying video.

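One practical benefit of such a simulated environment is that ground truth comes for free. The sketch below shows only the bookkeeping side of an automatic export; the simulator interface (simulator.render) is purely hypothetical.

```python
# Sketch of automatic image + ground-truth export from a simulated car park
# (the simulator interface is hypothetical; only the bookkeeping is shown).
import csv
import os
import cv2

def export_dataset(simulator, out_dir, n_frames, weather="overcast"):
    """Render frames under one weather preset and store images plus labels.csv."""
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, "labels.csv"), "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "space_id", "occupied"])
        for i in range(n_frames):
            frame, occupancy = simulator.render(weather=weather)  # hypothetical API
            name = f"frame_{i:06d}.png"
            cv2.imwrite(os.path.join(out_dir, name), frame)
            for space_id, occupied in occupancy.items():
                writer.writerow([name, space_id, int(occupied)])
```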
To validate the approach of using synthetic image data as training input for machine learning algorithms, we took the same classifiers as before, trained on real-world images, and evaluated them on simulated video data, obtaining comparable results (Tschentscher, 2017).

Finally, we trained a new classifier on purely simulated image data and evaluated its performance on unseen natural video sequences (Horn, 2018). The results were similar to those produced with the previously described classifiers.
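In essence, this evaluation protocol amounts to fitting on one domain and scoring on the other, as in the following sketch; the model and metric shown are illustrative assumptions rather than the papers' exact configuration.

```python
# Synthetic-to-real evaluation sketch (illustrative, not the papers' exact setup):
# fit on features from simulated snippets only, then score on real-world snippets.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

def evaluate_cross_domain(X_synthetic, y_synthetic, X_real, y_real):
    """X_*: precomputed feature matrices (e.g. HOG + color as sketched above)."""
    clf = SVC(kernel="rbf", C=10.0).fit(X_synthetic, y_synthetic)
    return accuracy_score(y_real, clf.predict(X_real))
```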


Awards

  • Best Poster Paper Award at the 2017 IEEE Intelligent Vehicles Symposium (IV 2017)

Publications

    2018

  • Evaluation of Synthetic Video Data in Machine Learning Approaches for Parking Space Classification
    Horn, D., & Houben, S.
    In Proceedings of the IEEE Intelligent Vehicles Symposium (IV) (pp. 2157–2162)
    2017

  • A simulated car-park environment for the evaluation of video-based on-site parking guidance systems
    Tschentscher, M., Pruß, B., & Horn, D.
    In Proceedings of the IEEE Intelligent Vehicles Symposium (IV) (pp. 1571–1576)
    2015

  • Video-based Parking Space Detection: Localisation of Vehicles and Development of an Infrastructure for a Routeing System
    Horn, D., & Brüggenthies, M.
    In Proceedings of the Forum Bauinformatik (pp. 175–182)
  • Scalable real-time parking lot classification: An evaluation of image features and supervised learning algorithms
    Tschentscher, M., Koch, C., König, M., Salmen, J., & Schlipsing, M.
    In Proceedings of the IEEE International Joint Conference on Neural Networks
    2013

  • Comparing Image Features and Machine Learning Algorithms for Real-time Parking-space Classification
    Tschentscher, M., Neuhausen, M., Koch, C., König, M., Salmen, J., & Schlipsing, M.
    In Proceedings of the ASCE International Workshop on Computing in Civil Engineering (pp. 363–370)
    2012

  • Video-based Parking-space Detection
    Tschentscher, M., & Neuhausen, M.
    In Proceedings of the Forum Bauinformatik (pp. 159–166)
