Learning 3-Dimensional Maps of Unstructured Environments on a Mobile Robot

Project Information
Funding Agency DFG
My Role Researcher
Project Website http://gepris.dfg.de/gepr...
Duration February 2005 to January 2010

My role in the project

As a researcher, I contributed to the pose-graph SLAM backend.
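To give a flavor of what a pose-graph SLAM backend does: the robot's trajectory is a graph of poses connected by relative-motion constraints (odometry and loop closures), and the backend finds the pose estimates that best satisfy all constraints at once. The sketch below is not the project's implementation — that handles full 6-DoF poses and a closed-form relaxation — but a toy translation-only 1D graph solved by linear least squares, with all numbers invented for illustration.

```python
import numpy as np

def relax_pose_graph(n_poses, constraints):
    """Least-squares relaxation of a translation-only pose graph.

    constraints: list of (i, j, measured_offset, weight), each meaning
    pose[j] - pose[i] ~ measured_offset.  Pose 0 is anchored at the
    origin to remove the gauge freedom.
    """
    rows, rhs = [], []
    for i, j, z, w in constraints:
        r = np.zeros(n_poses)
        r[j] += w          # each constraint is one linear residual row
        r[i] -= w
        rows.append(r)
        rhs.append(w * z)
    anchor = np.zeros(n_poses)  # fix the first pose at 0
    anchor[0] = 1.0
    rows.append(anchor)
    rhs.append(0.0)
    x, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
    return x

# Odometry claims each step moves +1.0, but a loop closure insists the
# offset from pose 3 back to pose 0 is only -2.4; relaxation spreads
# the 0.6 discrepancy evenly over the four constraints.
odometry = [(0, 1, 1.0, 1.0), (1, 2, 1.0, 1.0), (2, 3, 1.0, 1.0)]
loop = [(3, 0, -2.4, 1.0)]
poses = relax_pose_graph(4, odometry + loop)  # -> [0.0, 0.85, 1.70, 2.55]
```

The key point the toy preserves: loop closures make the problem over-determined, and the backend distributes the accumulated odometry error over the whole trajectory instead of leaving it at the loop's end.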

Executive Summary

This DFG-funded project focused on advances in 3D Simultaneous Localization and Mapping (SLAM). Major outcomes included fast segmentation of planes in 2.5D range images as well as a fast and robust registration method based on these planar segments. Combined with a pose-graph SLAM framework, these methods were shown to work efficiently on several datasets, ranging from high-resolution 3D laser scanners to simple tilting laser scanners.
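A basic primitive underneath any planar-segment pipeline is fitting a plane to a patch of range points. As a hedged illustration — this is not the project's segmentation algorithm, just the standard least-squares building block — a plane fit via SVD takes the centroid as a point on the plane and the singular vector of the smallest singular value as the normal:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an (N, 3) array of points.

    Returns (centroid, unit_normal).  The normal is the right singular
    vector belonging to the smallest singular value of the mean-centred
    points, i.e. the direction of least variance.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

# Points lying exactly on the plane z = 2 (made-up example data).
pts = np.array([[0., 0., 2.], [1., 0., 2.], [0., 1., 2.],
                [1., 1., 2.], [2., 1., 2.]])
centroid, normal = fit_plane(pts)  # normal is (0, 0, +/-1)
```

Registering two scans by matching such planar segments, rather than raw points, is what makes the approach fast: a scan yields a handful of planes instead of hundreds of thousands of points.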

Project Objectives

The project deals with learning a three-dimensional representation of an unstructured indoor environment with an autonomous mobile robot. The general problem of mapping is a fundamental issue in robotics, as maps are crucial for the success of any non-trivial mission. In the past, significant effort was devoted to building two-dimensional maps such as floor plans. While these approaches simplify the problem, they are well developed and sufficient for basic tasks. Mobile robots, however, increasingly operate in three dimensions, owing to significant advances in locomotion and the actual need to overcome, for example, slopes or stairs in any realistic environment.

We propose an online learning algorithm that creates a metric 3D representation of an unstructured environment, encoded in the Virtual Reality Modeling Language (VRML). The system is targeted at the real world, i.e., it has to cope with the unreliable and noisy navigation and range-sensor data of a robot. The algorithm uses an evolutionary method in which a VRML neighborhood model is extracted in real time from a local 3D occupancy grid while the robot moves along. Inspired by previous successful work on real-time learning of 3D eye-hand coordination with a robot arm, the evolutionary algorithm tries to generate VRML code that reproduces the vast amounts of sensor data. In doing so, we use a so-called neighborhood principle: roughly speaking, it relies on a probabilistic estimate of the robot's position in the world, called the standpoint, which greatly reduces the complexity of the learning steps. As a demonstrator scenario, the system will be implemented on the IUB rescue robots.
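The local 3D occupancy grid mentioned above is fed by range measurements: each beam makes the voxels it traverses more likely to be free and its endpoint voxel more likely to be occupied. The following is a minimal sketch of that idea, not the project's implementation; the log-odds increments and the uniform ray sampling are simplifying assumptions chosen for illustration.

```python
import numpy as np

def integrate_ray(grid, origin, endpoint, l_free=-0.4, l_occ=0.85):
    """Update a 3D log-odds occupancy grid with one range beam.

    `origin` and `endpoint` are integer voxel indices.  Voxels along
    the ray get the free-space update; the endpoint voxel gets the
    occupied update.  The straight line is approximated by uniform
    sampling, which is sufficient for a sketch.
    """
    origin = np.asarray(origin, dtype=float)
    endpoint = np.asarray(endpoint, dtype=float)
    n_steps = int(np.max(np.abs(endpoint - origin))) + 1
    for t in np.linspace(0.0, 1.0, n_steps, endpoint=False):
        v = tuple(np.round(origin + t * (endpoint - origin)).astype(int))
        grid[v] += l_free          # beam passed through: likely free
    grid[tuple(endpoint.astype(int))] += l_occ  # beam hit: likely occupied
    return grid

grid = np.zeros((10, 10, 10))      # log-odds 0 = unknown
integrate_ray(grid, (0, 0, 0), (5, 0, 0))
```

Keeping the grid local to the robot's current standpoint, as the neighborhood principle suggests, bounds the memory and update cost regardless of how large the explored environment grows.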

Related Papers

A. Birk, K. Pathak, N. Vaskevicius, M. Pfingsthorn, J. Poppinga and S. Schwertfeger. Surface Representations for 3D Mapping: A Case for a Paradigm Shift. KI - Künstliche Intelligenz, Volume 24, Issue 3, pp. 249–254, September 2010.
K. Pathak, A. Birk, N. Vaskevicius, M. Pfingsthorn, S. Schwertfeger and J. Poppinga. Online three-dimensional SLAM by registration of large planar surface segments and closed-form pose-graph relaxation. Journal of Field Robotics, Special Issue: Three-Dimensional Mapping, Part 3, Volume 27, Issue 1, pp. 52–84, January 2010.