| Project Information | |
| --- | --- |
| Funding Agency | DFG |
| My Role | Researcher |
| Project Website | http://gepris.dfg.de/gepr... |
| Duration | February 2005 to January 2010 |
As a researcher, I contributed to the pose-graph SLAM backend.
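To illustrate what such a backend does: robot poses are nodes in a graph, relative-pose measurements (from odometry or scan registration) are edges, and the backend finds the pose configuration that best explains all constraints. Below is a minimal 2D Gauss-Newton sketch of this idea; it is illustrative only, not the project's actual code, and all names and values are my own assumptions.

```python
# Minimal 2D pose-graph optimization sketch (illustrative; not the
# project's implementation). Poses are (x, y, theta); each edge carries
# a relative-pose measurement z_ij and an information matrix omega.
import numpy as np

def v2t(p):
    """Pose vector (x, y, theta) -> 3x3 homogeneous transform."""
    c, s = np.cos(p[2]), np.sin(p[2])
    return np.array([[c, -s, p[0]], [s, c, p[1]], [0.0, 0.0, 1.0]])

def t2v(T):
    """3x3 homogeneous transform -> pose vector (x, y, theta)."""
    return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

def error(xi, xj, zij):
    """Mismatch between measured and predicted relative pose."""
    return t2v(np.linalg.inv(v2t(zij)) @ np.linalg.inv(v2t(xi)) @ v2t(xj))

def optimize(poses, edges, iters=10, eps=1e-6):
    """Gauss-Newton over all poses; pose 0 is fixed as the gauge."""
    n = len(poses)
    for _ in range(iters):
        H, b = np.zeros((3 * n, 3 * n)), np.zeros(3 * n)
        for i, j, zij, omega in edges:
            e = error(poses[i], poses[j], zij)
            Ji, Jj = np.zeros((3, 3)), np.zeros((3, 3))
            for k in range(3):           # numerical Jacobians
                d = np.zeros(3); d[k] = eps
                Ji[:, k] = (error(poses[i] + d, poses[j], zij) - e) / eps
                Jj[:, k] = (error(poses[i], poses[j] + d, zij) - e) / eps
            si, sj = slice(3 * i, 3 * i + 3), slice(3 * j, 3 * j + 3)
            H[si, si] += Ji.T @ omega @ Ji; H[si, sj] += Ji.T @ omega @ Jj
            H[sj, si] += Jj.T @ omega @ Ji; H[sj, sj] += Jj.T @ omega @ Jj
            b[si] += Ji.T @ omega @ e; b[sj] += Jj.T @ omega @ e
        H[:3, :3] += 1e6 * np.eye(3)     # anchor the first pose
        poses += np.linalg.solve(H, -b).reshape(n, 3)
    return poses

# Three poses along a line; the loop-closure edge (0 -> 2) contradicts
# the drifting odometry, and optimization distributes the correction.
poses = np.array([[0.0, 0, 0], [1.1, 0, 0], [2.2, 0, 0]])
edges = [(0, 1, np.array([1.0, 0, 0]), np.eye(3)),
         (1, 2, np.array([1.0, 0, 0]), np.eye(3)),
         (0, 2, np.array([2.0, 0, 0]), np.eye(3))]
print(optimize(poses, edges))
```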
This DFG-funded project focused on advances in 3D Simultaneous Localization and Mapping. Major outcomes included fast segmentation of planes in 2.5D range images, as well as a fast and robust registration method based on these planar segments. Together with a pose-graph SLAM framework, these methods were shown to work efficiently on datasets from a range of sensors, from high-resolution 3D laser scanners to simple tilting laser scanners.
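To give a flavor of the segmentation step, here is a sketch of one common approach to plane segmentation in organized 2.5D range images: estimate a normal per pixel, then greedily grow regions of pixels with similar normals. This is a generic illustration under my own assumptions (the helper names and angle threshold are hypothetical), not the project's published method.

```python
# Illustrative plane segmentation in a 2.5D range image via region
# growing on local normals (not the project's actual algorithm).
import numpy as np

def normals(points):
    """Per-pixel normals from central differences on an HxWx3 point image."""
    du = np.gradient(points, axis=1)
    dv = np.gradient(points, axis=0)
    n = np.cross(du, dv)
    return n / (np.linalg.norm(n, axis=2, keepdims=True) + 1e-9)

def grow_planes(points, angle_thresh_deg=5.0):
    """Greedy region growing: merge 4-neighbors with similar normals."""
    h, w, _ = points.shape
    n = normals(points)
    labels = -np.ones((h, w), dtype=int)
    cos_t = np.cos(np.radians(angle_thresh_deg))
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != -1:
                continue
            stack = [(sy, sx)]
            labels[sy, sx] = next_label
            while stack:
                y, x = stack.pop()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    v, u = y + dy, x + dx
                    if (0 <= v < h and 0 <= u < w and labels[v, u] == -1
                            and np.dot(n[v, u], n[y, x]) > cos_t):
                        labels[v, u] = next_label
                        stack.append((v, u))
            next_label += 1
    return labels  # pixels sharing a label lie on (roughly) one plane
```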
The project deals with the learning of a three-dimensional representation of an unstructured indoor environment by an autonomous mobile robot. Mapping is a fundamental problem in robotics, as maps are crucial for the success of any non-trivial mission. In the past, significant effort was devoted to building two-dimensional maps such as floor plans. These approaches simplify the problem, but they are well developed and sufficient for basic tasks. Mobile robots, however, increasingly operate in three dimensions, owing to significant advances in locomotion and the need to overcome, for example, slopes or stairs in any realistic environment.

We propose an online learning algorithm that creates a metric 3D representation of an unstructured environment, encoded in the Virtual Reality Modeling Language (VRML). The system is targeted at the real world, i.e., it has to cope with the unreliable and noisy navigation and range-sensor data of a robot. The algorithm uses an evolutionary method in which a VRML neighborhood model is extracted in real time from a local 3D occupancy grid while the robot moves along. Inspired by previous successful work on real-time learning of 3D eye-hand coordination with a robot arm, the evolutionary algorithm tries to generate VRML code that reproduces the vast amounts of sensor data. In doing so, we use a so-called neighborhood principle: roughly speaking, it relies on a probabilistic estimate of the robot's position in the world, called the standpoint, which greatly reduces the complexity of the learning steps. As a demonstrator scenario, the system will be implemented on the IUB rescue robots.
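The local occupancy grid mentioned above is typically maintained with log-odds updates along each range beam: cells the beam passes through become more likely free, and the cell where it ends becomes more likely occupied. The sketch below shows this standard technique in 2D for brevity (the project's grid is 3D); the increment values and names are assumptions for illustration, not project values.

```python
# Toy log-odds occupancy-grid update along a single range beam
# (2D for brevity; all constants here are illustrative assumptions).
import numpy as np

L_OCC, L_FREE = 0.85, -0.4   # log-odds increments (assumed values)

def bresenham(x0, y0, x1, y1):
    """Integer grid cells on the line from (x0, y0) to (x1, y1)."""
    cells, dx, dy = [], abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            return cells
        e2 = 2 * err
        if e2 >= dy: err += dy; x0 += sx
        if e2 <= dx: err += dx; y0 += sy

def integrate_ray(grid, origin, hit):
    """Mark cells along the beam free and the end cell occupied."""
    cells = bresenham(*origin, *hit)
    for cx, cy in cells[:-1]:
        grid[cx, cy] += L_FREE     # beam passed through: more likely free
    grid[hit] += L_OCC             # beam ended here: more likely occupied

grid = np.zeros((50, 50))          # log-odds, 0 = unknown
integrate_ray(grid, (25, 25), (25, 40))
occupied = 1 - 1 / (1 + np.exp(grid))  # back to probabilities
```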