CADDY
Cognitive Autonomous Diving Buddy

Project Information
Funding Agency: EU FP7
My Role: Senior Researcher
Project Website: http://caddy-fp7.eu
Duration: January 2014 to December 2016
Coordinator: University of Zagreb, Croatia
Partners: University of Vienna, Austria
Jacobs University Bremen gGmbH, Germany
Consiglio Nazionale delle Ricerche (CNR), Italy
Divers Alert Network Europe Foundation, Malta
Instituto Superior Tecnico, Lisbon, Portugal
University of Newcastle upon Tyne, UK

My Role in the Project

Within our work package on diver detection and gesture recognition, I co-supervised the work of two PhD students. My own research contributions concern the localization and mapping capabilities of the CADDY system. I am also partly responsible for producing the technical and financial reports for the EC.

Executive Summary

CADDY investigates underwater human-robot interaction with a focus on supporting and monitoring recreational and professional scuba divers. A gesture language is used to communicate commands to the robot underwater. The robot performs three roles: observing the diver's health and status, performing actions on the diver's behalf (including mapping an area), and guiding the diver to points of interest.

See also the project entry on CORDIS.
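
To make the gesture-to-command idea concrete, here is a minimal sketch of how recognized diver gestures could be dispatched to the three buddy roles. All names (the gesture vocabulary, the roles, the functions) are hypothetical illustrations, not the CADDY gesture set or codebase:

```python
# Illustrative sketch: mapping recognized diver gestures to buddy-role
# commands. Gesture names, roles, and actions are assumptions for
# illustration only, not the project's actual gesture language.

from dataclasses import dataclass
from enum import Enum, auto


class Gesture(Enum):
    """A small, hypothetical subset of a diver gesture vocabulary."""
    TAKE_PHOTO = auto()
    MOSAIC_AREA = auto()
    GUIDE_ME = auto()
    STATUS_OK = auto()


@dataclass
class Command:
    role: str      # "observer", "slave", or "guide"
    action: str


# Each recognized gesture is translated into a command for one of the
# three buddy roles described in the summary above.
GESTURE_TO_COMMAND = {
    Gesture.TAKE_PHOTO:  Command(role="slave",    action="capture_image"),
    Gesture.MOSAIC_AREA: Command(role="slave",    action="survey_and_mosaic"),
    Gesture.GUIDE_ME:    Command(role="guide",    action="lead_to_waypoint"),
    Gesture.STATUS_OK:   Command(role="observer", action="log_diver_status"),
}


def dispatch(gesture: Gesture) -> Command:
    """Translate a recognized gesture into a vehicle command."""
    return GESTURE_TO_COMMAND[gesture]


if __name__ == "__main__":
    print(dispatch(Gesture.MOSAIC_AREA))
```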

Video - First Field Trials

Project Objectives

Divers operate in harsh and poorly monitored environments in which the slightest unexpected disturbance, technical malfunction, or lapse of attention can have catastrophic consequences. They manoeuvre in complex 3D environments and carry cumbersome equipment while performing their mission. To overcome these problems, CADDY aims to establish an innovative set-up between a diver and companion autonomous robots (underwater and surface) that exhibit cognitive behaviour by learning, interpreting, and adapting to the diver's behaviour, physical state, and actions.

The CADDY project replaces a human buddy diver with an autonomous underwater vehicle and adds a new autonomous surface vehicle to improve the monitoring, assistance, and safety of the diver's mission. The resulting system plays a threefold role similar to that of a human buddy diver: i) the buddy "observer", which continuously monitors the diver; ii) the buddy "slave", which acts as the diver's "extended hand" during underwater operations, performing tasks such as "do a mosaic of that area", "take a photo of that", or "illuminate that"; and iii) the buddy "guide", which leads the diver through the underwater environment.

The envisioned threefold functionality will be realized through S&T objectives pursued within three core research themes. The "Seeing the Diver" theme focuses on 3D reconstruction of a diver model (pose estimation and recognition of hand gestures) through remote and local sensing technologies, thus enabling behaviour interpretation. The "Understanding the Diver" theme focuses on adaptive interpretation of this model and of physiological measurements in order to determine the diver's state. The "Diver-Robot Cooperation and Control" theme is the link that enables diver interaction with underwater vehicles with rich sensory-motor skills, focusing on cooperative control and optimal formation keeping with the diver as an integral part of the formation.
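
As a small illustration of the formation-keeping idea in the last theme, the sketch below shows a simple proportional station-keeping law that holds a vehicle at a fixed offset from the estimated diver position, so the diver effectively becomes part of the formation. The gains, offsets, and function names are assumptions for illustration, not the project's actual cooperative controller:

```python
# Illustrative sketch (not the CADDY controller): proportional
# station-keeping relative to the diver, with velocity saturation.

import numpy as np


def formation_velocity(diver_pos: np.ndarray,
                       vehicle_pos: np.ndarray,
                       offset: np.ndarray,
                       gain: float = 0.5,
                       v_max: float = 0.5) -> np.ndarray:
    """Return a commanded 3D velocity [m/s] that drives the vehicle
    towards its desired station (diver position + offset), limited to
    v_max so the vehicle does not outrun or startle the diver."""
    error = (diver_pos + offset) - vehicle_pos
    v_cmd = gain * error
    speed = np.linalg.norm(v_cmd)
    if speed > v_max:
        v_cmd = v_cmd * (v_max / speed)
    return v_cmd


# Example: hold station 2 m behind and 1 m above the diver
# (coordinates in metres, z negative meaning deeper).
diver = np.array([10.0, 5.0, -8.0])
vehicle = np.array([7.0, 5.0, -8.5])
print(formation_velocity(diver, vehicle, offset=np.array([-2.0, 0.0, 1.0])))
```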

Related Papers

T. Łuczyński, M. Pfingsthorn and A. Birk. The Pinax-Model for Accurate and Efficient Refraction Correction of Underwater Cameras in Flat-Pane Housings. Ocean Engineering, Vol. 133, pp. 9-22, March 2017. Open Access
M. Pfingsthorn and A. Birk. Generalized Graph SLAM: Solving Local and Global Ambiguities through Multimodal and Hyperedge Constraints. The International Journal of Robotics Research, 35: 601-630, May 2016. Open Access
M. Pfingsthorn, R. Rathnam, T. Łuczyński and A. Birk. Full 3D Navigation Correction using Low Frequency Visual Tracking with a Stereo Camera. IEEE/MTS OCEANS 2016 Shanghai, IEEE Press, April 2016.
A. Gomez Chavez, M. Pfingsthorn, R. Rathnam and A. Birk. Visual Speed Adaptation for Improved Sensor Coverage in a Multi-Vehicle Survey Mission. IEEE/MTS OCEANS 2016 Shanghai, IEEE Press, April 2016.
I. Enchev, M. Pfingsthorn, T. Łuczyński, I. Sokolovski, A. Birk and D. Tietjen. Underwater Place Recognition in Noisy Stereo Data using FAB-MAP with a Multimodal Vocabulary from 2D Texture and 3D Surface Descriptors. IEEE/MTS OCEANS 2015 Genova, Italy, IEEE Press, May 2015.
A. Gomez Chavez, M. Pfingsthorn, A. Birk, I. Rendulić and N. Mišković. Visual Diver Detection using Multi-Descriptor Nearest-Class-Mean Random Forests in the Context of Underwater Human-Robot Interaction (HRI). IEEE/MTS OCEANS 2015 Genova, Italy, IEEE Press, May 2015.