[robotics-worldwide] [jobs] Postdoc and PhD positions on Visual SLAM for humanoid robotics at the CNRS-I3S/UNS Sophia Antipolis

comport Andrew.Comport at cnrs.fr
Tue Apr 14 03:23:13 PDT 2015


The CNRS-I3S (Université de Nice-Sophia Antipolis) is looking for PhD and
Postdoc candidates for “Real-time visual localisation and mapping for
humanoid robots in a manufacturing environment”, starting in September 2015.
The two positions are available in the context of the European H2020 project
COMANOID (CNRS, INRIA, DLR, Univ. Rome 1, Airbus):
1.  Postdoc position
<http://www.i3s.unice.fr/~comport/index.php?option=com_content&view=article&id=129%3Anew-postdoc-position-2015&catid=33%3Anews&Itemid=90>  
on RGB-D localisation and mapping for humanoid robots in a manufacturing
environment.
2.  PhD position
<http://www.i3s.unice.fr/~comport/index.php?option=com_content&view=article&id=130%3Anew-phd-position-2015&catid=33%3Anews&Itemid=90> 
on object recognition in RGB-D images for the accurate localisation of a
manufacturing collaborative robot, in direct collaboration with Airbus Group.
The aim of the project is the accurate localization of a humanoid
collaborative robot in an aircraft assembly line. Both positions are part of
the European H2020 project COMANOID, which aims at assisting workers with
system installation and verification. With a budget of 4.25 M€, COMANOID brings
together the CNRS and INRIA in France, the DLR from Germany, the University
of Rome 1 from Italy and Airbus Group.
An orientable RGB-D sensor (providing color and depth) will be used to estimate
the pose (position and orientation) of the humanoid robot with respect to a
reconstruction of the aircraft fuselage in real time. Particular attention
will be paid to the robustness of the approach, so that the sensors continuously
provide safe and accurate measurements, especially in the context of an
environment that changes incrementally over the aircraft building cycles.
Robustly localizing the pose of the robot using embedded real-time RGB-D
sensor data, as in [1], will provide essential information for multi-contact
and walking planning of the robot in its environment and for performing visual
servoing tasks with respect to object parts. This process will be assisted
by simultaneously maintaining a complete 3D model that contains the fuselage
and all the 3D parts. Since the aircraft, its sub-parts and the shop floors
are fully modeled in 3D, a wealth of prior knowledge is available from the
DMU (Digital Mock-Up). The solution will involve leveraging this very rich
prior information to increase the robot’s localization accuracy to the
precision required to perform installation tasks.
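As a rough illustration of the kind of model-based RGB-D localization described
above (a sketch only, not the project's actual method, which relies on dense
tracking as in [1]; all data and names below are hypothetical), the following
Python snippet aligns points from a depth frame to points sampled from a prior
DMU-like 3D model with a few point-to-plane ICP iterations, using numpy and scipy:

import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation


def small_motion(xi):
    """Build a 4x4 rigid transform from a small twist (wx, wy, wz, tx, ty, tz)."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(xi[:3]).as_matrix()
    T[:3, 3] = xi[3:]
    return T


def point_to_plane_icp(frame_pts, model_pts, model_normals, T_init=np.eye(4), iters=10):
    """Estimate the frame-to-model pose by minimizing point-to-plane distances."""
    tree = cKDTree(model_pts)
    T = T_init.copy()
    for _ in range(iters):
        # Transform the frame points into the model frame and match them
        # to their nearest model points.
        p = (T[:3, :3] @ frame_pts.T).T + T[:3, 3]
        _, idx = tree.query(p)
        q, n = model_pts[idx], model_normals[idx]
        # Linearized point-to-plane residuals r = n . (p - q) and their
        # Jacobian with respect to the incremental twist (w, t).
        r = np.einsum('ij,ij->i', p - q, n)
        J = np.hstack([np.cross(p, n), n])
        xi, *_ = np.linalg.lstsq(J, -r, rcond=None)  # Gauss-Newton step
        T = small_motion(xi) @ T
    return T


# Toy check with synthetic data standing in for the depth frame and the model.
rng = np.random.default_rng(0)
model_pts = rng.uniform(-1.0, 1.0, (500, 3))
model_normals = rng.normal(size=(500, 3))
model_normals /= np.linalg.norm(model_normals, axis=1, keepdims=True)
true_T = small_motion(np.array([0.02, -0.01, 0.03, 0.05, -0.02, 0.01]))
inv_T = np.linalg.inv(true_T)
frame_pts = (inv_T[:3, :3] @ model_pts.T).T + inv_T[:3, 3]
T_est = point_to_plane_icp(frame_pts, model_pts, model_normals)
print(np.round(T_est @ inv_T, 3))  # should be close to the identity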
Applicants should have previous successful experience in computer vision,
C++ programming, 3D geometry and OpenGL. Good knowledge of RGB-D sensors and
data, real-time vision and/or the ROS environment would be appreciated.
Applicants should send their resume and motivation letter to:
Andrew.Comport at cnrs.fr
[1] M. Meilland and A. I. Comport, "On unifying key-frame and voxel-based
dense visual SLAM at large scales," IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS), Tokyo, Japan, 3-8 November 2013.




