[robotics-worldwide] [jobs] PhD positions at the iCub Facility, Italian Institute of Technology

Lorenzo Natale Lorenzo.Natale at iit.it
Thu Apr 23 03:25:42 PDT 2015


This is to advertise the following PhD positions with scholarship at the iCub Facility, Istituto Italiano di Tecnologia.

The positions are available through the PhD course of Bioengineering and Robotics, curriculum on Advanced and Humanoid Robotics. Successful applicants will join the iCub Facility department at the Istituto Italiano di Tecnologia in Genova, Morego (http://www.iit.it/icub).

********
* Theme #4. Tactile object manipulation and perception

Description: recent progress in tactile technology makes it possible to study algorithms for advanced manipulation and exploration of objects. Tactile and, more generally, haptic feedback allows extracting local features such as surface texture, curvature and hardness. These features are fundamental for guiding object manipulation and performing, for example, object re-grasping, bimanual manipulation and controlled slip. Controlled object interaction in turn produces information about the physical and geometrical properties of objects that can complement or even substitute for vision in object recognition. The goal of this PhD is to study signal processing algorithms and control strategies for extracting object features from vision and tactile feedback during object manipulation. We will investigate how the features collected during manipulation can be integrated and used for object recognition. This project will be carried out on the iCub humanoid robot using the tactile system on the hand [1][2].

Requirements: the ideal candidate would have a degree in Computer Science, Engineering or related disciplines, with a background in control theory and machine learning. The candidate should also be highly motivated to work on a robotic platform and have computer programming skills.

References:
[1] Maiolino, P., Maggiali, M., Cannata, G., Metta, G., Natale, L., A Flexible and Robust Large Scale Capacitive Tactile System for Robots, IEEE Sensors Journal, vol. 13, no. 10, pp. 3910-3917, 2013.
[2] Schmitz A., Maiolino P., Maggiali M., Natale L., Cannata G., Metta G., Methods and Technologies for the Implementation of Large Scale Robot Tactile Sensors, IEEE Transactions on Robotics, vol. 27, no. 3, pp. 389-400, 2011.

Contact: Lorenzo Natale (email: name.surname at iit.it)

********
* Theme #5. Real-time software architectures for humanoid robots

Description: In the past few years humanoid robots have evolved at a rapid pace. Research has made impressive advances in both mechatronics and cognitive capabilities, including learning, perception and control. At the same time, the explosion of the mobile device market has led to a remarkable increase in the computational power of embedded CPUs and has made higher-resolution sensors and inertial units available.
Research on humanoid robotics is tackling problems such as human-robot interaction, whole-body control, object manipulation and tool use. These research efforts must be supported by an adequate software infrastructure that allows experimenting with new hardware and algorithms while at the same time reducing debugging time and maximizing code reuse. The complexity of humanoid robots and their peculiar application domain require further software engineering effort devoted to the proper integration of diverse capabilities. The goal of this PhD project is to develop novel solutions to these problems. Topics include:

- Domain Specific Languages and their application to software development in humanoid robotics
- Interoperability between heterogeneous systems
- Strategies for coordination and arbitration of software components
- Programming of behaviors for natural HRI
- Real-time architectures and communication protocols
- Increasing robustness of robot software through testing, self-monitoring, fault tolerance, and autonomous system health inspection

Requirements: the ideal candidate would have a degree in Computer Science, Engineering or related disciplines, with a background in robotics and software engineering. The candidate should also be highly motivated to work on a real robotic platform.

References:
[1] Paikan, A., Fitzpatrick, P., Metta, G., and Natale, L., Data Flow Port's Monitoring and Arbitration, Journal of Software Engineering for Robotics, vol. 5, no. 1, pp. 80-88, 2014.
[2] Fitzpatrick, P., Ceseracciu, E., Domenichelli, D., Paikan, A., Metta, G., and Natale, L., A middle way for robotics middleware, Journal of Software Engineering for Robotics, vol. 5, no. 2, pp. 42-49, 2014.
[3] R. Simmons, D. Kortenkamp, D. Brugali, Robotic Systems Architectures and Programming, Springer Handbook of Robotics, 2nd ed., Springer, 2013.
[4] D. Brugali and A. Shakhimardanov. Component-based Robotic Engineering. Part II: Models and systems. In IEEE Robotics and Automation Magazine, March 2010.
[5] D. Brugali and P. Scandurra. Component-based Robotic Engineering. Part I: Reusable building blocks. In IEEE Robotics and Automation Magazine, December 2009.

Contact: Lorenzo Natale (email: name.surname at iit.it)

********
* Theme #6. Sensing humans: enhancing social abilities of the iCub platform

Description: there is general consensus that robots will in the future work in close interaction with humans. This requires that robots be endowed with the ability to detect humans and interact with them. However, treating humans as simple animated entities is not enough: meaningful human-robot interaction entails the ability to interpret social cues and human intentions. Such capabilities are fundamental prerequisites for programming the robot to react appropriately to humans and for biasing the interpretation of the scene using nonverbal cues (gaze or body gestures).
The aim of this project is to endow the iCub with a fundamental layer of capabilities for detecting humans, their posture and their social intentions. Examples include detecting whether a person is attempting to interact with the robot, and estimating that person's posture and intentions. Conventional research in Computer Vision and Machine Learning focuses on applications in which the image patch of a whole person (or group of people) is visible without strong occlusions. Face-to-face interaction, on the other hand, requires developing novel algorithms for coping with situations in which large areas of the body are occluded or only partially visible.

Requirements: This PhD project will be carried out within the iCub Facility in collaboration with the Department of Pattern Analysis and Computer Vision (PAVIS). The ideal candidate should have a degree in Computer Science or Engineering (or equivalent) and a background in Computer Vision and/or Machine Learning. The candidate should also be highly motivated to work on a robotic platform and have computer programming skills.

Contacts: Lorenzo Natale and Alessio Del Bue (email: name.surname at iit.it)

********
* Theme #7. Spatial modeling of three-dimensional maps from stereo and RGBD sensors

Description: the ability to acquire high-resolution depth information allows three-dimensional geometry to be used to build detailed models of shape. This is fundamental for tasks that require complex interaction between the robot and the environment, such as balancing, walking or object grasping/manipulation. Local shape information can in fact be used to segment the scene and to plan the placement of feet or hands to stabilize the robot, or of fingers to achieve a stable grip on objects. This is a challenging task because it requires observations of both the geometry and the visual appearance of the surrounding surfaces, in relation to the body of the robot. To perform such tasks, features need to be extracted from the data that allow different regions to be compared and matched. Depending on the complexity of the viewed scene, these features can be extracted from the depth data alone or need to be augmented with features extracted from images. The aim of this PhD is to study the general problem of scene understanding by combining three-dimensional depth observations with visual appearance from images. The goal is to investigate novel features for local shape description and machine learning techniques for classification. We consider tasks like locomotion and object manipulation in scenarios that involve whole-body control.

Requirements: the ideal candidate would have a degree in Computer Science, Engineering or related disciplines, with a background in Computer Vision and Machine Learning. The candidate should also be highly motivated to work on a robotic platform and have computer programming skills.

Contacts: Lorenzo Natale and Tariq Abuhashim (email: name.surname at iit.it)

********
* Theme #8. Event-driven human and action detection for the iCub

Description: Interacting with a dynamic environment is one of the major challenges of robotics. Biology clearly outperforms robotic systems acting in real scenarios in terms of appropriateness of the behavioural response, robustness to interference and noise, adaptation to ever-changing environmental conditions, and energy efficiency. All these properties arise from the characteristics of the radically different style of sensing and computation used by the biological brain.
In conventional robots, sensory information is available as a sequence of static snapshots, and high dynamics can be sensed only by increasing the sampling rate. However, the available bandwidth limits the amount of information that can be transmitted, forcing a compromise between resolution and speed. Event-driven vision sensors transmit information as soon as a change occurs in their visual field, achieving very high temporal resolution coupled with extremely low data rates and automatic segmentation of significant events.
The proposed theme aims at exploiting the highly dynamic and sparse information provided by event-driven sensors for fast detection of humans and action recognition. The automatic motion-based segmentation performed by event-driven cameras allows fast detection of regions of interest. The research focus will be on exploiting this segmentation to detect and recognize moving body parts, in order to implement fast, real-time human detection and action recognition. The ultimate goal is to improve the awareness and interaction skills of the iCub robot.

Requirements: degree in Computer Science or Engineering (or equivalent) and background in Computer Vision and/or Machine Learning. High motivation to work on a robotic platform and good programming skills.

Reference: Benosman, R., Clercq, C., Lagorce, X., Ieng, S.-H., Bartolozzi, C., Event-Based Visual Flow, IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 2, pp. 407-417, Feb. 2014, doi: 10.1109/TNNLS.2013.2273537.

Contacts: Chiara Bartolozzi and Lorenzo Natale (email: name.surname at iit.it)

********
* How to apply

Notice that the positions are available through the PhD course of Bioengineering and Robotics, curriculum on Advanced and Humanoid Robotics, offered jointly by IIT and the University of Genova. The official call and application forms are available on the website of the University of Genova.

The official calls are available here:
http://phd.dibris.unige.it/biorob/index.php/how-to-apply/themes (Advanced and Humanoid Robotics)

Applications must be submitted online, instructions for applicants are available here:
http://phd.dibris.unige.it/biorob/index.php/how-to-apply

* Application deadline: 10 June 2015, 12pm Italian time

* Applicants are strongly encouraged to get in touch with the contact person(s) for the individual themes.

--
Istituto Italiano di Tecnologia
Lorenzo Natale, PhD
lorenzo.natale at iit.it
via Morego, 30 16163 Genova
Ph: +39 010 71781946
Fax: +39 010 7170205
www.iit.it



