[robotics-worldwide] PhD Openings in Human-Robot Interaction and Visual perception of social cues at the Italian Institute of Technology and University of Genoa

Alessandra Sciutti alessandra.sciutti at gmail.com
Wed Aug 7 09:16:16 PDT 2013


Apologies for cross-posting

===========================================================================
PhD Openings in Human-Robot Interaction and Visual perception of social cues

RBCS - IIT and  DIBRIS - University of Genoa
Cognitive Robotics, Interaction and Rehabilitation technologies curriculum
===========================================================================

The PhD Program for the Cognitive Robotics, Interaction and Rehabilitation
Technologies curriculum provides interdisciplinary training at the interface
between technology and biomedicine. The general objective of the program is
to train scientists and research technologists capable of dealing with
multidisciplinary projects at the interface between technology and the life
sciences. The themes offered this year as part of this curriculum are
supported by the Robotics, Brain and Cognitive Sciences Department (RBCS) of
the Italian Institute of Technology and by the Department of Informatics,
Bioengineering, Robotics and Systems Engineering (DIBRIS) of the University
of Genoa.

Among the different research themes proposed, I would like to highlight two
topics:

++ PhD Opening in Human-robot interaction
++ PhD Opening in Visual perception and the interpretation of social cues 

The ideal candidates are students with a higher-level university degree
willing to invest extra time and effort in blending into a multidisciplinary
team of neuroscientists, engineers, psychologists and physicists working
together to investigate brain functions and realize intelligent machines,
rehabilitation protocols and advanced prostheses. International applicants
are encouraged and will receive logistical support with visa issues,
relocation, etc.

Below you can find more details related to each position.

Please refer to http://www.iit.it/phdschool (Curriculum Cognitive Robotics,
Interaction and Rehabilitation technologies) for further details on themes
and on-line application procedures. 
***Application DEADLINE: September 20th at noon (CET)***
Feel free to contact me for further information.

-----------------------------------------
Alessandra Sciutti (PhD)
Robotics Brain and Cognitive Sciences Dept. - Italian Institute of
Technology
Via Morego 30, 16163 Genova, Italy
email: alessandra.sciutti at iit.it 


===========================================================================
Theme 11: Human-robot interaction 
Tutor: Alessandra Sciutti 
Department: RBCS (Istituto Italiano di Tecnologia) 
http://www.iit.it/rbcs
===========================================================================

Description: Realizing proficient human-robot collaboration is a fundamental
challenge for robotics, and in recent years it has become more and more
central as the use of robots becomes more pervasive in society. In
particular, an important question is how to translate human social skills
into human-robot interaction. We suggest that efficient mutual understanding
between humans relies substantially on the implicit, non-verbal
communication associated with human motion (1, 2, 3). Our approach aims at
understanding which robotic features promote natural and fluid human-robot
interaction, using robots both as tools to investigate human brain functions
and as a test bed for the derived models. Currently we are testing the motor
and perceptual principles usually adopted in human collaboration - such as
goal anticipation (4, 5), automatic imitation (5), context-dependent
perception (6), and weight understanding from action observation (7, 8) -
during human-robot interaction experiments with the iCub platform. The topic
we propose as a PhD activity is to further investigate how to establish
efficient human-robot collaboration by exploiting the natural mechanisms in
place for human interaction. The research will be based on designing
different human-robot interactive scenarios and assessing the efficiency of
the collaboration and its acceptance by the user, by means of multiple
techniques such as eye-tracking, motion capture and force measurements. The
successful candidate will also have the opportunity to spend part of his/her
PhD at Osaka University and the University of Tokyo in the framework of the
Marie Curie IRSES project CODEFROR, with the purpose of complementing
his/her knowledge with the different expertise available at these
institutes.

Requirements: A background in computer science, robotics, or computational
or behavioral neuroscience is required, as is a willingness to run
experiments with human participants and strong motivation to work in and
adapt to a multidisciplinary environment.

Contact: alessandra.sciutti at iit.it

References 
[1] Rizzolatti G, Fadiga L, Fogassi L, Gallese V (1999) Resonance behaviors
and mirror neurons. Arch Ital Biol 137(2-3):85-100
[2] Reichelt AF, Ash AM, Baugh LA, Johansson RS, Flanagan JR (2013)
Adaptation of lift forces in object manipulation through action observation.
Experimental Brain Research, 1-14.
[3] Chartrand TL, Bargh JA (1999) The chameleon effect: the
perception-behavior link and social interaction. J Pers Soc Psychol
76(6):893-910
[4] Sciutti A, Bisio A, Nori F, Metta G, Fadiga L, Sandini G (2012)
Anticipatory gaze in human-robot interactions. "Gaze in HRI: From Modeling
to Communication" workshop at the 7th ACM/IEEE International Conference on
Human-Robot Interaction, 2012. Boston, Massachusetts, USA.
[5] Sciutti A*, Bisio A*, Nori F, Metta G, Fadiga L, Pozzo T, Sandini G
(2012) Measuring human-robot interaction through motor resonance.
International Journal of Social Robotics. doi:10.1007/s12369-012-0143-1
[6] Sciutti A, Del Prete A, Natale L, Burr DC, Sandini G, Gori M (2013)
Perception during interaction is not based on statistical context. Human
Robot Interaction Conference 2013, Tokyo, Japan, March 3-6, 2013.
[7] Sciutti A, Patanè L, Nori F, Sandini G (2013) Understanding object
properties from the observation of the action partner. Human-Robot
Collaboration workshop at RSS 2013, Berlin, Germany, June 28, 2013.
[8] Sciutti A, Patanè L, Nori F, Sandini G (2013) Do humans need learning to
read humanoid lifting actions? Third Joint IEEE International Conference on
Development and Learning and on Epigenetic Robotics 2013, Osaka, Japan,
August 18-22, 2013.

===========================================================================
Theme 16: Visual perception and the interpretation of social cues 
Tutors: Francesca Odone, Giulio Sandini, Alessandra Sciutti 
Department: DIBRIS (University of Genova) – RBCS (Istituto Italiano di
Tecnologia) 
http://www.dibris.unige.it 
===========================================================================

Description: Social skills play a fundamental role from the very earliest
stages of development. Infants show a marked inclination toward social
interaction, ranging from a preference for biological motion and mutual gaze
to imitation of facial expressions and auditory, oral, and motor matching.
This social inclination is supported by the gradual evolution of the
production and understanding of social signals, such as gestures, gaze
direction and emotional displays. To gain a deeper understanding of this
crucial stage of human development, our goal is to design models of the
perceptual primitives supporting the acquisition of social skills in infants
and to test them on a humanoid robot. State-of-the-art computer vision and
machine learning methods will be the building blocks of the research to be
carried out.

Requirements: A degree in robotics, bioengineering, computer science,
computer engineering, or a related discipline, and an aptitude for problem
solving. A background in computer vision is an asset.

Contacts: francesca.odone at unige.it, giulio.sandini at iit.it, or
alessandra.sciutti at iit.it

===========================================================================
