[robotics-worldwide] [meetings] [Call for papers] Ro-MAN 2017 Workshop on Social Interaction and Multimodal Expression For Socially Intelligent Robots (WS-SIME)

Luis Santos luis at isr.uc.pt
Wed May 31 07:10:58 PDT 2017


[1st Call for papers]  Ro-MAN 2017 Workshop on Social Interaction and Multimodal Expression For Socially Intelligent Robots (WS-SIME) 
Website: http://ws-sime.com/
August 28, 2017
Pestana Palace Hotel, Lisbon, Portugal 
Workshop organized in conjunction with the RO-MAN 2017 <http://www.ro-man2017.org/site/> conference.

Important Dates
- Paper submission deadline: June 9, 2017
- Author notification: June 22, 2017
- Camera-ready submission: July 5, 2017
- Main conference: August 28 - September 1, 2017
- Workshop day: August 28, 2017


Overview
The aim of this full-day workshop is to present rigorous scientific advances in social interaction and multimodal expression for socially intelligent robots, to address current challenges in this area, and to set a research agenda that fosters interdisciplinary collaboration among researchers in the domain.

Recent advances in robotics and artificial intelligence have contributed to the development of "socially interactive robots" that engage in social interactions with humans and exhibit certain human-like social characteristics, including the abilities to communicate through high-level dialogue, to perceive and express emotions using natural multimodal cues (e.g., facial expression, gaze, body posture), and to exhibit distinctive personalities and characters. Applications for socially interactive robots are plentiful: companions for children and the elderly, household assistants, partners in industry, guides in public spaces, educational tutors at school, and so on. Despite this progress, the social interaction and multimodal expression capabilities of robots still fall far short of the intuitiveness and naturalness required for uninformed users to interact with them and to establish and maintain social relationships with them in their everyday lives.

Social interaction and multimodal expression for socially intelligent robots remains a very active research area, with significant practical challenges due to limitations both in technology and in our understanding of how different modalities must work together to convey human-like levels of social intelligence. Designing reliable and believable social behaviors for robots is an interdisciplinary challenge that cannot be approached from an engineering perspective alone. The human, social, and cognitive sciences play a primary role in the development and enhancement of social interaction skills for socially intelligent robots.

This workshop will bring together a multidisciplinary audience interested in the study of multimodal human-human and human-robot interactions to address challenges in these areas and to elaborate on novel ways to advance research in the field, based on theories of human communication and on empirical findings validated in human-robot interaction studies. We welcome contributions on both theoretical aspects and practical applications. The analysis of human-human interactions is of particular importance for understanding how humans send and receive social signals multimodally, through both parallel and sequential use of multiple modalities (e.g., eye gaze, touch, vocal, body, and facial expressions). Results from human-robot interaction studies, in turn, offer the opportunity to understand how uninformed interaction partners (e.g., children, the elderly) perceive the multimodal communication skills developed for social robots and how these skills influence the interaction process (e.g., regarding usability and acceptance).

Topics of interest 
Workshop topics include, but are not limited to:

1. Contributions of a fundamental nature
   a. Psychophysical studies and empirical research on multimodality
2. Technical contributions on multimodal interaction
   a. Novel strategies for multimodal human-robot interaction
   b. Dialogue management using multimodal output
   c. Work focusing on novel modalities (e.g., touch)
3. Multimodal interaction evaluation
   a. Evaluation and benchmarking of multimodal human-robot interactions
   b. Empirical HRI studies with (partially) functional systems
   c. Methodologies for the recording, annotation, and analysis of multimodal interactions
4. Applications for multimodal interaction with social robots
   a. Novel application domains for multimodal interaction
5. Position papers and reviews of the state of the art and ongoing research

Format
WS-SIME is a full-day workshop that includes two invited talks, two rounds of paper presentations, and an interactive poster session featuring an interdisciplinary set of selected participants. After the presentations there will be a brainstorming discussion, and the workshop will close with a final summary and outlook.
 
Submissions
We invite two types of submissions:
 
1) FULL PAPERS (4 pages): Accepted submissions will be presented orally at the workshop (15 minutes + 5 minutes of Q&A).
 
2) SHORT PAPERS (2 pages): Accepted submissions will have a 1-minute “teaser” presentation and will be presented in an interactive poster session at the workshop.
 
All papers will undergo single-blind peer review and will be selected based on relevance, originality, and theoretical and/or technical quality. Papers must be formatted according to the guidelines and templates on the main RO-MAN 2017 conference website <http://ras.papercept.net/conferences/support/support.php>, anonymized, and submitted in PDF format through the EasyChair conference system <https://easychair.org/conferences/?conf=wssime2017>. At least one author of each accepted paper will be required to register and attend the workshop. Accepted full paper contributions will be published in the online CEUR Workshop Proceedings (CEUR-WS.org).

Invited speakers
Ana Paiva – Instituto Superior Técnico, Technical University of Lisbon
Jorge Dias - Institute of Systems and Robotics, University of Coimbra

Organisers
Christiana Tsiourti (Institute of Service Science, University of Geneva, Switzerland)
Jorge Dias (Institute of Systems and Robotics, University of Coimbra, Portugal)
Astrid Weiss (ACIN Institute of Automation and Control, Vienna University of Technology, Austria)
Sten Hanke (Center for Health & Bioresources, AIT Austrian Institute of Technology GmbH, Austria)
Julian Angel-Fernandez (ACIN Institute of Automation and Control, Vienna University of Technology, Austria)
 
Contacts
Christiana Tsiourti: christiana.tsiourti at unige.ch
Sten Hanke: sten.hanke at ait.ac.at

