[robotics-worldwide] [Call for Participation] ICMI 2012 Workshop on Speech and Gesture Production in Virtually and Physically Embodied Conversational Agents

Ross Mead rossmead at usc.edu
Mon Oct 1 18:00:51 PDT 2012


WORKSHOP CALL FOR PARTICIPATION

ICMI 2012 Workshop on Speech and Gesture Production in Virtually and
Physically Embodied Conversational Agents

URL: http://robotics.usc.edu/~icmi

CONFERENCE: 14th ACM International Conference on Multimodal Interaction
(ICMI-2012)
LOCATION: DoubleTree Suites - Carousel Ballroom C, Santa Monica,
California, USA
DATE: Friday, October 26, 2012

DESCRIPTION:
You are invited to participate in this full-day workshop, which aims to
bring together researchers from the embodied conversational agent (ECA) and
sociable robotics communities to spark discussion and collaboration between
the related fields. The focus of the workshop will be on co-verbal behavior
production -- specifically, synchronized speech and gesture -- for
virtually and physically embodied platforms. The workshop will examine both
the planning and the realization of multimodal behavior, highlighting the
factors that implementations in the two fields have in common and those
that distinguish them. The workshop will feature a panel discussion with
experts from the
relevant communities, and a breakout session encouraging participants to
identify design and implementation principles common to both virtually and
physically embodied sociable agents.

The workshop schedule, including the list of accepted papers and speakers,
is available here:
http://robotics.usc.edu/~icmi/2012/schedule.php

TOPICS:
With a focus on speech- and gesture-based multimodal human-agent
interaction, the workshop will address the following topics:
* Computational approaches to:
  - Content and behavior planning, e.g., rule-based or probabilistic models
  - Behavior realization for virtual agents, sociable robots, or both
* From ECAs to physical robots: potential and challenges of cross-platform
approaches
* Behavior specification languages and standards, e.g., FML, BML, MURML
* Speech-gesture synchronization, e.g., open-loop vs. closed-loop approaches
* Situatedness within social/environmental contexts
* Feedback-based user adaptation
* Cognitive modeling of gesture and speech

INVITED SPEAKER: Stefan Kopp (Bielefeld University)

PANEL MEMBERS:
* Dan Bohus (Microsoft Research)
* Justine Cassell (Carnegie Mellon University) [Tentative]
* Stacy Marsella (USC Institute for Creative Technologies)
* Victor Ng-Thow-Hing (Honda Research Institute USA)

PROGRAM COMMITTEE:
* Dan Bohus (Microsoft Research)
* Kerstin Dautenhahn (University of Hertfordshire)
* Jonathan Gratch (USC Institute for Creative Technologies)
* Alexis Heloir (German Research Center for Artificial Intelligence)
* Takayuki Kanda (ATR Intelligent Robotics and Communication Laboratories)
* Jina Lee (Sandia National Laboratories)
* Stacy Marsella (USC Institute for Creative Technologies)
* Maja Matarić (University of Southern California)
* Louis-Philippe Morency (USC Institute for Creative Technologies)
* Bilge Mutlu (University of Wisconsin-Madison)
* Victor Ng-Thow-Hing (Honda Research Institute USA)
* Catherine Pelachaud (TELECOM ParisTech)

WORKSHOP ORGANIZERS:
* Ross Mead (University of Southern California)
* Maha Salem (Bielefeld University)

CONTACT:
* Workshop Questions (icmi2012ws.speech.gesture at gmail.com)
* Ross Mead (rossmead at usc.edu)
* Maha Salem (msalem at cor-lab.uni-bielefeld.de)
