[robotics-worldwide] [meetings] Final CFP: Workshop on Cognitive Architectures for Situated Multimodal Human Robot Language Interaction (ICMI 2018)

stephanie.gross at ofai.at
Fri Jul 6 06:16:55 PDT 2018

* Apologies for cross-postings *

Workshop on Cognitive Architectures for Situated Multimodal Human Robot
Language Interaction (ICMI 2018)
October 16th, in Boulder, Colorado
Extended paper submission deadline: July 31, 2018
Invited talks: John Laird, Chen Yu


The workshop will take place in conjunction with the 20th ACM International
Conference on Multimodal Interaction (ICMI 2018) in Boulder, Colorado on the
16th of October.
In many application fields of human-robot interaction, robots need to adapt
to changing contexts and must therefore be able to learn tasks from
non-expert humans through verbal and non-verbal interaction. Inspired by
human cognition, we are interested in various aspects of learning, including
multimodal representations, mechanisms for the acquisition of concepts
(words, objects, actions), and memory structures, up to full models of
socially guided, situated, multimodal language interaction. Such models can
then be used to test theories of human situated multimodal interaction, as
well as to inform computational models in this area of research.

Call for Papers

The workshop aims to bring together linguists, computer scientists,
cognitive scientists, and psychologists with a particular focus on embodied
models of situated natural language interaction. Submissions should address
at least one of the following questions:

* What kinds of data are adequate for developing socially guided models of
language acquisition, e.g. multimodal interaction data, audio, video, motion
tracking, eye tracking, or force data (from individual or joint object
manipulation)?
* How should empirical data be collected and preprocessed in order to
develop cognitively inspired models of language acquisition? For example,
should human-human (HH) or human-robot (HR) interaction data be collected?
* Which mechanisms does an artificial system need in order to deal with the
multimodal complexity of human interaction? How can information transmitted
via different modalities be combined at a higher level of abstraction?
* Models of language learning through multimodal interaction: What should
semantic representations or mechanisms for language acquisition look like in
order to allow extension through multimodal interaction?
* Based on the above representations, which machine learning approaches are
best suited to handle multimodal, time-varying, and possibly
high-dimensional data? How can a system learn incrementally in an open-ended
fashion?

Invited Speakers

Keynotes will be given by John Laird, Professor in the Computer Science and
Engineering Division of the Electrical Engineering and Computer Science
Department at the University of Michigan, and Chen Yu, Professor at the
Computational Cognition and Learning Lab at Indiana University.

Important Dates

Extended paper submission deadline: July 31, 2018
Notification of acceptance: August 15, 2018
Final version: August 24, 2018
Workshop: October 16, 2018

Submission Instructions

Articles should be 4-6 pages long, formatted using the ACM template of the
ICMI conference. For each accepted contribution, at least one of the authors
is required to attend the workshop.

Organizers

Stephanie Gross, Austrian Research Institute for Artificial Intelligence,
Vienna, Austria
Brigitte Krenn, Austrian Research Institute for Artificial Intelligence,
Vienna, Austria
Matthias Scheutz, Department of Computer Science at Tufts University,
Massachusetts, USA
Matthias Hirschmanner, Automation and Control Institute at Vienna University
of Technology, Vienna, Austria
