[robotics-worldwide] [2nd CFP] EURASIP Journal on Audio, Speech, and Music Processing Special Issue on Music Content Processing by and for Robots

Kazuhiro Nakadai nakadai at jp.honda-ri.com
Thu Jan 13 22:37:06 PST 2011

[apologies for cross-posting]
[please circulate]

EURASIP Journal on Audio, Speech, and Music Processing
Special Issue on
Music Content Processing by and for Robots


===Second Call for Papers===

Music Content Processing (MCP) from audio signals is a very dynamic
research field, as witnessed by the growth of the International
Society for Music Information Retrieval and its yearly conference. In
parallel, in Robotics, increasing effort is being devoted to Robot
Audition and to musical collaborations between robots and humans. It
seems only natural that state-of-the-art MCP should play a significant
role in future entertainment robotics applications. Yet most existing
musical robots are either preprogrammed or too complex to drive a
"human-like" musical interaction. Whichever link of the music
communication chain we take part in (e.g., performing, composing, or
listening to music), our response to musical sounds changes
dynamically and relies on processing music at a high level of
abstraction. Producing robust computational models of such behavior,
embedded in robotic platforms, calls for novel research at the
frontier between MCP and Robotics.

This special issue aims to
(1) foster the consideration of online processing, real-time issues,
computational simplicity, and robustness (e.g., to noise and typical
signal distortions) in MCP research,
(2) bridge the gap between Robot Audition research and MCP research, and
(3) provide an up-to-date overview of the current state of the art in
musical robot technology.

We target unpublished contributions from industry or academia,
covering all aspects from theory to practice. Potential topics
include, but are not limited to:

- Embedded machine listening
- Real-time and robustness issues in the following:
   * Sound/music classification and identification
   * Audio tempo estimation and beat tracking
   * Audio pitch extraction
   * Score following
- Emotion/expression recognition/generation through musical sounds
- Multimodal music processing
- Musical-Social interactions (human-robot, robot-robot)
- Open development platforms of use to musical robots (visualization, etc.)
- Robot noise reduction (e.g., singing voice cancellation,
motion-noise suppression)
- Audio-motion synchronization
- Music behavior generation and planning (e.g., robot dancing,
playing, and singing)
- Robotic instrument playing, learning, and high-level control
- Human-robot mapping of motion capture data
- Machine musicianship (mechanics, software, etc.)

Before submission, authors should carefully read the journal's
Author Guidelines, which are located at
http://www.hindawi.com/journals/trt/guidelines.html. Prospective
authors should submit an electronic copy of their complete manuscript
through the journal's Manuscript Tracking System at
http://mts.hindawi.com/ according to the following timetable:

Manuscript Due           April 1, 2011
First Round of Reviews   July 1, 2011
Publication Date         October 1, 2011

Lead Guest Editor
- Fabien Gouyon, Telecommunications and Multimedia Unit, INESC Porto,
Porto, Portugal

Guest Editors
- Kazuhiro Nakadai, Honda Research Institute Japan Co., Ltd.; Tokyo
Institute of Technology, Japan
- Jorge Solis, Faculty of Science and Engineering, WASEDA University, Japan
- Gil Weinberg, Center for Music Technology, Georgia Tech., USA
- Hiroshi G. Okuno, Department of Intelligence Science and Technology,
Graduate School of Informatics, Kyoto University, Japan

Kazuhiro Nakadai, Ph.D.
 HONDA Research Institute Japan Co., Ltd.
 8-1 Honcho, Wako-shi, Saitama, 351-0114, JAPAN
 TEL: +81-48-462-2121 ext.7438  FAX: +81-48-462-5221

 Visiting Associate Professor,
 Graduate School of Information Science and Engineering,
 Tokyo Institute of Technology
