[robotics-worldwide] Reminder: CfP: IROS 2010 Workshop on Performance Evaluation and Benchmarking for Intelligent Robots and Systems with Cognitive and Autonomy Capabilities

Fabio Bonsignorio fabio.bonsignorio at heronrobots.com
Mon Jul 5 08:42:47 PDT 2010


Apologies for multiple postings
----------------------------------------------

Dear all,

There are just a few days left to submit a paper to the IROS workshop on
Performance Evaluation and Benchmarking for Intelligent Robots and
Systems with Cognitive and Autonomy Capabilities, to be held in Taipei,
Taiwan, on October 22nd during IROS 2010
(October 18-22, 2010), http://www.iros2010.org.tw/about.php
You will find the call for papers below.
If you have new and interesting ideas on the matter, please submit
an IEEE TRO-formatted 2-4 page extended abstract to one of the
organizers.
The MAIN PURPOSE of this workshop is to contribute to the progress of
performance evaluation and benchmarking, focusing on intelligent
robots and systems, including those with some high-level cognitive
capabilities and some degree of autonomy, by providing a forum for
participants to exchange their ongoing work and ideas in this regard.
IN PARTICULAR, a purpose of the workshop is to reach better and
agreed-upon ideas on how to define and measure system-level
characteristics like ‘autonomy’, ‘cognition’ and ‘intelligence’.
We will also give a short review of ongoing activities in the field.
The papers will be included in the IROS 2010 workshop CD.

A related workshop, focused on the replication of robotics
experiments and the associated publishing process, was organized last
week at RSS'10 (Zaragoza, Spain, June 28, 2010): Good Experimental
Methodology in Robotics and Replicable Robotics Research.
PROGRAM and details here:
http://www.heronrobots.com/EuronGEMSig/GEMSIGRSS10Program.html
It was remarkable :-) . We will put the presentations online soon.

All the Best

Fabio, Angel, and Raj

Important dates

Extended abstracts: July 10th
Notification of acceptance: July 17th
Final full papers: August 12th

Workshop Highlights

Objectives and contents

The robust and adaptive pursuit of complex goals in complex, open-ended,
changing environments, especially in applied robotics (e.g.
service, manufacturing, and unmanned vehicle applications), imposes
stringent demands on robots: they must cope with and react to dynamic
situations using limited on-board computational resources and
incomplete, often corrupted, sensor data, while achieving a higher
level of dependability. High-level cognitive capabilities, including
knowledge representation, perception, control, and learning, are
considered essential elements that will allow robots to perform
complex tasks in highly uncertain environments with an appropriate
level of autonomy; these capabilities need to be measured in terms of
bare performance, adaptivity, and dependability. As the complexity and
variety of required tasks and target environments increase, opening
new application capabilities and domains, it becomes more and more
necessary to develop well-principled procedures for comparing
quantitatively the solutions provided by robotics research, thereby
easing the exchange of methods and solutions between different
research groups and the assessment of the state of the art.
Cumulative progress coming from new, more successful (or simply
more robust) implementations of concepts already presented in the
literature, but not originally validated with an exhaustive
experimental methodology, runs the risk of being ignored if
appropriate benchmarking procedures, allowing actual practical
results to be compared against standard accepted procedures, are not
in place.
It is a well-known fact that current robotics research practice makes
it difficult not only to compare the results of different approaches,
but also to assess the quality of individual research work. This is
even clearer when we analyze cognitive capabilities or levels of
autonomy. In recent years a number of initiatives have been taken to
address this problem by studying the ways in which research results in
robotics can be assessed and compared. In this context, the European
Robotics Research Network EURON and the EU robotics research funding
offices have as one of their major goals the definition and promotion
of benchmarks for robotics research. Moving from a similar motivation,
the Performance Metrics for Intelligent Systems (PerMIS) workshop
series has been dealing with similar issues in the context of
intelligent systems. The recently constituted IEEE TC PEBRAS has the
purpose of pursuing these objectives. The main purpose of this
workshop is to contribute to the progress of performance evaluation
and benchmarking, focusing on intelligent robots and systems,
including those with some high-level cognitive capabilities and some
degree of autonomy, by providing a forum for participants to exchange
their ongoing work and ideas in this regard. A purpose of the
workshop is also to reach better and agreed-upon ideas on how to
define and measure system-level characteristics like ‘autonomy’,
‘cognition’ and ‘intelligence’.

The emphasis of the workshop will be on principles, methods, and
applications that expand beyond the current limits of robotics
applications in terms of cognitive capabilities and autonomy.
Another key issue will be a capability-led understanding of cognitive
robots: how to define shared ontologies or dictionaries for discussing
robotic cognitive systems in terms of their performance; the
relationships between different cognitive robotics capabilities;
requirements, theories, architectures, models and methods that can be
applied across multiple engineering and application domains; and
detailing and better understanding the requirements for robots in
terms of performance, the approaches to meeting these requirements,
and the associated trade-offs. The proper definition of benchmarking
is related to the problem of measuring the capabilities of robots in a
context in which, in many cases, the ‘robotics experiments’ themselves
are difficult to ‘replicate’.

List of topics

We welcome any topic relevant to benchmarking and performance
evaluation in the context of cognitive solutions to practical
problems, such as:

•       Knowledge representation, perception (sensing), and learning
•       Uncertainty management in robot navigation, path-planning and control
•       Cognitive manipulation
•       Metrics for sensory motor coordination
•       Metrics for visual servoing effectiveness/efficiency
•       Benchmarking autonomy and robustness to changes in the environment/task
•       Capability-led understanding of cognitive robots
•       Scalable autonomy measurements
•       Shared ontologies to discuss robotic cognitive systems in terms of
their performance capabilities
•       Relationships between different cognitive robotics capabilities
•       Requirements, theories, architectures, models and methods that can
be applied across multiple engineering and application domains
•       Detailing and understanding better the requirements for robots in
terms of performance, the approaches to meeting these requirements,
the trade-offs in terms of performance
•       The development of experimental scenarios to evaluate performance,
demonstrate generality, and measure robustness
•       Benchmarking of sensory motor coordination
•       Performance modeling of the relationship between a task and the
environment where it is performed
•       Relationship between benchmarking and replication of
experiments with robots
•       Robotics experiment reporting

Intended Audience

The proposed workshop is the fifth in the series, after four successful
and well-attended workshops from IROS’06 to IROS’09, while related
workshops have been organized at RSS and ICRA. It is part of a
general effort to improve the effectiveness of robotics research
carried out within the EURON network and at US NIST. The primary
audience of the proposed workshop is intended to be researchers and
practitioners, from both academia and industry, with an interest in
cognitive robotics: how these approaches can be utilized to generate
intelligent behaviors amidst uncertainty for robots in the service
and commercial sectors, and how robotics technology can find new
applications in everyday life. The workshop is also aimed at
benchmarking and objectively evaluating the performance of such
robots. Accordingly, it is envisioned to be useful for anyone who has
an interest in the quantitative performance evaluation of robots
and/or robot algorithms.

Organizers

Angel P. del Pobil, Ph.D.
Co-Chair for Research, European Robotics Research Network EURON
Professor, Department of Engineering and Computer Science
Universitat Jaume-I
Campus Riu Sec, Edificio TI, E-12071 Castellon, Spain.
Email: pobil at icc.uji.es Tel: +34-964-72.82.93

Raj Madhavan, Ph.D.
R & D Staff Member
Computational Sciences and Engineering Division, Oak Ridge National
Laboratory (ORNL) &
Guest Researcher
Intelligent Systems Division, National Institute of Standards and
Technology (NIST)
100 Bureau Dr. Stop 8230, Gaithersburg, MD 20899, U.S.A.
Email: raj.madhavan at nist.gov Tel: +1-301-975-2865

Fabio Bonsignorio
CEO & Founder
Heron Robots srl V.R. Ceccardi 1/18, 16121 Genova, Italy.
Email: fabio.bonsignorio at heronrobots.com Tel: +39-339-8406011
Professor
Banco de Santander Chair of Excellence in Robotics
Universidad Carlos III de Madrid
Edificio Agustín de Betancourt, Despacho 1.3B01
Avda. Universidad, 30
28911 Leganés
Spain


