[robotics-worldwide] [meetings] RSS 2018 Workshop on New Benchmarks, Metrics, and Competitions for Robotic Learning

Markus Wulfmeier markus at robots.ox.ac.uk
Wed Apr 18 09:09:31 PDT 2018

Dear colleagues,

Please find below the call for papers to the RSS 2018 workshop on New
Benchmarks, Metrics, and Competitions for Robotic Learning (*Deadline: May
27*). Apologies for cross-posting and please feel free to share this
announcement with your colleagues.

Looking forward to seeing you in Pittsburgh!

RSS 2018 Workshop on New Benchmarks, Metrics, and Competitions for Robotic Learning

Call for contributions


Workshop at the Robotics: Science and Systems conference (RSS 2018)

Pittsburgh, Pennsylvania, USA, June 29-30, 2018

May 27, 2018: Deadline for submission.
June 5, 2018: Acceptance notification.
June 29 (full day) - 30 (half day), 2018: Workshop date.


Niko Sünderhauf niko.suenderhauf at qut.edu.au,

Markus Wulfmeier markus at robots.ox.ac.uk

We invite authors to contribute extended abstracts or full papers that:


   identify the shortcomings of existing benchmarks, datasets, and
   evaluation metrics for robotics

   propose improved datasets, evaluation metrics, benchmarks, and protocols
   for robotics that foster repeatable evaluation and motivate research in
   important areas not well covered by existing benchmarks

   propose metrics and general approaches for better repeatability,
   reproducibility, comparison, and sharing of results

   address specific research challenges in robot learning, such as coping
   with open-set conditions, uncertainty estimation, incremental /
   continual learning, active learning, active vision, and transfer learning


This workshop will discuss and propose new benchmarks, competitions, and
performance metrics that address the specific challenges arising when
deploying (deep) learning in robotics. In addition to new benchmarks and
metrics, the focus lies on methods for ensuring repeatability,
reproducibility, and easier comparison and sharing of results. Researchers
in robotics currently lack widely accepted, meaningful benchmarks, metrics,
and competitions that inspire the community to work on the critical research
challenges for robotic learning, and that allow repeatable experiments and
quantitative evaluation. This is in stark contrast to computer vision,
where datasets like ImageNet and COCO, and the associated competitions,
have fueled many of the advances in recent years.

This workshop will therefore bring together experts from the robotics,
machine learning, and computer vision communities to identify the
shortcomings of existing benchmarks, datasets, and evaluation metrics. We
will discuss the critical challenges for learning in robotic perception,
planning, and control that are not well covered by the existing benchmarks,
and combine the results of these discussions to outline new benchmarks for
learning in robotic perception, planning, and control.

The new proposed benchmarks shall complement existing benchmark
competitions and be run annually in conjunction with conferences such as
RSS, CoRL, ICRA, NIPS, or CVPR. They will help to close the gap between
robotics, computer vision, and machine learning communities, and will
foster crucial advancements in machine learning for robotics.

Instructions for submission can be found under

INVITED SPEAKERS (confirmed so far)


   Dieter Fox (University of Washington)

   Wolfram Burgard (University of Freiburg)

   Stefanie Tellex (Brown University)

   Oliver Brock (Technical University Berlin)

   Vladlen Koltun (Intel)



ORGANIZERS
   Niko Suenderhauf (Queensland University of Technology, Australian Centre
   for Robotic Vision)

   Markus Wulfmeier (University of Oxford)

   Anelia Angelova (Google Research)

   Ken Goldberg (UC Berkeley)

   Feras Dayoub (Queensland University of Technology)
