[robotics-worldwide] [meetings] CFP ICRA 2019 Workshop on Dataset Generation and Benchmarking of SLAM Algorithms for Robotics and VR/AR

Sajad Saeedi s.saeedi at imperial.ac.uk
Sat Mar 9 15:01:55 PST 2019


Title: Dataset Generation and Benchmarking of SLAM Algorithms for 
Robotics and VR/AR

Location: ICRA 2019, Montreal, Canada

Event Type: Joint Workshop and Tutorial


Abstract submission due date: *March 20 2019*

Abstract acceptance notification: *March 31 2019*

Camera-ready version due date: *April 8 2019*

Workshop day: *Friday* *May 24 2019*



Synthetic datasets have gained an enormous amount of popularity in the 
computer vision community, from training and evaluation of Deep 
Learning-based methods to benchmarking Simultaneous Localization and 
Mapping (SLAM). Having the right tools to create customized datasets 
enables faster development, with a focus on robotics applications. A 
large number of datasets exist, but with emerging applications and new 
research directions, there is a need for versatile dataset generation 
tools that cover all aspects of our daily lives. At the same time, SLAM 
is becoming a key component of robotics and augmented reality (AR) 
systems. While a large number of SLAM algorithms have been presented, 
there has been little effort to unify their interfaces or to perform a 
holistic comparison of their capabilities. This is a problem, since 
different SLAM applications can have different functional and 
non-functional requirements: for example, a mobile phone-based AR 
application has a tight energy budget, while a UAV navigation system 
usually requires high accuracy. This workshop aims to bring together 
experts in these two fields, dataset generation tools and benchmarking, 
to address the challenges researchers are facing.

This event will introduce novel benchmarking and dataset generation 
methods. As organizers, we will introduce *InteriorNet* (BMVC 
2018), *SLAMBench2.0* (ICRA 2018), and *MLPerf*.

(1) InteriorNet, developed at Imperial College London, is a versatile 
dataset generation application, capable of simulating a wide range of 
sensors and environmental variations, such as moving objects and 
daytime lighting changes.

(2) SLAMBench2.0, developed at the University of Edinburgh, Imperial 
College London and the University of Manchester, is an open-source 
benchmarking framework for evaluating existing and future SLAM systems, 
both open and closed source, over an extensible list of datasets, while 
using a comparable and clearly specified list of performance metrics. A 
wide variety of existing datasets, such as InteriorNet, TUM, and 
ICL-NUIM, and many SLAM algorithms, such as ElasticFusion, InfiniTAM, 
ORB-SLAM2, and OKVIS, are supported. Integrating new algorithms and 
datasets into SLAMBench2.0 is straightforward and clearly specified by 
the framework. Attendees will gain hands-on experience generating 
datasets and evaluating SLAM systems with SLAMBench.

(3) The MLPerf effort aims to build a common set of benchmarks that 
enables the machine learning (ML) field to measure system performance 
for both training and inference from mobile devices to cloud services. 
Researchers from several universities including Harvard University, 
Stanford University, University of Arkansas at Little Rock, University 
of California, Berkeley, University of Illinois at Urbana-Champaign, 
University of Minnesota, University of Texas at Austin, and University 
of Toronto have
contributed to MLPerf.


Topics of Interest:

Topics of interest include, but are not limited to:

- SLAM Evaluation

- Reproducible Results

- Performance Analysis

- Application-oriented Mapping

- Metrics for Loop Closure Evaluation

- Active Vision Benchmarking and Datasets

- Metrics for Evaluations: from Perception to Motion Control

- Dataset and Benchmarking of SLAM in Dynamic Environments

- Task-based SLAM Evaluation: Navigation, Grasping, Planning, etc.

- Datasets and Benchmarking of AI for Robotics and Scene Understanding

- Customized Dataset Generation for SLAM and Robotics Learning: Tools 
and Datasets

- Deep Learning and AI: Datasets, Evaluation, and Benchmarking for 
Semantic and 3D Scene Understanding


Submission Format: for extended abstracts or full papers, please use 
the standard IEEE format (2-8 pages)

Submission Link: https://easychair.org/conferences/?conf=dgbicra2019


Desired Outcomes of the Workshop

- Share experience with other researchers

- Develop better techniques for performance evaluation

- Build and nurture a strong community that brings together research 
and industry


- Prizes will be given to the best paper and also to the best …

- The workshop accepts research papers describing early-stage research 
on emerging topics.

- The workshop is intended for quick publication of work-in-progress, 
early results, etc. The workshop is not intended to prevent later 
publication of extended papers.

- Informal proceedings will be made permanently available on the 
workshop website: https://sites.google.com/view/icra-2019-workshop/home.

- If there are any issues with submissions, please contact Sajad Saeedi 
at s.saeedi at imperial.ac.uk


Organizing Committee:

Sajad Saeedi, Imperial College London, UK

Bruno Bodin, Yale-NUS College, Singapore

Wenbin Li, University of Bath, UK

Luigi Nardi, Stanford University, USA

Rui Tang, Kujiale.com, China
