[robotics-worldwide] [meetings] [CfP] RSS 2020 Workshop on Self-Supervised Robot Learning

Abhinav Valada valada at cs.uni-freiburg.de
Sat Mar 14 13:18:53 PDT 2020


Call for Papers


Workshop on Self-Supervised Robot Learning

Robotics: Science and Systems (RSS)

July 13th, 2020

Corvallis, Oregon, U.S.A.

Website: https://www.brainlinks-braintools.uni-freiburg.de/rss20-ssrl



Self-supervised learning is a promising direction that aims to learn 
representations from the data itself, without explicit or manual 
supervision. One of its major benefits is the ability to scale to large 
amounts of unlabelled data in a lifelong learning manner and to improve 
performance by reducing the effect of dataset bias. Recent developments 
in self-supervised learning have achieved performance comparable to, or 
better than, that of fully-supervised models. However, many of these 
methods have been developed within domain-specific communities such as 
robotics, computer vision, or reinforcement learning. The aim of this 
workshop is to bring together researchers from these different 
communities to discuss opportunities and challenges and to explore new 
directions.



The focus topics of our workshop include, but are not restricted to:


    Self-supervised learning for robotics, robot vision, reinforcement learning


    Self-supervised domain adaptation


    Meta-learning of self-supervised tasks


    Large-scale self-supervised learning


    Learning of generalizable pretext-tasks


    Loss functions for self-supervised learning


    Learning from auxiliary/multiple tasks


    Multimodal and cross-modal learning

Submission Instructions


We encourage participants to submit their research in the form of a 
single PDF. Submissions may be up to 4 pages in length, including 
figures but excluding references and any supplementary material. Please 
use the RSS conference template. Accepted papers will be presented in a 
poster session, and selected award papers will be presented as spotlight 
talks. All submitted contributions will go through a single-blind review 
process. The contributed papers will be made available on the workshop's 
website, and selected papers will be invited for a special issue of a 
major robotics journal.

Submission Website: https://easychair.org/conferences/?conf=ssrl20


In order to make acceptance decisions early, we request interested 
researchers to submit a single-page extended abstract by 19th April as 
an expression of interest. The authors will then have until 2nd July to 
include new results and submit the full 4-page paper. We also welcome 
submissions that have already been accepted to RSS or other major 
conferences, as well as journal papers that have not been presented at a 
conference.

Important Dates


- One page submission deadline: April 19th, 2020 AoE

- Notification of acceptance: April 24th, 2020

- Full paper submission: July 2nd, 2020 AoE

- Workshop date: July 13th, 2020

Invited Speakers


- Pieter Abbeel (UC Berkeley & Covariant.AI)

- Dieter Fox (University of Washington & NVIDIA)

- Abhinav Gupta (CMU & Facebook AI Research)

- Roberto Calandra (Facebook AI Research)

- Chelsea Finn (Stanford University)

- Pierre Sermanet (Google Brain)

- Andy Zeng (Google Brain)

Organizing Committee


- Abhinav Valada, University of Freiburg

- Anelia Angelova, Google Research/Google Brain

- Joschka Boedecker, University of Freiburg

- Oier Mees, University of Freiburg

- Wolfram Burgard, University of Freiburg

For further information please contact us at <rss20-ssrl at googlegroups.com>.


Abhinav, Anelia, Joschka, Oier, Wolfram
