[robotics-worldwide] [meetings] Deadline Extended: 1st Workshop on Deep Learning for Visual SLAM at CVPR'18

Clark, Ronald ronald.clark at imperial.ac.uk
Sun Mar 25 15:19:51 PDT 2018

Due to numerous requests we have extended the deadline for the 1st Workshop on Deep Learning for Visual SLAM to 10 April 2018!

This is especially to give those who have not yet submitted a chance to submit.

Please see below for revised dates...

Website: http://www.visualslam.ai

Workshop overview:
Visual SLAM and ego-motion estimation are two of the key challenges and cornerstone requirements of machine perception. To enable the next generation of visual SLAM, we need to pursue better means of integrating prior knowledge and understanding of the world. This workshop will focus on the intersection of deep learning and real-time visual SLAM. It will explore ways in which data-driven models can be harnessed to create visual SLAM algorithms that are less fragile and more robust than existing state-of-the-art approaches. The workshop will also investigate ways in which deep-learned models can be used alongside traditional approaches in a unified and synergistic fashion.

Paper submission: 10 April 2018
Notification of Acceptance: 15 April 2018
Camera ready: 19 April 2018
Final Schedule: 15 May 2018
Workshop date: 18 June 2018

Paper details:
Papers are limited to 8 pages.
Submissions may be accepted as either oral or poster presentations.
Accepted papers will be published in the CVPR workshop proceedings.
Submission is via CMT: https://cmt3.research.microsoft.com/DLVSLAM2018

We are particularly soliciting papers containing new and innovative ideas (possibly with preliminary experimental evaluation) related, but not limited, to the following topics:

- Dense, Direct and Sparse Visual SLAM methods
- Learning for Real-time Odometry, Tracking and Ego-Motion estimation
- Learning for Single and Multi-View 3D Reconstruction
- Visual Place recognition and Relocalization
- Semantic SLAM methods (semantic elements are a fundamental component of human perception and scene understanding)
- New methods for 3D Scene Representation and Compression

More info: http://www.visualslam.ai

We hope to see you all there!

Ronald Clark
Dyson Fellow
Imperial College London
