[robotics-worldwide] [meetings] CfP: ICRA2019 Workshop "ViTac: Integrating Vision and Touch for Multimodal and Cross-modal Perception"

Luo, Shan Shan.Luo at liverpool.ac.uk
Mon Mar 18 10:53:22 PDT 2019


Abstract
Animals interact with the world through multimodal sensing, especially vision and touch in the case of humans. In contrast, artificial systems usually rely on a single sensing modality, with distinct hardware and algorithmic approaches developed for each, e.g., computer vision and tactile robotics. Future robots, as embodied agents, should make the best use of all available sensing modalities to interact with the environment. Over the last few years, there have been advances in fusing information from distinct modalities and in selecting between modalities to obtain the most appropriate information for achieving a goal, e.g., grasping or manipulating an object. Furthermore, there has been a recent acceleration in the development of camera-based optical tactile sensors, such as the GelSight and TacTip sensors, bridging the gap between vision and tactile sensing and enabling cross-modal perception. This workshop will cover recent progress in combining vision and touch sensing, from the perspective of how touch complements vision to achieve better robot perception, exploration, learning and interaction with humans. The proposed full-day workshop aims to foster active collaboration, discussion of methods for fusing vision and touch, discussion of challenges in multimodal and cross-modal sensing, the development of optical tactile sensors, and their applications.

Topics of Interest
• trends in combining vision and tactile sensing for robot perception
• development of optical tactile sensors (using visual cameras or optical fibres)
• integration of optical tactile sensors into robotic grippers and hands
• roles of vision and touch sensing in different object perception tasks, e.g., object recognition, localization, object exploration, planning, learning and action selection
• interplay between touch sensing and vision
• bio-inspired approaches for fusion of vision and touch sensing
• psychophysics and neuroscience of combining vision and tactile sensing in humans and animals
• computational methods for processing vision and touch data in robot learning
• deep learning for optical tactile sensing and relation/interaction with deep learning for robot vision
• the use of vision and touch for safe human-robot interaction/collaboration

Invited Speakers
• Edward Adelson (MIT) – a world-renowned neuroscientist in vision science, also well known for developing the GelSight optical touch sensor;
• Peter Allen (Columbia University) – recognised for pioneering work on the integration of vision and tactile sensing, especially applied to robot grasping;
• Yasemin Bekiroglu (Vicarious AI, USA) – an expert in using vision and touch data for grasp stability assessment;
• Alberto Rodriguez (MIT) – an expert in grasping and manipulation, winner of several Amazon Picking Challenges, and recently a developer of the GelSlim touch sensor;
• Oliver Kroemer (CMU) – renowned for his pioneering work on learning dynamic tactile sensing with vision-based training;
• Sergey Levine (UC Berkeley) – an expert in machine learning and reinforcement learning, with applications in hand-eye coordination for robotic grasping;
• Lorenzo Natale (IIT) – an expert in tactile and visual perception, especially applied in humanoid robotics;
• Vincent Hayward (UPMC Univ Paris) – distinguished for his research on human perception, especially human touch and haptics;
• Rebecca Lawson (University of Liverpool) – an expert in visual and haptic processing in the human object recognition system;
• Hongbin Liu (King’s College London) – an expert in developing optical-fibre-based tactile sensors and soft robotics.

Submissions
Contributions should be submitted as extended abstracts (max. 2 pages) in IEEE conference format (two columns), as PDF files (<5 MB), via the EasyChair submission page:

https://urldefense.proofpoint.com/v2/url?u=https-3A__easychair.org_conferences_-3Fconf-3Dicra2019vitac&d=DwIF-g&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=Xg1XoBywXHuKywHdC39rTlia48feWbH4hkLGyBpG6ak&s=tIBtRtO3HbSL34oPnIBaKsNjOHQyVaQ7ETwddMBqfcg&e=

Posters and live demonstrations will be reviewed by the organizers using a single-blind review process. Accepted contributions will be presented as posters during the workshop, and the best posters will be invited to give a talk.

Key dates
Submission deadline: 1 April 2019
Notification of acceptance: 15 April 2019
Camera-ready deadline: 1 May 2019
Workshop day: 23/24 May 2019 (Montreal Convention Centre, Montreal, Canada)

Authors of papers accepted at the ViTac workshop are invited to submit an extended full paper to the Special Issue of Frontiers in Robotics and AI on the same topic. More details can be found here<https://urldefense.proofpoint.com/v2/url?u=https-3A__www.frontiersin.org_research-2Dtopics_10004_vitac-2Dintegrating-2Dvision-2Dand-2Dtouch-2Dfor-2Dmultimodal-2Dand-2Dcross-2Dmodal-2Dperception&d=DwIF-g&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=Xg1XoBywXHuKywHdC39rTlia48feWbH4hkLGyBpG6ak&s=3h4U3hPKkaa-rFQW_CF5tO5rYLH1RZOTvXKWloQALi0&e=>.

Website: https://urldefense.proofpoint.com/v2/url?u=http-3A__bit.ly_icra2019vitac&d=DwIF-g&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=Xg1XoBywXHuKywHdC39rTlia48feWbH4hkLGyBpG6ak&s=jlB26TpnecYoD3SYBEE6Qn5C0PbSOX5aiyRquUiO0-c&e=
Contact: Shan.Luo at liverpool.ac.uk<mailto:Shan.Luo at liverpool.ac.uk>

Organisers
Shan Luo (University of Liverpool)
Nathan Lepora (Univ Bristol & Bristol Robotics Lab)
Uriel Martinez Hernandez (University of Bath)
João Bimbo (Istituto Italiano di Tecnologia)
Huaping Liu (Tsinghua University)

Support:
IEEE RAS Technical Committee on Haptics
IEEE RAS Technical Committee on BioRobotics
IEEE RAS Technical Committee on Human-Robot Interaction and Coordination
IEEE RAS Technical Committee on Computer and Robot Vision
IEEE RAS Technical Committee on Cognitive Robotics


-------------------------------------
Dr. Shan Luo

Lecturer (Assistant Professor) in Robotics

Director of the smARTLab<https://urldefense.proofpoint.com/v2/url?u=http-3A__wordpress.csc.liv.ac.uk_smartlab_&d=DwIF-g&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=Xg1XoBywXHuKywHdC39rTlia48feWbH4hkLGyBpG6ak&s=LpTIzTD_uwaj4uzYLPTBjgNAMk9n1_qCcNp4Xa5NK6k&e=>
Department of Computer Science
The University of Liverpool
Liverpool, L69 3GJ
United Kingdom

Email: shan.luo at liverpool.ac.uk<mailto:shan.luo at liverpool.ac.uk>
Web: https://urldefense.proofpoint.com/v2/url?u=https-3A__cgi.csc.liv.ac.uk_-7Eshanluo_&d=DwIF-g&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=Xg1XoBywXHuKywHdC39rTlia48feWbH4hkLGyBpG6ak&s=olDiegJsa6sT0qBVq8jN98ngBuJPq1w4dWNTVcafj7M&e=

