[robotics-worldwide] [meetings] CVPR'19 Workshop on Event-based Vision and Smart Cameras - Call for papers and demos
scaramuzza.davide at gmail.com
Sun Mar 17 13:57:05 PDT 2019
Workshop website: http://rpg.ifi.uzh.ch/CVPR19_event_vision_workshop.html
Time, Date and Location: Monday, June 17, 2019. Full day workshop at
CVPR'19, Long Beach, California.
This workshop is dedicated to event-based cameras, smart cameras, and the
algorithms that exploit them. Event-based cameras are revolutionary vision
sensors with three key advantages: a measurement rate that is almost 1
million times faster than that of standard cameras, a latency of
microseconds, and a dynamic range that is eight orders of magnitude larger
than that of standard cameras. Because of these advantages, event-based
cameras open frontiers that are unthinkable with standard cameras (which
have been the main sensing technology of the past 60 years). These
revolutionary sensors enable the design of a new class of algorithms to
track a baseball in the moonlight, build a flying robot with the agility of
a fly, and perform structure from motion in challenging lighting conditions
and at remarkable speeds.

These sensors became commercially available in 2008 and are slowly being
adopted in computer vision and robotics. They have made headlines in recent
years, with event-camera company Prophesee receiving $40 million in
investment from Intel and Bosch, and Samsung announcing mass production as
well as the sensors' use in combination with the IBM TrueNorth processor to
recognize human gestures.

Cellular processor arrays (CPAs), such as the SCAMP sensor, are novel
sensors in which each pixel has a programmable processor; they thus perform
massively parallel processing near the image plane. Unlike a conventional
image sensor, the SCAMP does not output raw images, but rather the results
of on-sensor computations, for instance a feature map or an optic flow map.
Because early vision computations are carried out entirely on-sensor, the
resulting system achieves high speed and low power consumption, enabling
new embedded vision applications in areas such as robotics, AR/VR,
automotive, gaming, and surveillance.

This workshop will cover the sensing hardware, as well as the processing
and learning methods needed to take advantage of the above-mentioned novel
cameras.
Confirmed speakers:
- Andrew Davison (Imperial College London and SLAMCore)
- Cornelia Fermuller (University of Maryland)
- Mike Davies (Intel Corp.)
- Piotr Dudek (University of Manchester)
- Hyunsurk Eric Ryu (Samsung Electronics)
- Yulia Sandamirskaya (University of Zurich / ETH Zurich)
- Chiara Bartolozzi (Istituto Italiano di Tecnologia)
- Robert Mahony (Australian National University)
- Garrick Orchard (Intel Corp.)
- Simone Lavizzari (Prophesee)
- Kynan Eng (iniVation / aiCTX)
- Shoushun Chen (CelePixel)
Call for Papers and Demos:
Research papers and demos are solicited in, but not limited to, the
following areas:
- Event-based / neuromorphic vision.
- Algorithms: visual odometry, SLAM, 3D reconstruction, optical flow
estimation, image intensity reconstruction, recognition, stereo depth
reconstruction, feature/object detection and tracking, calibration, sensor
fusion.
- Model-based, embedded, or learning-based approaches.
- Event-based signal processing, control, bandwidth control.
- Event-based active vision.
- Event camera datasets and/or simulators.
- Applications in: robotics (navigation, manipulation, drones, ...),
automotive, IoT, AR/VR, space, inspection, surveillance, crowd counting,
etc.
- Near-focal-plane processing, such as cellular processor arrays (e.g.,
SCAMP).
- Biologically-inspired vision and smart cameras.
- Novel hardware (cameras, neuromorphic processors, etc.) and/or software.
- New trends and challenges in event-based and/or biologically-inspired
vision.
Accepted papers at the main conference:
Authors of papers on the above topics accepted at the main conference
(CVPR'19) are encouraged to contact the workshop organizers (contact
details below) to make arrangements to showcase their work at the workshop.
We solicit live demonstrations of event-based vision systems and
prototypes. We plan to have a dedicated poster and demonstration session
for authors to interact with the audience and show their systems and
prototypes.
Important dates:
Paper submission: March 29, 2019.
Demo abstract submission: March 29, 2019.
Notification to the authors: April 4, 2019.
Camera-ready paper: April 10, 2019.
Workshop day: June 17, 2019.
Submission website: https://cmt3.research.microsoft.com/EVENTVISION2019
Author guidelines. Please follow the CVPR guidelines and template:
- For paper submission, please refer to the CVPR guidelines. Page limit: 8
pages + references. Papers will be published by IEEE and in the CVF open
access archive.
- For demo abstract submission, authors are encouraged to submit an
abstract of up to 2 pages.
Organizers:
- Davide Scaramuzza - University of Zurich
- Guillermo Gallego - University of Zurich
- Kostas Daniilidis - UPenn
Prof. Dr. Davide Scaramuzza
Director of the Robotics and Perception Group:
Inst. of Informatics, University of Zurich,
Inst. of Neuroinformatics, University of Zurich and ETH Zurich
Andreasstrasse 15, AND 2.10, Zurich, Switzerland
Office: +41 44 635 24 09
YouTube Channel: https://www.youtube.com/ailabRPG/