[robotics-worldwide] CFP: 2nd IEEE Workshop on Egocentric (First-Person) Vision at CVPR 2012

Hamed Pirsiavash hpirsiav at ics.uci.edu
Wed Mar 14 16:25:24 PDT 2012


2nd IEEE Workshop on Egocentric (First-Person) Vision in Conjunction 
with CVPR 2012
http://egovision12.cc.gatech.edu

Call for papers/abstracts
===============================================


Organizers
================
James M. Rehg (Georgia Tech, USA)
Deva Ramanan (UC Irvine, USA)
Xiaofeng Ren (Intel, USA)
Hamed Pirsiavash (UC Irvine, USA)
Alireza Fathi (Georgia Tech, USA)

Invited Speakers
================
Takeo Kanade (Carnegie Mellon University, USA)
Others will be confirmed soon.

Program Committee
================
Antonio Torralba (MIT, USA)
Ian Reid (University of Oxford, UK)
Bernt Schiele (Max Planck Institute, Germany)
Charless Fowlkes (UC Irvine, USA)
Andrew Davison (Imperial College London, UK)
Laurent Itti (USC, USA)
Greg Mori (Simon Fraser University, Canada)
Stefan Carlsson (KTH, Sweden)
Fernando De la Torre (CMU, USA)
Yoichi Sato (University of Tokyo, Japan)
Matthai Philipose (Intel Research Seattle, USA)
Abhinav Gupta (CMU, USA)
Walterio Mayol-Cuevas (University of Bristol, UK)
Kris Kitani (CMU, USA)


Schedule
================
Submission deadline: April 10, 2012
Notification of Acceptance: April 20, 2012
Camera Ready Due: April 25, 2012
Workshop Day at CVPR: June 17, 2012



The goal of this workshop is to call for a concerted effort to 
understand the opportunities and challenges emerging in egocentric 
vision, to identify key tasks and evaluate the state of the art, and to 
discuss future directions. We invite submissions in all areas of vision 
that explore the egocentric perspective, including, but not limited to:

- Egocentric object detection, recognition, and categorization
- Egocentric activity and action recognition
- Egocentric vision in detecting activities of daily living
- Egocentric vision in ubiquitous computing
- Leveraging first-person gaze for egocentric vision
- Human computer/robot interaction using egocentric vision
- Online learning and modeling of objects and scenes
- Data collection, benchmarking, and performance evaluation
- Integration of egocentric vision with other sensors
- Feature detection, tracking, and matching in egocentric video
- Motion analysis and scene segmentation with moving cameras
- Localization and visual SLAM in everyday environments
- Machine learning techniques in egocentric vision
______________________________________________________________________________________________________

  For further questions, please send an email to egovision12 at gmail.com
______________________________________________________________________________________________________
