[robotics-worldwide] call for Challenge on software solutions for the Automatic Object Identification (AOI) and Tracking (as part of the SAGA 2013 1st INTERNATIONAL WORKSHOP ON SOLUTIONS FOR AUTOMATIC GAZE DATA ANALYSIS)

Kai Essig kai.essig at uni-bielefeld.de
Tue Jun 11 09:14:43 PDT 2013


[Apologies for multiple postings.]

1. Call for Challenge on Automatic Object Identification (AOI) and Tracking

as part of the

SAGA 2013:
1st INTERNATIONAL WORKSHOP ON SOLUTIONS FOR AUTOMATIC GAZE DATA ANALYSIS
  - uniting academics and industry.

24-26 October 2013, Bielefeld University, Germany
Cognitive Interaction Technology Center of Excellence

Workshop Website: http://saga.eyemovementresearch.com/

===========================================================================

Important Dates:


August 15, 2013: Deadline for a 2-page abstract sketching your approach.
September 2, 2013: Notification of acceptance for the challenge.
October 2, 2013: Submission of the final abstracts and final results.

October 24-26, 2013: Presentation of the challenge results at the
SAGA 2013 Workshop at Bielefeld University, Germany.

===========================================================================

We are very pleased to publish this call for challenge contributions as
part of the SAGA 2013 1st International Workshop on Solutions for
Automatic Gaze Data Analysis. The challenge will focus on software
solutions for automatic object recognition as a trailblazer for
vision-based object and person tracking algorithms. Automatic
recognition and tracking of objects or persons in video sequences, in
real time, is a key prerequisite for many application fields, such as
mobile service robotics, Human-Robot Interaction (HRI), Computer
Vision, Digital Image Processing, autonomous assistance and
surveillance systems (e.g., driver assistance systems), and Eye
Tracking. Applications range from the tracking of objects (e.g.,
manipulation or recognition of objects in dynamic scenes) and body
parts (e.g., head or hand tracking for facial expression and gesture
classification) to persons (e.g., person re-identification or visual
following).

Although many efficient tracking methods have been introduced for
different tasks over the last years, they are mostly restricted to
particular environmental settings and therefore cannot be applied to
general application fields. This is due to a range of factors: 1.)
Often, the underlying assumptions about the environment cannot be met,
such as a static background, constant lighting, and homogeneous or
invariant object appearance. These idealized conditions are usually
absent in highly dynamic environments, as they are common, for
example, in mobile scenarios. 2.) Object models cannot be applied
because of the high variance in the appearance of the tracked persons
or objects. 3.) Most algorithms are computationally quite expensive
(large systems often impose hard computational constraints on the
algorithms used).

===========================================================================

Details on the SAGA 2013 CHALLENGE on Automatic Object Identification 
(AOI) and Tracking:


In order to drive research on software solutions for the automatic
annotation of videos, we offer a special challenge on this topic.
The purpose of the challenge is to encourage the community to work on a
set of specific software solutions and research questions and to
continuously improve on earlier results obtained for these problems over
the years. This will hopefully not only push the field as a whole and
increase the impact of work published in it, but also contribute open
source hardware, methods and data analysis software back to the
community.

For the challenge, we address this topic on the basis of eye-tracking
data. To this end, we are providing on the workshop website a set of
test videos (duration 2-3 minutes) and separate text files with the
corresponding gaze data, for which solutions should be written. These
gaze videos, recorded by a scene camera attached to an eye-tracking
system, show people as they look at objects or interact with them in
mobile applications. The gaze data consists of a time-stamped list of
x- and y-positions of the gaze points (in the coordinate system of the
scene video). For selected videos, frame counter information will also
be available to assist with the synchronization of the video and the
gaze data.
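
As an illustration, a minimal Python sketch for reading such a gaze
data file might look as follows; the whitespace-separated column
layout (timestamp in milliseconds, x, y) is an assumption made for
this example, so please consult the data description on the workshop
website for the actual format:

    # Hypothetical loader for a gaze data text file with whitespace-
    # separated columns: timestamp_ms, gaze_x, gaze_y. The actual
    # column layout may differ; see the workshop website.
    def load_gaze_data(path):
        samples = []
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue  # skip blank lines and header/comment lines
                t, x, y = line.split()[:3]
                samples.append((float(t), float(x), float(y)))
        return samples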

For the challenge, we are looking for semi- and fully-automatic
software solutions for the recognition and tracking of objects over
the whole video sequence. The software should provide the coordinates
of the tracked objects and use this information to automatically
calculate object-specific gaze data, such as the number of fixations
and the cumulative fixation duration, using the time-stamped list of
2D gaze coordinates in the eye-tracking file. There are no
restrictions on the way in which the relevant objects are marked or on
the techniques used to track the objects. The only constraint is that
your software solution can read and process the provided videos and
report gaze-specific data for the selected objects, either as a text
file (which can serve as input for a statistical program such as SPSS,
Matlab, R, or MS Excel) or through some kind of visualization.
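
To make the expected output more concrete, the following is a minimal
sketch, not a reference implementation: it uses a simple
dispersion-based fixation detection (I-DT) over the gaze samples and
an assumed boxes_at(t) interface that returns the tracker's bounding
boxes at time t; the dispersion and duration thresholds are
illustrative values only:

    # Sketch: dispersion-based (I-DT) fixation detection and
    # per-object gaze statistics. Thresholds, data layout, and the
    # boxes_at interface are assumptions, not challenge requirements.
    def detect_fixations(samples, max_dispersion=30.0, min_duration=100.0):
        """samples: list of (timestamp_ms, x, y); returns fixations
        as (t_start, t_end, centroid_x, centroid_y)."""
        fixations, i = [], 0
        while i < len(samples):
            j = i
            # grow the window while its spatial dispersion stays small
            while j + 1 < len(samples):
                xs = [s[1] for s in samples[i:j + 2]]
                ys = [s[2] for s in samples[i:j + 2]]
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    break
                j += 1
            t0, t1 = samples[i][0], samples[j][0]
            if t1 - t0 >= min_duration:
                n = j - i + 1
                cx = sum(s[1] for s in samples[i:j + 1]) / n
                cy = sum(s[2] for s in samples[i:j + 1]) / n
                fixations.append((t0, t1, cx, cy))
                i = j + 1
            else:
                i += 1
        return fixations

    def object_gaze_stats(fixations, boxes_at):
        """boxes_at(t) -> {object_name: (x0, y0, x1, y1)} is an
        assumed interface to the tracker's per-frame bounding boxes."""
        stats = {}  # object_name -> (fixation_count, total_duration_ms)
        for t0, t1, cx, cy in fixations:
            for name, (x0, y0, x1, y1) in boxes_at((t0 + t1) / 2).items():
                if x0 <= cx <= x1 and y0 <= cy <= y1:
                    n, d = stats.get(name, (0, 0.0))
                    stats[name] = (n + 1, d + (t1 - t0))
        return stats

The resulting per-object counts and durations could then be written
out as a plain text table for import into SPSS, Matlab, R, or MS Excel.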

All submissions will be evaluated by an independent jury according to
the evaluation criteria (see below). Additionally, a live session is
scheduled for the third day, in which all selected solutions can be
demonstrated to interested workshop participants. The three best
solutions will receive an award.

Prize money:

1st Prize: 1,000 €
2nd Prize: 500 €
3rd Prize: 250 €

We would like to thank our premium sponsor SensoMotoric Instruments
(SMI) for the contribution of the prize money.

The SAGA challenge features test videos recorded with different devices
from
- SensoMotoric Instruments (SMI) [SMI EyeTracking Glasses]
- Tobii Technologies [Tobii Glasses]
- Applied Science Laboratories (ASL)
   / Engineering Systems Technologies (EST) [ASL Mobile Eye-XG]

===========================================================================

Submissions:

In order to allow more time for the implementation process, a two-step
submission procedure has been devised for the challenge. The decision
on acceptance to the challenge will be based on a preliminary
abstract. The final evaluation and ranking of the software solutions
will be based on the final abstract and the final results for a test
set of videos, including videos similar to those on the website:

a) Preliminary submissions should consist of a 2-page abstract
describing the implementation details of your proposed software
solution, including the following:

- description of the underlying techniques and implementations
- description of object selection and tracking processes

b) Final submissions shall extend the preliminary submission to a
3-page paper by adding the following details:

- number of fixations and cumulative fixation duration details for the
   specified objects
- performance data (such as computation time, number of selected
   objects, parallel tracking of several objects in the scene)
- snapshot of the results

We will use results based on manual annotation to evaluate the submitted
results. The following evaluation criteria will be applied:

- quality of the automated benchmark results (region and pixel based)
   compared to the results given by manual annotation (for a
   region-based example, see the sketch after this list)
- conceptual innovation
- performance (such as computation time, number of selected objects,
   parallel tracking of several objects in the scene)
- robustness (such as tracking performance and the general scope of
   the application)
- usability
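
For orientation only: region-based agreement between a tracked
bounding box and a manually annotated one is commonly scored via
intersection over union (IoU). The sketch below shows one plausible
way such a score could be computed; the exact metric used by the jury
will be specified on the workshop website:

    # Sketch: intersection over union (IoU) of two axis-aligned boxes,
    # each given as (x0, y0, x1, y1). Illustrative only; the jury's
    # exact region/pixel-based metric will be published on the website.
    def iou(box_a, box_b):
        ax0, ay0, ax1, ay1 = box_a
        bx0, by0, bx1, by1 = box_b
        ix0, iy0 = max(ax0, bx0), max(ay0, by0)
        ix1, iy1 = min(ax1, bx1), min(ay1, by1)
        inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
        union = ((ax1 - ax0) * (ay1 - ay0)
                 + (bx1 - bx0) * (by1 - by0) - inter)
        return inter / union if union > 0 else 0.0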

The test videos and a corresponding description of them can be found on
the workshop website. Additionally, you can find a detailed description
of how we perform the manual annotation. The exact description for the
challenge, including the evaluation criteria and the required format for
the results, will appear on the workshop website within the next 3
weeks. Please check the website regularly for updates.

Abstracts will be peer-reviewed by at least two members of an
international program committee. We will provide templates on the
workshop website. We are currently pursuing possible options for
publication as a special issue of a journal or as an edited volume.

Please Note: All challenge participants must register separately for
access to the challenge material and the video download.

===========================================================================

We would like to thank our commercial sponsors:

Premium Sponsors
- SensoMotoric Instruments (SMI) [challenge]
   / SMI Eye Tracking Glasses (www.eyetracking-glasses.com)

Sponsors
- Tobii Technologies [live demo workshop session]
   / Tobii Glasses (http://www.tobii.com/en/eye-tracking-research/global/products/hardware/tobii-glasses-eye-tracker/)

===========================================================================

Challenge Organising Committee:

Workshop Organisers:
- Kai Essig
- Thies Pfeiffer
- Pia Knoeferle
- Helge Ritter
- Thomas Schack
- Werner Schneider

All from the
Cognitive Interaction Technology Center of Excellence
at Bielefeld University

Scientific Board:
- Thomas Schack
- Helge Ritter
- Werner Schneider

Jury of the Challenge:
- Kai Essig
- Thies Pfeiffer
- Pia Knoeferle
- Denis Williams (SensoMotoric Instruments, SMI)

Please visit the website periodically for updates:
http://saga.eyemovementresearch.com/about-saga/

For additional questions, please contact: saga at eyemovementresearch.com

We look forward to receiving your submissions and to welcoming you to
Bielefeld in October, 2013!

On behalf of the organisers

Kai Essig

