[robotics-worldwide] [software] ROS package for TRACA real-time visual object tracking
t.fischer at imperial.ac.uk
Tue Nov 20 06:27:27 PST 2018
We just released a ROS package for visual object tracking based on our CVPR 2018 paper “Context-aware Deep Feature Compression for High-speed Visual Tracking”. The package tracks objects in a variety of scenarios at over 100 fps, and is robust to scale changes and brief full occlusions. Input frames can be provided either by direct access to a webcam or via a ROS topic; an initial bounding box (for example, from an object detector) is required to start tracking.
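To illustrate the initialisation handshake described above, here is a minimal Python sketch of a tracker client that must receive one initial bounding box before per-frame updates are valid. The class and method names are hypothetical and stand in for the package's actual ROS interface; the update step is a placeholder, not the TRACA algorithm.

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    # Axis-aligned box in pixel coordinates: top-left corner plus size,
    # mirroring what an object detector would hand to the tracker.
    x: int
    y: int
    w: int
    h: int

class TrackerClient:
    """Hypothetical client sketch: the tracker needs an initial
    bounding box (e.g. from a detector) before update() is meaningful."""

    def __init__(self):
        self._box = None

    def initialise(self, box: BoundingBox) -> None:
        self._box = box

    def update(self, frame) -> BoundingBox:
        if self._box is None:
            raise RuntimeError("tracker needs an initial bounding box")
        # Real tracker: correlation-filter response on compressed deep
        # features. Placeholder: return the previous box unchanged.
        return self._box

client = TrackerClient()
client.initialise(BoundingBox(x=120, y=80, w=64, h=64))
box = client.update(frame=None)
```

In the actual package the same contract holds over ROS topics: frames arrive on an image topic and the initial box on a separate topic, rather than through direct method calls.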
*Link to paper*: http://openaccess.thecvf.com/content_cvpr_2018/papers/Choi_Context-Aware_Deep_Feature_CVPR_2018_paper.pdf
*Link to code*: https://sites.google.com/site/jwchoivision/home/traca
Please do not hesitate to contact Jongwon Choi (jwchoi.pil [at] gmail.com) regarding general questions about the object tracker, or me (t.fischer [at] imperial.ac.uk) for ROS-specific questions.
We propose a new context-aware, correlation-filter-based tracking framework that achieves both high computational speed and state-of-the-art performance among real-time trackers. The high speed comes mainly from the proposed deep feature compression, achieved by a context-aware scheme utilizing multiple expert auto-encoders; a context in our framework refers to the coarse category of the tracking target according to its appearance pattern. In the pre-training phase, one expert auto-encoder is trained per category. In the tracking phase, the best expert auto-encoder is selected for a given target, and only this auto-encoder is used. To achieve high tracking performance with the compressed feature map, we introduce extrinsic denoising processes and a new orthogonality loss term for pre-training and fine-tuning of the expert auto-encoders. We validate the proposed context-aware framework through a number of experiments, where our method achieves performance comparable to state-of-the-art trackers that cannot run in real-time, while itself running at over 100 fps.
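The expert-selection step above can be sketched numerically: given several pre-trained expert auto-encoders, pick the one that reconstructs the target's initial feature map with the lowest error, and use only that expert to compress features thereafter. The NumPy sketch below uses toy linear encoder/decoder matrices purely for illustration; the real TRACA experts are convolutional networks trained per appearance category, and all dimensions here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim, code_dim, n_experts = 32, 8, 4

# Toy linear "expert auto-encoders": encoder E (code_dim x feat_dim)
# and decoder D (feat_dim x code_dim), one pair per context category.
experts = []
for _ in range(n_experts):
    E = rng.standard_normal((code_dim, feat_dim)) / np.sqrt(feat_dim)
    D = rng.standard_normal((feat_dim, code_dim)) / np.sqrt(code_dim)
    experts.append((E, D))

def reconstruction_error(expert, x):
    """L2 error between the input feature and its encode-decode round trip."""
    E, D = expert
    return float(np.linalg.norm(D @ (E @ x) - x))

# Deep feature of the target at initialisation (placeholder random vector).
x0 = rng.standard_normal(feat_dim)

# Select the best-fitting expert once; only this expert compresses
# features for the remainder of the tracking episode.
best = min(range(n_experts),
           key=lambda k: reconstruction_error(experts[k], x0))
compressed = experts[best][0] @ x0  # code_dim-dimensional compressed feature
```

Selecting once at initialisation, rather than evaluating every expert per frame, is what keeps the per-frame compression cost low.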
Personal Robotics Laboratory
Imperial College London