Robust Visual Tracking via Multi-Task Sparse Learning

  • Posted on May 10, 2013 at 3:18 pm by kcarr1@illinois.edu.

Published in CVPR 2012.


In this paper, we formulate object tracking in a particle filter framework as a multi-task sparse learning problem, which we denote as Multi-Task Tracking (MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in MTT. By employing popular sparsity-inducing ℓp,q mixed norms (p ∈ {2, ∞} and q = 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. Compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves both tracking performance and overall computational complexity. Interestingly, we show that the popular ℓ1 tracker [15] is a special case of our MTT formulation (denoted as the ℓ1,1 tracker) when p = q = 1. The learning problem can be efficiently solved using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed-form updates. As such, MTT is computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that MTT methods consistently outperform state-of-the-art trackers.
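To make the joint-sparse learning step concrete, below is a minimal sketch of the ℓ2,1 instance of the problem solved with APG, assuming a least-squares data term, i.e. min_X 0.5·||DX − Y||²_F + λ·||X||_{2,1}, where D holds the dictionary templates and each column of Y is one particle's observation. This is an illustrative reconstruction, not the released code: the names (prox_l21, mtt_l21_apg, lam) are made up here, and the closed-form proximal step is the standard row-wise shrinkage for the ℓ2,1 norm.

```python
import numpy as np

def prox_l21(X, tau):
    """Proximal operator of tau * ||X||_{2,1}: row-wise shrinkage.

    Each row of X (one template's coefficients across all particles)
    is scaled toward zero as a group, so whole rows vanish together.
    This is what enforces joint sparsity across the particle tasks.
    """
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * X

def mtt_l21_apg(D, Y, lam=0.1, n_iter=200):
    """Solve min_X 0.5*||D X - Y||_F^2 + lam*||X||_{2,1} via APG.

    D: (d, m) dictionary of templates; Y: (d, n) particle observations;
    returns X: (m, n), one representation (column) per particle.
    """
    m, n = D.shape[1], Y.shape[1]
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    X, Z, t = np.zeros((m, n)), np.zeros((m, n)), 1.0
    for _ in range(n_iter):
        G = Z - (D.T @ (D @ Z - Y)) / L    # gradient step on the smooth term
        X_new = prox_l21(G, lam / L)       # closed-form proximal step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Z = X_new + ((t - 1.0) / t_new) * (X_new - X)  # Nesterov momentum
        X, t = X_new, t_new
    return X

# Tiny smoke test with synthetic data (dimensions are arbitrary):
rng = np.random.default_rng(0)
D = rng.standard_normal((50, 20))      # 20 dictionary templates, 50-D features
Y = rng.standard_normal((50, 400))     # 400 particles from the particle filter
X = mtt_l21_apg(D, Y, lam=0.5)
print("active templates:", int((np.linalg.norm(X, axis=1) > 1e-8).sum()))
```

Because rows of X are shrunk jointly, a template is either used by essentially all particles or by none, which is the interdependency the mixed norm exploits; the released code linked below implements the full tracker, including the dynamic dictionary updates and the other ℓp,q variants.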

People:
Tianzhu Zhang
Bernard Ghanem
Narendra Ahuja

Documents:
Paper (preprint PDF)
BibTeX
Source code:
Code

Acknowledgement:
This study is supported by the research grant for the Human Sixth Sense Programme at the Advanced Digital Sciences Center from Singapore’s Agency for Science, Technology and Research (A*STAR).