Partial Occlusion Handling for Visual Tracking via Robust Part Matching

Published in CVPR 2014


Part-based visual tracking is advantageous due to its robustness against partial occlusion. However, how to effectively exploit the confidence scores of individual parts to construct a robust tracker remains a challenging problem. In this paper, we address this problem by simultaneously matching parts across multiple frames, which is realized by a locality-constrained low-rank sparse learning method that establishes multi-frame part correspondences through the optimization of partial permutation matrices. The proposed part matching tracker (PMT) has a number of attractive properties. (1) It exploits the spatial-temporal locality constraint for robust part matching. (2) It matches local parts from multiple frames jointly by considering their low-rank and sparse structure, which effectively handles part appearance variations caused by occlusion or noise. (3) The PMT model has a built-in mechanism for leveraging multi-mode target templates, so that the dilemma of template updating when encountering occlusion during tracking can be better handled. This contrasts with existing methods that only match parts between a pair of frames. We evaluate PMT and compare it with 10 popular state-of-the-art methods on challenging benchmarks. Experimental results show that PMT consistently outperforms these existing trackers.
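To give a concrete feel for the part-matching idea, below is a minimal, hedged sketch in Python. It is not the paper's locality-constrained low-rank sparse formulation: instead of jointly optimizing partial permutation matrices over multiple frames, it matches candidate parts in a single frame to template parts by combining an appearance cost with a spatial-locality penalty and solving a one-to-one assignment. The function name, the weight lam, and the toy data are all assumptions made for illustration.

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_parts(template_feats, template_pos, cand_feats, cand_pos, lam=0.5):
    """Simplified locality-constrained part matching (illustrative only).

    template_feats : (P, d) appearance descriptors of template parts
    template_pos   : (P, 2) normalized part centers in the template
    cand_feats     : (C, d) descriptors of candidate parts in the frame
    cand_pos       : (C, 2) normalized candidate part centers
    lam            : weight of the spatial-locality penalty (assumed)
    Returns assign, where assign[p] is the candidate matched to template part p.
    """
    # Appearance cost: squared Euclidean distance between descriptors.
    app = ((template_feats[:, None, :] - cand_feats[None, :, :]) ** 2).sum(-1)
    # Locality cost: penalize matches far from the expected part location.
    loc = ((template_pos[:, None, :] - cand_pos[None, :, :]) ** 2).sum(-1)
    cost = app + lam * loc
    # One-to-one assignment as a stand-in for a partial permutation matrix.
    rows, cols = linear_sum_assignment(cost)
    assign = np.full(len(template_feats), -1, dtype=int)
    assign[rows] = cols
    return assign

# Toy usage with random descriptors and positions (illustrative only).
rng = np.random.default_rng(0)
t_feat, c_feat = rng.normal(size=(4, 8)), rng.normal(size=(6, 8))
t_pos, c_pos = rng.uniform(size=(4, 2)), rng.uniform(size=(6, 2))
print(match_parts(t_feat, t_pos, c_feat, c_pos))

In the full PMT model, such per-frame assignments are replaced by partial permutation matrices estimated jointly over multiple frames, with low-rank and sparse regularization making the correspondences robust to occlusion and noise.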


People:
Tianzhu Zhang
Chris Jia
Narendra Ahuja
Yi Ma

Documents:
Paper (PDF)
BibTeX (BIB)

Acknowledgement:

This study is supported by the research grant for the Human Sixth Sense Program at the Advanced Digital Sciences Center from Singapore’s Agency for Science, Technology and Research (A*STAR).