Robust Visual Tracking with Hierarchical Deep Features Weighted Fusion

Dianwei Wang, Chunxiang Xu*, Daxiang Li, Ying Liu, Zhijie Xu, Jing Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

To address the low robustness of trackers under significant appearance changes against complex backgrounds, a novel moving-target tracking method based on weighted fusion of hierarchical deep features and correlation filters is proposed. First, multi-layer features are extracted with a deep model pre-trained on large-scale object recognition datasets; the linearly separable features of the Relu3-1, Relu4-1 and Relu5-4 layers of VGG-Net-19 are found to be especially suitable for target tracking. Correlation filters are then learned over these hierarchical convolutional features to generate their correlation response maps. Finally, a novel weight-adjustment approach is presented to fuse the response maps; the position of the maximum value of the final response map gives the location of the target. Extensive experiments on object tracking benchmark datasets demonstrate higher robustness and localization precision than several state-of-the-art trackers under a variety of conditions.
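The abstract's final localization step, fusing per-layer correlation response maps with adjustable weights and taking the argmax, can be sketched in a few lines of numpy. The paper's actual weight-adjustment scheme is not given here, so the weights below are purely hypothetical, and the synthetic Gaussian maps merely stand in for the outputs of learned correlation filters on the Relu3-1, Relu4-1 and Relu5-4 features.

```python
import numpy as np

def gaussian_response(shape, center, sigma=2.0):
    """Synthetic response map peaked at `center`; a stand-in for the
    output of a correlation filter learned on one VGG-19 layer."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - center[0]) ** 2 + (xs - center[1]) ** 2)
                  / (2 * sigma ** 2))

def fuse_responses(responses, weights):
    """Weighted fusion of per-layer response maps; the maximum of the
    fused map is taken as the estimated target location."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize so the fusion is a weighted average
    fused = sum(w * r for w, r in zip(weights, responses))
    loc = np.unravel_index(np.argmax(fused), fused.shape)
    return fused, loc

# Three maps standing in for Relu3-1, Relu4-1 and Relu5-4 responses,
# each peaking near (but not exactly at) the true target position.
shape = (40, 40)
maps = [gaussian_response(shape, (20, 21)),
        gaussian_response(shape, (20, 20)),
        gaussian_response(shape, (19, 20))]

# Hypothetical weights favouring the deeper, more semantic layers.
fused, loc = fuse_responses(maps, [0.25, 0.5, 1.0])
print(loc)  # estimated target position in the fused map
```

Because the fused map is a weighted average of the per-layer responses, disagreements between layers (e.g. a semantically confident but spatially coarse deep layer versus a precise shallow layer) are resolved by the weighting rather than by any single layer alone.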

Original language: English
Pages (from-to): 770-776
Number of pages: 7
Journal: Journal of Beijing Institute of Technology (English Edition)
Volume: 28
Issue number: 4
DOIs
Publication status: Published - 1 Dec 2019
Externally published: Yes

Keywords

  • Convolution neural network
  • Correlation filter
  • Feature fusion
  • Visual tracking

