A Deep Top-down Framework towards Generalisable Multi-View Pedestrian Detection

Rui Qiu, Ming Xu*, Yuchen Ling, Jeremy S. Smith, Yuyao Yan, Xinheng Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Multiple cameras have been frequently used to detect heavily occluded pedestrians. The state-of-the-art methods for deep multi-view pedestrian detection usually project the feature maps extracted from multiple views onto the ground plane through homographies for information fusion. However, this bottom-up approach can easily overfit to the camera locations and orientations in a training dataset, which leads to weak generalisation performance and compromises its real-world applications. To address this problem, a deep top-down framework, TMVD, is proposed. For each cell of the discretised ground plane, the feature maps within rectangular boxes of average pedestrian size in the multiple views are weighted and embedded into a top view. These embedded features are then used to infer the locations of pedestrians with a convolutional neural network. The proposed method significantly improves generalisation performance compared with the benchmark methods for deep multi-view pedestrian detection, while also significantly outperforming the other top-down methods.
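The core top-down idea in the abstract — anchoring a fixed-size pedestrian box at every cell of a discretised ground plane, projecting it into each camera view via a homography, and pooling the image features into a top-view map — can be illustrated with a minimal sketch. This is not the authors' implementation: the homography convention, the mean-pooling, and all shapes and names (`topdown_embed`, `cell_size`, `box_wh`) are illustrative assumptions.

```python
import numpy as np

def project_points(H, pts):
    """Apply a 3x3 homography to Nx2 ground-plane points -> Nx2 image points."""
    ones = np.ones((pts.shape[0], 1))
    homo = np.hstack([pts, ones]) @ H.T
    return homo[:, :2] / homo[:, 2:3]

def topdown_embed(feat, H, grid_shape, cell_size, box_wh):
    """Pool image features inside a fixed-size box anchored at each
    ground-plane cell, producing one top-view feature map per camera.

    feat       : (C, H_img, W_img) feature map from one view
    H          : 3x3 homography, ground plane (metres) -> image (pixels); assumed given
    grid_shape : (rows, cols) of the discretised ground plane
    cell_size  : metres per ground-plane cell
    box_wh     : (w, h) of the average-pedestrian box, in image pixels (an assumption;
                 the paper sizes boxes from average pedestrian dimensions)
    """
    C, Hi, Wi = feat.shape
    rows, cols = grid_shape
    top = np.zeros((C, rows, cols))
    # foot points of every cell centre, in ground-plane metres
    centres = np.array([[(c + 0.5) * cell_size, (r + 0.5) * cell_size]
                        for r in range(rows) for c in range(cols)])
    feet = project_points(H, centres)
    bw, bh = box_wh
    for idx, (x, y) in enumerate(feet):
        r, c = divmod(idx, cols)
        # the box stands on its foot point; clip it to the image bounds
        x0, x1 = int(max(0, x - bw / 2)), int(min(Wi, x + bw / 2))
        y0, y1 = int(max(0, y - bh)), int(min(Hi, y))
        if x0 < x1 and y0 < y1:
            top[:, r, c] = feat[:, y0:y1, x0:x1].mean(axis=(1, 2))
    return top
```

In a multi-camera setting, one such top-view map would be computed per view and the per-view maps weighted and fused before the CNN infers pedestrian occupancy; the weighting scheme itself is part of the paper's contribution and is not reproduced here.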
Original language: English
Article number: 128458
Number of pages: 11
Journal: Neurocomputing
Volume: 607
DOIs
Publication status: Published - Aug 2024
