Improved Camshift Algorithm in AGV Vision-based Tracking with Edge Computing

Tongpo Zhang, Xiaokai Nie, Xu Zhu, Enggee Lim, Fei Ma, Limin Yu*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

Automated guided vehicles (AGVs) are Internet of Things robots that navigate automatically under the guidance of a central control platform with distributed intelligence. Various methodologies have been proposed for AGV visual tracking, but vision-based tracking in AGVs typically suffers from time delays caused by the complexity of image processing algorithms. To balance the trade-off among algorithm complexity, hardware cost and performance, precision and robustness are usually compromised in practical deployments. This paper proposes a prototype design of a visual tracking system in which edge computing migrates computation-intensive image processing to a local computer. The Raspberry Pi-based AGV captures real-time images through its camera, sends them to the computer, and receives the processing results over a WiFi link. An improved Camshift algorithm is developed and implemented; with it, the AGV makes a convergent prediction over the pixels in the target area after the first detection of the object, so the relative coordinates of the target can be located more accurately and in less time. Experiments show that the system architecture and the new algorithm reduce hardware cost and time delay while improving robustness and tracking accuracy.
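As a rough illustration of the offloading scheme described in the abstract (not the authors' implementation), the sketch below shows an edge-side loop that receives JPEG frames from the AGV over a socket, runs OpenCV's built-in CamShift on a tracked window, and returns the target's window coordinates. The host/port, the length-prefixed frame protocol, and the initial region of interest are assumptions made for illustration only.

```python
# Minimal edge-side tracking sketch (illustrative only; the socket protocol,
# endpoint, and initial ROI are assumptions, not the paper's design).
import socket
import struct

import cv2
import numpy as np

HOST, PORT = "0.0.0.0", 5000             # assumed WiFi link endpoint
track_window = (200, 150, 80, 80)         # assumed initial target ROI (x, y, w, h)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)


def recv_exact(conn, n):
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("AGV disconnected")
        buf += chunk
    return buf


with socket.socket() as srv:
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    roi_hist = None
    while True:
        # Each frame is assumed to arrive as a 4-byte length prefix + JPEG bytes.
        size = struct.unpack(">I", recv_exact(conn, 4))[0]
        frame = cv2.imdecode(np.frombuffer(recv_exact(conn, size), np.uint8),
                             cv2.IMREAD_COLOR)
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        if roi_hist is None:
            # Build the hue histogram of the initial target region once,
            # after the first detection of the object.
            x, y, w, h = track_window
            roi = hsv[y:y + h, x:x + w]
            roi_hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
            cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
        # Back-project the histogram and let CamShift converge on the target.
        back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        _, track_window = cv2.CamShift(back_proj, track_window, term_crit)
        # Send the target's window coordinates back to the AGV.
        conn.sendall(struct.pack(">4i", *track_window))
```

The AGV side would only need to encode camera frames, push them over the same socket, and act on the returned coordinates, which is what keeps the Raspberry Pi's computational load low in this architecture.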

Original language: English
Pages (from-to): 2709-2723
Number of pages: 15
Journal: Journal of Supercomputing
Volume: 78
Issue number: 2
DOIs
Publication status: Published - Feb 2022

Keywords

  • AGV
  • Camshift algorithm
  • Edge computing
  • Visual tracking
