Say what you are looking at: An attention-based interactive system for autistic children

Furong Deng, Yu Zhou, Sifan Song, Zijian Jiang, Lifu Chen, Jionglong Su*, Zhenglong Sun*, Jiaming Zhang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Gaze following is an effective way to understand intention in human–robot interaction: the robot follows a person's gaze to estimate which object is being observed. Most existing methods require the person and the object to appear in the same image, so the camera's limited field of view makes them impractical in real settings. To address this problem, we propose a gaze-following method that uses a geometric map for better estimation; with the help of the map, the method handles cross-frame estimation, where the observed object lies outside the current camera frame. Building on this method, we propose a novel gaze-based image captioning system, which to the best of our knowledge is studied here for the first time. Our experiments demonstrate that the system follows gaze and describes the observed objects accurately. We believe this system is well suited to rehabilitation training for autistic children, elder-care service robots, and other applications.
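To make the cross-frame idea concrete, the sketch below (not the authors' implementation) selects the attended object by casting the gaze as a 3-D ray in the map frame produced by SLAM and picking the mapped object closest to that ray. The map structure, object labels, angular threshold, and function names are illustrative assumptions.

```python
# Minimal sketch of map-based gaze following: because object positions live in
# the SLAM map rather than in the image, the attended object can be resolved
# even when it is outside the camera's current field of view.
import numpy as np

def attended_object(eye_pos, gaze_dir, object_map, max_angle_deg=10.0):
    """Return the label of the mapped object closest to the gaze ray, or None.

    eye_pos    -- (3,) eye position in the map frame (e.g., from SLAM).
    gaze_dir   -- (3,) gaze direction in the map frame (need not be unit length).
    object_map -- dict mapping object label -> (3,) centroid in the map frame.
    """
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    best_label, best_angle = None, np.deg2rad(max_angle_deg)
    for label, centroid in object_map.items():
        to_obj = centroid - eye_pos
        dist = np.linalg.norm(to_obj)
        if dist < 1e-6:
            continue  # object coincides with the eye position; skip it
        # Angle between the gaze ray and the eye-to-object direction.
        cos_a = np.clip(np.dot(gaze_dir, to_obj / dist), -1.0, 1.0)
        angle = np.arccos(cos_a)
        if angle < best_angle:
            best_label, best_angle = label, angle
    return best_label

# Hypothetical usage: neither object needs to be visible in the frame that
# captured the child's face, since the lookup runs against the map.
objects = {"toy car": np.array([1.2, 0.3, 0.8]),
           "picture book": np.array([-0.5, 1.1, 0.9])}
print(attended_object(np.zeros(3), np.array([0.9, 0.2, 0.6]), objects))
```

In the full system described by the abstract, the label returned here would then be passed to an image captioning module so the robot can say what the child is looking at; that captioning step is omitted from the sketch.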

Original language: English
Article number: 7426
Journal: Applied Sciences (Switzerland)
Volume: 11
Issue number: 16
DOIs
Publication status: Published - 2 Aug 2021

Keywords

  • Human–robot interaction
  • Image caption
  • Simultaneous localization and mapping
  • Visual attention
