Language-Led Visual Grounding and Future Possibilities

Zezhou Sui, Mian Zhou*, Zhikun Feng, Angelos Stefanidis, Nan Jiang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In recent years, the rapid development of computer vision technology, the spread of intelligent hardware, and the growing demand for human–machine interaction in intelligent products have made visual localization increasingly important: it helps machines and humans recognize and locate objects, thereby supporting human–machine interaction and intelligent manufacturing. At the same time, human–machine interaction itself continues to evolve, becoming more intelligent, humanized, and efficient. In this article, a new visual localization model is proposed, together with a language validation module that treats language information as the primary signal, increasing the model’s interactivity. In addition, we outline future possibilities for visual localization and present two examples that explore how visual localization and human–machine interaction technology can be applied and optimized in practical scenarios, providing reference and guidance for researchers and promoting the development and application of both technologies.
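The abstract does not detail the language validation module, so the following is only a minimal sketch of what a language-led verification step for visual grounding might look like: a sentence embedding acts as the query and scores candidate image regions. All names here (LanguageVerifier, text_proj, region_proj, the feature dimensions) are illustrative assumptions, not the architecture from the paper.

```python
# Hypothetical sketch: language-led scoring of candidate regions.
# Not the paper's module; dimensions and layer names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LanguageVerifier(nn.Module):
    """Scores candidate image regions against a sentence embedding,
    treating the language embedding as the primary (query) signal."""

    def __init__(self, text_dim: int = 512, region_dim: int = 256, joint_dim: int = 256):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, joint_dim)       # project sentence feature
        self.region_proj = nn.Linear(region_dim, joint_dim)   # project region features

    def forward(self, text_feat: torch.Tensor, region_feats: torch.Tensor) -> torch.Tensor:
        # text_feat: (B, text_dim); region_feats: (B, N, region_dim)
        q = F.normalize(self.text_proj(text_feat), dim=-1)        # (B, joint_dim)
        k = F.normalize(self.region_proj(region_feats), dim=-1)   # (B, N, joint_dim)
        # Cosine similarity of each candidate region to the language query.
        scores = torch.einsum("bd,bnd->bn", q, k)                 # (B, N)
        return scores.softmax(dim=-1)  # probability that each region matches the phrase


if __name__ == "__main__":
    verifier = LanguageVerifier()
    text = torch.randn(2, 512)         # e.g. sentence embeddings from a text encoder
    regions = torch.randn(2, 8, 256)   # e.g. 8 candidate region features per image
    probs = verifier(text, regions)
    print(probs.argmax(dim=-1))        # index of the best-matching region per image
```

In such a design the language branch drives the decision: region features are only compared against the text query, which is one way to read the abstract's claim that language information is used as the main information.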

Original language: English
Article number: 3142
Journal: Electronics (Switzerland)
Volume: 12
Issue number: 14
DOIs
Publication status: Published - Jul 2023

Keywords

  • human–computer interaction
  • intelligent systems
  • interaction design
  • user experience
  • visual grounding
