AirWhisper: enhancing virtual reality experience via visual-airflow multimodal feedback

Fangtao Zhao, Ziming Li, Yiming Luo, Yue Li, Hai Ning Liang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Virtual reality (VR) systems increasingly incorporate multimodal output to enhance the sense of immersion and realism. In this work, we developed AirWhisper, a modular wearable device that provides dynamic airflow feedback to enhance VR experiences. AirWhisper simulates wind from multiple directions around the user's head via four micro fans and 3D-printed attachments. We conducted a just-noticeable-difference (JND) study to inform the design of the control system and to explore users' perception of airflow characteristics in different directions. Through multimodal comparison experiments, we found that combined visual-airflow output can improve the user's VR experience in several respects. Finally, we designed scenarios with different airflow change patterns and different levels of interaction to test AirWhisper's performance in various contexts and to explore differences in users' perception of airflow under different virtual environment conditions. Our work highlights the value of human-centered, adaptive multimodal feedback models that adjust in real time to the user's perceptual characteristics and environmental features.
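To make the directional-airflow idea concrete, the following is a minimal illustrative sketch, not taken from the paper: it assumes four fans at front/right/back/left of the head and a simple cosine weighting from a head-relative wind direction to per-fan intensity; all names and the weighting scheme are assumptions for illustration only.

    import math

    # Hypothetical fan layout (head-relative angles, radians):
    # front = 0, right = pi/2, back = pi, left = 3*pi/2.
    FAN_ANGLES = {
        "front": 0.0,
        "right": math.pi / 2,
        "back": math.pi,
        "left": 3 * math.pi / 2,
    }

    def fan_intensities(wind_angle: float, strength: float) -> dict[str, float]:
        """Cosine-weight each fan by its alignment with the incoming wind.

        wind_angle: direction the wind comes from, head-relative, in radians.
        strength:   overall wind magnitude in [0, 1].
        """
        out = {}
        for name, angle in FAN_ANGLES.items():
            # Only fans facing into the wind contribute; the rest stay off.
            alignment = max(0.0, math.cos(wind_angle - angle))
            out[name] = strength * alignment
        return out

    # Example: wind from the front-right at full strength drives the
    # front and right fans equally (~0.71 each), leaving back and left off.
    print(fan_intensities(math.pi / 4, 1.0))

In an actual device, such raw intensities would presumably also pass through a perceptual mapping calibrated by the JND results described in the abstract, since equal fan power need not produce equal perceived wind in every direction.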

Original language: English
Journal: Journal on Multimodal User Interfaces
Publication status: Accepted/In press - 2024

Keywords

  • Airflow
  • Human-centered design
  • Multimodal feedback
  • Virtual reality
