Touch, Sound, and Space: Exploring Immersive Music Interaction through AI-Generated Environments

Wanfang Xu, Jifan Yang, Fengwen Zhang, Yu Lu, Lijie Yao, Le Liu, Lingyun Yu*

*Corresponding author for this work

Research output: Chapter in Book or Report/Conference proceeding › Conference Proceeding › peer-review

Abstract

We introduce an interactive music system powered by AI-generated content (AIGC) that enables users to engage with music through multimodal interactions involving touch, sound, and spatial immersion.
Motivated by the desire to enhance engagement and emotional connection with music, our system enables users to co-create and interact with musical content. Users upload a song and a descriptive text prompt, from which the system generates 3D visuals. During playback, users can embed their own audio inputs and trigger responsive visual effects such as color-driven point clouds using tangible controls.
To explore how spatial scale and embodiment shape user experience, we implement the system at three levels of increasing spatial scale and embodiment: (1) a handheld AR music box, (2) a table-sized stage box, and (3) a fully immersive VR environment. Through a user study, we investigate how different levels of immersion and interaction influence user engagement, emotional response, and sense of presence. Our findings demonstrate the potential of combining AIGC with embodied interaction to enrich creative expression and enhance immersive musical experiences.
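To give a concrete sense of the kind of audio-to-visual mapping the abstract describes, the sketch below shows one plausible way to color a point cloud from a single audio frame. This is a minimal illustrative Python example, not the authors' implementation: the paper does not specify its feature extraction or rendering pipeline, so the choices here (RMS energy driving brightness, spectral centroid driving hue, and the `audio_frame_to_colors` helper itself) are assumptions for illustration only.

```python
import numpy as np


def audio_frame_to_colors(frame: np.ndarray, n_points: int,
                          sample_rate: int = 44100) -> np.ndarray:
    """Map one audio frame to per-point RGB colors for a point cloud.

    Illustrative sketch only; the feature-to-color choices below
    (RMS energy -> brightness, spectral centroid -> hue) are assumptions,
    not the mapping used in the paper.
    """
    # Loudness of the frame (root mean square), scaled roughly to [0, 1].
    rms = float(np.sqrt(np.mean(frame ** 2)))
    brightness = np.clip(rms * 5.0, 0.0, 1.0)

    # Spectral centroid as a coarse timbral "brightness" cue, scaled to [0, 1].
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(frame.size, d=1.0 / sample_rate)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9))
    hue = np.clip(centroid / (sample_rate / 2), 0.0, 1.0)

    # Simple hue ramp (blue for low centroids, red for high), modulated by
    # loudness, with small per-point jitter so the cloud is not uniform.
    base = np.array([hue, 0.2, 1.0 - hue]) * brightness
    jitter = np.random.uniform(-0.05, 0.05, size=(n_points, 3))
    return np.clip(base + jitter, 0.0, 1.0)


if __name__ == "__main__":
    # Example: color 10,000 points from 1024 samples of a synthetic 440 Hz tone.
    t = np.arange(1024) / 44100
    frame = 0.3 * np.sin(2 * np.pi * 440 * t)
    colors = audio_frame_to_colors(frame, n_points=10_000)
    print(colors.shape)  # (10000, 3)
```

In a real system such a mapping would run per audio frame during playback and feed the resulting colors to the renderer (e.g., a Unity or VR point-cloud shader); the per-frame structure shown here is only meant to illustrate the "color-driven point cloud" idea described in the abstract.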
Original language: English
Title of host publication: 2025 International Symposium on Visual Information Communication and Interaction (VINCI)
Pages: 1-8
Number of pages: 8
Publication status: Published - Dec 2025

Keywords

  • Immersive Creation
  • Music Generation
  • Visual Design
