Mozualization: Crafting Music and Visual Representation with Multimodal AI

Wanfang Xu, Lixiang Zhao, Haiwen Song, Xinheng Song, Zhaolin Lu, Yu Liu, Min Chen, Eng Gee Lim, Lingyun Yu*

*Corresponding author for this work

Research output: Chapter in Book or Report/Conference proceeding › Conference Proceeding › peer-review

Abstract

In this work, we introduce Mozualization, a music generation and editing tool that creates multi-style embedded music by integrating diverse inputs, such as keywords, images, and sound clips (e.g., segments from various pieces of music or even a playful cat’s meow). Our work is inspired by the ways people express their emotions—writing mood-descriptive poems or articles, creating drawings with warm or cool tones, or listening to sad or uplifting music. Building on this concept, we developed a tool that transforms these emotional expressions into a cohesive and expressive song, allowing users to seamlessly incorporate their unique preferences and inspirations. To evaluate the tool and, more importantly, gather insights for its improvement, we conducted a user study involving nine music enthusiasts. The study assessed user experience, engagement, and the impact of interacting with and listening to the generated music.

Original language: English
Title of host publication: CHI EA 2025 - Extended Abstracts of the 2025 CHI Conference on Human Factors in Computing Systems
Publisher: Association for Computing Machinery
ISBN (Electronic): 9798400713958
DOIs
Publication status: Published - 26 Apr 2025
Event: 2025 CHI Conference on Human Factors in Computing Systems, CHI EA 2025 - Yokohama, Japan
Duration: 26 Apr 2025 - 1 May 2025

Publication series

Name: Conference on Human Factors in Computing Systems - Proceedings

Conference

Conference: 2025 CHI Conference on Human Factors in Computing Systems, CHI EA 2025
Country/Territory: Japan
City: Yokohama
Period: 26/04/25 - 01/05/25

Keywords

  • Multimodal Input
  • Music Editing
  • Music Visualization
