Mozualization: Crafting Music and Visual Representation with Multimodal AI

Wanfang Xu, Lixiang Zhao, Haiwen Song, Xinheng Song, Zhaolin Lu, Yu Liu, Min Chen, Eng Gee Lim, Lingyun Yu*

*Corresponding author for this work

Research output: Chapter in Book or Report/Conference proceeding › Conference Proceeding › peer-review

Abstract

In this work, we introduce Mozualization, a music generation and editing tool that creates multi-style embedded music by integrating diverse inputs, such as keywords, images, and sound clips (e.g., segments from various pieces of music or even a playful cat’s meow).
Our work is inspired by the ways people express their emotions: writing mood-descriptive poems or articles, creating drawings with warm or cool tones, or listening to sad or uplifting music. Building on this concept, we developed a tool that transforms these emotional expressions into a cohesive and expressive song, allowing users to seamlessly incorporate their unique preferences and inspirations.
To evaluate the tool and, more importantly, gather insights for its improvement, we conducted a user study involving nine music enthusiasts. The study assessed user experience, engagement, and the impact of interacting with and listening to the generated music.
Original language: English
Title of host publication: CHI 2025 - Extended Abstracts of the 2025 CHI Conference on Human Factors in Computing Systems
Publication status: Accepted/In press - 20 Feb 2025
