An AI-mediated VR Sound Installation

Giovanni Santini, Zhonghao Chen

    Research output: Chapter in Book or Report/Conference proceeding › Chapter › peer-review

    Abstract

    Artificial Intelligence (AI) and Extended Realities (XR) have the potential to create new worlds, new narrative spaces. From our standpoint, to be believable, those worlds need not only to convince our senses, but also to match our experience of reality: a complex, non-linear net of relationships among objects, beings, actions and concepts. In other words, we wanted to create a narrative space that is rich and complex in terms of content and behaviour, and at times non-linear and unpredictable.

    In Oracle, we explored some boundaries of this idea. Oracle is an installation based on a variable chain of spatialized sound processing algorithms directly controlled by a multilayer perceptron neural network (the “oracle”). The sound input of the system is the voice of any participant who wants to play the role of a “visitor” (the one who poses a question to the oracle). The output is a processed version of that input: a slight modification, a completely different sound texture, or anything in between these extremes. By wearing a VR headset, the visitor can interact with the oracle and influence the output of the neural network, and therefore the sound output. However, no user can exactly predict the outcome of such interaction. The oracle’s answer is thus a net of sound relationships the user needs to decipher, not to get the answer, but to better understand the question.
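
    As a purely illustrative aside, the sketch below (not taken from the installation itself) shows the kind of mapping the abstract describes: a small multilayer perceptron takes a hypothetical vector of visitor-interaction features and produces control values for an assumed chain of spatialized sound-processing parameters applied to the visitor's voice. The feature set, layer sizes and effect parameters are all assumptions introduced for illustration; the abstract only states that an MLP directly controls a variable chain of spatialized processing.

# Minimal sketch (assumptions noted in comments), not the authors' implementation:
# an MLP maps visitor-interaction features to parameters of a sound-processing chain.

import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights, biases):
    """Forward pass of a small multilayer perceptron with a tanh hidden layer
    and a sigmoid output, so every control value lands in [0, 1]."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ W + b)
    out = h @ weights[-1] + biases[-1]
    return 1.0 / (1.0 + np.exp(-out))

# Hypothetical interaction features: head position (x, y, z), gaze azimuth and
# elevation, and a rough loudness estimate of the visitor's spoken question.
features = np.array([0.2, -0.1, 0.5, 0.3, -0.4, 0.7])

# Randomly initialised 6 -> 16 -> 8 network; in an actual installation the
# weights would be trained or hand-tuned rather than random.
weights = [rng.normal(size=(6, 16)), rng.normal(size=(16, 8))]
biases = [np.zeros(16), np.zeros(8)]

controls = mlp(features, weights, biases)

# Map the eight outputs onto an assumed chain of spatialized effects.
params = {
    "pitch_shift_semitones": (controls[0] - 0.5) * 24.0,   # -12 .. +12 semitones
    "delay_seconds":          controls[1] * 1.5,
    "reverb_mix":             controls[2],
    "granular_density":       controls[3],
    "source_azimuth_deg":     controls[4] * 360.0,
    "source_elevation_deg":   (controls[5] - 0.5) * 180.0,
    "distance_m":             0.5 + controls[6] * 9.5,
    "dry_wet":                controls[7],
}

for name, value in params.items():
    print(f"{name}: {value:.2f}")

    Because the network's output depends on the full interaction state in a non-linear way, neither the visitor nor the authors can exactly anticipate the resulting sound, which is the unpredictability the abstract describes.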
    Original language: English
    Title of host publication: Creativity in the Age of Digital Reproduction. xArch 2023. Lecture Notes in Civil Engineering
    Publisher: Springer Singapore
    Volume: 343
    ISBN (Electronic): 978-981-97-0621-1
    ISBN (Print): 978-981-97-0620-4
    Publication status: Published - 24 Feb 2024
