Text-to-Image Personalization based on Diffusion Models

Activity: Supervision (Completed SURF Project)

Description

This study proposes a novel approach to personalized text-to-image generation that leverages both textual descriptions and a small set of user-provided images (3-5 examples). Specifically, we explore Textual Inversion, a technique that distills the visual concept shown in the example images into a pseudo-word. This pseudo-word can then be inserted into text prompts to generate new images that embody the desired concept. Our approach enhances the ability of generative models to produce images that accurately reflect both the textual prompt and the unique features present in the user-provided examples, enabling a more personalized and context-aware image generation process.
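The core mechanism can be illustrated with a toy PyTorch sketch (hypothetical names, not the project's actual training code): a new pseudo-word such as `<my-concept>` is added to the vocabulary with its own embedding row, and only that row is optimized while the rest of the text encoder stays frozen. In the real method the loss is the diffusion model's denoising objective; here a fixed target vector stands in for it to keep the example self-contained.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny stand-in vocabulary; "<my-concept>" is the new pseudo-word.
vocab = {"a": 0, "photo": 1, "of": 2, "<my-concept>": 3}
embed_dim = 8
embedding = nn.Embedding(len(vocab), embed_dim)

# Freeze every embedding row except the pseudo-word's by masking gradients.
placeholder_id = vocab["<my-concept>"]
grad_mask = torch.zeros_like(embedding.weight)
grad_mask[placeholder_id] = 1.0
embedding.weight.register_hook(lambda g: g * grad_mask)

# Stand-in objective: pull the pseudo-word's embedding toward a fixed
# "concept" vector (the real method would backpropagate the diffusion
# denoising loss through the frozen text encoder instead).
target = torch.randn(embed_dim)
frozen_before = embedding.weight[:placeholder_id].clone()

opt = torch.optim.Adam(embedding.parameters(), lr=0.1)
prompt_ids = torch.tensor([vocab["a"], vocab["photo"], vocab["of"], placeholder_id])
for _ in range(200):
    opt.zero_grad()
    tokens = embedding(prompt_ids)          # embed the prompt
    loss = ((tokens[-1] - target) ** 2).mean()
    loss.backward()
    opt.step()

# Only the pseudo-word's row moved; all other embeddings are untouched.
assert torch.allclose(embedding.weight[:placeholder_id], frozen_before)
print(f"final loss: {loss.item():.4f}")
```

After training, the learned embedding is substituted wherever the pseudo-word appears in a prompt, so arbitrary sentences like "a photo of `<my-concept>` on a beach" can condition the frozen generative model on the personalized concept.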
Period: Jul 2024 - Sept 2024