Referring flexible image restoration

Runwei Guan, Rongsheng Hu, Zhuhao Zhou, Tianlang Xue, Ka Lok Man, Jeremy Smith, Eng Gee Lim, Weiping Ding, Yutao Yue*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In reality, images often exhibit multiple degradations simultaneously, such as rain and fog at night (triple degradation). However, in many cases people may not want to remove all of them: for instance, a blurred lens may reveal a beautiful snowy landscape (double degradation), where only deblurring is desired. These scenarios motivate a new challenge in image restoration, in which a model must perceive the degradation types present in a multiply degraded image and remove only those specified by a human command. We term this task Referring Flexible Image Restoration (RFIR). To address it, we first construct a large-scale synthetic dataset, also called RFIR, comprising 153,423 samples, each consisting of a degraded image, a text prompt specifying the degradation to remove, and the corresponding restored image. RFIR covers five basic degradation types (blur, rain, haze, low light and snow) and includes six main sub-categories for varying degrees of degradation removal. To tackle the challenge, we propose a novel transformer-based multi-task model named TransRFIR, which simultaneously perceives the degradation types in the degraded image and removes the specific degradation indicated by the text prompt. TransRFIR is built on two devised modules: Multi-Head Agent Self-Attention (MHASA) for multi-degradation context modeling and Multi-Head Agent Cross-Attention (MHACA) for efficient alignment between the prompt and the referred degradations. Both modules introduce agent tokens and achieve linear complexity, incurring lower computational cost than vanilla self-attention and cross-attention while obtaining competitive performance. TransRFIR achieves state-of-the-art performance compared with its counterparts and proves to be an effective base architecture for image restoration. We release our project at https://github.com/GuanRunwei/FIR-CP.
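The agent-attention idea referenced in the abstract (queries attend to a small set of agent tokens, which in turn aggregate the keys and values) can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration of the general agent-attention pattern, not the authors' MHASA/MHACA implementation; all names (`agent_attention`, `softmax`, the shapes chosen) are hypothetical. With n tokens, m agents and m ≪ n, both matrix products cost O(n·m·d), i.e. linear in sequence length, versus O(n²·d) for vanilla self-attention.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def agent_attention(q, k, v, agent):
    """Two-stage attention through m agent tokens (m << n).

    Stage 1: agents aggregate the key/value sequence -> (m, d).
    Stage 2: queries read from the aggregated agents -> (n, d).
    Both stages are linear in the sequence length n.
    """
    d = k.shape[-1]
    agent_kv = softmax(agent @ k.T / np.sqrt(d)) @ v      # (m, d)
    return softmax(q @ agent.T / np.sqrt(d)) @ agent_kv   # (n, d)

# Toy example: 64 tokens, 8 agents, 16-dim features.
n, m, d = 64, 8, 16
rng = np.random.default_rng(0)
q = rng.standard_normal((n, d))
k = rng.standard_normal((n, d))
v = rng.standard_normal((n, d))
agent = rng.standard_normal((m, d))

out = agent_attention(q, k, v, agent)
print(out.shape)  # (64, 16)
```

For a cross-attention variant in the same spirit, the queries would come from the text prompt features while keys and values come from the image tokens, with the agent tokens still bounding the cost of the alignment.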

Original language: English
Article number: 126857
Journal: Expert Systems with Applications
Volume: 274
DOIs
Publication status: Published - 15 May 2025

Keywords

  • Cross attention
  • Multi-modal learning
  • Prompt learning
  • Referring flexible image restoration
