FFEvent: Fast Fourier-based knowledge transfer for event cameras

Yuhui Lin, Jiahao Zhang, Siyue Yu, Jimin Xiao, Jiaxuan Lu

Research output: Contribution to journal › Article › peer-review

Abstract

Event cameras, as an emerging imaging technology, offer distinct advantages over traditional RGB cameras, including reduced energy consumption and higher frame rates. However, the limited quantity of available event data presents a significant challenge, hindering their broader adoption. To address this challenge, we propose a modality-adaptive framework, FFEvent, for RGB-to-event knowledge transfer. By introducing domain adaptation, FFEvent enables event data to effectively leverage pre-trained RGB models and achieve competitive performance with minimal parameter tuning. Specifically, we introduce a bidirectional reverse state space model (BiR-SSM) within the FFEvent architecture. Unlike traditional bidirectional scanning, BiR-SSM uses a shared-weight design that models forward and reverse dependencies simultaneously while significantly reducing computational overhead. Additionally, we design a fast Fourier sparse convolution block (FFSConv), which combines frequency-domain global modeling with local spatial sparse convolution to efficiently capture the inherent sparsity of event data. Extensive experiments on event classification, object detection, and video deblurring demonstrate that FFEvent achieves state-of-the-art results across eight benchmark datasets.
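The two mechanisms the abstract names can be illustrated with a minimal NumPy sketch. All function names, shapes, and weights below are illustrative assumptions for exposition, not the paper's implementation: a toy shared-weight forward/reverse linear recurrence stands in for BiR-SSM, and an FFT pointwise filter plus a mask-restricted 3×3 convolution stands in for the global/local split in FFSConv.

```python
import numpy as np

def shared_weight_bidirectional_scan(x, a, b):
    """Toy linear state-space recurrence h_t = a*h_{t-1} + b*x_t, run
    forward and on the reversed sequence with the SAME (a, b), then
    summed -- the shared-weight idea behind BiR-SSM (illustrative only)."""
    def scan(seq):
        h, out = 0.0, []
        for v in seq:
            h = a * h + b * v
            out.append(h)
        return np.array(out)
    return scan(x) + scan(x[::-1])[::-1]

def fft_global_filter(x, w):
    """Global mixing in the frequency domain: FFT, pointwise multiply by
    a (hypothetical) learnable complex filter w, inverse FFT.
    x: (H, W) feature map; w: complex filter of shape (H, W//2 + 1)."""
    X = np.fft.rfft2(x)
    return np.fft.irfft2(X * w, s=x.shape)

def sparse_local_conv(x, kernel, active_mask):
    """3x3 convolution evaluated only at 'active' pixels, mimicking
    sparse spatial convolution over the sparse event support."""
    pad = np.pad(x, 1)
    out = np.zeros_like(x)
    for i, j in zip(*np.nonzero(active_mask)):
        out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * kernel)
    return out

# Toy usage with random data and identity filters
rng = np.random.default_rng(0)
seq = rng.standard_normal(16)
feat = rng.standard_normal((8, 8))
w = np.ones((8, 5), dtype=complex)             # identity frequency filter
mask = rng.random((8, 8)) < 0.2                # ~20% active pixels
kernel = np.zeros((3, 3)); kernel[1, 1] = 1.0  # identity spatial kernel

bi = shared_weight_bidirectional_scan(seq, 0.0, 1.0)
g = fft_global_filter(feat, w)
s = sparse_local_conv(feat, kernel, mask)
```

With the identity filters used above, each branch reduces to a sanity check: the FFT path reproduces the input, and the sparse path reproduces it only on the active mask, which is the point of restricting computation to event pixels.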

Original language: English
Journal: Expert Systems with Applications
Volume: 299
DOIs
Publication status: Published - 1 Mar 2026

Keywords

  • Event cameras
  • Knowledge transfer
  • State space model
