TY - JOUR
T1 - FFEvent
T2 - Fast Fourier-based knowledge transfer for event cameras
AU - Lin, Yuhui
AU - Zhang, Jiahao
AU - Yu, Siyue
AU - Xiao, Jimin
AU - Lu, Jiaxuan
N1 - Publisher Copyright:
© 2025 Elsevier Ltd. All rights are reserved, including those for text and data mining, AI training, and similar technologies.
PY - 2026/3/1
Y1 - 2026/3/1
N2 - Event cameras, as an emerging imaging technology, offer distinct advantages over traditional RGB cameras, including reduced energy consumption and higher frame rates. However, the limited quantity of available event data presents a significant challenge, hindering their broader development. To address this challenge, we propose a modality-adaptive framework, FFEvent, for RGB-to-event knowledge transfer. By introducing domain adaptation, FFEvent enables event data to effectively leverage pre-trained RGB models and achieve competitive performance with minimal parameter tuning. Specifically, we introduce a bidirectional reverse state space model (BiR-SSM) within the FFEvent architecture. Unlike traditional bidirectional scanning, BiR-SSM introduces a shared-weight design that simultaneously models forward and reverse dependencies while significantly reducing computational overhead. Additionally, we design a fast Fourier sparse convolution block (FFSConv), which combines frequency-domain global modeling with local spatial sparse convolution to efficiently capture the inherent sparsity of event data. Extensive experiments on event classification, object detection, and video deblurring demonstrate that FFEvent achieves state-of-the-art results across eight benchmark datasets.
AB - Event cameras, as an emerging imaging technology, offer distinct advantages over traditional RGB cameras, including reduced energy consumption and higher frame rates. However, the limited quantity of available event data presents a significant challenge, hindering their broader development. To address this challenge, we propose a modality-adaptive framework, FFEvent, for RGB-to-event knowledge transfer. By introducing domain adaptation, FFEvent enables event data to effectively leverage pre-trained RGB models and achieve competitive performance with minimal parameter tuning. Specifically, we introduce a bidirectional reverse state space model (BiR-SSM) within the FFEvent architecture. Unlike traditional bidirectional scanning, BiR-SSM introduces a shared-weight design that simultaneously models forward and reverse dependencies while significantly reducing computational overhead. Additionally, we design a fast Fourier sparse convolution block (FFSConv), which combines frequency-domain global modeling with local spatial sparse convolution to efficiently capture the inherent sparsity of event data. Extensive experiments on event classification, object detection, and video deblurring demonstrate that FFEvent achieves state-of-the-art results across eight benchmark datasets.
KW - Event cameras
KW - Knowledge transfer
KW - State space model
UR - https://www.scopus.com/pages/publications/105020880153
U2 - 10.1016/j.eswa.2025.130055
DO - 10.1016/j.eswa.2025.130055
M3 - Article
AN - SCOPUS:105020880153
SN - 0957-4174
VL - 299
JO - Expert Systems with Applications
JF - Expert Systems with Applications
ER -