VELIE: A vehicle-based efficient low-light image enhancement method for intelligent vehicles

Linwei Ye, Dong Wang, Dongyi Yang, Zhiyuan Ma*, Quan Zhang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In Advanced Driving Assistance Systems (ADAS), Automated Driving Systems (ADS), and Driver Assistance Systems (DAS), RGB camera sensors are extensively used for object detection, semantic segmentation, and object tracking. Despite their popularity owing to low cost, RGB cameras lack robustness in complex environments and underperform markedly in low-light conditions, which is a significant concern. To address these challenges, multi-sensor fusion systems and specialized low-light cameras have been proposed, but their high costs make them unsuitable for widespread deployment. Improvements in post-processing algorithms, by contrast, offer a more economical and effective solution. However, current research on low-light image enhancement still shows substantial gaps in detail enhancement on nighttime driving datasets and suffers from high deployment costs, failing to achieve real-time inference and edge deployment. This paper therefore combines the Swin Vision Transformer with a gamma-transformation-integrated U-Net for the decoupled enhancement of low-light inputs, proposing a deep learning enhancement network named Vehicle-based Efficient Low-light Image Enhancement (VELIE). VELIE achieves state-of-the-art performance on various driving datasets with a processing time of only 0.19 s, significantly enhancing high-dimensional environmental perception tasks in low-light conditions.
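To make the decoupled design described in the abstract concrete, the following is a minimal illustrative sketch, not the authors' code: a learnable gamma-correction branch performs a global brightening of the low-light frame, and a small U-Net-style network then restores local detail. The real VELIE pipeline uses Swin Transformer blocks; this sketch replaces them with plain convolutions, and all class names (GammaBranch, TinyUNet, EnhancerSketch) are hypothetical.

# Minimal sketch of a gamma-plus-U-Net enhancement pipeline (assumed simplification;
# the paper's Swin Transformer blocks are replaced by plain convolutions here).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GammaBranch(nn.Module):
    """Applies a learnable global gamma to brighten the low-light input."""
    def __init__(self, init_gamma: float = 0.45):
        super().__init__()
        self.log_gamma = nn.Parameter(torch.tensor(init_gamma).log())

    def forward(self, x):  # x in [0, 1], shape (B, 3, H, W)
        gamma = self.log_gamma.exp()
        return x.clamp(min=1e-4) ** gamma

class TinyUNet(nn.Module):
    """Two-level U-Net that refines the gamma-corrected image."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.Conv2d(ch * 2, ch, 3, padding=1), nn.ReLU())
        self.out = nn.Conv2d(ch * 2, 3, 3, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)                              # full-resolution features
        e2 = self.enc2(e1)                             # downsampled features
        d1 = F.interpolate(self.dec1(e2), size=e1.shape[-2:],
                           mode="bilinear", align_corners=False)
        return torch.sigmoid(self.out(torch.cat([e1, d1], dim=1)))

class EnhancerSketch(nn.Module):
    """Decoupled enhancement: brighten globally, then restore local detail."""
    def __init__(self):
        super().__init__()
        self.gamma = GammaBranch()
        self.refine = TinyUNet()

    def forward(self, x):
        return self.refine(self.gamma(x))

if __name__ == "__main__":
    low_light = torch.rand(1, 3, 256, 256)   # dummy night-time frame
    enhanced = EnhancerSketch()(low_light)
    print(enhanced.shape)                    # torch.Size([1, 3, 256, 256])

The split into a global gamma step and a local refinement step mirrors the "decoupled enhancement" the abstract refers to: brightness is corrected cheaply up front so the learned network only has to recover detail, which is what makes real-time, edge-deployable inference plausible.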
Original language: English
Article number: 1345
Journal: Sensors (Switzerland)
Volume: 24
Issue number: 4
Publication status: Published - 2024
