CTLane: An end-to-end lane detector by a CNN transformer and fusion decoder for edge computing

Mian Zhou, Guoqiang Zhu, Zhikun Feng, Haoyi Lian, Siqi Huang

Research output: Contribution to journal › Article › peer-review

Abstract

In advanced driver assistance systems and autonomous vehicles, lane detection plays a crucial role in ensuring the safety and stability of the vehicle during driving. Deep learning-based lane detection methods can provide accurate pixel-level predictions, but they often struggle to interpret lanes as a whole in the presence of interference. To address this issue, we have developed a method with two components: a convolutional neural network (CNN) transformer and a fusion decoder. The CNN transformer extracts the overall semantics of the lanes and speeds up convergence, while the fusion decoder combines high-level semantics with low-level local features to improve accuracy and robustness. Together, these components allow our method to detect lanes effectively under a variety of conditions, even when interference is present. We evaluated our method on multiple lane detection datasets and obtained superior results, with the best performance on the BDD100K dataset. Our method accurately and completely detects lanes in the presence of interference such as darkness, shadows, and strong light. The algorithm has been deployed on an edge computing device, an intelligent cart. The code is available at: https://github.com/squirtlecc/CNNTransformer
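The abstract gives no implementation details, so the following is only a minimal PyTorch-style sketch of the two-component design it describes: a CNN backbone whose features are refined by a transformer encoder (global lane semantics), followed by a decoder that fuses those high-level features with the low-level local features. All class names, layer sizes, the stride-4 resolution, and the lane-count head are illustrative assumptions, not taken from the paper or the released code at the GitHub link above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CNNTransformerEncoder(nn.Module):
    """Hypothetical encoder: a small CNN backbone followed by a
    transformer encoder that models the lanes as a whole."""

    def __init__(self, channels=64, depth=2, heads=4):
        super().__init__()
        # Two stride-2 convolutions give stride-4 local features (assumption).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=heads, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        low = self.backbone(x)                   # low-level local features
        b, c, h, w = low.shape
        tokens = low.flatten(2).transpose(1, 2)  # (B, H*W, C) token sequence
        high = self.transformer(tokens)          # global lane semantics
        high = high.transpose(1, 2).reshape(b, c, h, w)
        return low, high


class FusionDecoder(nn.Module):
    """Hypothetical decoder: fuses global semantics with local features
    and upsamples to a per-pixel lane prediction."""

    def __init__(self, channels=64, num_lanes=4):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        self.head = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, num_lanes + 1, 1),  # lane classes + background
        )

    def forward(self, low, high):
        fused = self.fuse(torch.cat([low, high], dim=1))
        logits = self.head(fused)
        # Upsample back to the input resolution (stride 4 in this sketch).
        return F.interpolate(logits, scale_factor=4,
                             mode="bilinear", align_corners=False)


if __name__ == "__main__":
    enc, dec = CNNTransformerEncoder(), FusionDecoder()
    img = torch.randn(1, 3, 288, 512)
    low, high = enc(img)
    print(dec(low, high).shape)  # torch.Size([1, 5, 288, 512])
```

In this sketch the transformer operates on flattened CNN feature tokens, which is one common way to inject global context while keeping the convolutional features available for the fusion step; the actual CTLane architecture may differ.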
Original language: English
Pages (from-to): 149-161
Number of pages: 13
Journal: ITU Journal on Future and Evolving Technologies
DOIs
Publication status: Published - 25 Jun 2025