BiFNet: Bidirectional Fusion Network for Road Segmentation

Haoran Li*, Yaran Chen, Qichao Zhang, Dongbin Zhao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

20 Citations (Scopus)

Abstract

Multisensor fusion-based road segmentation plays an important role in intelligent driving systems since it provides the drivable area. Existing mainstream fusion methods mainly perform feature fusion in the image space, where perspective compression of the road degrades performance on distant road regions. Since the bird's-eye view (BEV) of the LiDAR point cloud preserves the spatial structure of the horizontal plane, this article proposes a bidirectional fusion network (BiFNet) to fuse the camera image with the BEV of the point cloud. The network consists of two modules: 1) the dense space transformation (DST) module, which handles the mutual conversion between the camera image space and the BEV space, and 2) the context-based feature fusion module, which fuses information from the different sensors based on the scene context of the corresponding features. The method achieves competitive results on the KITTI dataset.
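To make the perspective-compression problem concrete, the sketch below projects the same 3-D road point into the two spaces the abstract contrasts: the camera image plane (pinhole model) and a BEV grid. The intrinsics, grid resolution, and ranges are illustrative placeholders, not values from the paper, and this is a standard coordinate transform, not BiFNet's learned DST module.

```python
import numpy as np

# Illustrative pinhole intrinsics and BEV grid parameters (NOT from the paper).
K = np.array([[721.5,   0.0, 609.6],
              [  0.0, 721.5, 172.9],
              [  0.0,   0.0,   1.0]])   # camera intrinsic matrix
BEV_RES = 0.1                           # metres per BEV cell
BEV_FWD_RANGE = (0.0, 40.0)             # forward (z) extent of the grid
BEV_LAT_RANGE = (-20.0, 20.0)           # lateral (x) extent of the grid

def to_image(pt_cam):
    """Project a 3-D point in camera coordinates (x right, y down, z forward)
    onto the image plane. The perspective division by depth z is what
    compresses distant road regions in image space."""
    uvw = K @ pt_cam
    return uvw[:2] / uvw[2]             # pixel (u, v)

def to_bev(pt_cam):
    """Drop the same point onto a BEV grid. There is no perspective division,
    so metric structure in the horizontal plane is preserved."""
    x, _, z = pt_cam
    col = int((x - BEV_LAT_RANGE[0]) / BEV_RES)
    row = int((z - BEV_FWD_RANGE[0]) / BEV_RES)
    return row, col

near = np.array([1.0, 1.5, 10.0])       # road point 1 m to the right, 10 m ahead
far  = np.array([1.0, 1.5, 40.0])       # same lateral offset, 40 m ahead
```

With these numbers, the far point's horizontal pixel offset from the image centre is 4x smaller than the near point's (same 1 m of road shrinks with depth), while both points land in the same BEV column. The DST module described in the abstract is what lets features move densely between these two representations inside the network.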

Original language: English
Pages (from-to): 8617-8628
Number of pages: 12
Journal: IEEE Transactions on Cybernetics
Volume: 52
Issue number: 9
Publication status: Published - 1 Sept 2022

Keywords

  • Adaptive learning
  • autonomous vehicles
  • multisensor fusion
  • road segmentation
