Samba: Semantic segmentation of remotely sensed images with state space model

Qinfeng Zhu, Yuanzhi Cai, Yuan Fang, Yihan Yang, Cheng Chen, Lei Fan*, Anh Nguyen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

High-resolution remotely sensed images pose challenges to traditional semantic segmentation networks, such as Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs). CNN-based methods struggle with high-resolution images due to their limited receptive field, while ViT-based methods, despite having a global receptive field, face challenges when processing long sequences. Inspired by the Mamba network, which uses a state space model (SSM) to efficiently capture global semantic information, we propose a semantic segmentation framework for high-resolution remotely sensed imagery, named Samba. Samba adopts an encoder-decoder architecture, with multiple Samba blocks serving as the encoder to efficiently extract multi-level semantic information, and UperNet serving as the decoder. We evaluate Samba on the LoveDA, ISPRS Vaihingen, and ISPRS Potsdam datasets using the mIoU and mF1 metrics, and compare it with top-performing CNN-based and ViT-based methods. The results demonstrate that Samba achieves unparalleled performance on commonly used remotely sensed datasets for semantic segmentation. Samba is the first to demonstrate the effectiveness of SSMs in segmenting remotely sensed imagery, setting a new performance benchmark for Mamba-based techniques in semantic segmentation of remotely sensed imagery. The source code and baseline implementations are available at https://github.com/zhuqinfeng1999/Samba.
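The abstract attributes Samba's efficiency on long sequences to its state space model (SSM) backbone. As an illustration only, and not the Samba implementation, the sketch below shows the core linear SSM recurrence in scalar form; the function name `ssm_scan` and the parameters `A`, `B`, `C` are hypothetical placeholders for the matrix-valued, input-dependent parameters used in Mamba-style blocks:

```python
def ssm_scan(xs, A, B, C):
    """Linear state space recurrence: h_t = A*h_{t-1} + B*x_t, y_t = C*h_t.

    Scalar illustration of the SSM idea behind Mamba-style blocks; real
    implementations use matrix-valued A, B, C, selective (input-dependent)
    parameters, and a hardware-efficient parallel scan.
    """
    h = 0.0
    ys = []
    for x in xs:
        h = A * h + B * x   # state update carries context across the whole sequence
        ys.append(C * h)    # per-step readout from the hidden state
    return ys

# An impulse input decays geometrically with A, illustrating how each output
# depends on all earlier inputs (a global receptive field) at linear cost:
# ssm_scan([1.0, 0.0, 0.0], A=0.5, B=1.0, C=1.0) -> [1.0, 0.5, 0.25]
```

Because the state `h` is updated once per token, the sequence is processed in O(n) time, in contrast to the O(n²) attention of ViT-based methods on long sequences.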

Original language: English
Article number: e38495
Journal: Heliyon
Volume: 10
Issue number: 19
DOIs
Publication status: Published - 15 Oct 2024

Keywords

  • Images
  • Mamba
  • Remote sensing
  • Semantic segmentation
  • State space model
