Voxel labelling in CT images with data-driven contextual features

Kang Dang, Junsong Yuan, Ho Yee Tiong

Research output: Chapter in Book or Report/Conference proceeding › Conference Proceeding › peer-review

1 Citation (Scopus)

Abstract

Spatial contextual information is useful for voxel labelling, and it is especially suitable for images with a relatively fixed scene structure, such as CT images. For each voxel, the intensity values at nearby and far-away positions are sampled as its contextual features, and such features have shown promising performance. However, how to determine the sampling positions that yield good contextual features remains a critical problem, since a good sampling pattern can significantly improve classification performance. In this paper we propose a novel approach that discovers a discriminative sampling pattern. We emphasize that the sampling pattern is not hand-crafted but data-driven, and it can cater to a particular type of problem, such as kidney labelling in contrast-enhanced CT images. Once a discriminative pattern is discovered, it can be adapted for use on other datasets of the same problem. Experiments on a kidney dataset showed considerable improvements over competing methods.
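The core idea of the abstract, sampling intensities at displaced positions around each voxel to form a contextual feature vector, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name, the out-of-bounds fallback policy, and the example sampling pattern are all assumptions; the paper's contribution (learning the offsets discriminatively) is not shown here, so `offsets` is simply taken as given.

```python
import numpy as np

def contextual_features(volume, voxels, offsets):
    """Sample intensities at fixed offsets around each voxel.

    volume  : 3-D array of CT intensities
    voxels  : (N, 3) integer voxel coordinates
    offsets : (M, 3) integer displacement vectors (the sampling pattern)

    Returns an (N, M) feature matrix. Out-of-bounds samples fall back
    to the centre voxel's own intensity -- a simple boundary policy
    chosen here for illustration only.
    """
    vol = np.asarray(volume)
    feats = np.empty((len(voxels), len(offsets)), dtype=vol.dtype)
    for i, v in enumerate(voxels):
        for j, d in enumerate(offsets):
            p = v + d
            if np.all((p >= 0) & (p < vol.shape)):
                feats[i, j] = vol[tuple(p)]   # intensity at the displaced position
            else:
                feats[i, j] = vol[tuple(v)]   # fallback: centre voxel intensity
    return feats
```

The resulting (N, M) matrix can be fed to any per-voxel classifier; in the data-driven setting the paper describes, the offsets themselves would be selected so that these features best discriminate the target structures (e.g. kidneys).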

Original language: English
Title of host publication: 2013 IEEE International Conference on Image Processing, ICIP 2013 - Proceedings
Publisher: IEEE Computer Society
Pages: 680-684
Number of pages: 5
ISBN (Print): 9781479923410
DOIs
Publication status: Published - 2013
Externally published: Yes
Event: 2013 20th IEEE International Conference on Image Processing, ICIP 2013 - Melbourne, VIC, Australia
Duration: 15 Sept 2013 – 18 Sept 2013

Publication series

Name: 2013 IEEE International Conference on Image Processing, ICIP 2013 - Proceedings

Conference

Conference: 2013 20th IEEE International Conference on Image Processing, ICIP 2013
Country/Territory: Australia
City: Melbourne, VIC
Period: 15/09/13 – 18/09/13

Keywords

  • CT image segmentation
  • Spatial contextual feature
  • Voxel labelling

