Learning location constrained pixel classifiers for image parsing

Kang Dang*, Junsong Yuan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)


When parsing images with a regular spatial layout, the location of a pixel (x,y) can provide an important prior for its semantic label. This paper proposes a technique that leverages both location and appearance information for pixel labeling. The proposed method exploits the spatial layout of the image by building local pixel classifiers that are location constrained, i.e., trained only with pixels from a local neighborhood region. Our proposed local learning works well in different challenging image parsing problems, such as pedestrian parsing, street-view scene parsing and object segmentation, and outperforms existing results that rely on one unified pixel classifier. To better understand the behavior of our local classifier, we perform a bias-variance analysis and demonstrate that the proposed local classifier essentially performs spatial smoothing over a target estimator that uses both appearance and location information, which explains why the local classifier is more discriminative yet can still handle misalignment. Meanwhile, our theoretical and experimental studies highlight the importance of selecting an appropriate neighborhood size for location constrained learning, which can significantly influence the parsing results.
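The core idea of location-constrained learning can be illustrated with a minimal sketch: partition the image plane into spatial cells, train one tiny classifier per cell using only the pixels that fall inside that cell, and label each test pixel with its own cell's classifier. The sketch below is illustrative only, assuming a toy 1-D appearance feature per pixel and a nearest-class-mean classifier per cell; the function names, cell grid, and classifier choice are our assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of location-constrained pixel classification.
# Assumptions (not from the paper): a 1-D appearance feature per pixel,
# a regular grid of spatial cells, and a nearest-class-mean classifier.

def cell_of(x, y, cell_w, cell_h):
    """Map a pixel coordinate to the index of its spatial cell."""
    return (x // cell_w, y // cell_h)

def train_local_classifiers(pixels, cell_w, cell_h):
    """pixels: iterable of (x, y, feature, label) training samples.

    Returns {cell: {label: mean_feature}} -- one tiny classifier per
    cell, trained only on the pixels that fall inside that cell.
    """
    sums = {}
    for x, y, f, lab in pixels:
        c = cell_of(x, y, cell_w, cell_h)
        s, n = sums.setdefault(c, {}).setdefault(lab, [0.0, 0])
        sums[c][lab] = [s + f, n + 1]
    return {c: {lab: s / n for lab, (s, n) in labs.items()}
            for c, labs in sums.items()}

def classify(x, y, f, classifiers, cell_w, cell_h):
    """Label a pixel using only the classifier of its own cell."""
    means = classifiers[cell_of(x, y, cell_w, cell_h)]
    return min(means, key=lambda lab: abs(f - means[lab]))
```

The neighborhood size discussed in the abstract corresponds here to `cell_w` and `cell_h`: very small cells give highly location-specific but high-variance classifiers, while very large cells degenerate toward a single unified classifier.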

Original language: English
Pages (from-to): 1-13
Number of pages: 13
Journal: Journal of Visual Communication and Image Representation
Publication status: Published - Nov 2017
Externally published: Yes


Keywords:

  • Local learning
  • Pedestrian parsing
  • Spatial layout
  • Street-view scene parsing

