Face segmentation using combined bottom-up and top-down saliency maps

Zhang Qin Seak*, Li Minn Ang, Kah Phooi Seng

*Corresponding author for this work

Research output: Chapter in Book or Report/Conference proceeding › Conference Proceeding › peer-review

3 Citations (Scopus)

Abstract

This paper presents a simple color-based segmentation technique for faces. The proposed technique utilizes saliency maps, incorporating both top-down (data-driven) and bottom-up saliency methods to generate the saliency map used in the segmentation phase. The top-down approach uses skin color data obtained from a training database to bias the skin color saliency map, while the bottom-up approach utilizes both the intensity and color feature maps from the test image. The saliency map is computed from the center-surround difference and normalization of the feature maps from both systems. Finally, a square moving-window function is used to locate the point with the highest energy in the saliency map, which is marked as the facial region. The system shows good performance for subjects against both simple and complex backgrounds, as well as under varying illumination conditions and skin color variances.
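The pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the opponent color channels, the Gaussian skin-likelihood model, the fixed skin color mean, the map weights, and the window size are all assumptions made for the sketch; the paper's skin model would instead be learned from a training database.

```python
import numpy as np

def box_blur(img, k):
    """Mean filter with an odd k x k window, via an integral image (edge padding)."""
    r = k // 2
    pad = np.pad(img, r, mode='edge')
    ii = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    ii[1:, 1:] = pad.cumsum(axis=0).cumsum(axis=1)
    h, w = img.shape
    return (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
            - ii[k:k + h, :w] + ii[:h, :w]) / (k * k)

def normalize(m):
    """Scale a map to [0, 1]; a flat map becomes all zeros."""
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def bottom_up_saliency(rgb):
    """Intensity and color feature maps combined via center-surround differences."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    rg = r - g                      # red-green opponent channel (assumption)
    by = b - (r + g) / 2.0          # blue-yellow opponent channel (assumption)
    # Center-surround: small-scale blur minus large-scale blur of each feature.
    maps = [np.abs(box_blur(f, 3) - box_blur(f, 9)) for f in (intensity, rg, by)]
    return sum(normalize(m) for m in maps)

def top_down_saliency(rgb, skin_mean, skin_scale=0.1):
    """Skin-likelihood map biasing the saliency toward skin-colored pixels.
    skin_mean stands in for statistics learned from a training database."""
    d2 = ((rgb - skin_mean) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * skin_scale ** 2))

def segment_face(rgb, skin_mean, win=15):
    """Combine both maps, then mark the square window with the highest energy."""
    sal = normalize(bottom_up_saliency(rgb)) + 2.0 * top_down_saliency(rgb, skin_mean)
    energy = box_blur(sal, win)            # square moving-window energy
    i, j = np.unravel_index(energy.argmax(), energy.shape)
    r = win // 2
    return (i - r, j - r, i + r, j + r)    # (top, left, bottom, right) of the window
```

For example, on a synthetic image with a skin-colored patch on a gray background, `segment_face` returns a bounding box centered on the patch. The weighting of the top-down map (here 2.0) and the window size would in practice be tuned on the training data.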

Original language: English
Title of host publication: Proceedings - 2010 3rd IEEE International Conference on Computer Science and Information Technology, ICCSIT 2010
Pages: 477-480
Number of pages: 4
DOIs
Publication status: Published - 2010
Externally published: Yes
Event: 2010 3rd IEEE International Conference on Computer Science and Information Technology, ICCSIT 2010 - Chengdu, China
Duration: 9 Jul 2010 - 11 Jul 2010

Publication series

Name: Proceedings - 2010 3rd IEEE International Conference on Computer Science and Information Technology, ICCSIT 2010
Volume: 5

Conference

Conference: 2010 3rd IEEE International Conference on Computer Science and Information Technology, ICCSIT 2010
Country/Territory: China
City: Chengdu
Period: 9/07/10 - 11/07/10

Keywords

  • Bottom-up visual attention
  • Face segmentation
  • Saliency map
  • Top-down perception
  • Visual attention
