TY - GEN
T1 - FaceBench: A Multi-View Multi-Level Facial Attribute VQA Dataset for Benchmarking Face Perception MLLMs
AU - Wang, Xiaoqin
AU - Ma, Xusen
AU - Hou, Xianxu
AU - Ding, Meidan
AU - Li, Yudong
AU - Chen, Junliang
AU - Chen, Wenting
AU - Peng, Xiaoyang
AU - Shen, Linlin
PY - 2025
Y1 - 2025
N2 - Multimodal large language models (MLLMs) have demonstrated remarkable capabilities in various tasks. However, effectively evaluating these MLLMs on face perception remains largely unexplored. To address this gap, we introduce FaceBench, a dataset featuring hierarchical multi-view and multi-level attributes specifically designed to assess the comprehensive face perception abilities of MLLMs. Initially, we construct a hierarchical facial attribute structure, which encompasses five views with up to three levels of attributes, totaling over 210 attributes and 700 attribute values. Based on this structure, the proposed FaceBench consists of 49,919 visual question-answering (VQA) pairs for evaluation and 23,841 pairs for fine-tuning. Moreover, we develop a robust face perception MLLM baseline, Face-LLaVA, via multimodal training on our proposed face instruction-tuning data. Extensive experiments are conducted on various mainstream MLLMs and Face-LLaVA to test their face perception ability, which is also compared with human performance. The results reveal that existing MLLMs are far from satisfactory in understanding fine-grained facial attributes, while our Face-LLaVA significantly outperforms existing open-source models with a small amount of training data and is comparable to commercial models such as GPT-4o and Gemini. The dataset will be released upon acceptance of this work.
M3 - Conference Proceeding
BT - Proceedings - 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2025
ER -