Paper Detail

Demographic Fairness in Multimodal LLMs: A Benchmark of Gender and Ethnicity Bias in Face Verification

Ünsal Öztürk, Hatef Otroshi Shahreza, Sébastien Marcel

arXiv Score 21.6

Published 2026-03-26 · First seen 2026-03-27

General AI

Abstract

Multimodal Large Language Models (MLLMs) have recently been explored as face verification systems that determine whether two face images are of the same person. Unlike dedicated face recognition systems, MLLMs approach this task through visual prompting and rely on general visual and reasoning abilities. However, the demographic fairness of these models remains largely unexplored. In this paper, we present a benchmarking study that evaluates nine open-source MLLMs from six model families, ranging from 2B to 8B parameters, on the IJB-C and RFW face verification protocols across four ethnicity groups and two gender groups. We measure verification accuracy with the Equal Error Rate and True Match Rate at multiple operating points per demographic group, and we quantify demographic disparity with four FMR-based fairness metrics. Our results show that FaceLLM-8B, the only face-specialised model in our study, substantially outperforms general-purpose MLLMs on both benchmarks. The bias patterns we observe differ from those commonly reported for traditional face recognition, with different groups being most affected depending on the benchmark and the model. We also note that the most accurate models are not necessarily the fairest and that models with poor overall accuracy can appear fair simply because they produce uniformly high error rates across all demographic groups.

Workflow Status

Review status
pending
Role
unreviewed
Read priority
now
Vote
Not set.
Saved
no
Collections
Not filed yet.
Next action
Not filled yet.

Reading Brief

Key Findings

The study reveals that a face-specialized MLLM, FaceLLM-8B, substantially outperforms general-purpose models in face verification. Observed bias patterns in MLLMs differ from those in traditional face recognition, and the most accurate models are not necessarily the fairest, as some models appear fair only due to uniformly poor performance across all groups.

Limitations

The study focuses on evaluating bias rather than mitigating it. Moreover, the observed bias patterns vary considerably depending on the specific model and benchmark, which limits how far the findings generalise.

Methodology

The researchers benchmarked nine open-source MLLMs on the IJB-C and RFW face verification datasets, measuring accuracy and fairness metrics across four ethnicity and two gender groups.
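As a minimal sketch of the kind of evaluation described above (not the authors' code): the snippet below computes the Equal Error Rate from genuine and impostor similarity scores, and a max/min ratio of per-group False Match Rates at a fixed threshold as one illustrative FMR-based disparity metric. The function names and the specific disparity ratio are assumptions for illustration; the paper uses four FMR-based fairness metrics whose exact definitions are not given here.

```python
import numpy as np

def eer(genuine, impostor):
    """Equal Error Rate: operating point where the False Match Rate
    (impostor pairs accepted) equals the False Non-Match Rate
    (genuine pairs rejected), scanned over pooled score thresholds."""
    genuine = np.asarray(genuine)
    impostor = np.asarray(impostor)
    best_gap, best_eer = np.inf, None
    for t in np.sort(np.concatenate([genuine, impostor])):
        fmr = np.mean(impostor >= t)   # impostors wrongly matched
        fnmr = np.mean(genuine < t)    # genuine pairs wrongly rejected
        gap = abs(fmr - fnmr)
        if gap < best_gap:
            best_gap, best_eer = gap, (fmr + fnmr) / 2
    return best_eer

def fmr_max_min_ratio(group_scores, threshold):
    """One possible FMR-based disparity metric: ratio of the highest
    to the lowest per-group FMR at a shared threshold. A value of 1.0
    means the false match rate is uniform across demographic groups."""
    fmrs = {g: np.mean(np.asarray(imp) >= threshold)
            for g, (_genuine, imp) in group_scores.items()}
    return max(fmrs.values()) / max(min(fmrs.values()), 1e-12)
```

A model with uniformly high error rates can score well on such a ratio, which is exactly the caveat the paper raises: low disparity does not imply good verification accuracy.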

Significance

This research provides a crucial first benchmark for understanding demographic fairness in the emerging application of MLLMs for face verification.

Why It Surfaced

No ranking explanation is available yet.

Tags

No tags.

BibTeX

@article{ztrk2026demographic,
  title = {Demographic Fairness in Multimodal LLMs: A Benchmark of Gender and Ethnicity Bias in Face Verification},
  author = {Ünsal Öztürk and Hatef Otroshi Shahreza and Sébastien Marcel},
  year = {2026},
  abstract = {Multimodal Large Language Models (MLLMs) have recently been explored as face verification systems that determine whether two face images are of the same person. Unlike dedicated face recognition systems, MLLMs approach this task through visual prompting and rely on general visual and reasoning abilities. However, the demographic fairness of these models remains largely unexplored. In this paper, we present a benchmarking study that evaluates nine open-source MLLMs from six model families, ranging from 2B to 8B parameters, on the IJB-C and RFW face verification protocols across four ethnicity groups and two gender groups. We measure verification accuracy with the Equal Error Rate and True Match Rate at multiple operating points per demographic group, and we quantify demographic disparity with four FMR-based fairness metrics. Our results show that FaceLLM-8B, the only face-specialised model in our study, substantially outperforms general-purpose MLLMs on both benchmarks. The bias patterns we observe differ from those commonly reported for traditional face recognition, with different groups being most affected depending on the benchmark and the model. We also note that the most accurate models are not necessarily the fairest and that models with poor overall accuracy can appear fair simply because they produce uniformly high error rates across all demographic groups.},
  url = {https://arxiv.org/abs/2603.25613},
  keywords = {cs.CV, cs.AI},
  eprint = {2603.25613},
  archiveprefix = {arXiv},
}

Metadata

{}