Frontiers of Computer Science

ISSN 2095-2228

ISSN 2095-2236(Online)

CN 10-1014/TP

Postal distribution code: 80-970

2019 Impact Factor: 1.275

Frontiers of Computer Science  2025, Vol. 19 Issue (1): 191321   https://doi.org/10.1007/s11704-024-40051-3
Debiasing vision-language models for vision tasks: a survey
Beier ZHU(), Hanwang ZHANG
School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore
Full text: PDF (694 KB)   HTML
Received: 2024-01-11      Published online: 2024-08-21
Corresponding Author(s): Beier ZHU   
Cite this article:
Beier ZHU, Hanwang ZHANG. Debiasing vision-language models for vision tasks: a survey. Front. Comput. Sci., 2025, 19(1): 191321.
Link to this article:
https://academic.hep.com.cn/fcs/CN/10.1007/s11704-024-40051-3
https://academic.hep.com.cn/fcs/CN/Y2025/V19/I1/191321
Fig.1  
| Method           | Debiasing type              | Training data? | Retraining?            |
|------------------|-----------------------------|----------------|------------------------|
| ZPE [4]          | Label bias                  | PT data        | No, post-hoc           |
| GLA [3]          | Label bias                  | DS data        | No, post-hoc           |
| REAL [11]        | Label bias                  | DS & PT data   | Yes, linear classifier |
| ProReg [8]       | Spurious corr.              | DS data        | Yes, fine-tuning       |
| C-Adapter [9]    | Spurious corr.              | DS data        | No, adapter            |
| Orth-Cali [10]   | Spurious corr., social bias | No             | No, adjust prompt      |
| FairSampling [5] | Gender bias                 | DS data        | Yes, fine-tuning       |
| FeatureClip [5]  | Gender bias                 | DS data        | No, post-hoc           |
| AdvDebias [12]   | Social bias                 | DS data        | No, prompt tuning      |
| DeAR [2]         | Social bias                 | DS data        | No, adapter            |
Tab.1  
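Several entries in Tab.1 (e.g., ZPE [4] and GLA [3]) correct label bias post hoc, with no retraining. The common underlying idea is logit adjustment: subtract an estimated log class prior from the model's zero-shot logits before prediction. The sketch below illustrates only this general recipe, assuming the class prior is already estimated; it is not the implementation of any specific cited method.

```python
import numpy as np

def debias_logits(logits, class_prior):
    # Post-hoc label-bias correction via logit adjustment:
    # subtract the log of the estimated marginal class prior
    # from each class logit. No retraining is required.
    return logits - np.log(class_prior)

# Toy example: zero-shot logits skewed toward class 0 by a biased prior.
logits = np.array([[2.0, 1.9, 0.1]])
prior = np.array([0.6, 0.3, 0.1])  # assumed (estimated) label prior
adjusted = debias_logits(logits, prior)
pred = int(np.argmax(adjusted, axis=1)[0])  # prediction flips from class 0 to class 1
```

Because the correction only touches the output logits, it can be applied to a frozen vision-language model at inference time, which is why such methods are labeled "No, post-hoc" in the table.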
1 Radford A, Kim J W, Hallacy C, Ramesh A, Goh G, Agarwal S, Sastry G, Askell A, Mishkin P, Clark J, Krueger G, Sutskever I. Learning transferable visual models from natural language supervision. In: Proceedings of the 38th International Conference on Machine Learning. 2021, 8748–8763
2 Seth A, Hemani M, Agarwal C. DeAR: debiasing vision-language models with additive residuals. In: Proceedings of 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023, 6820–6829
3 Zhu B, Tang K, Sun Q, Zhang H. Generalized logit adjustment: calibrating fine-tuned models by removing label bias in foundation models. In: Proceedings of the 37th Conference on Neural Information Processing Systems. 2023, 64663–64680
4 Allingham J U, Ren J, Dusenberry M W, Gu X, Cui Y, Tran D, Liu J Z, Lakshminarayanan B. A simple zero-shot prompt weighting technique to improve prompt ensembling in text-image models. In: Proceedings of the 40th International Conference on Machine Learning. 2023, 26
5 Wang J, Liu Y, Wang X. Are gender-neutral queries really gender-neutral? Mitigating gender bias in image search. In: Proceedings of 2021 Conference on Empirical Methods in Natural Language Processing. 2021, 1995–2008
6 Wang X, Wu Z, Lian L, Yu S X. Debiased learning from naturally imbalanced pseudo-labels. In: Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022, 14627–14637
7 Cui J, Zhu B, Wen X, Qi X, Yu B, Zhang H. Classes are not equal: an empirical study on image recognition fairness. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. 2024, 23283–23292
8 Zhu B, Niu Y, Lee S, Hur M, Zhang H. Debiased fine-tuning for vision-language models by prompt regularization. In: Proceedings of the 37th AAAI Conference on Artificial Intelligence. 2023, 3834–3842
9 Zhang M, Ré C. Contrastive adapters for foundation model group robustness. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. 2022, 1576
10 Chuang C Y, Jampani V, Li Y, Torralba A, Jegelka S. Debiasing vision-language models via biased prompts. 2023, arXiv preprint arXiv: 2302.00070
11 Parashar S, Lin Z, Liu T, Dong X, Li Y, Ramanan D, Caverlee J, Kong S. The neglected tails in vision-language models. In: Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024, 12988–12997
12 Berg H, Hall S, Bhalgat Y, Kirk H, Shtedritski A, Bain M. A prompt array keeps the bias away: debiasing vision-language models with adversarial learning. In: Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing. 2022, 806–822