Balducci, G., Rizzi, G., & Fersini, E. (2025). Misogynous Memes Recognition: Training vs Inference Bias Mitigation. IJCOL, 11. DOI: 10.17454/IJCOL111.05.
Misogynous Memes Recognition: Training vs Inference Bias Mitigation
Balducci, G.; Rizzi, G.; Fersini, E.
2025
Abstract
Warning: This paper contains examples of language and images which may be offensive. In this paper, we address the problem of automatic misogynous meme recognition by dealing with potentially biased elements that could lead to unfair models. In particular, a bias estimation technique is used to identify the textual and visual elements that unintentionally affect the model prediction, and several bias mitigation methods are proposed, investigating two types of debiasing strategy, i.e., at training time and at inference time. The proposed approaches achieve remarkable results in terms of both prediction and generalization capabilities.
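
Since only the abstract is available in this record, the sketch below illustrates the two debiasing families it names in a deliberately generic form. It is not the authors' method: the PMI-based bias scoring, the sample-reweighting rule, and the inference-time score correction are common baseline strategies, and every name, threshold, and data point in it (`pmi_bias_scores`, `sample_weight`, `debiased_score`, the toy corpus) is a hypothetical assumption for illustration.

```python
# Generic illustration of training-time vs inference-time debiasing.
# NOT the paper's technique: PMI scoring, reweighting, and score
# correction are textbook strategies, sketched over assumed toy data.
import math
from collections import Counter

# Toy corpus of (text, label) pairs; label 1 = misogynous (assumed).
data = [
    ("women belong in the kitchen", 1),
    ("women won the match today", 0),
    ("great kitchen design ideas", 0),
    ("she should stay in the kitchen", 1),
]

def pmi_bias_scores(pairs):
    """Score each token by its pointwise mutual information with label 1.
    High-PMI tokens are candidate 'biased elements' that a classifier
    may latch onto regardless of actual misogynous content."""
    token_counts, pos_counts = Counter(), Counter()
    n = len(pairs)
    n_pos = sum(y for _, y in pairs)
    for text, y in pairs:
        for tok in set(text.split()):
            token_counts[tok] += 1
            if y == 1:
                pos_counts[tok] += 1
    scores = {}
    for tok, c in token_counts.items():
        p_joint = pos_counts[tok] / n          # P(token, positive)
        if p_joint > 0:
            scores[tok] = math.log(p_joint / ((c / n) * (n_pos / n)))
    return scores

scores = pmi_bias_scores(data)
biased = {t for t, s in scores.items() if s > 0.5}  # assumed threshold

# (a) Training-time mitigation: downweight examples containing highly
#     biased tokens, so the model is not rewarded for shortcuts.
def sample_weight(text, biased_tokens, penalty=0.5):
    hits = sum(tok in biased_tokens for tok in text.split())
    return penalty ** hits  # assumed exponential downweighting

# (b) Inference-time mitigation: subtract a "bias-only" score (mean PMI
#     of the tokens present) from the trained model's raw probability.
def debiased_score(model_prob, text, scores, alpha=0.1):
    toks = text.split()
    bias = sum(scores.get(t, 0.0) for t in toks) / max(len(toks), 1)
    return max(0.0, min(1.0, model_prob - alpha * bias))

for text, _ in data:
    print(text, "| weight:", round(sample_weight(text, biased), 3),
          "| debiased:", round(debiased_score(0.7, text, scores), 3))
```

The two branches differ in where they intervene: reweighting changes what the model learns, while the inference-time correction leaves the trained model untouched and adjusts only its output score, which is the kind of trade-off the abstract alludes to.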
| File | Size | Format | |
|---|---|---|---|
| Balducci-2025-IJCOL-VoR.pdf (open access; attachment type: Publisher's Version, Version of Record, VoR; license: Creative Commons) | 430.99 kB | Adobe PDF | View/Open |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.