Rizzi, G., Gasparini, F., Saibene, A., Rosso, P., Fersini, E. (2023). Recognizing misogynous memes: Biased models and tricky archetypes. INFORMATION PROCESSING & MANAGEMENT, 60(5) [10.1016/j.ipm.2023.103474].

Recognizing misogynous memes: Biased models and tricky archetypes

Rizzi G.; Gasparini F.; Saibene A.; Rosso P.; Fersini E.
2023

Abstract

Warning: This paper contains examples of language and images which may be offensive. Misogyny is a form of hate against women that has been spreading exponentially through the Web, especially on social media platforms. Hateful content towards women can be conveyed not only by text but also through visual and/or audio sources or their combination, highlighting the need to address it from a multimodal perspective. One of the predominant forms of multimodal content against women is the meme, an image characterized by pictorial content with an overlaying text introduced a posteriori, whose original aim is typically to be funny and/or ironic; this makes misogyny recognition in memes even more challenging. In this paper, we investigate 4 unimodal and 3 multimodal approaches to determine which source of information contributes more to the detection of misogynous memes. Moreover, a bias estimation technique is proposed to identify specific elements composing a meme that could lead to unfair models, together with a bias mitigation strategy based on Bayesian Optimization. The proposed method is able to push the prediction probabilities towards the correct class in up to 61.43% of the cases. Finally, we identify the most challenging archetypes of memes that are still far from being properly recognized, highlighting the most relevant open research directions.
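
The bias mitigation step mentioned in the abstract lends itself to a small illustration. The sketch below is a hypothetical, minimal reconstruction: it assumes mitigation is applied as a post-hoc shift of the prediction probabilities of memes containing an identified bias-inducing element, with the size of the shift tuned by Bayesian Optimization (here via scikit-optimize's gp_minimize). The toy data, the has_biased_element flag, and the accuracy objective are illustrative placeholders, not the authors' exact procedure.

# Hypothetical sketch: post-hoc bias mitigation tuned with Bayesian Optimization.
# The data and objective are toy placeholders, not the paper's actual pipeline.
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

rng = np.random.default_rng(0)

# Toy validation split: predicted misogyny probabilities, gold labels
# (1 = misogynous), and a flag marking memes that contain a bias-inducing
# element (e.g. a recurring visual template over-associated with one class).
probs = rng.uniform(0.0, 1.0, size=500)
labels = rng.integers(0, 2, size=500)
has_biased_element = rng.integers(0, 2, size=500).astype(bool)

def mitigated_accuracy(params):
    """Shift the probability of flagged memes by params[0] and score accuracy."""
    offset = params[0]
    adjusted = probs.copy()
    adjusted[has_biased_element] = np.clip(adjusted[has_biased_element] + offset, 0.0, 1.0)
    preds = (adjusted >= 0.5).astype(int)
    return -float((preds == labels).mean())  # gp_minimize minimizes, so negate

# Search for the offset that best pushes probabilities towards the correct class.
result = gp_minimize(mitigated_accuracy,
                     [Real(-0.5, 0.5, name="offset")],
                     n_calls=30, random_state=0)
print("best offset:", result.x[0], "validation accuracy:", -result.fun)

In this toy setup the optimizer simply learns how much to nudge the probabilities of flagged memes on a validation split; the same mechanism generalizes to one offset per biased element.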
Journal article - Scientific article
Bias estimation; Bias mitigation; Meme; Misogyny identification
Language: English
Publication date: 9 Aug 2023
Year: 2023
Volume: 60
Issue: 5
Article number: 103474
Access: open
Files in this record:
File: Rizzi-2023-Informat Process Manag-VoR.pdf
Description: Research Article
Attachment type: Publisher's Version (Version of Record, VoR)
License: Creative Commons
Access: open access
Size: 5.09 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/451139
Citations
  • Scopus: 3
  • Web of Science: 0