The spread of misinformation has catalyzed a surge of interest in systems that assist users in identifying false claims, misleading content, and other forms of unreliable information, with Automated Fact-Checking (AFC) emerging as a prominent – yet not exclusive – approach. However, the majority of current models overwhelmingly rely on binary or categorical reliability-related labels, oversimplifying the complex, context-sensitive nature of information verification. In this paper, we argue that such reductive labeling fails to align with the interpretive, epistemic, and normative dimensions of real-world misinformation detection practices, including fact-checking. Drawing on critiques from both computational and social research, we illustrate how binary judgments not only distort reliability-related aspects, but also limit system trustworthiness and public accountability. Instead, we propose a shift from binary classification to ranking-based approaches that prioritize access to information that is contextually relevant and reliable across distinct dimensions. Grounded in Information Retrieval (IR) theory, our alternative evaluation paradigm emphasizes graded relevance, contextual utility, and ambiguity tolerance, reflecting how laypersons and professional fact-checkers alike navigate uncertainty. By reframing misinformation detection as an issue of information access, we seek to bridge the gap between technical modeling and the practical challenges of managing fake, misleading, or harmful content.
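The graded-relevance evaluation the abstract alludes to can be illustrated with a standard IR metric. The sketch below is not from the paper: it is a minimal nDCG computation over hypothetical graded reliability labels (0 = unreliable up to 3 = highly reliable), showing how a ranking can be scored on a spectrum rather than judged right or wrong per item.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: graded relevance discounted by log of rank."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """Normalized DCG: 1.0 means the ranking is ideal for these graded labels."""
    ideal_dcg = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal_dcg if ideal_dcg > 0 else 0.0

# Hypothetical graded reliability labels, in the order a system ranked the items.
system_ranking = [3, 2, 0, 1]
print(round(ndcg(system_ranking), 3))  # → 0.985
```

Unlike binary accuracy, a near-ideal ordering with one swapped pair still scores close to 1.0, which mirrors the ambiguity tolerance the paper argues for.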
La Barbera, D., Milanese, G., Peikos, G., Pasi, G., Viviani, M. (2025). Beyond binary classification: ranking for information access in misinformation contexts. In Joint Proceedings of the Thematic Workshops at Ital-IA 2025 colocated with the 5th National Conference on Artificial Intelligence, organized by CINI (Ital-IA 2025) (pp. 1–7). CEUR-WS.
Beyond binary classification: ranking for information access in misinformation contexts
La Barbera, D.; Milanese, G. C.; Peikos, G.; Pasi, G.; Viviani, M.
2025
File: La Barbera et al-2025-Ital-IA-CEUR-VoR.pdf
Open access. Attachment type: Publisher's Version (Version of Record, VoR). License: Creative Commons. Size: 200.44 kB. Format: Adobe PDF.
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


