
Herranz-Celotti, L., Viviani, M. (2026). LogicJitter: Let LLMs play and uncover misinformation. COMPUTERS & ELECTRICAL ENGINEERING, 136(August 2026) [10.1016/j.compeleceng.2026.111215].

LogicJitter: Let LLMs play and uncover misinformation

Herranz-Celotti, Luca; Viviani, Marco
2026

Abstract

In an era of pervasive online content, effectively distinguishing reliable information from misinformation has become an increasingly urgent challenge with broad societal implications. In this context, algorithmic solutions that focus on supervised learning can be effective within specific domains, but they require large labeled datasets for training. Producing such datasets is costly and time-consuming, and these approaches are prone to several issues, including annotation bias, temporal leakage, subjective interpretation, and poor generalization across domains. Alternative approaches include semi-supervised and weakly supervised learning, unsupervised or self-supervised methods, graph-based propagation models, zero-shot and few-shot learning, and Retrieval-Augmented Generation (RAG) with Large Language Models (LLMs). However, these approaches also present several limitations, such as potential label noise in semi-supervised methods, spurious correlations and reduced interpretability in unsupervised approaches, unrealistic assumptions in graph-based models, sensitivity to prompt design and pre-trained knowledge in zero-shot and few-shot methods, and dependence on the availability and quality of external knowledge in RAG-based methods. To mitigate several of these limitations, we propose LogicJitter, a novel and cost-efficient fine-tuning strategy that enhances the reasoning capabilities of LLMs by exposing them to structured, logic-based games specifically designed to counteract common human cognitive biases and logical fallacies. Rather than relying solely on domain-specific misinformation data, as in prior misinformation detection approaches that use such data either for direct training or as domain-specific knowledge, our method improves detection capabilities by strengthening domain-agnostic reasoning skills. We introduce an open-source framework for automatically generating both valid and fallacious logic statements to support training and reproducibility. 
Empirical results demonstrate that LLMs fine-tuned with LogicJitter achieve meaningful gains in misinformation detection performance, highlighting the potential of reasoning-centric training as a robust alternative to traditional, data-intensive approaches.
Journal article - Scientific article
Misinformation, Natural language processing, Large language models, Logic games, Reasoning
Language: English
Publication date: 8 May 2026
Year: 2026
Volume: 136
Issue: August 2026
Article number: 111215
Access: open
Files in this record:
File: Celotti-Viviani-2026-Computers & Electrical Engineering-VoR.pdf
Access: open access
Attachment type: Publisher's Version (Version of Record, VoR)
License: Creative Commons
Size: 1.11 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/604821