Reasoning Capabilities and Invariability of Large Language Models

Raganato A.; Peñaloza R.; Viviani M.; Pasi G.
2024

Abstract

Large Language Models (LLMs) have shown remarkable capabilities in manipulating natural language across multiple applications, but their ability to handle simple reasoning tasks is often questioned. In this work, we aim to provide a comprehensive analysis of LLMs' reasoning competence, specifically focusing on their prompt dependency. In particular, we introduce a new benchmark dataset with a series of simple reasoning questions demanding shallow logical reasoning. Aligned with cognitive psychology standards, the questions are confined to a basic domain revolving around geometric figures, ensuring that responses are independent of any pre-existing intuition about the world and rely solely on deduction. An empirical analysis involving zero-shot and few-shot prompting across 24 LLMs of different sizes reveals that, while LLMs with over 70 billion parameters perform better in the zero-shot setting, there is still considerable room for improvement. An additional test with chain-of-thought prompting over 22 LLMs shows that this prompting strategy can aid or harm model performance, depending on whether the rationale is requested before or after the answer.
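
The abstract contrasts zero-shot, few-shot, and chain-of-thought prompting, with the latter varied by whether the rationale is requested before or after the answer. The following is a minimal illustrative sketch of how such prompt variants could be constructed; the question text, exemplars, and instruction wording are hypothetical and not taken from the paper's benchmark.

# Illustrative sketch only: hypothetical geometric-figure question and exemplars.
QUESTION = "All squares are rectangles. Figure A is a square. Is Figure A a rectangle?"

FEW_SHOT_EXEMPLARS = (
    "Q: All circles are ellipses. Figure B is a circle. Is Figure B an ellipse?\n"
    "A: Yes\n\n"
)

def zero_shot(question: str) -> str:
    # The model sees only the question and an answer cue.
    return f"Q: {question}\nA:"

def few_shot(question: str) -> str:
    # Solved exemplars precede the target question.
    return FEW_SHOT_EXEMPLARS + f"Q: {question}\nA:"

def chain_of_thought(question: str, rationale_first: bool = True) -> str:
    # The abstract contrasts asking for the rationale before vs. after the answer.
    if rationale_first:
        instruction = "Explain your reasoning step by step, then give the final answer."
    else:
        instruction = "Give the final answer first, then explain your reasoning."
    return f"Q: {question}\n{instruction}\nA:"

if __name__ == "__main__":
    for name, prompt in [
        ("zero-shot", zero_shot(QUESTION)),
        ("few-shot", few_shot(QUESTION)),
        ("CoT (rationale first)", chain_of_thought(QUESTION, rationale_first=True)),
        ("CoT (answer first)", chain_of_thought(QUESTION, rationale_first=False)),
    ]:
        print(f"--- {name} ---\n{prompt}\n")
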
Type: paper
Keywords: Generative AI; Knowledge Representation and Reasoning; LLM benchmark; Natural Language Processing
Language: English
Conference: The 2024 IEEE/WIC International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT'24), 09-12 December 2024
Proceedings: 2024 IEEE/WIC International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT)
ISBN: 9798331504946
Year: 2024
Pages: 125-132
Rights: reserved
Raganato, A., Peñaloza, R., Viviani, M., Pasi, G. (2024). Reasoning Capabilities and Invariability of Large Language Models. In 2024 IEEE/WIC International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT) (pp.125-132). Institute of Electrical and Electronics Engineers Inc. [10.1109/WI-IAT62293.2024.00025].
Files in this record:
File: Raganato et al-2024-WI-IAT-VoR.pdf
Attachment type: Publisher's Version (Version of Record, VoR)
License: All rights reserved
Size: 196.1 kB
Format: Adobe PDF
Access: Archive administrators only (copy available on request)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/558362
Citations
  • Scopus 0
  • Web of Science 0