A Benchmark to Evaluate LLMs’ Proficiency on Italian Student Competencies

Mercorio, F.; Mezzanzanica, M.; Potertì, D.; Serino, A.; Seveso, A.
In press

Abstract

Recent advancements in Large Language Models (LLMs) have significantly enhanced their ability to generate and manipulate human language, highlighting their potential across various applications. Evaluating LLMs in languages other than English is crucial for ensuring their linguistic versatility, cultural relevance, and applicability in diverse global contexts, thus broadening their usability and effectiveness. We tackle this challenge by introducing a structured benchmark based on the INVALSI tests, a set of well-established assessments designed to measure educational competencies across Italy. Our study makes three primary contributions: first, we adapt the INVALSI tests as a benchmark for automated LLM evaluation, rigorously reworking the test format to suit automated processing while retaining the essence of the original tests; second, we provide a detailed assessment of current LLMs, offering a crucial reference point for the academic community; finally, we visually compare the performance of these models against human results. The benchmark is publicly available together with a comprehensive evaluation suite, ensuring it remains a current and valuable resource for advancing industrial-strength NLP applications.
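To make the idea of "adapting the test format to suit automated processing" concrete, the sketch below shows how a single INVALSI-style multiple-choice item could be posed to a model and scored automatically. It is a minimal illustration, not the paper's released evaluation suite: the item schema, the answer-letter extraction, and the `ask_model` stub are all assumptions introduced here for demonstration.

```python
# Hypothetical sketch: scoring one INVALSI-style multiple-choice item.
# Not the authors' evaluation suite; item format and helpers are illustrative.
import re
from typing import Callable, Dict


def score_item(item: Dict, ask_model: Callable[[str], str]) -> bool:
    """Return True if the model's extracted answer letter matches the gold answer."""
    prompt = (
        f"{item['question']}\n"
        + "\n".join(f"{letter}) {text}" for letter, text in item["options"].items())
        + "\nRispondi solo con la lettera dell'opzione corretta."
    )
    reply = ask_model(prompt)
    # Pull the first standalone option letter (A-D) out of the free-text reply.
    match = re.search(r"\b([A-D])\b", reply.upper())
    return bool(match) and match.group(1) == item["answer"]


# Usage with a trivial stub standing in for a real LLM call.
example = {
    "question": "Qual è il soggetto della frase 'Il gatto dorme'?",
    "options": {"A": "Il gatto", "B": "dorme", "C": "la frase", "D": "nessuno"},
    "answer": "A",
}
print(score_item(example, ask_model=lambda prompt: "A"))  # True
```

In practice, the `ask_model` callable would wrap an actual LLM API, and per-item correctness would be aggregated into benchmark-level accuracy for comparison against human results.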
Type: paper
Keywords: Large Language Models, Benchmark, Evaluation
Language: English
Conference: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases
Year: 2025
Status: In press
none
Mercorio, F., Mezzanzanica, M., Potertì, D., Serino, A., Seveso, A. (In press). A Benchmark to Evaluate LLMs’ Proficiency on Italian Student Competencies. Presented at: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, Porto, Portugal.
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/555281