
Artificial intelligence seeks to explain how intelligent agents behave. Language is one of the most important media of communication, and studying theories that can define the meaning of natural language expressions is very important. Linguists have successfully used artificial languages based on logics, but a theory that has had a significant impact in artificial intelligence is distributional semantics. Distributional semantics states that the meaning of natural language expressions can be derived from the context in which those expressions appear. This theory has been implemented by algorithms that generate vector representations of natural language expressions such that similar expressions are represented by similar vectors. In recent years, cognitive scientists have pointed out that these representations correlate with associative learning and that they can also capture biases and stereotypes present in text. It thus becomes important to find methodologies for comparing representations that come from different sources. For example, using these algorithms on texts from different periods generates different representations: since language changes over time, finding methodologies to compare how words have shifted is an important task for artificial intelligence (e.g., the word "Amazon" has changed its prevalent meaning in recent years). In this thesis, we introduce a corpus-based comparative model that allows us to compare representations of different sources generated with distributional semantics. We propose a model that is both efficient and effective, and we show that it can also handle entity names, not just words, overcoming problems related to the ambiguity of language. Finally, we show that it is possible to combine these methods with logical approaches and make comparisons using logical constructs.

(2020). Corpus-based Comparison of Distributional Models of Language and Knowledge Graphs. (Tesi di dottorato, Università degli Studi di Milano-Bicocca, 2020).

Corpus-based Comparison of Distributional Models of Language and Knowledge Graphs

BIANCHI, FEDERICO
2020

Abstract

One of the main goals of artificial intelligence is understanding how intelligent agents act. Language is one of the most important media of communication, and studying theories that can account for the meaning of natural language expressions is a crucial task in artificial intelligence. Distributional semantics states that the meaning of natural language expressions can be derived from the contexts in which the expressions appear. This theory has been implemented by algorithms that generate vector representations of natural language expressions such that similar expressions are represented by similar vectors. In recent years, several cognitive scientists have shown that these representations correlate with associative learning and that they capture cognitive biases and stereotypes as they are encoded in text corpora. If language encodes important aspects of cognition and of our associative knowledge, and language usage changes across contexts, then comparing language usage in different contexts may reveal important associative knowledge patterns. To reveal these patterns, we need ways to compare distributional representations that are generated from different text corpora. For example, applying these algorithms to textual documents from different periods generates different representations: since language evolves over time, finding a way to compare words whose meaning has shifted over time is a valuable task for artificial intelligence (e.g., the word "Amazon" has changed its prevalent meaning in recent years). In this thesis, we introduce a corpus-based comparative model that allows us to compare representations of different sources generated under the distributional semantic theory.
We propose a model that is both effective and efficient, and we show that it can also handle entity names, not just words, overcoming some problems that follow from the ambiguity of natural language. Finally, we combine these methods with logical approaches: we show that we can perform logical reasoning on these representations and make comparisons based on logical constructs.
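The core idea of comparing distributional representations across corpora can be illustrated with a minimal sketch. This is not the thesis's actual model: it shows one standard baseline approach, in which embeddings trained on two corpora are mapped into a shared space via orthogonal Procrustes alignment over shared anchor words, after which low cross-corpus cosine similarity for a word signals a meaning shift (as in the "Amazon" example). All names and the toy 2-D vectors below are illustrative assumptions.

```python
import numpy as np

def align_spaces(X, Y):
    """Orthogonal Procrustes: find the rotation W minimizing ||XW - Y||_F,
    mapping anchor-word vectors from corpus A into the space of corpus B."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy example: 2-D "embeddings" for five shared anchor words.
# Corpus B is a hidden 90-degree rotation of corpus A, so alignment
# should recover near-identical vectors for semantically stable words.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 2))              # anchor words, corpus A
R = np.array([[0.0, -1.0], [1.0, 0.0]])  # hidden rotation between spaces
B = A @ R                                # same words, corpus B

W = align_spaces(A, B)
aligned = A @ W

# After alignment, a word that kept its meaning has high cross-corpus
# similarity; a word that shifted its meaning would score low.
sims = [cosine(aligned[i], B[i]) for i in range(len(A))]
print(all(s > 0.99 for s in sims))  # → True
```

The orthogonality constraint matters: it preserves distances and angles within each space, so differences measured after alignment reflect changes in word usage rather than artifacts of the mapping.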
PALMONARI, MATTEO LUIGI
MESSINA, VINCENZINA
distributional semantics; AI; knowledge graphs; language; neural networks
INF/01 - INFORMATICA
English
18-feb-2020
INFORMATICA
32
2018/2019
open
Files in this item:
phd_unimib_746914.pdf — doctoral thesis, open access, Adobe PDF, 3.62 MB

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/263553