(2024). Integrating Word Embeddings and Taxonomy Learning for Enhanced Lexical Domain Modelling. (Tesi di dottorato, Università degli Studi di Milano-Bicocca, 2024).

Integrating Word Embeddings and Taxonomy Learning for Enhanced Lexical Domain Modelling

GIABELLI, ANNA
2024

Abstract

This PhD thesis integrates lexical taxonomies and word embeddings to develop novel methodologies for enhancing Natural Language Processing representations. Lexical taxonomies organise human knowledge hierarchically and offer formal descriptions of concepts and their relationships, supporting both syntactic and semantic exchanges. Word embeddings, in turn, are vector representations of words that capture linguistic patterns and lexical semantics from large corpora, building on the idea that words occurring in similar contexts tend to have similar meanings. This research explores the combination of word embeddings with the structure of lexical taxonomies, enabling the selection of word embeddings that fit the hierarchical structure of the concepts they represent. Word embeddings can also help keep taxonomies up to date as languages and knowledge domains evolve: by leveraging vast amounts of textual data, they facilitate the placement of new concepts in the appropriate taxonomic positions. Moreover, word embeddings can support aligning and linking taxonomies, which is crucial when multiple taxonomies within a single domain, built by different institutions for different purposes, need to communicate effectively. The thesis is divided into several parts. Part I introduces the two fundamental subjects of word embeddings and lexical taxonomies. Part II focuses on two methods for evaluating word embeddings: TaxoVec, a framework for selecting taxonomy-aware word embeddings that leverages a measure of taxonomic semantic similarity (HSS), and VEC2BEST, a general evaluation framework for word embeddings that does not require a specific taxonomy and provides a comprehensive evaluation metric, the Principal Component Evaluation (PCE), for each model. Part III details two methodologies for enhancing and aligning lexical taxonomies using word embeddings: NEE enables taxonomy enrichment by estimating how well data conform to a given taxonomy and identifying new entities and concepts, while WETA is a domain-independent method for automatic taxonomy alignment that combines hierarchical similarity and a classification task into a single scoring function. Part IV showcases the practical applications of the proposed methodologies in the context of Labour Market Intelligence. This research contributes to Natural Language Processing by providing innovative techniques for enhancing language representation and knowledge, ultimately benefiting a range of applications in this domain.
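As a rough illustration of the taxonomy-aware evaluation idea behind TaxoVec, the sketch below is not taken from the thesis: the HSS measure is defined there, and here a generic Wu-Palmer-style similarity and random vectors stand in for it and for pretrained embeddings. It scores hypothetical embedding models by how well their pairwise cosine similarities correlate with similarities derived from a toy taxonomy, so the better-fitting model can be selected.

```python
# Minimal sketch (assumptions: toy taxonomy, random stand-in vectors, Wu-Palmer-style
# similarity instead of the thesis's HSS) of selecting the embedding model whose
# pairwise similarities best agree with a taxonomy's hierarchical structure.
import numpy as np

# Toy taxonomy: child -> parent; roots have parent None.
parent = {"animal": None, "dog": "animal", "cat": "animal",
          "vehicle": None, "car": "vehicle", "truck": "vehicle"}

def ancestors(node):
    """Return the node plus all of its ancestors, from the node up to its root."""
    path = []
    while node is not None:
        path.append(node)
        node = parent[node]
    return path

def taxo_sim(a, b):
    """Wu-Palmer-style similarity: 2 * depth(lcs) / (depth(a) + depth(b))."""
    pa, pb = ancestors(a), ancestors(b)
    common = [n for n in pa if n in pb]          # shared ancestors, nearest first
    if not common:
        return 0.0
    depth = lambda n: len(ancestors(n))
    return 2 * depth(common[0]) / (depth(a) + depth(b))

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Two hypothetical embedding models; random vectors stand in for pretrained ones.
rng = np.random.default_rng(0)
vocab = list(parent)
models = {"model_A": {w: rng.normal(size=50) for w in vocab},
          "model_B": {w: rng.normal(size=50) for w in vocab}}

# Rank the models by how well their cosine similarities agree with the taxonomy.
pairs = [(a, b) for i, a in enumerate(vocab) for b in vocab[i + 1:]]
taxo = [taxo_sim(a, b) for a, b in pairs]
for name, vecs in models.items():
    emb = [cosine(vecs[a], vecs[b]) for a, b in pairs]
    fit = np.corrcoef(taxo, emb)[0, 1]           # simple correlation as a fit score
    print(f"{name}: taxonomy fit = {fit:.3f}")
```

In this toy setting the model with the higher correlation would be the one whose geometry better reflects the hierarchy; the thesis replaces the stand-in similarity with HSS and applies the idea to real taxonomies and pretrained embeddings.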
BIANCO, SIMONE
MERCORIO, FABIO
Word Embeddings; Lexical taxonomies; NLP; WE evaluation; Taxonomy enrichment
INF/01 - INFORMATICA
English
28-feb-2024
36
2022/2023
open
Files in this record:

phd_unimib_791989.pdf
Description: Integrating Word Embeddings and Taxonomy Learning for Enhanced Lexical Domain Modelling
Attachment type: Doctoral thesis
Access: open access
Size: 11.29 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/465159