Cadeddu, A., Chessa, A., De Leo, V., Fenu, G., Motta, E., Osborne, F., et al. (2023). Enhancing Scholarly Understanding: A Comparison of Knowledge Injection Strategies in Large Language Models. In Proceedings of the Workshop on Deep Learning for Knowledge Graphs (DL4KG 2023) co-located with the 21st International Semantic Web Conference (ISWC 2023) (pp. 1-6). CEUR-WS.

Enhancing Scholarly Understanding: A Comparison of Knowledge Injection Strategies in Large Language Models

Osborne, F.
2023

Abstract

Transformer-based models such as BERT have achieved remarkable performance across multiple natural language processing domains. However, these models face challenges when dealing with highly specialized domains, such as scientific literature. In this paper, we conduct a comprehensive analysis of knowledge injection strategies for transformers in the scientific domain, evaluating four distinct methods for injecting external knowledge into these models. We assess the strategies on a single-label multi-class classification task involving scientific papers. To this end, we develop a public benchmark based on 12k scientific papers from the AIDA knowledge graph, categorized into three fields, and we use the Computer Science Ontology as our external knowledge source. Our findings indicate that most of the proposed knowledge injection techniques outperform the BERT baseline.
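
This record does not include the paper's implementation details, so the sketch below is a rough illustration only: a plain BERT baseline for the single-label multi-class task described above, plus one generic style of knowledge injection (appending ontology concepts to the input text). The model name, the three field labels, and the cso_concepts helper are assumptions made for illustration, not taken from the paper, and the classifier head is untrained here, so real use would require fine-tuning on the 12k-paper benchmark.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed label set: the record says three fields but does not name them.
LABELS = ["field_a", "field_b", "field_c"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# The classification head is randomly initialized; it would be fine-tuned
# on the benchmark before producing meaningful predictions.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)

def cso_concepts(text: str) -> list[str]:
    # Hypothetical helper: look up Computer Science Ontology concepts for
    # the text, e.g. via a topic extractor. Hard-coded for illustration.
    return ["semantic web", "neural networks"]

def classify(abstract: str, inject_knowledge: bool = True) -> str:
    text = abstract
    if inject_knowledge:
        # One common injection strategy: concatenate KG concepts to the
        # input so the encoder sees them as additional context.
        text = abstract + " [SEP] " + "; ".join(cso_concepts(abstract))
    inputs = tokenizer(text, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(classify("We study transformer models for tagging scholarly papers."))

Input augmentation is only one family of injection strategies; alternatives modify token embeddings or add adapter layers. The four methods actually compared in the paper are described in the full text available at the URL below.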
Type: paper
Keywords: BERT; Classification Tasks; Feature Engineering; Knowledge Graphs; Natural Language Processing
Language: English
Event: Deep Learning for Knowledge Graphs 2023, 6-10 November 2023
Editors: Alam, M.; Buscaldi, D.; Cochez, M.; Osborne, F.; Reforgiato Recupero, D.
Proceedings: Proceedings of the Workshop on Deep Learning for Knowledge Graphs (DL4KG 2023) co-located with the 21st International Semantic Web Conference (ISWC 2023)
Year: 2023
Volume: 3559 (CEUR Workshop Proceedings)
Pages: 1-6
URL: https://ceur-ws.org/Vol-3559/
Access: open

Files in this record:
Cadeddu-2023-DL4KG-VoR.pdf (open access)
Attachment type: Publisher's Version (Version of Record, VoR)
License: Creative Commons
Size: 949.41 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/455378