Rubini, R., Vimercati, M., & Palmonari, M. (2024). PROMET: Parameter-Efficient Few-Shot Fine-Grained Entity Typing with Implicit Mask Filling. In 2024 IEEE/WIC International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT) (pp. 141-149). Institute of Electrical and Electronics Engineers Inc. doi: 10.1109/WI-IAT62293.2024.00027
PROMET: Parameter-Efficient Few-Shot Fine-Grained Entity Typing with Implicit Mask Filling
Rubini R.; Vimercati M.; Palmonari M.
2024
Abstract
Few-shot Fine-grained Entity Typing (FET) techniques classify entities recognized in text into classes from a taxonomy using a limited amount of training data. The task supports entity extraction and knowledge graph (KG) construction by classifying novel entities according to a taxonomy of interest. In this paper, we present PROMET (PROmpt-tuning using implicit Mask filling for Entity Typing), a novel parameter-efficient prompt-based approach to few-shot FET that exploits implicit mask filling. Using the embedding of the masked token avoids the need for a manually defined verbalizer to initialize the matrix that maps predicted tokens to entity classes. By directly fine-tuning Adapters and a linear layer instead of 1) the whole pre-trained language model (PLM) and 2) the verbalizer matrix, PROMET uses a number of trainable parameters that is two orders of magnitude smaller than that of existing models, while achieving better or comparable performance on benchmark datasets. Finally, we extend PROMET (and modify the state-of-the-art few-shot FET approach) to work in multi-label inference settings, consistently with earlier work in the field and with typing patterns in KGs.
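The abstract outlines the core mechanism without implementation detail, so the following is a minimal sketch, not the authors' code, of how implicit mask filling for entity typing can work: the contextual embedding of the [MASK] token, placed next to the entity mention in a prompt, is fed to a trainable linear layer that scores entity classes, with independent sigmoids enabling multi-label inference. The prompt template, the toy class list, and the choice to freeze the encoder entirely (omitting Adapters) are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch (not the authors' implementation): implicit mask filling
# for few-shot entity typing. Assumptions: a BERT-style encoder, a toy
# class list, and a simple "<mention> is a [MASK]." prompt. Adapters are
# omitted and the encoder is frozen, so only the linear head is trainable.
import torch
from transformers import AutoTokenizer, AutoModel

ENTITY_CLASSES = ["person", "organization", "location"]  # hypothetical taxonomy

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
for p in encoder.parameters():      # freeze the PLM; PROMET would instead
    p.requires_grad = False         # insert and train Adapter modules here

# Trainable linear layer standing in for a hand-crafted verbalizer matrix:
# it maps the [MASK] embedding directly to entity-class logits.
head = torch.nn.Linear(encoder.config.hidden_size, len(ENTITY_CLASSES))

def class_logits(context: str, mention: str) -> torch.Tensor:
    # Illustrative prompt: append "<mention> is a [MASK]." to the context.
    prompt = f"{context} {mention} is a {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    hidden = encoder(**inputs).last_hidden_state             # (1, seq, dim)
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    return head(hidden[0, mask_pos])                         # (num_classes,)

# Multi-label inference: independent per-class sigmoids with a threshold,
# rather than a single softmax over mutually exclusive classes.
logits = class_logits("Acme Corp. opened an office in Berlin.", "Acme Corp.")
predicted = [c for c, p in zip(ENTITY_CLASSES, torch.sigmoid(logits)) if p > 0.5]
print(predicted)
```

In a few-shot setting, the head (and, in PROMET, the Adapters) would be trained on the labeled examples with a binary cross-entropy loss over the class logits; the untrained head above merely illustrates the forward pass.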


