UNI-FIND (cnr.it)
How Do BERT Embeddings Organize Linguistic Knowledge?

Conference proceedings contribution
Publication date:
2021
Abstract:
Several studies have investigated the linguistic information implicitly encoded in Neural Language Models. Most of these works focused on quantifying the amount and type of information available within their internal representations and across their layers. In line with this scenario, we proposed a different study, based on Lasso regression, aimed at understanding how the information encoded by BERT sentence-level representations is arranged within its hidden units. Using a suite of several probing tasks, we showed the existence of a relationship between the implicit knowledge learned by the model and the number of individual units involved in the encoding of this competence. Moreover, we found that it is possible to identify groups of hidden units that are more relevant for specific linguistic properties.
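The core idea of the abstract — a sparse Lasso probe whose nonzero coefficients single out the hidden units most relevant to a linguistic property — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it substitutes a toy synthetic matrix for real BERT sentence embeddings, uses a hand-rolled coordinate-descent Lasso in place of an off-the-shelf solver, and assumes a continuous property value as the probing target.

```python
import random

def lasso_coordinate_descent(X, y, alpha=0.1, n_sweeps=100):
    """Minimise (1/2n)*||y - Xw||^2 + alpha*||w||_1 by cyclic coordinate descent.

    The L1 penalty drives most coefficients exactly to zero, so the
    surviving coordinates mark the "units" the probe deems relevant.
    """
    n, d = len(X), len(X[0])
    w = [0.0] * d
    # per-coordinate squared norms (denominator of the update)
    z = [sum(X[i][j] ** 2 for i in range(n)) / n for j in range(d)]
    for _ in range(n_sweeps):
        for j in range(d):
            # correlation of unit j with the partial residual
            rho = sum(
                X[i][j] * (y[i] - sum(X[i][k] * w[k] for k in range(d))
                           + X[i][j] * w[j])
                for i in range(n)
            ) / n
            # soft-thresholding: weak coordinates are set exactly to zero
            if rho > alpha:
                w[j] = (rho - alpha) / z[j]
            elif rho < -alpha:
                w[j] = (rho + alpha) / z[j]
            else:
                w[j] = 0.0
    return w

# Toy stand-in for sentence embeddings: 5 "hidden units"; only units 0 and 2
# actually carry the (hypothetical) linguistic property.
random.seed(0)
n, d = 200, 5
X = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
y = [2.0 * x[0] - 1.5 * x[2] for x in X]

w = lasso_coordinate_descent(X, y, alpha=0.1)
relevant = [j for j, wj in enumerate(w) if abs(wj) > 1e-6]
print("relevant units:", relevant)
```

In the paper's setting, each row of `X` would be a BERT sentence representation and `y` the gold value of a probing task; counting and grouping the nonzero coefficients is what relates a linguistic property to the individual units that encode it.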
CRIS type:
04.01 Conference proceedings contribution
Keywords:
nlp; interpretability; deep learning
Authors:
Miaschi, Alessio; Dell'Orletta, Felice
Institutional authors:
DELL'ORLETTA FELICE
MIASCHI ALESSIO
Link to the full record:
https://iris.cnr.it/handle/20.500.14243/400473
General information
URL

https://www.aclweb.org/anthology/2021.deelio-1.6