Publication date:
2020
Abstract:
This paper explores the relationship between Neural Language Model (NLM) perplexity and sentence readability. Starting from the evidence that NLMs implicitly acquire sophisticated linguistic knowledge from huge amounts of training data, our goal is to investigate whether perplexity is affected by the linguistic features used to automatically assess sentence readability, and whether there is a correlation between the two metrics. Our findings suggest that this correlation is actually quite weak and that the two metrics are affected by different linguistic phenomena.
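The two quantities compared in the abstract can be made concrete with a small sketch: perplexity is the exponentiated average negative log-probability an autoregressive language model assigns to a sentence's tokens, and the correlation with a readability score can be measured with Pearson's coefficient. The token probabilities and readability scores below are purely hypothetical toy values, not data from the paper.

```python
import math

def perplexity(token_probs):
    # PPL = exp(-(1/N) * sum(log p_i)), where p_i is the probability
    # the language model assigns to the i-th token of the sentence
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-token probabilities for three sentences
sentences = [
    [0.20, 0.10, 0.30],
    [0.50, 0.40, 0.60],
    [0.05, 0.10, 0.08],
]
ppl_scores = [perplexity(s) for s in sentences]

# Hypothetical readability difficulty scores for the same sentences
readability = [3.1, 1.8, 4.5]

print(pearson(ppl_scores, readability))
```

A weak correlation, as the paper reports, would show up here as a coefficient close to zero rather than to 1 or -1.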
CRIS type:
04.01 Conference proceedings contribution
Keywords:
nlp; neural language models; readability
Authors:
Brunato, Dominique Pierina; Miaschi, Alessio; Alzetta, Chiara; Dell'Orletta, Felice; Venturi, Giulia
Link to full record: