Publication Date:
2021
abstract:
Probing tasks are frequently used to evaluate whether the representations of Neural Language Models (NLMs) encode linguistic information. However, it remains an open question whether probing classification tasks really enable such investigation or whether they simply pick up on surface patterns in the data. We present a method to investigate this question by comparing the accuracies of a set of probing tasks on gold and automatically generated control datasets. Our results suggest that probing tasks can be used as reliable diagnostic methods to investigate the linguistic information encoded in NLM representations.
Iris type:
04.01 Contributo in Atti di convegno
Keywords:
Neural Language Models; Linguistic probing; Treebanks
List of contributors:
Miaschi, Alessio; Alzetta, Chiara; Dell'Orletta, Felice; Venturi, Giulia; Brunato, Dominique Pierina
Published in: