On the interaction of automatic evaluation and task framing in headline style transfer
Conference proceedings contribution
Publication date:
2020
Abstract:
An ongoing debate in the NLG community concerns the best way to evaluate systems, with human evaluation often being considered the most reliable method, compared to corpus-based metrics. However, tasks involving subtle textual differences, such as style transfer, tend to be hard for humans to perform. In this paper, we propose an evaluation method for this task based on purposely-trained classifiers, showing that it better reflects system differences than traditional metrics such as BLEU and ROUGE.
CRIS type:
04.01 Conference proceedings contribution
Keywords:
natural language generation; evaluation; style
Author list:
Dell'Orletta, Felice
Link to full record: