On the interaction of automatic evaluation and task framing in headline style transfer
Conference Paper
Publication Date:
2020
Abstract:
An ongoing debate in the NLG community concerns the best way to evaluate systems, with human evaluation often being considered the most reliable method, compared to corpus-based metrics. However, tasks involving subtle textual differences, such as style transfer, tend to be hard for humans to perform. In this paper, we propose an evaluation method for this task based on purposely-trained classifiers, showing that it better reflects system differences than traditional metrics such as BLEU and ROUGE.
Iris type:
04.01 Contribution in conference proceedings (Contributo in Atti di convegno)
Keywords:
natural language generation; evaluation; style
List of contributors: