Publication Date:
2010
abstract:
The issue of how to experimentally evaluate information extraction (IE) systems has received hardly any satisfactory solution in the literature. In this paper we propose a novel evaluation model for IE and argue that, among other things, it allows (i) a correct appreciation of the degree of overlap between predicted and true segments, and (ii) a fair evaluation of the ability of a system to correctly identify segment boundaries. We describe the properties of this model, also by presenting a re-evaluation of the results of the CoNLL'03 and CoNLL'02 Shared Tasks on Named Entity Extraction.
Iris type:
04.01 Contributo in Atti di convegno
Keywords:
Information Search and Retrieval; Natural Language Processing; Experimental evaluation; Information Extraction; Wrapper induction
List of contributors: