
Evaluating Discourse Processing Algorithms

1994-10-10
9410006 | cmp-lg
In order to take steps towards establishing a methodology for evaluating Natural Language systems, we conducted a case study. We attempt to evaluate two different approaches to anaphoric processing in discourse by comparing the accuracy and coverage of two published algorithms for finding the co-specifiers of pronouns in naturally occurring texts and dialogues. We present the quantitative results of hand-simulating these algorithms, but this analysis naturally gives rise to both a qualitative evaluation and recommendations for performing such evaluations in general. We illustrate the general difficulties encountered with quantitative evaluation. These are problems with: (a) allowing for underlying assumptions, (b) determining how to handle underspecifications, and (c) evaluating the contribution of false positives and error chaining.