Can we evaluate the quality of generated text?

Scott, D and Hardcastle, D (2008) Can we evaluate the quality of generated text? In: 6th Language Resources and Evaluation Conference, (LREC'08), Marrakech, Morocco.

Full text not available from this repository.


Evaluating the output of NLG systems is notoriously difficult, and performing assessments of text quality even more so. A range of automated and subject-based approaches to the evaluation of text quality have been taken, including comparison with a putative gold standard text, analysis of specific linguistic features of the output, expert review and task-based evaluation. In this paper we present the results of a variety of such approaches in the context of a case study application. We discuss the problems encountered in the implementation of each approach in the context of the literature, and propose that a test based on the Turing test for machine intelligence offers a way forward in the evaluation of the subjective notion of text quality.
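The gold-standard comparison mentioned in the abstract is commonly automated with n-gram overlap metrics in the BLEU family. As an illustrative sketch only (the paper's own evaluation setup is not described here), a minimal clipped bigram-precision score against a single reference text might look like this:

```python
from collections import Counter

def ngrams(tokens, n):
    """Return the multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_precision(candidate, reference, n=2):
    """Fraction of candidate n-grams also found in the reference.

    A crude BLEU-style overlap: higher means the generated text shares
    more surface n-grams with the gold-standard text. Counts are clipped
    so a repeated candidate n-gram cannot be credited more times than it
    occurs in the reference.
    """
    cand = ngrams(candidate.split(), n)
    ref = ngrams(reference.split(), n)
    if not cand:
        return 0.0
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    return overlap / sum(cand.values())

gold = "the cat sat on the mat"
generated = "the cat sat on a mat"
print(round(ngram_precision(generated, gold, n=2), 2))  # → 0.6
```

Such surface metrics are exactly what the abstract problematises: a fluent, high-quality generated text can score poorly simply because it diverges lexically from the single putative gold standard.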

Item Type: Conference or Workshop Item (Paper)
Schools and Departments: School of Engineering and Informatics > Informatics
Depositing User: Donia Scott
Date Deposited: 06 Feb 2012 20:37
Last Modified: 08 Jun 2012 10:51