From task to evaluation: an automatic text summarization review

Lingfeng Lu, Yang Liu, Weiqiang Xu, Huakang Li, Guozi Sun*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Automatic summarization is attracting increasing attention as one of the most promising research areas. In recent years this technology has been applied in various real-world settings and has received a positive response. However, the applicability of conventional evaluation metrics cannot keep up with rapidly evolving summarization task formats and the indicators that follow from them. Current research demands that automatic summarization deliver not only readability and fluency, but also informativeness and consistency. Diversified application scenarios also bring new challenges for both generative language models and evaluation metrics. In this review, we analyze and specifically focus on the differences between task formats and evaluation metrics.
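Among the conventional evaluation metrics the abstract alludes to, lexical-overlap scores such as ROUGE are the most widely used. As an illustrative sketch only (not the paper's own method), the following shows how ROUGE-1 recall, the fraction of reference unigrams recovered by a candidate summary, can be computed; the example sentences are hypothetical:

```python
from collections import Counter

def rouge1_recall(reference: str, candidate: str) -> float:
    """ROUGE-1 recall: share of reference unigrams found in the candidate.

    Uses clipped counts so a word repeated in the candidate cannot be
    credited more times than it appears in the reference.
    """
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    overlap = sum(min(count, cand_counts[token])
                  for token, count in ref_counts.items())
    return overlap / sum(ref_counts.values())

# Hypothetical reference/candidate pair for illustration.
score = rouge1_recall("the cat sat on the mat", "the cat is on the mat")
print(f"{score:.3f}")  # 5 of 6 reference unigrams matched
```

Such purely lexical scores capture surface overlap but not the informativeness or consistency requirements discussed above, which is precisely the gap between task formats and evaluation metrics that the review examines.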

Original language: English
Pages (from-to): 2477-2507
Number of pages: 31
Journal: Artificial Intelligence Review
Publication status: Published - Nov 2023


Keywords:

  • Automatic text summarization
  • Natural language generation
  • Real-world application
  • Text summarization evaluation


