Neural Abstractive Summarization: A Brief Survey

Lu Qian*, Haiyang Zhang, Wei Wang, Dawei Liu, Xin Huang

*Corresponding author for this work

Research output: Chapter in Book or Report/Conference proceeding › Conference Proceeding › peer-review

1 Citation (Scopus)

Abstract

Owing to advances in neural networks, abstractive summarization has received more attention than extractive summarization and has made significant progress in generating fluent, human-like summaries with novel expressions. The sequence-to-sequence (seq2seq) framework has become the primary approach to abstractive summarization, employing encoder-decoder architectures based on RNNs, CNNs, or Transformers. In this paper, we review the neural models for abstractive summarization that are based on the seq2seq framework. Moreover, we discuss some of the most effective techniques for improving seq2seq models and identify two challenging directions for in-depth investigation: generating query-based abstractive summaries and incorporating commonsense knowledge.
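The encoder-decoder abstraction the abstract describes can be sketched in miniature. The toy below is a hypothetical, untrained illustration in pure Python, not a model from the paper: the vocabulary, hidden size, and randomly initialised weights are all assumptions. The encoder compresses a token sequence into a fixed-length hidden state, and the decoder greedily emits tokens conditioned on that state, which is the core seq2seq loop regardless of whether the cells are RNNs, CNNs, or Transformer layers.

```python
import math
import random

random.seed(0)

# Hypothetical toy vocabulary and dimensions (assumptions for illustration).
VOCAB = ["<pad>", "<s>", "</s>", "the", "cat", "sat"]
V, H = len(VOCAB), 4  # vocab size, hidden size

def rand_matrix(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

# Untrained parameters: real systems learn these by backpropagation.
W_enc, U_enc = rand_matrix(H, V), rand_matrix(H, H)
W_dec, U_dec = rand_matrix(H, V), rand_matrix(H, H)
W_out = rand_matrix(V, H)  # hidden state -> vocabulary logits

def step(W, U, x_id, h):
    # One Elman-RNN step: h' = tanh(W @ onehot(x) + U @ h).
    # With a one-hot input, W @ onehot(x) is just column x_id of W.
    return [math.tanh(W[i][x_id] + sum(U[i][j] * h[j] for j in range(H)))
            for i in range(H)]

def encode(token_ids):
    # Fold the whole source sequence into one fixed-length context vector.
    h = [0.0] * H
    for t in token_ids:
        h = step(W_enc, U_enc, t, h)
    return h

def decode(context, max_len=5):
    # Greedy decoding: at each step pick the highest-scoring token.
    h, x, out = context, VOCAB.index("<s>"), []
    for _ in range(max_len):
        h = step(W_dec, U_dec, x, h)
        logits = [sum(W_out[i][j] * h[j] for j in range(H)) for i in range(V)]
        x = max(range(V), key=lambda i: logits[i])
        if VOCAB[x] == "</s>":
            break
        out.append(VOCAB[x])
    return out

src = [VOCAB.index(w) for w in ["the", "cat", "sat"]]
summary = decode(encode(src))
```

Because the weights are random, the generated tokens are meaningless; the point is only the data flow: variable-length input → fixed context → step-by-step generation. Attention mechanisms, one of the seq2seq improvements surveys like this one typically cover, replace the single context vector with a per-step weighted view of all encoder states.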

Original language: English
Title of host publication: 2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence, CCAI 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 50-58
Number of pages: 9
ISBN (Electronic): 9798350335262
DOIs
Publication status: Published - 2023
Event: 3rd IEEE International Conference on Computer Communication and Artificial Intelligence, CCAI 2023 - Taiyuan, China
Duration: 26 May 2023 - 28 May 2023

Publication series

Name: 2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence, CCAI 2023

Conference

Conference: 3rd IEEE International Conference on Computer Communication and Artificial Intelligence, CCAI 2023
Country/Territory: China
City: Taiyuan
Period: 26/05/23 - 28/05/23

Keywords

  • abstractive summarization
  • neural network
  • pre-trained models
  • seq2seq
  • Transformer

