TY - GEN
T1 - Improving Handwritten Mathematical Expression Recognition via an Attention Refinement Network
AU - Liu, Jiayi
AU - Wang, Qiufeng
AU - Liao, Wei
AU - Chen, Jianghan
AU - Huang, Kaizhu
N1 - Publisher Copyright:
© 2024, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
PY - 2024
Y1 - 2024
N2 - Handwritten mathematical expression recognition (HMER), typically regarded as a sequence-to-sequence problem, has made great progress in recent years, with RNN-based models being widely adopted. Although Transformer-based models have demonstrated success in many areas, their performance on HMER remains unsatisfactory due to issues with the standard attention mechanism. Therefore, we propose to improve performance via an attention refinement network within the Transformer framework for HMER. We first adopt shifted window attention (SWA) from the Swin Transformer to capture the spatial context of the whole image. Moreover, we propose a refined coverage attention (RCA) to overcome the lack of coverage in the standard attention mechanism, where we utilize a convolutional kernel with a gating function to obtain coverage features. With the proposed RCA, we refine coverage attention to attenuate the issue of repeatedly focusing on the same areas in long sequences. In addition, we utilize a pyramid data augmentation method to generate mathematical expression images at multiple resolutions to enhance model generalization. We evaluate the proposed attention refinement network on the HMER benchmark datasets of CROHME 2014/2016/2019, and extensive experiments demonstrate its effectiveness.
AB - Handwritten mathematical expression recognition (HMER), typically regarded as a sequence-to-sequence problem, has made great progress in recent years, with RNN-based models being widely adopted. Although Transformer-based models have demonstrated success in many areas, their performance on HMER remains unsatisfactory due to issues with the standard attention mechanism. Therefore, we propose to improve performance via an attention refinement network within the Transformer framework for HMER. We first adopt shifted window attention (SWA) from the Swin Transformer to capture the spatial context of the whole image. Moreover, we propose a refined coverage attention (RCA) to overcome the lack of coverage in the standard attention mechanism, where we utilize a convolutional kernel with a gating function to obtain coverage features. With the proposed RCA, we refine coverage attention to attenuate the issue of repeatedly focusing on the same areas in long sequences. In addition, we utilize a pyramid data augmentation method to generate mathematical expression images at multiple resolutions to enhance model generalization. We evaluate the proposed attention refinement network on the HMER benchmark datasets of CROHME 2014/2016/2019, and extensive experiments demonstrate its effectiveness.
KW - Handwritten mathematical expression recognition
KW - Pyramid data augmentation
KW - Refined coverage attention
KW - Shifted window attention
UR - http://www.scopus.com/inward/record.url?scp=85180153172&partnerID=8YFLogxK
U2 - 10.1007/978-981-99-8178-6_41
DO - 10.1007/978-981-99-8178-6_41
M3 - Conference Proceeding
AN - SCOPUS:85180153172
SN - 9789819981779
T3 - Communications in Computer and Information Science
SP - 543
EP - 555
BT - Neural Information Processing - 30th International Conference, ICONIP 2023, Proceedings
A2 - Luo, Biao
A2 - Cheng, Long
A2 - Wu, Zheng-Guang
A2 - Li, Hongyi
A2 - Li, Chaojie
PB - Springer Science and Business Media Deutschland GmbH
T2 - 30th International Conference on Neural Information Processing, ICONIP 2023
Y2 - 20 November 2023 through 23 November 2023
ER -