Translating math formula images to LaTeX sequences using deep neural networks with sequence-level training
ORIGINAL PAPER
Zelun Wang1 · Jyh-Charn Liu1

Received: 7 November 2019 / Revised: 8 June 2020 / Accepted: 30 September 2020
© Springer-Verlag GmbH Germany, part of Springer Nature 2020
Abstract

In this paper, we propose a deep neural network model with an encoder–decoder architecture that translates images of math formulas into their LaTeX markup sequences. The encoder is a convolutional neural network that transforms images into a group of feature maps. To better capture the spatial relationships of math symbols, the feature maps are augmented with 2D positional encoding before being unfolded into a vector. The decoder is a stacked bidirectional long short-term memory model integrated with the soft attention mechanism, which works as a language model to translate the encoder output into a sequence of LaTeX tokens. The neural network is trained in two steps. The first step is token-level training using the maximum likelihood estimation as the objective function. At completion of the token-level training, the sequence-level training objective function is employed to optimize the overall model based on the policy gradient algorithm from reinforcement learning. Our design also overcomes the exposure bias problem by closing the feedback loop in the decoder during sequence-level training, i.e., feeding in the predicted token instead of the ground truth token at every time step. The model is trained and evaluated on the IM2LATEX-100K dataset and shows state-of-the-art performance on both sequence-based and image-based evaluation metrics.

Keywords Deep learning · Encoder–decoder · Seq2seq model · Image to LaTeX · Reinforcement learning · Math formulas
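The abstract does not spell out the exact form of the 2D positional encoding used here. As a point of reference, a minimal sketch of one common scheme, sinusoidal encoding applied separately to the row and column indices of the feature map and concatenated along the channel dimension, might look like the following (all function names are illustrative, not from the paper):

```python
import numpy as np

def sinusoid_1d(positions, dim):
    # Standard 1D sinusoidal encoding: even channels get sin, odd channels get cos,
    # with geometrically spaced frequencies.
    pe = np.zeros((len(positions), dim))
    div = np.exp(-np.log(10000.0) * np.arange(0, dim, 2) / dim)
    pe[:, 0::2] = np.sin(positions[:, None] * div)
    pe[:, 1::2] = np.cos(positions[:, None] * div)
    return pe

def positional_encoding_2d(height, width, d_model):
    # Half the channels encode the row index, the other half the column index,
    # so each (row, col) cell of the feature map gets a unique d_model-dim code.
    assert d_model % 2 == 0
    half = d_model // 2
    pe_h = sinusoid_1d(np.arange(height), half)   # shape (H, half)
    pe_w = sinusoid_1d(np.arange(width), half)    # shape (W, half)
    pe = np.zeros((height, width, d_model))
    pe[:, :, :half] = pe_h[:, None, :]            # broadcast over columns
    pe[:, :, half:] = pe_w[None, :, :]            # broadcast over rows
    return pe

# The encoding would be added to (or concatenated with) the CNN feature maps
# before they are unfolded into the vector sequence consumed by the decoder.
pe = positional_encoding_2d(height=8, width=32, d_model=512)
```

Because the code is position-dependent but content-independent, the attention mechanism in the decoder can distinguish, say, a superscript from a subscript occupying visually similar feature cells.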
1 Introduction

Math formulas often carry the most significant technical substances in many science, technology, engineering and math (STEM) fields. Being able to extract the math formulas from digital documents and translate them into markup languages is very useful for a wide range of information retrieval tasks. Portable document format (PDF) is the de facto standard publication format, which makes document distribution very easy and reliable. Although math formulas can be recognized by human readers relatively easily, computer-based math formula recognition in PDF documents remains a major challenge. This is mainly because the PDF format does not contain tagged information about its math contents. Recognizing math formulas from PDF documents is intrinsically difficult because of the presence of unusual math symbols and complex layout structures. In addition, math formulas in PDF documents could partially be represented by blocks of graphics directly rendered from the PDF glyphs, which preserves the correct shapes but misses the meaning of contents. These problems would be readily solved if the markup sources o

Jyh-Charn Liu [email protected]
Zelun Wang [email protected]

1 Department of Computer Science and Engineering, Texas A&M University, College Station, USA