Recovering text sequences using deep learning models

This article presents the results of building, training, and evaluating models with Encoder-Decoder and Sequence-to-Sequence (Seq2Seq) architectures for the task of completing incomplete texts. Problems of this type often arise when restoring the contents of documents from low-quality images. The study addresses the practical problem of producing electronic copies of scanned documents of the «Roskadastr» PLC whose recognition is difficult or impossible with standard tools. The models were built and studied in Python using the high-level API of the Keras package. A dataset of several thousand pairs was assembled for training and evaluating the models; each pair consists of an incomplete text and the corresponding full text. To assess model quality, the loss function, accuracy, and the BLEU and ROUGE-L metrics were computed. Loss and accuracy evaluate the models at the level of predicting individual words, while BLEU and ROUGE-L measure the similarity between the full and reconstructed texts. The results show that both the Encoder-Decoder and Seq2Seq models cope with the task of reconstructing text sequences from a fixed set, but the transformer-based Seq2Seq model achieves better results in terms of training speed and quality.
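Below is a minimal, illustrative sketch (not the authors' code) of the pipeline the abstract describes: a word-level Keras encoder-decoder trained on (incomplete text, full text) pairs with word-level loss and accuracy, followed by sentence-level BLEU and ROUGE-L scoring of a reconstruction. The vocabulary size, sequence length, layer sizes, and the choice of the nltk and rouge-score packages are assumptions; the article does not specify them here.

```python
# Hedged sketch of an encoder-decoder text-completion model and its evaluation.
# All sizes and library choices below are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

vocab_size, max_len = 5000, 40      # assumed token vocabulary and sequence length
embed_dim, latent_dim = 128, 256    # assumed embedding and LSTM sizes

# Encoder: embeds the incomplete text and compresses it into a state vector.
enc_in = keras.Input(shape=(max_len,), name="incomplete_text")
x = layers.Embedding(vocab_size, embed_dim, mask_zero=True)(enc_in)
_, h, c = layers.LSTM(latent_dim, return_state=True)(x)

# Decoder: predicts the full text word by word (teacher forcing during training).
dec_in = keras.Input(shape=(max_len,), name="full_text_shifted_right")
y = layers.Embedding(vocab_size, embed_dim, mask_zero=True)(dec_in)
y = layers.LSTM(latent_dim, return_sequences=True)(y, initial_state=[h, c])
out = layers.Dense(vocab_size, activation="softmax")(y)

model = keras.Model([enc_in, dec_in], out)
# Loss and accuracy are computed per predicted word, as described in the abstract.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Sequence-level evaluation: similarity between the full and reconstructed texts.
reference = "the full original text of the document".split()
candidate = "the full reconstructed text of the document".split()

bleu = sentence_bleu([reference], candidate,
                     smoothing_function=SmoothingFunction().method1)
rouge_l = rouge_scorer.RougeScorer(["rougeL"]).score(
    " ".join(reference), " ".join(candidate))["rougeL"].fmeasure
print(f"BLEU = {bleu:.3f}, ROUGE-L = {rouge_l:.3f}")
```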

Encoder-Decoder, Sequence-to-Sequence transformer, BLEU, ROUGE-L, Keras, Python

Short link: https://sciup.org/143183469

IDR: 143183469   |   DOI: 10.25209/2079-3316-2024-15-3-75-110
