Sentence Classification in Medical Abstracts Using Quantized Transformer and BiLSTM Architecture
Authors: Ahmed Abdal Shafi Rasel, Md. Towhidul Islam Robin, Md. Samiul Islam, Mehedi Hasan
Journal: International Journal of Intelligent Systems and Applications (IJISA)
In issue: Vol. 18, No. 2, 2026.
Free access
Automatically classifying abstract sentences into categories such as background, objective, methods, results, and conclusions is an essential support tool for querying scientific medical databases: it assists in searching and summarizing relevant literature and in writing new abstracts. This paper presents a memory-efficient deep learning model for sentence role classification in medical scientific abstracts, built by integrating quantized Transformer layers with a Bidirectional Long Short-Term Memory (BiLSTM) network. While the core components are well established, our contribution lies in the successful application of quantization to this hybrid architecture, reducing the model size from ~75 MB to ~25 MB without a meaningful drop in classification performance on a subset of the PubMed 200k RCT dataset. This makes our approach practical for deployment in resource-constrained environments and offers an effective tool for automated literature analysis.
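To illustrate the size reduction the abstract reports, the following is a minimal sketch of post-training 8-bit weight quantization, the general technique applied to the Transformer + BiLSTM model. The function names and the per-tensor affine scheme here are illustrative assumptions; the paper's exact quantization method is not specified in this abstract.

```python
def quantize_int8(weights):
    """Affine per-tensor quantization: float32 values -> int8 codes plus (scale, zero_point).

    Illustrative sketch only; real frameworks (e.g. PyTorch dynamic quantization)
    handle this per layer with calibrated ranges.
    """
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / 255.0 or 1.0  # map the value range onto 256 int8 levels
    zero_point = round(-w_min / scale) - 128
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float32 values for inference-time computation."""
    return [(qi - zero_point) * scale for qi in q]

# Toy weight vector standing in for a Transformer/BiLSTM weight matrix.
weights = [0.5, -1.2, 0.03, 0.9, -0.4]
q, scale, zp = quantize_int8(weights)
recovered = dequantize(q, scale, zp)
max_err = max(abs(w - r) for w, r in zip(weights, recovered))

# Storing int8 codes takes 1 byte per weight instead of 4 for float32,
# i.e. roughly a 4x reduction, consistent in spirit with the ~75 MB -> ~25 MB
# model-size drop reported in the abstract (which also keeps some layers unquantized).
```

The quantization error per weight is bounded by about half the scale, which is why classification performance can remain essentially unchanged after compression.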
Keywords: Sentence Classification, Medical Journal, Abstract, Transformer Block, Attention, LSTM, Word Vectors, Word2vec, Contextual Modeling
Short address: https://sciup.org/15020326
IDR: 15020326 | DOI: 10.5815/ijisa.2026.02.11