Abstractive Summarization of Indonesian Islamic Stories Using Long Short-Term Memory (LSTM)
DOI:
https://doi.org/10.52436/1.jutif.2026.7.1.4918

Keywords: Abstractive Summarization, Islamic Stories, Long Short-Term Memory, ROUGE, Text Summarization, Word2Vec

Abstract
The length of narratives in stories often poses a challenge for readers, especially those with limited time or difficulty grasping an entire story. Summarization offers a solution, but manual summarization is not always efficient for producing quick, concise information. This study develops an automatic text summarization system for Indonesian Islamic stories using the Long Short-Term Memory (LSTM) algorithm. Three train/test data splits were evaluated: 90:10, 80:20, and 70:30. The highest training accuracy, 89.44%, was achieved in the 80:20 scenario. This does not necessarily mean that a smaller proportion of training data always yields higher accuracy; the result can be influenced by data variation, overfitting, and early-stopping behavior. The data split ratio therefore affects the training process. Although the 80:20 scenario produced the highest training accuracy, the best semantic summary quality was obtained in the 90:10 scenario, where ROUGE-1 achieved a precision of 0.4147, a recall of 0.2516, and an F1-score of 0.3027; ROUGE-2 achieved a precision of 0.1022, a recall of 0.0568, and an F1-score of 0.0684; and ROUGE-L achieved a precision of 0.2017, a recall of 0.1209, and an F1-score of 0.1459.
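The ROUGE-1 figures reported above are unigram-overlap scores. As a minimal illustration of how such scores are computed (this is a generic sketch of the metric, not the authors' evaluation code; the example sentences are hypothetical), precision, recall, and F1 can be derived from the shared unigrams between a candidate summary and a reference summary:

```python
# Minimal ROUGE-1 sketch: unigram-overlap precision, recall, and F1
# between a candidate summary and a human-written reference summary.
from collections import Counter

def rouge1(candidate: str, reference: str):
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Each shared unigram counts up to its minimum frequency in either text.
    overlap = sum((cand & ref).values())
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical candidate and reference (Indonesian tokens for illustration only).
p, r, f = rouge1("kisah nabi yang singkat", "ringkasan kisah nabi")
# Overlap = {kisah, nabi} = 2 tokens -> precision 2/4, recall 2/3, F1 4/7.
```

ROUGE-2 applies the same idea to bigrams, and ROUGE-L uses the longest common subsequence instead of fixed-length n-grams.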
Copyright (c) 2026 Aisya Gusti Savila, Supriyono, Roro Inda Melani

This work is licensed under a Creative Commons Attribution 4.0 International License.





