STACKING ENSEMBLE LEARNING AND INSTANCE HARDNESS THRESHOLD FOR BANK TERM DEPOSIT ACCEPTANCE CLASSIFICATION ON IMBALANCED DATASET

  • Bangun Watono, Master of Informatics Engineering, Universitas AMIKOM Yogyakarta, Indonesia
  • Ema Utami, Master of Informatics Engineering, Universitas AMIKOM Yogyakarta, Indonesia
  • Dhani Ariatmanto, Master of Informatics Engineering, Universitas AMIKOM Yogyakarta, Indonesia
Keywords: Bank Term Deposit, Classification, Instance Hardness Threshold, Machine Learning, Stacking Ensemble Learning, Undersampling

Abstract

Bank term deposits are a popular banking product with relatively high interest rates. Identifying potential customers is crucial for banks to maximize revenue from this product, so bank term deposit acceptance classification is an important challenge in the banking industry for optimizing marketing strategies. Previous studies have applied machine learning classification techniques to the imbalanced Bank Marketing Dataset from the UCI Repository; however, the accuracy they report still leaves room for improvement. Using the same dataset, this study applies the Instance Hardness Threshold (IHT) undersampling technique to handle the class imbalance and Stacking Ensemble Learning (SEL) for classification. In the SEL, Decision Tree, Random Forest, and XGBoost serve as base classifiers and Logistic Regression as the meta classifier. The SEL model trained on the IHT-undersampled dataset achieves an accuracy of 98.80% and an AUC-ROC of 0.9821, significantly better than the model trained on the dataset without undersampling, which achieved an accuracy of 90.30% and an AUC-ROC of 0.6898. These findings demonstrate that combining the proposed IHT undersampling technique with SEL effectively enhances the performance of term deposit classification on this dataset.
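As an illustration of the pipeline the abstract describes, the sketch below wires IHT undersampling and the stacking ensemble together in Python using scikit-learn, imbalanced-learn, and xgboost. The file path, label encoding, 80/20 split, and default hyperparameters are assumptions made for the sake of a runnable example, not the authors' exact configuration.

```python
# Minimal sketch of the pipeline described above, assuming the UCI
# Bank Marketing data in "bank-full.csv" (semicolon-separated), label
# encoding for categoricals, an 80/20 split, and default hyperparameters;
# none of these details are confirmed by the paper.
import pandas as pd
from imblearn.under_sampling import InstanceHardnessThreshold
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

df = pd.read_csv("bank-full.csv", sep=";")

# Encode categorical features and the yes/no target as integers.
X = df.drop(columns="y").copy()
for col in X.select_dtypes(include="object").columns:
    X[col] = LabelEncoder().fit_transform(X[col])
y = LabelEncoder().fit_transform(df["y"])

# IHT undersampling: remove majority-class instances that an internal
# cross-validated classifier finds hardest to predict correctly.
X_res, y_res = InstanceHardnessThreshold(random_state=42).fit_resample(X, y)

X_tr, X_te, y_tr, y_te = train_test_split(
    X_res, y_res, test_size=0.2, stratify=y_res, random_state=42
)

# Stacking ensemble: DT, RF, and XGBoost as base classifiers,
# Logistic Regression as the meta classifier.
model = StackingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(random_state=42)),
        ("rf", RandomForestClassifier(random_state=42)),
        ("xgb", XGBClassifier(eval_metric="logloss", random_state=42)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
model.fit(X_tr, y_tr)

print("Accuracy:", accuracy_score(y_te, model.predict(X_te)))
print("AUC-ROC :", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```

Note that the meta classifier is trained on out-of-fold base-classifier predictions (the cv=5 argument), which is what prevents the Logistic Regression layer from simply memorizing the base models' training-set outputs.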



Published
2024-08-07
How to Cite
[1] Bangun Watono, Ema Utami, and Dhani Ariatmanto, “STACKING ENSEMBLE LEARNING AND INSTANCE HARDNESS THRESHOLD FOR BANK TERM DEPOSIT ACCEPTANCE CLASSIFICATION ON IMBALANCED DATASET”, J. Tek. Inform. (JUTIF), vol. 5, no. 4, pp. 453-467, Aug. 2024.