Comparative Analysis of Ant Lion Optimization and Jaya Algorithm for Feature Selection in K-Nearest Neighbor (KNN)-Based Electricity Consumption Prediction

Authors

  • Retno Wahyusari, Department of Informatics, Universitas Ahmad Dahlan, Indonesia
  • Sunardi, Department of Electrical Engineering, Universitas Ahmad Dahlan, Indonesia
  • Abdul Fadlil, Department of Electrical Engineering, Universitas Ahmad Dahlan, Indonesia

DOI:

https://doi.org/10.52436/1.jutif.2025.6.3.4692

Keywords:

Ant Lion Optimization, Feature selection, Genetic Algorithm, Jaya Algorithm, K-Nearest Neighbors, Teaching Learning Based Optimization

Abstract

Demand for electrical energy continues to rise in line with population growth, urbanization, industrial development, and technological advancement. Accurate prediction of electrical energy consumption plays an important role in planning, analyzing, and managing electricity systems to ensure a sustainable, safe, and economical supply. K-Nearest Neighbors (KNN) is a simple and fast prediction algorithm whose performance depends strongly on the quality and relevance of the features used. This research aims to improve the accuracy of energy consumption prediction through feature selection based on metaheuristic algorithms, namely Genetic Algorithm (GA), Ant Lion Optimization (ALO), Teaching Learning Based Optimization (TLBO), and Jaya Algorithm (JA). The dataset used is Tetouan City Power Consumption, preprocessed through time feature extraction, min-max scaling normalization, and feature selection. Model performance was evaluated using the RMSE, MAPE, and R² metrics. The ALO+KNN and JA+KNN combinations delivered the best and most stable prediction performance, while TLBO+KNN performed poorly and GA+KNN showed the worst overall results among all combinations. These findings highlight the importance of selecting a feature selection algorithm that aligns well with the characteristics of the model and dataset to enhance prediction accuracy.
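The pipeline the abstract describes can be sketched as follows. This is a minimal, illustrative example (not the authors' implementation): the data are synthetic, the KNN regressor and metrics are written from scratch, and the binary feature-selection loop only loosely follows the Jaya idea of moving candidates toward the best solution and away from the worst.

```python
# Illustrative sketch: wrapper-style feature selection around a KNN regressor,
# with min-max scaling and RMSE as the fitness signal. Synthetic data and the
# simplified Jaya-style bit update are assumptions for demonstration only.
import math
import random

def min_max_scale(rows):
    """Scale each column to [0, 1] (min-max normalization)."""
    cols = list(zip(*rows))
    lo, hi = [min(c) for c in cols], [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(r, lo, hi)] for r in rows]

def knn_predict(train_X, train_y, x, k=3):
    """Predict by averaging the targets of the k nearest training points."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(row, x)), y)
                   for row, y in zip(train_X, train_y))
    return sum(y for _, y in dists[:k]) / k

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mask_fitness(mask, train_X, train_y, test_X, test_y):
    """RMSE of KNN restricted to the features selected by the binary mask."""
    if not any(mask):
        return float("inf")
    pick = lambda row: [v for v, m in zip(row, mask) if m]
    tr = [pick(r) for r in train_X]
    preds = [knn_predict(tr, train_y, pick(x)) for x in test_X]
    return rmse(test_y, preds)

random.seed(0)
# Synthetic data: the target depends on features 0 and 1; features 2-4 are noise.
X = [[random.random() for _ in range(5)] for _ in range(120)]
y = [2 * r[0] + r[1] for r in X]
X = min_max_scale(X)
train_X, test_X, train_y, test_y = X[:90], X[90:], y[:90], y[90:]

# Jaya-flavored binary search: each candidate mask copies bits from the best
# mask and flips away from the worst (a simplification of the Jaya update).
pop = [[random.randint(0, 1) for _ in range(5)] for _ in range(8)]
for _ in range(20):
    scored = sorted(pop, key=lambda m: mask_fitness(m, train_X, train_y, test_X, test_y))
    best, worst = scored[0], scored[-1]
    pop = [[b if random.random() < 0.5 else (1 - w if random.random() < 0.3 else g)
            for g, b, w in zip(m, best, worst)] for m in scored]
    pop[0] = best  # elitism: keep the best mask found so far

best_mask = min(pop, key=lambda m: mask_fitness(m, train_X, train_y, test_X, test_y))
sel_err = mask_fitness(best_mask, train_X, train_y, test_X, test_y)
full_err = mask_fitness([1] * 5, train_X, train_y, test_X, test_y)
print(best_mask, round(sel_err, 4), round(full_err, 4))
```

The same wrapper loop works for GA, ALO, or TLBO by swapping the candidate-update rule; only the fitness evaluation (KNN error on a held-out split) stays fixed.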


References

J. F. Torres, F. Martínez-Álvarez, and A. Troncoso, “A deep LSTM network for the Spanish electricity consumption forecasting,” Neural Comput. Appl., vol. 34, no. 13, pp. 10533–10545, 2022, doi: 10.1007/s00521-021-06773-2.

R. Nie, Z. Tian, R. Long, and W. Dong, “Forecasting household electricity demand with hybrid machine learning-based methods: Effects of residents’ psychological preferences and calendar variables,” Expert Syst. Appl., vol. 206, p. 117854, 2022, doi: 10.1016/j.eswa.2022.117854.

F. Kaytez, “A hybrid approach based on autoregressive integrated moving average and least-square support vector machine for long-term forecasting of net electricity consumption,” Energy, vol. 197, p. 117200, 2020, doi: 10.1016/j.energy.2020.117200.

A. R. Barzani, P. Pahlavani, and O. Ghorbanzadeh, “Ensembling of Decision Trees, Knn, and Logistic Regression With Soft-Voting Method for Wildfire Susceptibility Mapping,” ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci., vol. 10, no. 4/W1-2022, pp. 647–652, 2023, doi: 10.5194/isprs-annals-X-4-W1-2022-647-2023.

A. Abubakar Mas’ud, “Comparison of three machine learning models for the prediction of hourly PV output power in Saudi Arabia,” Ain Shams Eng. J., vol. 13, no. 4, p. 101648, 2022, doi: 10.1016/j.asej.2021.11.017.

W. Liu et al., “Machine learning applications for photovoltaic system optimization in zero green energy buildings,” Energy Reports, vol. 9, pp. 2787–2796, 2023, doi: 10.1016/j.egyr.2023.01.114.

G. Hong, G. S. Choi, J. Y. Eum, H. S. Lee, and D. D. Kim, “The Hourly Energy Consumption Prediction by KNN for Buildings in Community Buildings,” Buildings, vol. 12, no. 10, 2022, doi: 10.3390/buildings12101636.

Z. Tian, D. Chen, and L. Zhao, “Short-Term Energy Consumption Prediction of Large Public Buildings Combined with Data Feature Engineering and Bilstm-Attention,” Appl. Sci., vol. 14, no. 5, 2024, doi: 10.3390/app14052137.

H. Su, “How Accurate are Predictions Made Using Big Data?,” Proc. 2022 7th Int. Conf. Soc. Sci. Econ. Dev. (ICSSED 2022), vol. 652, pp. 806–810, 2022, doi: 10.2991/aebmr.k.220405.135.

M. Nssibi, G. Manita, and O. Korbaa, “Advances in nature-inspired metaheuristic optimization for feature selection problem: A comprehensive survey,” Comput. Sci. Rev., vol. 49, p. 100559, 2023, doi: 10.1016/j.cosrev.2023.100559.

R. Alkanhel et al., “Network Intrusion Detection Based on Feature Selection and Hybrid Metaheuristic Optimization,” Comput. Mater. Contin., vol. 74, no. 2, pp. 2677–2693, 2023, doi: 10.32604/cmc.2023.033273.

S. M. Kasongo, “An advanced intrusion detection system for IIoT Based on GA and tree based algorithms,” IEEE Access, vol. 9, pp. 113199–113212, 2021, doi: 10.1109/ACCESS.2021.3104113.

S. Samantaray, A. Sahoo, and D. P. Satapathy, “Prediction of groundwater-level using novel SVM-ALO, SVM-FOA, and SVM-FFA algorithms at Purba-Medinipur, India,” Arab. J. Geosci., vol. 15, no. 8, p. 723, 2022, doi: 10.1007/s12517-022-09900-y.

S. Hosseini and M. Khorashadizade, “Efficient Feature Selection Method using Binary Teaching-learning-based Optimization Algorithm,” J. AI Data Min., vol. 11, no. 1, pp. 29–37, 2023, doi: 10.22044/jadm.2023.12497.2400.

H. Das, B. Naik, and H. S. Behera, “A Jaya algorithm based wrapper method for optimal feature selection in supervised classification,” J. King Saud Univ. - Comput. Inf. Sci., vol. 34, no. 6, pp. 3851–3863, 2022, doi: 10.1016/j.jksuci.2020.05.002.

C. Schröer, F. Kruse, and J. M. Gómez, “A systematic literature review on applying CRISP-DM process model,” Procedia Comput. Sci., vol. 181, pp. 526–534, 2021, doi: 10.1016/j.procs.2021.01.199.

F. Martínez-Plumed et al., “CRISP-DM Twenty Years Later: From Data Mining Processes to Data Science Trajectories,” IEEE Trans. Knowl. Data Eng., vol. 33, no. 8, 2021, doi: 10.1109/TKDE.2019.2962680.

P. Chapman et al., CRISP-DM 1.0: Step-by-Step Data Mining Guide, 1st ed. CRISP-DM Consortium, 2000. [Online]. Available: https://www.the-modeling-agency.com/crisp-dm.pdf

H. H. Htun, M. Biehl, and N. Petkov, “Survey of feature selection and extraction techniques for stock market prediction,” Financ. Innov., vol. 9, no. 1, 2023, doi: 10.1186/s40854-022-00441-7.

U. M. Khaire and R. Dhanalakshmi, “Stability of feature selection algorithm: A review,” J. King Saud Univ. - Comput. Inf. Sci., vol. 34, no. 4, pp. 1060–1073, 2022, doi: 10.1016/j.jksuci.2019.06.012.

E. Odhiambo Omuya, G. Onyango Okeyo, and M. Waema Kimwele, “Feature Selection for Classification using Principal Component Analysis and Information Gain,” Expert Syst. Appl., vol. 174, p. 114765, 2021, doi: 10.1016/j.eswa.2021.114765.

K. Phorah, M. Sumbwanyambe, and M. Sibiya, “Systematic Literature Review on Data Preprocessing for Improved Water Potability Prediction: A Study of Data Cleaning, Feature Engineering, and Dimensionality Reduction Techniques,” Nanotechnol. Perceptions, vol. 20, no. S11, pp. 133–151, 2024.

S. Sinsomboonthong, “Performance Comparison of New Adjusted Min-Max with Decimal Scaling and Statistical Column Normalization Methods for Artificial Neural Network Classification,” Int. J. Math. Math. Sci., vol. 2022, no. 1, 2022, doi: 10.1155/2022/3584406.

A. Pranolo, F. Usha, and A. Khansa, “Enhanced Multivariate Time Series Analysis Using LSTM: A Comparative Study of Min-Max and Z-Score Normalization Techniques,” Ilk. J. Ilm., vol. 16, no. 2, pp. 210–220, 2024.

P. I. Dalatu and H. Midi, “New approaches to normalization techniques to enhance K-means clustering algorithm,” Malaysian J. Math. Sci., vol. 14, no. 1, pp. 41–62, 2020.

P. J. Muhammad Ali, “Investigating the Impact of Min-Max Data Normalization on the Regression Performance of K-Nearest Neighbor with Different Similarity Measurements,” Aro-the Sci. J. Koya Univ., vol. 10, no. 1, pp. 85–91, 2022, doi: 10.14500/aro.10955.

D. T. Larose, Discovering Knowledge in Data: An Introduction to Data Mining. Hoboken, NJ: John Wiley & Sons, Inc., 2005.

M. Pagan, M. Zarlis, and A. Candra, “Investigating the impact of data scaling on the k-nearest neighbor algorithm,” Comput. Sci. Inf. Technol., vol. 4, no. 2, pp. 135–142, 2023, doi: 10.11591/csit.v4i2.pp135-142.

R. Wahyusari and A. Fadlil, “Comparison of Machine Learning Methods for Predicting Electrical Energy Consumption,” Aviat. Electron. Inf. Technol. Telecommun. Electr. Control., vol. 7, no. 1, pp. 11–18, 2025.

M. Buyukkececi and M. C. Okur, “A Comprehensive Review of Feature Selection and Feature Selection Stability in Machine Learning,” Gazi Univ. J. Sci., vol. 36, no. 4, pp. 1506–1520, 2023, doi: 10.35378/gujs.993763.

N. Pudjihartono, T. Fadason, A. W. Kempa-Liehr, and J. M. O’Sullivan, “A Review of Feature Selection Methods for Machine Learning-Based Disease Risk Prediction,” Front. Bioinforma., vol. 2, pp. 1–17, 2022, doi: 10.3389/fbinf.2022.927312.

P. Agrawal, H. F. Abutarboush, T. Ganesh, and A. W. Mohamed, “Metaheuristic algorithms on feature selection: A survey of one decade of research (2009-2019),” IEEE Access, vol. 9, pp. 26766–26791, 2021, doi: 10.1109/ACCESS.2021.3056407.

V. Tomar, M. Bansal, and P. Singh, “Metaheuristic Algorithms for Optimization: A Brief Review,” Eng. Proc., vol. 59, no. 1, pp. 1–16, 2023, doi: 10.3390/engproc2023059238.

R. Venkata Rao, V. Savsani, and D. Vakharia, “Teaching-Learning-Based Optimization: A novel method for constrained mechanical design optimization problems,” Comput.-Aided Des., vol. 43, pp. 303–315, Mar. 2011, doi: 10.1016/j.cad.2010.12.015.

Y. Ma, Y. Li, and L. Yong, “Teaching–Learning-Based Optimization Algorithm with Stochastic Crossover Self-Learning and Blended Learning Model and Its Application,” Mathematics, vol. 12, no. 10, 2024, doi: 10.3390/math12101596.

M. Grzywiński, “Weight Optimization of Tower Structures with Continuous Variables using Jaya Algorithm,” Acta Polytech. Hungarica, vol. 21, no. 1, pp. 91–101, 2024, doi: 10.12700/APH.21.1.2024.1.6.

L. S. A. da Silva, Y. L. S. Lúcio, L. dos S. Coelho, V. C. Mariani, and R. V. Rao, “A comprehensive review on Jaya optimization algorithm,” Artif. Intell. Rev., vol. 56, no. 5, pp. 4329–4361, 2023, doi: 10.1007/s10462-022-10234-0.

M. H. Ahmed, K. Kutsuzawa, and M. Hayashibe, “Transhumeral Arm Reaching Motion Prediction through Deep Reinforcement Learning-Based Synthetic Motion Cloning,” Biomimetics, vol. 8, no. 4, 2023, doi: 10.3390/biomimetics8040367.

G. S. Collins et al., “Evaluation of clinical prediction models (part 1): from development to external validation,” BMJ, 2024, doi: 10.1136/bmj-2023-074819.

A. Fadlil, Herman, and D. Praseptian M, “K Nearest Neighbor Imputation Performance on Missing Value Data Graduate User Satisfaction,” J. RESTI (Rekayasa Sist. dan Teknol. Informasi), vol. 6, no. 4, pp. 570–576, 2022, doi: 10.29207/resti.v6i4.4173.

L. Huang, J. Kang, M. Wan, L. Fang, C. Zhang, and Z. Zeng, “Solar Radiation Prediction Using Different Machine Learning Algorithms and Implications for Extreme Climate Events,” Front. Earth Sci., vol. 9, pp. 1–17, 2021, doi: 10.3389/feart.2021.596860.

A. J. P. Delima, “An enhanced K-nearest neighbor predictive model through metaheuristic optimization,” Int. J. Eng. Technol. Innov., vol. 10, no. 4, pp. 280–292, 2020, doi: 10.46604/ijeti.2020.4646.

Published

2025-06-23

How to Cite

[1]
R. Wahyusari, S. Sunardi, and A. Fadlil, “Comparative Analysis of Ant Lion Optimization and Jaya Algorithm for Feature Selection in K-Nearest Neighbor (KNN)-Based Electricity Consumption Prediction”, J. Tek. Inform. (JUTIF), vol. 6, no. 3, pp. 1373–1388, Jun. 2025.