Classification of Worker Productivity and Resource Allocation Optimization with Machine Learning: Garment Industry

Authors

  • A’isya Nur Aulia Yusuf, Electrical Engineering, Faculty of Engineering, Universitas Jenderal Soedirman, Indonesia
  • Zakiyyan Zain Alkaf, Industrial and Mechanical Engineering, Faculty of Engineering, Universitas Jenderal Soedirman, Indonesia
  • Elsa Sari Hayunah Nurdiniyah, Electrical Engineering, Faculty of Engineering, Universitas Jenderal Soedirman, Indonesia
  • Tri Wisudawati, Industrial and Mechanical Engineering, Faculty of Engineering, Universitas Jenderal Soedirman, Indonesia
  • Muhammad Ihsan Fawzi, Informatics, Faculty of Engineering, Universitas Jenderal Soedirman, Indonesia

DOI:

https://doi.org/10.52436/1.jutif.2025.6.5.5263

Keywords:

Bayesian Optimization, Garment Industry, Linear Programming, Machine Learning, Productivity Classification, Random Forest

Abstract

This study presents an integrated predictive–prescriptive framework for improving workforce management in the garment industry by combining machine learning classification with linear programming optimization. Using a publicly available dataset of 1,197 production records, productivity levels were categorized into low, medium, and high classes. Data preprocessing included handling missing values, one-hot encoding of categorical variables, and class balancing using SMOTE. Eleven classification algorithms were evaluated, with LightGBM achieving the highest performance (accuracy 78.3%, weighted F1-score 78.3%, Cohen’s Kappa 63.4%) after hyperparameter tuning via Bayesian Optimization. The optimized model’s predictions were then incorporated into a linear programming model, implemented with PuLP, to maximize the allocation of high-productivity workers across production departments under capacity constraints. The resulting allocation plan assigns 117 high-productivity workers, indicating substantial potential gains in operational efficiency. The novelty of this work lies in integrating an optimized ensemble learning model with mathematical programming for end-to-end productivity classification and resource allocation, a combination rarely explored in labor-intensive manufacturing contexts. This framework offers a scalable decision-support tool for data-driven workforce planning and could be adapted to other manufacturing domains with similar operational structures.
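The abstract describes a two-stage pipeline: SMOTE-balanced multi-class classification with a Bayesian-optimized LightGBM model, followed by a PuLP linear program that allocates the predicted high-productivity workers. The two sketches below illustrate how such a pipeline could be wired together in Python; the synthetic data, feature dimensions, hyperparameter ranges, cross-validation settings, and the use of scikit-optimize's BayesSearchCV are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of the predictive stage: SMOTE balancing plus a LightGBM classifier
# tuned by Bayesian optimization (here via scikit-optimize's BayesSearchCV).
# Synthetic data stands in for the preprocessed garment-productivity records.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from lightgbm import LGBMClassifier
from skopt import BayesSearchCV
from skopt.space import Integer, Real

# Stand-in for the one-hot-encoded production features and 3-class labels.
X, y = make_classification(n_samples=1197, n_features=20, n_informative=10,
                           n_classes=3, weights=[0.2, 0.3, 0.5], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Balance only the training split, then search a small hyperparameter space.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)
search = BayesSearchCV(
    LGBMClassifier(random_state=42),
    {
        "num_leaves": Integer(16, 128),
        "max_depth": Integer(3, 12),
        "learning_rate": Real(0.01, 0.3, prior="log-uniform"),
        "n_estimators": Integer(100, 600),
    },
    n_iter=32, cv=5, scoring="f1_weighted", random_state=42,
)
search.fit(X_bal, y_bal)
print("best params:", search.best_params_)
print("held-out weighted F1:", search.score(X_test, y_test))
```

A corresponding sketch of the prescriptive stage follows. Department names and capacities are assumed placeholders; only the pool size of 117 high-productivity workers comes from the reported results.

```python
# Sketch of the prescriptive stage: a PuLP linear program allocating the
# predicted high-productivity workers to departments under capacity limits.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpStatus, value

capacity = {"sewing": 60, "finishing": 40, "cutting": 30}  # assumed capacities
available_high = 117                                       # predicted pool size

prob = LpProblem("high_productivity_allocation", LpMaximize)
x = {d: LpVariable(f"assign_{d}", lowBound=0, upBound=cap, cat="Integer")
     for d, cap in capacity.items()}

prob += lpSum(x.values())                    # objective: assigned high performers
prob += lpSum(x.values()) <= available_high  # cannot exceed the predicted pool

status = prob.solve()
print(LpStatus[status], {d: int(value(var)) for d, var in x.items()})
```

Because the assumed total capacity (130) exceeds the available pool, the solver assigns all 117 workers; tighter capacities or department-specific weights would change the optimal split.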




Published

2025-10-16

How to Cite

[1]
A. N. A. Yusuf, Z. Z. Alkaf, E. S. H. Nurdiniyah, T. Wisudawati, and M. I. Fawzi, “Classification of Worker Productivity and Resource Allocation Optimization with Machine Learning: Garment Industry”, J. Tek. Inform. (JUTIF), vol. 6, no. 5, pp. 2991–3001, Oct. 2025.