Data Augmentation Techniques on the Accuracy of Fertile and Infertile Egg Classification Using Convolutional Neural Networks
DOI: https://doi.org/10.52436/1.jutif.2025.6.5.5234

Keywords: Candling Images, Convolutional Neural Network, Data Augmentation, Egg Classification, EfficientNetB4

Abstract
The classification of fertile and infertile chicken eggs is crucial in the poultry industry to ensure optimal incubation efficiency and hatchability. However, the visual similarity between the two egg types under candling conditions poses a significant challenge for manual inspection. This study develops a convolutional neural network (CNN) model based on the EfficientNetB4 architecture to classify egg fertility automatically from image data. The dataset comprises candling images of chicken eggs, which underwent preprocessing steps such as resizing, normalization, and histogram stretching to enhance contrast. To improve model generalization, aggressive data augmentation techniques were applied, including rotation, flipping, zooming, and brightness adjustment. The model was trained in two phases (feature extraction followed by fine-tuning) using transfer learning and class-balancing strategies. Evaluation results demonstrated high performance, with an F1-score of 0.95 and balanced classification across both classes. The model's interpretability was further examined using Grad-CAM visualization, which showed activation concentrated on relevant image regions. These findings indicate that the proposed method is effective in automating egg fertility classification and has potential for broader application in agricultural image diagnostics.
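The histogram stretching and augmentation steps summarized above can be sketched in a few lines of NumPy. This is a minimal illustration under assumptions, not the authors' implementation: the function names and the brightness range (here ±20%) are hypothetical choices, and a real pipeline would typically use a framework utility such as Keras preprocessing layers instead.

```python
import numpy as np

def stretch_histogram(img):
    """Linearly stretch pixel intensities to span the full [0, 255] range."""
    lo, hi = img.min(), img.max()
    if hi == lo:  # flat image: nothing to stretch
        return img.copy()
    return ((img - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def augment(img, rng):
    """Apply one random combination of flip, rotation, and brightness shift."""
    out = img.astype(np.float32)
    if rng.random() < 0.5:
        out = np.fliplr(out)                    # horizontal flip
    out = np.rot90(out, k=rng.integers(0, 4))   # 0/90/180/270-degree rotation
    out = out * rng.uniform(0.8, 1.2)           # brightness adjustment (assumed range)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Applying `augment` with a fresh random state at each training epoch yields a different view of every candling image, which is the mechanism by which augmentation improves generalization on a small dataset.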
License
Copyright (c) 2025 Bani Nurhakim, Dodi Solihudin, Dina Amalia, Irly Arelia

This work is licensed under a Creative Commons Attribution 4.0 International License.