Deep Convolutional Generative Adversarial Network-Enhanced Data Augmentation for Imbalanced Facial Acne Severity Classification Using a Fine-Tuned EfficientNet-B1
DOI: https://doi.org/10.52436/1.jutif.2026.7.2.5548

Keywords: Acne Severity Classification, Convolutional Neural Network, Data Augmentation, DCGAN, Deep Learning, EfficientNet-B1

Abstract
Imbalanced datasets often hinder the generalization capability of Convolutional Neural Networks (CNNs) in medical image classification, leading to overfitting and reduced performance on minority classes. This study develops an acne severity classification model using EfficientNet-B1 combined with geometric and photometric augmentation as well as Deep Convolutional Generative Adversarial Network (DCGAN)-based augmentation to address class imbalance. The dataset consists of 1,380 facial images categorized into four acne severity levels: Normal, Level 0, Level 1, and Level 2. Preprocessing includes RGB conversion, bilinear resizing, and center cropping. The data are split into training (80%), validation (10%), and testing (10%) sets. Geometric and photometric augmentation applies horizontal flipping, 45° rotation, color jittering, and random resized cropping, while DCGAN generates synthetic samples to balance the minority classes. The EfficientNet-B1 model is fine-tuned using compound scaling, MBConv blocks, Swish activation, Batch Normalization, Cross-Entropy loss, and the AdamW optimizer, with 5-fold cross-validation for robustness. Experimental results demonstrate that DCGAN-based augmentation achieves superior performance, with a test accuracy of 94% and an average F1-score of 0.93, outperforming geometric and photometric augmentation (90% accuracy and 0.88 F1-score). DCGAN augmentation also significantly reduces misclassification between visually similar acne severity levels, particularly Level 0 and Level 1. These findings indicate that integrating DCGAN with EfficientNet-B1 effectively enhances generalization on imbalanced medical image datasets, providing a robust and replicable framework for acne severity classification and related medical imaging applications.
License
Copyright (c) 2026 Khoirun Nisya, Sugiyarto Surono, Aris Thobirin

This work is licensed under a Creative Commons Attribution 4.0 International License.