Automated Facial Wrinkle Segmentation for Dermatological Assessment Using VGG-Based U-Net with Hybrid Augmentation

Authors

  • Wahyu Fajar Setiawan, Department of Informatics, Sepuluh Nopember Institute of Technology (ITS), Indonesia
  • Nanik Suciati, Department of Informatics, Sepuluh Nopember Institute of Technology (ITS), Indonesia

DOI:

https://doi.org/10.52436/1.jutif.2026.7.2.5561

Keywords:

Data Augmentation, Deep Learning, Facial Wrinkle Segmentation, Transfer Learning, U-Net Architecture

Abstract

Manual and automated facial wrinkle segmentation remains challenging due to the fine-grained nature of wrinkles, uneven distribution across facial regions, severe class imbalance (~2% wrinkle pixels), and sensitivity to lighting variations—limiting the reliability of existing dermatological assessment tools. This study aims to evaluate VGG transfer learning with hybrid augmentation strategies for U-Net-based automated facial wrinkle segmentation. Using the FFHQ-Wrinkle dataset comprising 1,000 manually annotated high-resolution images (1024×1024 pixels), this study systematically evaluates three U-Net variants (Baseline, VGG16-based, VGG19-based) across four augmentation strategies: no augmentation, hierarchical image enhancement (CLAHE, gamma correction, bilateral filtering), geometric transformation (rotation, translation, shear, zoom, flip), and hybrid combination. A multi-component loss function integrating Focal Loss, Dice Loss, IoU Loss, and Boundary Loss addresses class imbalance while optimizing both region overlap and edge localization. The proposed VGG19-based U-Net with hybrid augmentation achieves state-of-the-art performance: Dice coefficient of 0.6585, IoU of 0.4970, precision of 0.6186, recall of 0.7344, and Boundary F1 of 0.9185. Key findings demonstrate that VGG19 transfer learning provides +21.54% Dice improvement over Baseline U-Net with 12.7-fold reduction in overfitting, while hybrid augmentation yields +4.87% Dice improvement with +2.24% synergistic gain beyond individual strategies. This research advances automated dermatological tools for precise skin health assessment, reducing subjectivity in clinical evaluations and providing actionable guidelines for practitioners developing automated wrinkle analysis systems.  
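The hybrid augmentation strategy described above pairs photometric enhancement (applied to the image only) with geometric transformations (applied identically to image and mask). The sketch below is a minimal NumPy-only illustration of that pairing, not the authors' implementation: the paper's CLAHE and bilateral filtering (in practice from OpenCV, e.g. `cv2.createCLAHE`) are stood in for by a percentile contrast stretch, and its rotation/translation/shear/zoom by flips and 90-degree rotations; all parameter values are illustrative.

```python
import numpy as np

def gamma_correct(img, gamma=0.8):
    """Gamma correction on a float image in [0, 1]; gamma < 1 brightens shadows."""
    return np.clip(img, 0.0, 1.0) ** gamma

def stretch_contrast(img, low=2, high=98):
    """Percentile contrast stretch -- a simplified stand-in for CLAHE."""
    lo, hi = np.percentile(img, [low, high])
    return np.clip((img - lo) / max(hi - lo, 1e-7), 0.0, 1.0)

def geometric_augment(img, mask, rng):
    """Apply the SAME random flip / 90-degree rotation to image and mask,
    so wrinkle annotations stay aligned with the transformed image."""
    k = rng.integers(0, 4)                      # number of 90-degree rotations
    img, mask = np.rot90(img, k), np.rot90(mask, k)
    if rng.random() < 0.5:
        img, mask = np.fliplr(img), np.fliplr(mask)
    return img, mask

def hybrid_augment(img, mask, rng):
    """Hybrid strategy: enhance the image photometrically (mask untouched),
    then apply one shared geometric transform to both."""
    img = stretch_contrast(gamma_correct(img))
    return geometric_augment(img, mask, rng)
```

The key design point the sketch preserves is that photometric operations never touch the mask, while every geometric operation is sampled once and applied to both tensors.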
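The multi-component loss combines Focal, Dice, IoU, and Boundary terms to handle the ~2% positive-pixel imbalance while rewarding both region overlap and edge localization. A minimal NumPy sketch of one plausible formulation follows; the component weights, the focal parameters, and the edge-map construction (4-neighbourhood erosion) are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def binary_focal_loss(y_true, y_pred, alpha=0.25, gamma=2.0, eps=1e-7):
    """Focal loss: down-weights easy pixels so the rare wrinkle class matters."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    pt = np.where(y_true == 1, y_pred, 1.0 - y_pred)   # prob. of the true class
    w = np.where(y_true == 1, alpha, 1.0 - alpha)
    return float(np.mean(-w * (1.0 - pt) ** gamma * np.log(pt)))

def dice_loss(y_true, y_pred, eps=1e-7):
    """1 - Dice: penalizes poor region overlap."""
    inter = np.sum(y_true * y_pred)
    return float(1.0 - (2.0 * inter + eps) / (np.sum(y_true) + np.sum(y_pred) + eps))

def iou_loss(y_true, y_pred, eps=1e-7):
    """1 - IoU (Jaccard): a harsher overlap penalty than Dice."""
    inter = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - inter
    return float(1.0 - (inter + eps) / (union + eps))

def _edges(mask):
    """Boundary map: mask pixels whose 4-neighbourhood leaves the mask."""
    m = np.asarray(mask, dtype=bool)
    eroded = m.copy()
    eroded[1:, :] &= m[:-1, :]; eroded[:-1, :] &= m[1:, :]
    eroded[:, 1:] &= m[:, :-1]; eroded[:, :-1] &= m[:, 1:]
    return (m & ~eroded).astype(float)

def boundary_loss(y_true, y_pred, eps=1e-7):
    """Dice-style loss on edge maps of the thresholded prediction vs. target."""
    return dice_loss(_edges(y_true), _edges(y_pred >= 0.5), eps)

def total_loss(y_true, y_pred, w=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of the four components (weights here are illustrative)."""
    parts = (binary_focal_loss(y_true, y_pred), dice_loss(y_true, y_pred),
             iou_loss(y_true, y_pred), boundary_loss(y_true, y_pred))
    return float(sum(wi * p for wi, p in zip(w, parts)))
```

In a real training loop these would be differentiable tensor ops (e.g. in TensorFlow or PyTorch) rather than NumPy, but the decomposition into the four terms is the same.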


References

S. Kim, H. Yoon, J. Lee, and S. Yoo, “Facial wrinkle segmentation using weighted deep supervision and semi-automatic labeling,” Artif Intell Med, vol. 145, Art. no. 102679, 2023, doi: 10.1016/j.artmed.2023.102679.

M.-Y. Yang, Q.-L. Shen, D.-T. Xu, X.-L. Sun, and Q.-B. Wu, “Striped WriNet: Automatic wrinkle segmentation based on striped attention module,” Biomed Signal Process Control, vol. 90, Art. no. 105817, 2024, doi: 10.1016/j.bspc.2023.105817.

Z. Liu, Q. Qi, S. Wang, and G. Zhai, “A novel approach to the detection of facial wrinkles: Database, detection algorithm, and evaluation metrics,” Comput Biol Med, vol. 174, Art. no. 108431, 2024, doi: 10.1016/j.compbiomed.2024.108431.

M. H. Yap, N. Batool, C.-C. Ng, M. Rogers, and K. Walker, “A survey on facial wrinkles detection and inpainting: Datasets, methods, and challenges,” IEEE Trans Emerg Top Comput Intell, vol. 5, no. 4, pp. 505–519, 2021, doi: 10.1109/tetci.2021.3075723.

J. Moon, H. Chung, and I. Jang, “Facial wrinkle segmentation for cosmetic dermatology: Pretraining with texture map-based weak supervision,” arXiv, 2024, doi: 10.48550/arXiv.2408.10060.

R. M. Elbashir and M. H. Yap, “Evaluation of automatic facial wrinkle detection algorithms,” J Imaging, vol. 6, no. 4, p. 17, 2020, doi: 10.3390/jimaging6040017.

J. Chen, M. He, and W. Cai, “Facial wrinkle detection with multiscale spatial feature fusion based on image enhancement and ASFF-SEUnet,” Electronics (Basel), vol. 12, no. 24, p. 4897, 2023, doi: 10.3390/electronics12244897.

H. Yoon, S. Kim, J. Lee, and S. Yoo, “Deep-learning-based morphological feature segmentation for facial skin image analysis,” Diagnostics (Basel), vol. 13, no. 11, p. 1894, 2023, doi: 10.3390/diagnostics13111894.

C.-I. Moon and O. Lee, “Skin microstructure segmentation and aging classification using CNN-based models,” IEEE Access, vol. 10, pp. 4948–4956, 2022, doi: 10.1109/access.2021.3140031.

G. Carlos da Silva, M. B. Barbosa, F. B. C. Júnior, P. L. Moreira, R. Werka, and A. A. Martin, “Detection of skin wrinkles and quantification of roughness using a novel image processing technique from a dermatoscope device,” Skin Res Technol, vol. 29, no. 6, p. e13335, 2023, doi: 10.1111/srt.13335.

M. Güler and E. Namlı, “Brain tumor detection with deep learning methods’ classifier optimization using medical images,” Appl Sci (Basel), vol. 14, no. 2, p. 642, 2024, doi: 10.3390/app14020642.

N. Siddique, S. Paheding, C. P. Elkin, and V. Devabhaktuni, “U-net and its variants for medical image segmentation: A review of theory and applications,” IEEE Access, vol. 9, pp. 82031–82057, 2021, doi: 10.1109/access.2021.3086020.

J. Jang et al., “A deep learning-based segmentation pipeline for profiling cellular morphodynamics using multiple types of live cell microscopy,” Cell Rep Methods, vol. 1, no. 7, p. 100105, 2021, doi: 10.1016/j.crmeth.2021.100105.

A. Ghaznavi, R. Rychtáriková, P. Císař, M. M. Ziaei, and D. Štys, “Symmetry breaking in the U-Net: Hybrid deep-learning multi-class segmentation of HeLa cells in reflected light microscopy images,” Symmetry (Basel), vol. 16, no. 2, p. 227, 2024, doi: 10.3390/sym16020227.

D. Shao et al., “Pixel-level classification of five histologic patterns of lung adenocarcinoma,” Anal Chem, vol. 95, no. 5, pp. 2664–2670, 2023, doi: 10.1021/acs.analchem.2c03020.

A. Li, D. Li, and A. Wang, “A two-stage YOLOv5s-U-Net framework for defect localization and segmentation in overhead transmission lines,” Sensors (Basel), vol. 25, no. 9, p. 2903, 2025, doi: 10.3390/s25092903.

S. Alzahrani, B. Al-Bander, and W. Al-Nuaimy, “Attention mechanism guided deep regression model for acne severity grading,” Computers, vol. 11, no. 3, p. 31, 2022, doi: 10.3390/computers11030031.

A. Abedalla, M. Abdullah, M. Al-Ayyoub, and E. Benkhelifa, “Chest X-ray pneumothorax segmentation using U-Net with EfficientNet and ResNet architectures,” PeerJ Comput Sci, vol. 7, Art. no. e607, 2021, doi: 10.7717/peerj-cs.607.

E. Kotei and R. Thirunavukarasu, “Ensemble technique coupled with deep transfer learning framework for automatic detection of tuberculosis from chest X-ray radiographs,” Healthcare (Basel), vol. 10, no. 11, p. 2335, 2022, doi: 10.3390/healthcare10112335.

S. A. Wagle, Harikrishnan, J. Sampe, F. Mohammad, and S. H. Md Ali, “Effect of data augmentation in the classification and validation of tomato plant disease with deep learning methods,” Traitement du signal, vol. 38, no. 6, pp. 1657–1670, 2021, doi: 10.18280/ts.380609.

S. Uçkun, M. Ağrali, and V. Kiliç, “Deep learning-based ischemic stroke segmentation on brain computed tomography images,” Eur J Sci Technol, 2023, doi: 10.31590/ejosat.1258247.

N.-A. Alam, M. Ahsan, M. A. Based, J. Haider, and M. Kowalski, “COVID-19 detection from chest X-ray images using feature fusion and deep learning,” Sensors (Basel), vol. 21, no. 4, p. 1480, 2021, doi: 10.3390/s21041480.

Y. Fu, P. Xue, and E. Dong, “Densely connected attention network for diagnosing COVID-19 based on chest CT,” Comput Biol Med, vol. 137, Art. no. 104857, 2021, doi: 10.1016/j.compbiomed.2021.104857.

M. Prodan, E. Paraschiv, and A. Stanciu, “Applying deep learning methods for mammography analysis and breast cancer detection,” Appl Sci (Basel), vol. 13, no. 7, p. 4272, 2023, doi: 10.3390/app13074272.

Y. Wang, R. Wan, W. Yang, H. Li, L.-P. Chau, and A. Kot, “Low-light image enhancement with normalizing flow,” Proc Conf AAAI Artif Intell, vol. 36, no. 3, pp. 2604–2612, 2022, doi: 10.1609/aaai.v36i3.20162.

H. M. Balaha, E. R. Antar, M. M. Saafan, and E. M. El-Gendy, “A comprehensive framework towards segmenting and classifying breast cancer patients using deep learning and Aquila optimizer,” J Ambient Intell Humaniz Comput, vol. 14, no. 6, pp. 7897–7917, 2023, doi: 10.1007/s12652-023-04600-1.

H. M. Balaha, M. H. Balaha, and H. A. Ali, “Hybrid COVID-19 segmentation and recognition framework (HMB-HCF) using deep learning and genetic algorithms,” Artif Intell Med, vol. 119, Art. no. 102156, 2021, doi: 10.1016/j.artmed.2021.102156.

M. Abdullah, F. B. Abrha, B. Kedir, and T. Tamirat Tagesse, “A Hybrid Deep Learning CNN model for COVID-19 detection from chest X-rays,” Heliyon, vol. 10, no. 5, p. e26938, 2024, doi: 10.1016/j.heliyon.2024.e26938.

Published

2026-04-18

How to Cite

[1]
W. F. Setiawan and N. Suciati, “Automated Facial Wrinkle Segmentation for Dermatological Assessment Using VGG-Based U-Net with Hybrid Augmentation”, J. Tek. Inform. (JUTIF), vol. 7, no. 2, pp. 1609–1620, Apr. 2026.