PEOPLE COUNTING FOR PUBLIC TRANSPORTATIONS USING YOU ONLY LOOK ONCE METHOD

  • Tsabita Al Asshifa Hadi Kusuma, Telecommunication Engineering, Electrical Engineering Faculty, Telkom University, Indonesia
  • Koredianto Usman, Telecommunication Engineering, Electrical Engineering Faculty, Telkom University, Indonesia
  • Sofia Saidah, Telecommunication Engineering, Electrical Engineering Faculty, Telkom University, Indonesia
Keywords: CNN, deep learning, IoU, mAP, people counting, YOLOv4

Abstract

People counting is widely used in daily life, including in public transportation such as trains and airplanes. Service operators usually count the number of passengers manually with a hand counter. In an era when most human activities are digital, this method consumes considerable time and energy. This research is therefore proposed so that service operators no longer have to count manually with a hand counter, but can instead use image processing with the You Only Look Once (YOLO) method; people counting is expected to become computer-vision based rather than manual. This final project uses YOLOv4, the latest YOLO version at the time of writing, which can detect up to 80 object classes, and applies transfer learning to reduce the number of classes to one. The system was implemented in the Python programming language on various platforms, using three training-data scenarios and two testing-data scenarios. The parameters measured are accuracy, precision, recall, F1 score, Intersection over Union (IoU), and mean Average Precision (mAP). The best configuration uses a learning rate of 0.001, a random value of 0, and 32 subdivisions. The best accuracy of this system is 69% on datasets it was trained on, while the pre-trained weights achieve 72.68% accuracy, 77% precision, and 62.88% average IoU. This research demonstrates proper performance for detecting and counting people on public transportation.
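The best configuration reported above (learning rate 0.001, random value 0, sub-divisions 32, one class) corresponds roughly to the following Darknet `.cfg` fragment. This is an assumed sketch based on the standard YOLOv4 custom-training template, not the authors' published file; every value other than the four reported ones is taken from that template, and `filters = (classes + 5) * 3` is the usual rule for the convolutional layer preceding each `[yolo]` layer.

```ini
[net]
batch=64
subdivisions=32          ; best sub-division setting reported
learning_rate=0.001      ; best learning rate reported

; ... backbone layers unchanged from the yolov4 template ...

[convolutional]
filters=18               ; (classes + 5) * 3 for a 1-class model

[yolo]
classes=1                ; transfer learning: 80 COCO classes -> 1 "person" class
random=0                 ; best random value reported
```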
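The evaluation metrics named in the abstract (IoU, precision, recall, F1 score) can be sketched as follows. This is a minimal illustration of the standard definitions, not the authors' actual evaluation code; boxes are assumed to be axis-aligned `(x1, y1, x2, y2)` tuples.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def precision_recall_f1(tp, fp, fn):
    """Detection metrics from true-positive, false-positive, and
    false-negative counts (a detection counts as TP when its IoU with a
    ground-truth box exceeds a chosen threshold)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```

In this formulation mAP is then the mean, over classes, of the area under the precision-recall curve; with a single "person" class it reduces to that class's Average Precision.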



Published
2021-02-05
How to Cite
[1]
T. A. A. H. Kusuma, K. Usman, and S. Saidah, “PEOPLE COUNTING FOR PUBLIC TRANSPORTATIONS USING YOU ONLY LOOK ONCE METHOD”, J. Tek. Inform. (JUTIF), vol. 2, no. 1, pp. 57-66, Feb. 2021.