DEEP LEARNING IMPLEMENTATION FOR EMPLOYEE ATTENDANCE SYSTEM IN UNIVERSITAS PERTAMINA

Meredita Susanty
Sahrul Sahrul
Erwin Setiawan
Herminarto Nugroho

Abstract


Attendance recording by scanning RFID tags makes human resource (HR) staff's work more effective and efficient because it saves the time and effort of the manual recording and recapitulation that HR staff would otherwise perform. However, employees frequently forget to bring the identification cards that carry their RFID tags, which increases the HR staff's workload. This study proposes a facial recognition prototype as an alternative way to record employee attendance. The model uses an artificial neural network with more than one hidden layer, trained with a supervised learning approach. The results show that when high-resolution images are provided as training data, the prototype makes accurate predictions. However, further study is needed before the existing attendance recording can be replaced with face recognition, to address problems such as the distance between the camera and the subject and accessories that occlude essential facial features, such as glasses, headscarves, and masks. Future research should determine the maximum distance between the subject and the camera and the permissible angle of the subject's face toward the camera.
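The recognition step described above typically reduces to comparing a learned face embedding against a gallery of enrolled employees and accepting the closest match under a distance threshold. A minimal sketch of that matching logic, assuming hypothetical employee names, toy 3-dimensional embeddings (real systems use, e.g., 128-dimensional vectors), and an illustrative 0.6 threshold, none of which come from the paper:

```python
import math

# Hypothetical enrolled employees: name -> face embedding produced by a
# trained network (shortened to 3-d vectors purely for illustration).
ENROLLED = {
    "alice": [0.1, 0.2, 0.3],
    "bob":   [0.9, 0.8, 0.7],
}

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, threshold=0.6):
    """Return the closest enrolled name, or None if no one is close enough.

    The threshold rejects unknown faces: a probe far from every enrolled
    embedding should not be logged as anyone's attendance.
    """
    name, dist = min(
        ((n, euclidean(probe, emb)) for n, emb in ENROLLED.items()),
        key=lambda pair: pair[1],
    )
    return name if dist <= threshold else None

print(identify([0.12, 0.21, 0.28]))  # near alice's embedding -> "alice"
print(identify([2.0, 2.0, 2.0]))     # far from everyone -> None
```

The threshold value trades off false accepts against false rejects, which is where the distance and occlusion problems noted in the abstract would surface in practice.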


Keywords


deep learning; face recognition; artificial neural network; image processing; computer vision

Full Text:

PDF





DOI: https://doi.org/10.24176/simet.v11i2.4605

Simetris : Jurnal Teknik Mesin, Elektro dan Ilmu Komputer is licensed under a Creative Commons Attribution 4.0 International License.
