Determining Emotion in Video with a Convolutional Neural Network
DOI: https://doi.org/10.14421/jiska.2020.51-04

Abstract
A person's emotions can be shown through facial expressions, and human facial expressions can change dynamically without the person being aware of it. This study determines emotion by recognizing human facial expressions and recording each change in expression. The method is to classify the six basic human facial expressions plus a neutral expression with a Convolutional Neural Network (CNN). The data distribution was balanced to improve model performance. This modeling produced a classification model that can be applied to a video. The model was tested on data held out from the training data and evaluated with a confusion matrix, yielding an accuracy of 74%, an average precision of 75.05%, and an average recall of 74%. At the end of the study, the researchers applied the classification model to several videos, each representing the expressions of a person in that video. Every change of expression was recorded and analyzed to find the most dominant emotion.
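The abstract does not give the network architecture or the evaluation code, so the sketch below is only an illustration of the approach it describes: a small Keras CNN over the seven expression classes (six basic expressions plus neutral) and a confusion-matrix evaluation with per-class precision and recall. The 48x48 grayscale input, the layer sizes, and all variable names are assumptions, not details taken from the paper.

# Illustrative sketch only: a compact CNN for seven-class facial expression
# classification and a confusion-matrix evaluation. The 48x48 grayscale input,
# the layer sizes, and the stand-in data below are assumptions for illustration,
# not the architecture or data used in the paper.
import numpy as np
import tensorflow as tf
from sklearn.metrics import classification_report, confusion_matrix

NUM_CLASSES = 7  # six basic expressions plus neutral

def build_model(input_shape=(48, 48, 1)):
    """Small CNN: two conv/pool blocks followed by a dense softmax classifier."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training on labelled face crops would normally happen here, e.g.:
# model.fit(x_train, y_train, epochs=30, validation_split=0.1)

# Random stand-in test data just to exercise the evaluation path; the study
# uses a held-out test set of face images with expression labels.
x_test = np.random.rand(32, 48, 48, 1).astype("float32")
y_test = np.random.randint(0, NUM_CLASSES, size=32)

y_pred = np.argmax(model.predict(x_test, verbose=0), axis=1)
print(confusion_matrix(y_test, y_pred, labels=list(range(NUM_CLASSES))))
print(classification_report(y_test, y_pred, labels=list(range(NUM_CLASSES)),
                            zero_division=0))  # per-class precision and recall

Averaging the per-class precision and recall from such a report corresponds to the 75.05% average precision and 74% average recall the study reports for its own model.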