Effectiveness of a contrastive learning framework in driver behavior analysis
DOI: https://doi.org/10.15625/1813-9663/20349
Keywords: Contrastive learning, driver behavior, deep learning, combined loss functions.
Abstract
Demand is growing for advanced driver behavior analysis systems that support drivers and help reduce accidents. Such solutions have been researched and developed for a long time, but their results have only recently gained recognition as deep learning methods became widely published. This paper proposes several modifications to an effective deep learning model, the Contrastive Learning Framework (CLF), to improve its understanding of driver behavior and its overall impact; the task nonetheless poses significant challenges, such as data imbalance and real-time prediction. In more detail, we propose the CENCE loss function, which computes comparable visual features over both positive and negative samples, and apply the Cross Stage Partial technique (CSPNet and CSPResNet) to strengthen the base encoder. Our approach is evaluated on published datasets, and the results show promising performance in driver behavior analysis.
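For illustration, below is a minimal sketch of how a combined cross-entropy + noise-contrastive ("CENCE"-style) objective might be wired up in PyTorch. The function name, the weighting factor alpha, and the choice of same-label samples as positives are our assumptions for this sketch, not the paper's published formulation.

```python
# Hypothetical sketch of a combined cross-entropy + InfoNCE objective.
# The exact CENCE formulation is not reproduced here; weighting (alpha),
# temperature, and the positive-pair definition are assumptions.
import torch
import torch.nn.functional as F

def cence_loss(embeddings, logits, labels, temperature=0.1, alpha=0.5):
    """Combine supervised cross-entropy with an InfoNCE-style contrastive
    term over a batch of L2-normalized encoder features.

    embeddings: (N, D) features from the base encoder (assumed normalized)
    logits:     (N, C) classification outputs
    labels:     (N,)   behavior-class labels; same-label pairs act as positives
    """
    # Supervised classification term.
    ce = F.cross_entropy(logits, labels)

    # Pairwise cosine similarities, scaled by temperature.
    sim = embeddings @ embeddings.t() / temperature          # (N, N)
    n = sim.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(self_mask, float("-inf"))          # drop self-pairs

    # Positives: other samples in the batch sharing the same label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # InfoNCE: average -log p(positive | anchor) over each anchor's positives.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    nce = -(log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_count).mean()

    return ce + alpha * nce
```

Under this reading, the cross-entropy term anchors the behavior classes while the contrastive term pulls visually comparable positive features together and pushes negatives apart, which matches the abstract's description of computing comparable features for both positive and negative samples.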