I. B. Lashkov, A. M. Kashevnik Determination of driver dangerous states using smartphone camera-based measurements while driving

Abstract.

The paper presents a reference model for a dangerous-state recognition system based on camera readings that describe the vehicle driver's facial features. The model includes schemes developed for determining potentially unsafe driver behavior, observable through a number of visual cues, at every moment of vehicle movement. These schemes focus on recognizing driver drowsiness and distraction by tracking a set of facial features, including the eyes, gaze direction, and head pose, in the camera video stream with the aid of image processing methods. To recognize and classify particular dangerous driver states at an early stage, a reference model for facial feature recognition is proposed; it is built upon the general schemes and characterizes visual driving behavior in situations that can potentially put the driver at risk. The proposed reference model was evaluated with a smartphone-based prototype mobile application; the preliminary results show improved performance and efficiency in recognizing dangerous driving states.
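To make the camera-based pipeline summarized above more concrete, the sketch below shows one common way to derive an eye-closure (PERCLOS-style) drowsiness cue from facial landmarks in a video stream using OpenCV and dlib (cf. refs. 3, 14, 22, 24). It is a minimal illustration under stated assumptions, not the authors' implementation: the landmark model file name, thresholds, and window length are illustrative choices, not the parameters of the prototype application described in the paper.

```python
# Minimal sketch: PERCLOS-style eye-closure cue from camera frames with OpenCV and dlib.
# Assumptions: the 68-point model file "shape_predictor_68_face_landmarks.dat" is available
# locally, and the thresholds below are illustrative, not values from the paper.
import cv2
import dlib
import numpy as np
from collections import deque

EAR_CLOSED = 0.21       # eye aspect ratio below this is treated as "eye closed" (assumed)
PERCLOS_WINDOW = 150    # number of recent frames in the PERCLOS window (assumed ~5 s at 30 fps)
PERCLOS_ALERT = 0.4     # fraction of closed-eye frames that triggers a drowsiness warning (assumed)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(pts):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) for the six landmarks of one eye."""
    v1 = np.linalg.norm(pts[1] - pts[5])
    v2 = np.linalg.norm(pts[2] - pts[4])
    h = np.linalg.norm(pts[0] - pts[3])
    return (v1 + v2) / (2.0 * h)

closed_history = deque(maxlen=PERCLOS_WINDOW)
cap = cv2.VideoCapture(0)  # front-facing camera of the device

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 0)
    if faces:
        shape = predictor(gray, faces[0])
        pts = np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)], dtype=float)
        # In the 68-point annotation the two eyes occupy landmarks 36-41 and 42-47.
        ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2.0
        closed_history.append(ear < EAR_CLOSED)
        # PERCLOS: share of closed-eye frames in the recent window.
        perclos = sum(closed_history) / len(closed_history)
        if len(closed_history) == PERCLOS_WINDOW and perclos > PERCLOS_ALERT:
            print("Drowsiness warning: PERCLOS = %.2f" % perclos)

cap.release()
```

On a smartphone the same logic would run over frames from the front camera; gaze direction and head pose, used by the model for distraction cues, can be estimated from the same landmark set.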

Keywords:

driver, driving behavior, smartphone, dangerous situation, vehicle.

PP. 84-96.

DOI 10.14357/20718632190209

References

1. Global status report on road safety 2018. Geneva: World Health Organization; 2018. Licence: CC BY-NC-SA 3.0 IGO. Available at: https://apps.who.int/iris/bitstream/handle/10665/277370/WHO-NMH-NVI-18.20-eng.pdf (accessed April 16, 2019).
2. Owens, J.M., Dingus, T.A., Guo, F., Fang, Y., Perez, M., McClafferty, J., and Tefft, B. Prevalence of Drowsy Driving Crashes: Estimates from a Large-Scale Naturalistic Driving Study. (Research Brief.) Washington, D.C.: AAA Foundation for Traffic Safety, 2018.
3. Dinges, D., and Grace, R. PERCLOS: A valid psychophysiological measure of alertness as assessed by psychomotor vigilance, TechBrief NHTSA, Publication No. FHWA-MCRT-98-006, 1998.
4. Fazeen, M., Gozick, B., Dantu, R., Bhukhiya, M., and Gonzalez M.C., Safe Driving Using Mobile Phones, IEEE Transactions on Intelligent Transportation Systems, vol. 13, issue 3, pp. 1462-1468, 2012.
5. Lashkov, I.B., Smartphone-based approach to determining driving style with on-board sensors, Information and Control Systems, 2018, Vol. 5, pp. 2–12.
6. Lashkov, I.B., and Kashevnik, A.M., An Ontology Model for Dangerous Situation Prevention Based on the In-cabin Driver Analysis, Intellectual Technologies on Transport, 2018, Vol. 4, pp. 11-19.
7. Lashkov, I., Smirnov, A., Kashevnik, A., and Parfenov, V., Ontology-Based Approach and Implementation of ADAS System for Mobile Device Use While Driving, Proceedings of the 6th International Conference on Knowledge Engineering and Semantic Web, Moscow, CCIS 518, pp. 117-131, 2015.
8. Ramachandran, M., and Chandrakala, S., Android OpenCV based effective driver fatigue and distraction monitoring system, 2015 International Conference on Computing and Communications Technologies (ICCCT), pp. 262-266, 2015.
9. Abulkhair, M., Alsahli, A.H., Taleb, K.M., Bahran, A.M., Alzahrani, F.M., Alzahrani, H.A., and Ibrahim, L.F., Mobile Platform Detect and Alerts System for Driver Fatigue, Procedia Computer Science, vol. 62, pp. 555–564, 2015.
10. García-García, M., Caplier, A., and Rombaut, M., Driver Head Movements While Using a Smartphone in a Naturalistic Context, 6th International Symposium on Naturalistic Driving Research, Jun 2017, The Hague, Netherlands. 8, pp. 1-5, 2017.
11. Schmidt, J., Laarousi, R., Stolzmann, W., and Karrer, K., Eye blink detection for different driver states in conditionally automated driving and manual driving using EOG and a driver camera, Behavior Research Methods, vol. 50, iss. 3, pp. 1088-1101, 2018.
12. Galarza, E. E., Egas, F. D., Silva, F., Velasco, P. M., and Galarza, E., Real Time Driver Drowsiness Detection Based on Driver’s Face Image Behavior Using a System of Human Computer Interaction Implemented in a Smartphone, Proceedings of the International Conference on Information Technology & Systems, pp. 563-572, 2018.
13. Mohammad, F., Mahadas, K., and Hung, G. K., Drowsy driver mobile application: Development of a novel scleral area detection method, Computers in Biology and Medicine, vol. 89, pp. 76–83, 2017.
14. Bradski, G., Kaehler, A., Learning OpenCV: Computer Vision in C++ with the OpenCV Library, O'Reilly Media, Inc., 2nd edition, 2013.
15. Nambi, A. U., Bannur, S., Mehta, I., Kalra, H., Virmani, A., Padmanabhan, V. N., Bhandari R., and Raman, B., HAMS: Driver and Driving Monitoring using a Smartphone, Proceedings of the 24th Annual International Conference on Mobile Computing and Networking (MobiCom '18). ACM, New York, NY, USA, pp. 840-842, 2018.
16. Redmon, J., and Farhadi, A. YOLO9000: Better, Faster, Stronger, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6517-6525, 2017.
17. Sidaway, B., Fairweather, M., Sekiya, H., and McNitt-Gray, J., Time-to-Collision Estimation in a Simulated Driving Task, Human Factors: The Journal of the Human Factors and Ergonomics Society, vol. 38, issue 1, pp. 101-113, 1996.
18. Kiefer, R. J., Flannagan, C. A., and Jerome, C. J., Time-to-Collision Judgments Under Realistic Driving Conditions, Human Factors: The Journal of the Human Factors and Ergonomics Society, vol. 48, issue 2, pp. 334–345, 2006.
19. Lisper, O., and Eriksson, B., Effect of the length of a rest break and food intake on subsidiary reaction-time performance in an 8-hour driving task, Journal of Applied Psychology, vol. 65, issue 1, pp. 117–122, 1980.
20. Wenhui, D., Peishu, Q., and Jing, H., Driver fatigue detection based on fuzzy fusion, Proceedings of the Chinese Control and Decision Conference (CCDC '08), pp. 2640–2643, Shandong, China, July 2008.
21. Bergasa, L., Nuevo, J., Sotelo, M., Barea, R., and Lopez, M., Real-Time System for Monitoring Driver Vigilance, IEEE Transactions on Intelligent Transportation Systems, vol. 7, no. 1, pp. 63–77, 2006.
22. King, D. E., Dlib-ml: A machine learning toolkit, Journal of Machine Learning Research, vol. 10, pp. 1755–1758, 2009.
23. Ignatov, A., Timofte, R., Szczepaniak, P., Chou, W., Wang, K., Wu, M., Hartley, T., and Gool, L. V., AI Benchmark: Running Deep Neural Networks on Android Smartphones, ECCV Workshops 2018, pp. 288-314, 2018.
24. Sagonas, C., Antonakos, E., Tzimiropoulos, G., Zafeiriou, S., and Pantic, M., 300 Faces In-the-Wild Challenge: Database and results, Image and Vision Computing (IMAVIS), Special Issue on Facial Landmark Localisation "In-The-Wild", issue 47, pp. 3-18, 2016.
25. Gulli, A., and Pal, S., Deep Learning with Keras, Packt Publishing, p. 296, 2017.
26. Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H., MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications, Computing Research Repository, 9 p., 2017.
27. Redmon, J., Divvala, S.K., Girshick, R.B., and Farhadi, A., You only look once: Unified, real-time object detection, IEEE Conference on Computer Vision and Pattern Recognition, pp. 779-788, 2016.