Computer vision system for Working time estimation by Human Activities detection in video frames
https://doi.org/10.15514/ISPRAS-2020-32(1)-7
Abstract
The goal of this research is to develop and test methods for detecting people, the parametric points of their hands, and the tools they are currently working with in video frames. The following algorithms are implemented: detection of the bounding-box coordinates of humans in video frames; human pose estimation, i.e. detection of parametric points for each person in the frame; detection of the bounding-box coordinates of a predefined set of tools; and estimation of which tool a person is using at a given moment. These algorithms build on existing computer vision models for three tasks: object detection, pose estimation, and object overlaying. A machine learning system for working time estimation based on computer vision has been developed and deployed as a web service. Recall, precision, and F1-score are used as metrics for the multi-class classification problem of determining which tool a person is using in a given video frame (object overlaying). The proposed solution to action detection for the railway industry is novel in terms of estimating work activity from video and optimizing working time based on human action detection. Because the videos are recorded with fixed camera positioning and lighting, the system imposes some constraints on how video must be filmed; another limitation is the small set of supported tools (pliers, wrench, hammer, chisel). Further work may address 3D modeling, modeling activity as a sequence of frames (RNN and LSTM models), development of an action detection model, working-time optimization for the working process, and a recommendation system for the working process based on video activity detection.
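The object overlaying step described above, deciding which tool a person is using, can be sketched as a simple geometric test between pose keypoints and tool bounding boxes. This is a minimal illustration, not the authors' implementation: the function names, the use of wrist keypoints, and the containment rule are all assumptions for the sketch.

```python
# Hypothetical sketch of the "object overlaying" step: given wrist keypoints
# from pose estimation and tool bounding boxes from object detection, decide
# which tool (if any) the person is using in a frame.

def point_in_box(x, y, box):
    """Check whether a point lies inside an axis-aligned box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def tool_in_use(wrists, tool_boxes):
    """Return the label of the first tool box containing a wrist, or None.

    wrists     -- list of (x, y) wrist keypoints from pose estimation
    tool_boxes -- list of (label, (x1, y1, x2, y2)) from object detection
    """
    for label, box in tool_boxes:
        if any(point_in_box(x, y, box) for x, y in wrists):
            return label
    return None

# Example frame: the second wrist falls inside the hammer's box.
wrists = [(120, 200), (310, 415)]
tools = [("wrench", (10, 10, 60, 60)), ("hammer", (300, 400, 340, 440))]
print(tool_in_use(wrists, tools))  # -> hammer
```

Per-frame labels produced this way can then be compared against annotated frames to compute the recall, precision, and F1-score mentioned in the abstract.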
Keywords
About the Authors
Sergey Evgenievich Shtekhin
Russian Federation
Senior Data Analyst
Denis Konstantinovich Karachev
Russian Federation
Data Analyst
Justina Alekseevna Ivanova
Russian Federation
Data Analyst
References
1. Guidelines for the study of working time costs in structural divisions of JSC Russian Railways. Approved by the order of Russian Railways on April 10, 2018 (in Russian).
2. Keypoint evaluation metrics used by COCO. Available at: http://cocodataset.org/#keypoints-eval, accessed 05.01.2020.
3. Andrea Gaetano Tramontano. Deep Learning Networks for Real-time Object Detection on Mobile Devices. Master’s Degree Thesis, University of Padova, Italy, 2018/2019.
4. C.-Y. Fu, W. Liu, A. Ranga, A. Tyagi, and A. C. Berg. Dssd: Deconvolutional single shot detector. arXiv:1701.06659, 2017.
5. J. Huang, V. Rathod et al. Speed/accuracy trade-offs for modern convolutional object detectors. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 3296-3297.
6. D. Gordon, A. Kembhavi, M. Rastegari, J. Redmon, D. Fox, and A. Farhadi. Iqa: Visual question answering in interactive environments. arXiv:1712.03316, 2017.
7. O. Russakovsky, L.-J. Li, and L. Fei-Fei. Best of both worlds: human-machine collaboration for object annotation. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 2121–2131.
8. J. Parham, J. Crall, C. Stewart, T. Berger-Wolf, and D. Rubenstein. Animal population censusing at scale with citizen science and photographic identification. In Proc. of the AAAI 2017 Spring Symposium on Artificial Intelligence for the Social Good, 2017, pp. 37-44.
9. T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár. Focal loss for dense object detection. arXiv:1708.02002, 2017.
10. M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. International Journal of Computer Vision, vol. 88, no. 2, 2010, pp. 303– 338.
11. I. Krasin, T. Duerig et al. Openimages: A public dataset for large-scale multi-label and multi-class image classification, 2017. Available at: https://github.com/openimages, accessed 05.01.2020.
12. Joseph Redmon, Ali Farhadi. YOLOv3: An Incremental Improvement. arXiv:1804.02767, 2018.
13. Jonathan Hui. mAP (mean Average Precision) for Object Detection. Available at: https://medium.com/@jonathan_hui/map-mean-average-precision-for-object-detection-45c121a31173, accessed 05.01.2020.
14. M. Scott. Smart camera gimbal bot scanlime:027, Dec 2017.
15. Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh. Realtime multiperson 2d pose estimation using part affinity fields. CVPR, 2017.
16. D. Osokin. Real-time 2D Multi-Person Pose Estimation on CPU: Lightweight OpenPose. arXiv:1811.12004, 2018.
17. Y. Chen, Z. Wang, Y. Peng, Z. Zhang, G. Yu, and J. Sun. Cascaded pyramid network for multi-person pose estimation. In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 7103-7112.
18. Xiao, Bin, Haiping Wu, and Yichen Wei. Simple Baselines for Human Pose Estimation and Tracking. Lecture Notes in Computer Science, vol. 11210, 2018, pp. 472-487.
19. G. Moon, J.Y. Chang and K.M. Lee. PoseFix: Model-Agnostic General Human Pose Refinement Network. In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 7765-7773.
20. Arun Gandhi. Data Augmentation. How to use Deep Learning when you have Limited Data – Part 2. Available at: https://nanonets.com/blog/data-augmentation-how-to-use-deep-learning-when-you-have-limited-data-part-2/, accessed 05.01.2020.
21. Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. Communications of the ACM, vol. 60, no. 6, 2017, pp. 84-90.
22. Massimiliano Mancini, Hakan Karaoguz, Elisa Ricci, Patric Jensfelt, Barbara Caputo. Kitting in the Wild through Online Domain Adaptation. arXiv:1807.01028, 2018.
23. Hakan Karaoguz, Patric Jensfelt. Fusing Saliency Maps with Region Proposals for Unsupervised Object Localization, arXiv:1804.03905, 2018.
24. K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv:1512.03385, 2015.
25. S. Kiranyaz, T. Ince, and M. Gabbouj. Real-Time Patient-Specific ECG Classification by 1-D Convolutional Neural Networks. IEEE Transactions on Biomedical Engineering, vol. 63, issue 3, 2016, pp. 664-675.
26. M.D. Zeiler. ADADELTA: an adaptive learning rate method. arXiv:1212.5701, 2012.
27. Xuefan Zha. A Comparison of 1-D and 2-D Deep Convolutional Neural Networks in ECG Classification. arXiv:1810.07088, 2018.
28. G. Huang, Z. Liu, and K.Q. Weinberger. Densely connected convolutional networks. arXiv:1608.06993, 2017.
29. A. Sherstinsky. Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network. arXiv:1808.03314, 2018.
Review
For citations:
Shtekhin S.E., Karachev D.K., Ivanova J.A. Computer vision system for Working time estimation by Human Activities detection in video frames. Proceedings of the Institute for System Programming of the RAS (Proceedings of ISP RAS). 2020;32(1):121-136. (In Russ.) https://doi.org/10.15514/ISPRAS-2020-32(1)-7