Analysis of Table Tennis Swing using Action Recognition

  • Heo, Geon (Graduate School of Automotive Engineering, Seoul National University of Science and Technology) ;
  • Ha, Jong-Eun (Department of Mechanical and Automotive Engineering, Seoul National University of Science and Technology)
  • Received : 2014.07.04
  • Accepted : 2014.10.21
  • Published : 2015.01.01

Abstract

In this paper, we present an algorithm that analyzes table-tennis poses using action recognition. We use a Kinect as the 3D sensor and process the 3D skeleton data it provides. Joint positions are expressed in a spherical coordinate system, and features are selected using k-means clustering. The starting and ending frames of each swing are detected automatically, and the motion is classified into two groups: forehand and backhand swings. Each swing type is modeled with an HMM (Hidden Markov Model) trained on a dataset of 200 sequences from two players. The system discriminates the two types of table-tennis swing in real time, and it can also assess a player's form by measuring its similarity to good reference poses.
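As a rough illustration only, and not the authors' implementation, the sketch below outlines the pipeline the abstract describes: spherical-coordinate features computed from Kinect skeleton joints, a k-means codebook, and one HMM per swing class. The joint indices, the scikit-learn/hmmlearn usage, and every hyper-parameter here are assumptions made for this example.

```python
# Minimal sketch, not the authors' code: forehand/backhand swing classification
# from Kinect skeleton sequences, loosely following the pipeline in the abstract
# (spherical-coordinate features, k-means, one HMM per swing class).
# Joint indices, parameters, and library choices are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from hmmlearn import hmm


def to_spherical(xyz):
    """Convert an (N, 3) array of Cartesian offsets to (r, theta, phi)."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))  # polar angle
    phi = np.arctan2(y, x)                                          # azimuth
    return np.stack([r, theta, phi], axis=1)


def sequence_features(frames, joint_ids=(5, 6, 7), torso_id=1):
    """frames: (T, 20, 3) Kinect joint positions per frame (assumed layout).
    Returns a (T, 3 * len(joint_ids)) matrix of spherical coordinates of the
    chosen arm joints measured relative to the torso joint."""
    feats = []
    for f in frames:
        rel = f[list(joint_ids)] - f[torso_id]   # joints relative to torso
        feats.append(to_spherical(rel).ravel())
    return np.asarray(feats)


def train_swing_models(train_seqs, train_labels, n_states=5, n_clusters=8):
    """Fit a k-means codebook on all frames, then one Gaussian HMM per class."""
    all_frames = np.vstack([sequence_features(s) for s in train_seqs])
    codebook = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(all_frames)
    models = {}
    for label in sorted(set(train_labels)):
        class_feats = [codebook.transform(sequence_features(s))  # distances to centers
                       for s, l in zip(train_seqs, train_labels) if l == label]
        X, lengths = np.vstack(class_feats), [len(c) for c in class_feats]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return codebook, models


def classify_swing(frames, codebook, models):
    """Pick the swing class whose HMM gives the highest log-likelihood."""
    obs = codebook.transform(sequence_features(frames))
    return max(models, key=lambda label: models[label].score(obs))
```

A continuous Gaussian HMM over distances to the k-means centers is used here only to keep the example self-contained; the paper's actual feature selection and observation model may differ. Automatic detection of the swing's starting and ending frames and the similarity-based pose analysis mentioned in the abstract are omitted from this sketch.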
