AdaBoost-based Gesture Recognition Using Time Interval Window Applied Global and Local Feature Vectors with Mono Camera


  • Hwang, Seung-Jun (Department of Electronics and Information Engineering, Korea Aerospace University) ;
  • Ko, Ha-Yoon (Department of Electronics and Information Engineering, Korea Aerospace University) ;
  • Baek, Joong-Hwan (Department of Electronics and Information Engineering, Korea Aerospace University)
  • Received : 2018.01.15
  • Accepted : 2018.01.24
  • Published : 2018.03.28

Abstract

Recently, smart TVs based on Android and iOS set-top boxes have become widespread. This paper proposes a new approach that controls the TV with gestures instead of a remote control. The AdaBoost algorithm is applied to gesture recognition using a single mono camera. First, body coordinates are extracted with a Camshift-based body tracking and pose estimation algorithm built on Gaussian background subtraction. Global and local feature vectors allow gestures performed at different speeds to be recognized. Hand and wrist trajectories are tracked over time-interval windows, and an AdaBoost classifier with CART weak learners is trained to classify the gestures. The CART algorithm is also used to search for the principal feature vectors that yield a high classification success rate. As a result, 24 optimal feature vectors were found, giving a lower error rate (3.73%) and a higher recognition rate (95.17%) than the existing algorithm.
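
The following is a minimal sketch of the preprocessing stage described above: Gaussian background subtraction followed by CamShift tracking of a hand region from a mono camera, using OpenCV. It is not the authors' implementation; the camera index, the initial tracking window, and the histogram setup are placeholder assumptions for illustration.

```python
import cv2

cap = cv2.VideoCapture(0)                       # single (mono) camera
backsub = cv2.createBackgroundSubtractorMOG2()  # Gaussian-mixture background model

# Hypothetical initial hand/body box; in the paper this comes from the
# body tracking / pose estimation step, here it is fixed for illustration.
track_window = (200, 150, 80, 80)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

# Hue histogram of the initial region, used for back-projection later.
ok, frame = cap.read()
x, y, w, h = track_window
roi_hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([roi_hsv], [0], None, [16], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

trajectory = []                                 # hand-centre coordinates per frame
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fgmask = backsub.apply(frame)               # suppress the static background
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    backproj = cv2.bitwise_and(backproj, backproj, mask=fgmask)  # keep moving pixels

    ret, track_window = cv2.CamShift(backproj, track_window, term_crit)
    x, y, w, h = track_window
    trajectory.append((x + w // 2, y + h // 2))  # feeds the trajectory features

    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:             # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```

The per-frame window centres collected in `trajectory` are the raw material from which time-interval-window feature vectors would be built.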

With the recent spread of smart TVs based on set-top boxes running Android, iOS, and similar platforms, we propose a new approach that lets users control the TV with gestures. This paper presents an AdaBoost-based gesture recognition algorithm using a mono camera sensor. First, Gaussian background subtraction and a Camshift-based pose tracking and estimation algorithm are used to extract body coordinates. The AdaBoost learning model takes a set of body-normalized global and local feature vectors as its feature pattern, so that motions performed at different speeds can be recognized. In addition, multiple AdaBoost classifiers are applied to recognize various gestures of differing speeds. Using the CART algorithm, important feature vectors with high classification success are identified and low-importance feature vectors are removed, searching for the optimal feature set with the highest classification success rate. As a result, 24 principal feature vectors were found, and a feature set and classifier were designed with a lower misclassification rate (3.73%) and a higher recognition rate (95.17%) than the existing algorithm.
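
Below is a minimal sketch of the classification stage: CART-based feature ranking to keep the 24 most important features, then AdaBoost with shallow CART weak learners. It is an illustration under assumptions, not the paper's code; the file names and feature layout are hypothetical, and it assumes scikit-learn ≥ 1.2 (where `AdaBoostClassifier` takes the `estimator` parameter).

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# X: window-wise global/local trajectory feature vectors, y: gesture labels.
X = np.load("gesture_features.npy")   # hypothetical feature dump
y = np.load("gesture_labels.npy")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Rank features with a single CART tree on the training set and keep the
# 24 most important ones, mirroring the pruning of low-importance features.
ranker = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
top24 = np.argsort(ranker.feature_importances_)[::-1][:24]
X_tr, X_te = X_tr[:, top24], X_te[:, top24]

# Boost shallow CART trees as weak learners.
clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=2),
    n_estimators=200,
    learning_rate=0.5,
    random_state=0,
)
clf.fit(X_tr, y_tr)
print("recognition rate:", clf.score(X_te, y_te))
```

Ranking on the training split only (rather than the whole dataset) avoids leaking test information into the feature selection.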

