
A Study on Hand Region Detection for Kinect-Based Hand Shape Recognition

  • Park, Hanhoon (Department of Electronic Engineering, Pukyong National University) ;
  • Choi, Junyeong (Division of Computer Science and Engineering, Hanyang University) ;
  • Park, Jong-Il (Division of Computer Science and Engineering, Hanyang University) ;
  • Moon, Kwang-Seok (Department of Electronic Engineering, Pukyong National University)
  • Received : 2013.03.12
  • Accepted : 2013.05.15
  • Published : 2013.05.30

Abstract

Hand shape recognition is a fundamental technique for implementing natural human-computer interaction. In this paper, we discuss how to effectively detect the hand region for Kinect-based hand shape recognition. Since the Kinect captures color images and infrared (depth) images together, both can be exploited when detecting the hand region: the hand can be detected as the set of pixels with skin color, or as the set of pixels within a specific depth range. After analyzing the performance of each cue, we therefore need a way of properly combining the two so that a clean hand silhouette is extracted, since the hand shape recognition rate depends heavily on the quality of the detected silhouette. Finally, by comparing the hand shape recognition rates obtained with different hand region detection methods in general environments, we propose a high-performance hand region detection method.

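As a concrete illustration of combining the two cues, the following is a minimal Python/OpenCV sketch, assuming the color frame is already registered to the depth frame. The function name detect_hand_region, the YCrCb skin thresholds, and the depth band are illustrative assumptions, not the authors' exact pipeline.

```python
import cv2
import numpy as np

def detect_hand_region(color_bgr, depth_mm, depth_band=(400, 800)):
    """Combine skin-color and depth cues into one hand silhouette.

    color_bgr  -- HxWx3 uint8 color frame, assumed registered to the depth frame
    depth_mm   -- HxW uint16 depth frame in millimeters
    depth_band -- assumed near-interaction range containing the hand
    """
    # Skin-color mask in YCrCb space (illustrative thresholds)
    ycrcb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # Depth mask: pixels inside the assumed hand depth band
    near, far = depth_band
    depth_mask = ((depth_mm >= near) & (depth_mm <= far)).astype(np.uint8) * 255

    # Require both cues, then clean the silhouette morphologically
    mask = cv2.bitwise_and(skin_mask, depth_mask)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes

    # Keep the largest connected component as the hand silhouette
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    silhouette = np.zeros_like(mask)
    if contours:
        hand = max(contours, key=cv2.contourArea)
        cv2.drawContours(silhouette, [hand], -1, 255, thickness=cv2.FILLED)
    return silhouette
```

An AND combination like the one above suppresses skin-colored background at the cost of possibly eroding the hand when either cue fails, whereas an OR combination makes the opposite trade-off; which combination yields the cleaner silhouette, and hence the higher recognition rate, is the kind of trade-off the paper evaluates.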


