Touch Pen Using Depth Information

  • Lee, Dong-Seok (Dept. of Computer Software Engineering, Dongeui University) ;
  • Kwon, Soon-Kak (Dept. of Computer Software Engineering, Dongeui University)
  • Received : 2015.08.25
  • Accepted : 2015.11.09
  • Published : 2015.11.30

Abstract

Current touch pens require special equipment to detect a touch, and their price increases in proportion to the screen size. In this paper, we propose a method for detecting a touch and implementing a pen using depth information. The proposed method obtains a background depth image using a depth camera and extracts an object by comparing a captured depth image with the background depth image. A touch is determined when the depth value of the object is the same as that of the background, and the corresponding pen event is then provided. Using this method, we can implement a cheaper and more convenient touch pen.

Keywords

1. INTRODUCTION

Recently, interest in touch pens, which encourage active interaction between teacher and students, has been growing as computer-based multimedia environments are built into classrooms. A touch pen works well for strengthening interaction and collaboration between students and the teacher during lessons. Therefore, the adoption of touch pens is growing in various countries as part of upgrading educational infrastructure.

Conventional methods for recognizing a touch with a touch pen have been presented: methods using a screen that includes physical touch sensors [1], methods detecting infrared (IR) rays flowing over a screen fitted with IR sensors [2,3], and methods detecting ultrasonic waves generated when a pen touches the screen [4,5]. The method using a screen with physical touch sensors finds the touch point from the change of current when the screen is touched. This method has high touch accuracy and does not need a special touch device, but the screen itself is expensive. The method using IR sensors attaches a large number of IR sensors around the screen, emits IR signals across it, and identifies a touch when the IR signal is blocked at the touched position. This method is independent of the screen, so producing the device is simpler than for a physical touch screen; however, it requires a configuration process. The method using ultrasonic sensors attaches the sensors around the screen and detects a touch with an ultrasonic pen. A device using this method is cheaper than the others, but the dedicated ultrasonic pen is required for every touch.

In this paper, we propose a method of implementing a touch pen by recognizing the touch with a depth camera. Since Microsoft's Kinect, which can measure depth information, was released in late 2010, studies using depth values to recognize the user's motion have been presented [6,7,8]. In addition, studies have been presented on methods for providing an event from the user's motion obtained with a depth camera [9,10,11].

This paper presents a method of recognizing a touch via the depth camera and providing the appropriate event for that touch. Using depth information, we can generate a background depth image that is not affected by lighting. Then we can obtain the object by comparing the captured depth image with the background depth image. We measure the distance between the object and the screen using the depth information of the object and decide whether the object is touching. There may be multiple touch candidates; this paper considers only a single touch by taking the candidate point nearest to the depth camera as the touch point. In this way, we can implement the touch pen by providing a touch event without any physical touch sensors.

 

2. PROPOSED TOUCH PEN METHOD USING DEPTH INFORMATION

In this paper, we implement the touch pen by recognizing the touch from the depth image captured around the screen by the depth camera. Fig. 1 shows the flowchart of the proposed method.

Fig. 1. Flowchart of the proposed method.

A depth camera is situated where it can capture the screen. Then the depth camera captures depth images. We obtain the background depth image by capturing the screen when no object is in front of it. Fig. 2 shows a visualization of the depth information of the background.

Fig. 2. The depth information of the background.
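As an illustration of this background acquisition step, the following is a minimal sketch assuming depth frames arrive as NumPy arrays (for example via a Kinect driver). The callable grab_depth_frame is hypothetical, and the median over several frames is an added robustness measure, not part of the original method, which simply captures the empty screen.

```python
import numpy as np

def capture_background_depth(grab_depth_frame, num_frames=30):
    """Estimate the background depth image B from frames captured while
    no object is in front of the screen.

    grab_depth_frame: hypothetical callable returning one (H, W) depth
                      frame as a NumPy array (e.g., uint16 millimeters).
    """
    frames = [grab_depth_frame().astype(np.float32) for _ in range(num_frames)]
    # Pixel-wise median suppresses occasional depth dropouts (an assumption,
    # not described in the paper, which uses a single background capture).
    return np.median(np.stack(frames, axis=0), axis=0)
```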

After obtaining the background depth image, we capture the depth image from the depth camera. For each pixel (x, y) in the image, we compare the depth value Dxy in the captured depth image with the depth value Bxy in the background depth image. We perform a binarization that removes the pixels for which Dxy is equal to Bxy, as in Eq. (1), and obtain the binarized image O.
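Equation (1) is not reproduced in this copy of the text; a plausible reconstruction from the surrounding description, marking object pixels where the captured depth differs from the background, is:

\[
O_{xy} =
\begin{cases}
1, & \text{if } D_{xy} \neq B_{xy} \\
0, & \text{if } D_{xy} = B_{xy}
\end{cases}
\qquad (1)
\]

In practice a small tolerance would likely replace exact equality to absorb sensor noise.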

There may be noise in the binarized image. The noise can be removed in a labeling process that discards any object whose size is smaller than a fixed size Nmin. In this way, the object is separated from the image. Fig. 3 shows how the noise areas in the binarized image (Fig. 3 (a)) are removed by the labeling process (Fig. 3 (b)).

Fig. 3. The process of object extraction: (a) the binarized image and (b) the labeled image from which noise has been removed.
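As an illustration of the binarization and labeling steps above, here is a minimal Python/OpenCV sketch. The depth tolerance tol (used instead of exact equality, to absorb sensor noise) and all function names are assumptions for illustration, not part of the original method.

```python
import numpy as np
import cv2

def extract_object_mask(depth, background, n_min=500, tol=10):
    """Binarize the depth image against the background and remove small
    noise components by connected-component labeling.

    depth, background: (H, W) depth images in millimeters.
    n_min: minimum component area in pixels (N_min in the paper).
    tol:   assumed depth tolerance (mm) standing in for exact equality.
    """
    # Object pixels are those whose depth differs from the background.
    mask = (np.abs(depth.astype(np.int32) - background.astype(np.int32)) > tol)
    mask = mask.astype(np.uint8)

    # Label connected components and keep only those at least n_min pixels.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    cleaned = np.zeros_like(mask)
    for i in range(1, num):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= n_min:
            cleaned[labels == i] = 1
    return cleaned
```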

We check the pixels that compose the object. We consider that a touch has occurred if the object is within a constant distance M of the screen. For each pixel (x, y) in the object, we compare the depth value Dxy in the captured depth image with the depth value Bxy in the background depth image, which corresponds to the screen. If the difference between Dxy and Bxy is less than M, as in Eq. (2), we consider that a touch has occurred at this location. Here, M accounts for the error of the depth information captured by the depth camera.
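Equation (2) is likewise missing from this copy; based on the description above, it can be reconstructed as the touch condition at pixel (x, y):

\[
\left| B_{xy} - D_{xy} \right| < M
\qquad (2)
\]

Since the object lies in front of the screen, Dxy is normally no larger than Bxy, so this is effectively the condition that the object is closer than M to the screen surface.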

Fig. 4. Comparing the captured depth image with the screen.
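Below is a minimal sketch of the touch test of Eq. (2) combined with the single-touch selection mentioned in the introduction (the candidate nearest to the depth camera). Array layout, units, and function names are assumptions for illustration.

```python
import numpy as np

def detect_touch_point(depth, background, object_mask, m):
    """Apply the touch test of Eq. (2) to the object pixels and select a
    single touch point as the candidate nearest to the depth camera.

    depth, background: (H, W) depth images (same unit as m).
    object_mask:       (H, W) binary mask of the extracted object.
    m:                 touch detection distance M.
    Returns (x, y) of the touch point, or None if no touch is detected.
    """
    diff = np.abs(background.astype(np.int32) - depth.astype(np.int32))
    touch_mask = (object_mask > 0) & (diff < m)
    ys, xs = np.nonzero(touch_mask)
    if xs.size == 0:
        return None
    nearest = np.argmin(depth[ys, xs])   # candidate closest to the camera
    return int(xs[nearest]), int(ys[nearest])
```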

If a touch is detected, the appropriate touch pen event is provided at the touch location. When a touch is detected, the proposed method draws a straight line between the touch point of the previous frame and the touch point of the current frame, as shown in Fig. 5 (a). The time slice between frames is short, so the touch pen event works well, as shown in Fig. 5 (b).

Fig. 5. Providing the touch pen event: (a) drawing a straight line between the touch point of the previous frame and that of the current frame, (b) providing the touch event by drawing straight lines between the touch points of consecutive frames.
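A sketch of this event step, connecting consecutive touch points with a straight line on an OpenCV canvas; the color, thickness, and handling of the first point of a stroke are illustrative assumptions.

```python
import cv2

def update_pen_canvas(canvas, prev_point, cur_point,
                      color=(0, 0, 255), thickness=3):
    """Draw the pen stroke by connecting the touch point of the previous
    frame to that of the current frame with a straight line.

    canvas:     BGR image on which strokes are accumulated.
    prev_point: (x, y) touch point from the previous frame, or None.
    cur_point:  (x, y) touch point from the current frame, or None.
    Returns the point to carry over to the next frame.
    """
    if cur_point is None:          # no touch detected in this frame
        return None
    if prev_point is not None:     # continue the stroke
        cv2.line(canvas, prev_point, cur_point, color, thickness)
    else:                          # first touch of a new stroke
        cv2.circle(canvas, cur_point, thickness // 2, color, -1)
    return cur_point
```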

 

3. SIMULATION RESULTS

In this paper, we use the Kinect developed by Microsoft as the depth camera. The Kinect provides a 640×480 depth image at 30 fps.

In this simulation, we provide the touch pen event and measure the accuracy of the touch location when the touch event is performed. As shown in Fig. 6 and Fig. 7, we test the touch through the touch pen event.

Fig. 6. Environment of the simulation.

Fig. 7. Result of a touch pen event using the proposed method.

We vary M, the touch detection distance, and run the simulation 20 times for each case. The depth camera is located 1.5 m away from the center of the screen and at an angle of 30° from the plane of the screen. We set Nmin to 500. We measure the accuracy of the touch location where the touch pen event occurs.

As M varies, the distance at which a touch begins to be recognized, that is, the distance between the object and the screen, changes. Fig. 8 shows the result of this simulation. In this result, we find that the relationship between M and the touch recognition distance is linearly proportional.

Fig. 8. Relation between M and the maximum touch recognition distance.

We also measure the accuracy of the touch location as the distance between the actual touch location and the location where the touch event is performed. Fig. 9 shows the result of this simulation. In this result, the accuracy is higher when M is lower. However, the frequency of incorrect touches increases when M is lower.

Fig. 9. Relation between M and the distance error of a touch event.

The location of the depth camera relative to the screen also affects the accuracy. Thus, we measure the accuracy while changing the location of the depth camera. We change the angle between the X axis of the screen and the depth camera and then measure the accuracy. We set M to 3 and keep the other conditions the same. Fig. 10 shows this simulation.

Fig. 10. The conditions for measuring touch location accuracy according to the location of the depth camera.

Table 1 shows the accuracy according to the location of the depth camera. We find that the accuracy is highest when θ is 30 degrees. We also find that the accuracy is considerably worse when θ is less than 15 degrees: if the angle is too low, the screen occupies only a small area of the captured depth image, so its resolution within the image is not sufficient to detect the touch. On the other hand, when θ is more than 75 degrees, touch detection errors occur and a touch event cannot be performed: if the angle is too high, the difference between neighboring depth values in the captured depth image is small, so wrong touch detections often occur.

Table 1. Relation between the distance error of a touch event and the angle between the depth camera and the screen.

We also measure the accuracy when the distance d between the depth camera and the screen is changed. Fig. 11 shows the result of this simulation. In this result, the accuracy is higher when d is lower. Furthermore, the depth camera cannot obtain correct depth information when the distance is over 4 m.

Fig. 11. Relation between d and the distance error of a touch event.

The kind of depth camera also affects the accuracy. We use other depth cameras, the Xtion Pro Live and the Kinect v2, to check this. Table 2 shows the result of this simulation. We find that the accuracy of the Xtion Pro Live is lower than that of the Kinect. We assume this is because the process of obtaining depth information in the Xtion Pro Live is simpler. We also find that the accuracy of the Kinect v2 is higher than that of the Kinect because the Kinect v2 applies the time-of-flight (TOF) method to obtain depth information.

Table 2. Results according to various depth cameras.

From the simulation results, we obtain the best touch events when M is 1 and the angle is 30 degrees. The accuracy of the touch event also improves as the depth camera is located closer to the screen.

 

4. CONCLUSION

In this paper, we propose a method of implementing a touch pen by obtaining the background depth image from a capture of the screen and providing events at the touch location. First, we obtain the background depth image without any object. After that, we extract the object by performing binarization and labeling. We detect the touch by comparing the captured depth image with the background depth image. Using the above method, we can provide the touch event whenever a touch is detected. We also measure the accuracy of the touch location when the touch event is performed while changing the location of the depth camera, the distance of the depth camera from the screen, and the kind of depth camera. We find that the proposed method can implement a touch pen using a depth camera. However, some touch location error can occur because of the perspective distortion that arises from the camera's position. The touch accuracy may be improved by correcting this distortion [12].

Recently, touch pens on the market have not come into wide use because of their high cost or the special devices they require. We expect that the cost of touch pens can be lowered using the proposed method.

References

  1. G. Walker, “A Review of Technologies for Sensing Contact Location on the Surface of a Display,” Journal of the Society for Information Display, Vol. 20, No. 8, pp. 413-440, 2012. https://doi.org/10.1002/jsid.100
  2. V. Soni, M. Patel, and R.S. Narde, “An Interactive Infrared Sensor based Multi-Touch Panel,” International Journal of Scientific and Research Publications, Vol. 3, No. 3, pp. 610-623, 2013.
  3. J. Leitner, J. Powell, P. Brandl, T. Seifried, M. Haller, B. Doray, and P. To, "A Tilting Multi-Touch and Pen Based Surface," Proceeding on CHI'09 Extended Abstracts on Human Factors in Computing Systems, pp. 3211-3216, 2009.
  4. H. Nonaka and T. Da-te, "Ultrasonic Position Measurement and Its Applications to Human Interface,” IEEE Transactions on Instrumentation and Measurement, Vol. 44, No. 3, pp. 771-774, 1995. https://doi.org/10.1109/19.387329
  5. G.F. Russell, B.A. Smith, and T.G. Zimmerman, Digital Pen using Ultrasonic Tracking, U.S. Patent 6703570, 2004.
  6. M. Siddiqui and G. Medioni, "Human Pose Estimation from a Single View Point, Real-time Range Sensor," Proceeding on Computer Vision for Computer Games at Conference on Computer Vision and Pattern Recognition, pp. 1-8, 2010.
  7. R. Munoz-Salinas, R. Medina-Carnicer, F.J. Madrid-Cuevas, and A. Carmona-Poyato, “Depth Silhouettes for Gesture Recognition,” Pattern Recognition Letters, Vol. 29, No. 3, pp. 319-329, 2008. https://doi.org/10.1016/j.patrec.2007.10.011
  8. P. Suryanarayan, A. Subramanian, and D. Mandalapu, "Dynamic Hand Pose Recognition Using Depth Data," Proceeding on International Conference on Pattern Recognition, pp. 3105-3108, 2010.
  9. S.J. Jung, G.J. Choi, and E.S. Cho, "Presentation Control using a Kinect Sensor," Proceeding on Korea Computer Congress 2012, Vol. 39, No. 1(A), pp. 370-372, 2012.
  10. M.K. Lee and J.B. Jeon, "Personal Computer Control using Kinect," Proceeding on Korea Computer Congress 2012, Vol. 39, No. 1(A), pp. 343-345, 2012.
  11. S.K. Kwon and S.W. Kim, “Motion Estimation Method by using Depth Camera,” Journal of Broadcasting Engineering, Vol. 17, No. 4, pp. 676-683, 2012. https://doi.org/10.5909/JBE.2012.17.4.676
  12. S.K. Kwon and D.S. Lee, “Correction of Perspective Distortion Image Using Depth Information,” Journal of Korea Multimedia Society, Vol. 18, No. 2, pp. 106-112, 2015. https://doi.org/10.9717/kmms.2015.18.2.106
