Feature Matching Algorithm Robust To Viewpoint Change

  • Received : 2015.09.17
  • Accepted : 2015.12.07
  • Published : 2015.12.30

Abstract

In this paper, we propose a new feature matching algorithm that is robust to viewpoint change, combining the FAST (Features from Accelerated Segment Test) feature detector with the SIFT (Scale Invariant Feature Transform) feature descriptor. The original FAST detector unnecessarily produces many feature points along image edges; we remove these by examining the principal curvatures at each detected point. The remaining feature points are described with the SIFT descriptor, and the homography matrix is estimated by applying RANSAC (RANdom SAmple Consensus) to the matching pairs obtained from two images taken from different viewpoints. To make the matching robust to viewpoint change, we classify the matching pairs by the Euclidean distance between the coordinates of the reference-image feature points transformed by the homography and the coordinates of the corresponding feature points in the other viewpoint image. Experiments on images of the same objects and scenes captured from different viewpoints show that the proposed algorithm outperforms conventional feature matching algorithms while requiring much less computation.

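The pipeline described in the abstract can be summarized in a short sketch. The following is a minimal illustration using OpenCV, not the authors' implementation; the Lowe-style ratio test for the initial matches, the Sobel-based Hessian estimate for the principal-curvature filter, and the 3-pixel inlier threshold are assumptions rather than details stated in the paper.

```python
# Minimal sketch of the FAST + principal-curvature + SIFT + RANSAC pipeline (assumptions noted above).
import cv2
import numpy as np

def filter_edge_points(gray, keypoints, r=10.0):
    """Discard FAST corners that lie on edges, using the ratio of principal
    curvatures estimated from the 2x2 Hessian of image intensities."""
    dxx = cv2.Sobel(gray, cv2.CV_64F, 2, 0, ksize=3)
    dyy = cv2.Sobel(gray, cv2.CV_64F, 0, 2, ksize=3)
    dxy = cv2.Sobel(gray, cv2.CV_64F, 1, 1, ksize=3)
    kept = []
    for kp in keypoints:
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        tr = dxx[y, x] + dyy[y, x]
        det = dxx[y, x] * dyy[y, x] - dxy[y, x] ** 2
        # Keep the point only if the curvature ratio is bounded (i.e., it is corner-like, not edge-like).
        if det > 0 and (tr * tr) / det < ((r + 1) ** 2) / r:
            kept.append(kp)
    return kept

def match_viewpoints(img_ref, img_tgt, dist_thresh=3.0):
    """Detect FAST keypoints, filter them by principal curvature, describe them
    with SIFT, estimate a homography with RANSAC, and keep matching pairs whose
    reprojection error (Euclidean distance) is below dist_thresh pixels."""
    gray_ref = cv2.cvtColor(img_ref, cv2.COLOR_BGR2GRAY)
    gray_tgt = cv2.cvtColor(img_tgt, cv2.COLOR_BGR2GRAY)

    fast = cv2.FastFeatureDetector_create()
    sift = cv2.SIFT_create()

    kp_ref = filter_edge_points(gray_ref, fast.detect(gray_ref))
    kp_tgt = filter_edge_points(gray_tgt, fast.detect(gray_tgt))
    kp_ref, des_ref = sift.compute(gray_ref, kp_ref)
    kp_tgt, des_tgt = sift.compute(gray_tgt, kp_tgt)

    # Initial matching with a ratio test (an assumption; the abstract does not specify this step).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_ref, des_tgt, k=2)
            if m.distance < 0.8 * n.distance]

    src = np.float32([kp_ref[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_tgt[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, dist_thresh)

    # Classify pairs by the Euclidean distance between the homography-transformed
    # reference coordinates and the coordinates observed in the other viewpoint image.
    proj = cv2.perspectiveTransform(src, H)
    errors = np.linalg.norm(proj.reshape(-1, 2) - dst.reshape(-1, 2), axis=1)
    return [m for m, e in zip(good, errors) if e < dist_thresh]
```

The curvature ratio test mirrors the edge-response elimination used in SIFT keypoint refinement; here it is applied to the FAST output, which is the refinement the abstract describes.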

Keywords

References

  1. D. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vision, vol. 60, no. 2, pp. 91-110, Nov. 2004. https://doi.org/10.1023/B:VISI.0000029664.99615.94
  2. K. Mikolajczyk, "Scale & affine invariant interest point detectors," Int. J. Comput. Vision, vol. 60, no. 1, pp. 63-86, Oct. 2004. https://doi.org/10.1023/B:VISI.0000027790.02288.f2
  3. K. Mikolajczyk, "A performance evaluation of local descriptors," Pattern Anal. and Machine Intell., vol. 27, no. 10, pp. 1615-1630, Oct. 2005. https://doi.org/10.1109/TPAMI.2005.188
  4. K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir, and L. Van Gool, "A comparison of affine region detectors," Int. J. Comput. Vision, vol. 65, no. 1-2, pp. 43-72, Nov. 2005. https://doi.org/10.1007/s11263-005-3848-x
  5. E. Rosten and T. Drummond, "Machine learning for high-speed corner detection," 9th Eur. Conf. Comput. Vision, Graz, Austria, pp. 430-443, May 2006.
  6. E. Rosten, "Faster and better: A machine learning approach to corner detection," Pattern Anal. and Machine Intell., vol. 32, no. 1, pp. 105-119, Jan. 2010. https://doi.org/10.1109/TPAMI.2008.275
  7. H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-up robust features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346-359, Jun. 2008.
  8. M. A. Fischler and R. C. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Commun. ACM, vol. 24, no. 6, pp. 381-395, Jun. 1981. https://doi.org/10.1145/358669.358692
  9. D. Comaniciu, V. Ramesh, and P. Meer, "Real-time tracking of non-rigid objects using mean shift," in Proc. 2000 IEEE Conf. Comput. Vision and Pattern Recognition, vol. 2, pp. 142-149, Jun. 2000.
  10. http://www.robots.ox.ac.uk/~vgg/research/affine/
  11. http://www.vision.caltech.edu/pmoreels/Datasets/
  12. M. M. Hossain, H. J. Lee, and J. S. Lee, "Fast image stitching for video stabilization using SIFT feature points," J. KICS, vol. 39, no. 10, pp. 957-966, Oct. 2014.
  13. B. W. Chung, K. Y. Park, and S. Y. Hwang, "A fast and efficient haar-like feature selection algorithm for object detection," J. KICS, vol. 38, no. 6, pp. 486-497, Jun. 2013.
  14. H. K. Jang, "The more environmentally robust edge detection of moving objects using improved Canny edge detector and Freeman chain code," J. KICS, vol. 37, no. 2, pp. 37-42, Apr. 2012.

Cited by

  1. A Classifier Approach for Content-Based Image Retrieval, vol.41, pp.7, 2015, https://doi.org/10.7840/kics.2016.41.7.816
  2. A Copyrighted-Work Identification Algorithm Based on Natural Image Statistics Using a Support Vector Machine, vol.42, pp.5, 2015, https://doi.org/10.7840/kics.2017.42.5.959
  3. A Method for Improving the Feature-Point Recognition Rate of Immersive Content Using Deep Learning, vol.24, pp.2, 2015, https://doi.org/10.7471/ikeee.2020.24.2.419
  4. A Method for Extracting and Identifying Feature Points of Immersive Content Using Deep Learning, vol.24, pp.2, 2015, https://doi.org/10.7471/ikeee.2020.24.2.529
  5. Comparative Performance Analysis of Feature Detection and Matching Methods for Lunar Terrain Images, vol.40, pp.4, 2015, https://doi.org/10.12652/ksce.2020.40.4.0437