LiDAR Data Interpolation Algorithm for 3D-2D Motion Estimation


  • Jeon, Hyun Ho (Dept. of Mechatronics Eng., Graduate School, Chungnam National University)
  • Ko, Yun Ho (Dept. of Mechatronics Eng., Graduate School, Chungnam National University)
  • Received : 2017.09.04
  • Accepted : 2017.11.14
  • Published : 2017.12.31

Abstract

Feature-based visual SLAM requires the 3D positions of extracted feature points to perform 3D-2D motion estimation. LiDAR can provide reliable and accurate 3D position information at low computational cost, whereas a stereo camera suffers from the impossibility of stereo matching in image regions with simple texture, inaccurate depth values caused by errors in the intrinsic and extrinsic camera parameters, and a limited number of depth values restricted by the permissible stereo disparity. However, the sparsity of LiDAR data may increase the inaccuracy of motion estimation and can even cause motion estimation to fail. Therefore, in this paper we propose three interpolation methods that can be applied to sparse LiDAR data. Simulation results obtained by applying these three methods to a visual odometry algorithm demonstrate that the selective bilinear interpolation shows better performance in terms of computation speed and accuracy.
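For illustration only, the Python sketch below shows the general idea the abstract describes: a depth value for a feature point is obtained by bilinearly interpolating a depth map (for example, one produced by projecting LiDAR points into the image plane), and the point is then back-projected to 3D for the subsequent 3D-2D motion estimation. The function names, array layout, and pinhole-camera model are assumptions; the sketch does not reproduce the paper's selective bilinear interpolation.

```python
import numpy as np

def bilinear_depth(depth_map, u, v):
    """Bilinearly interpolate a (densified) LiDAR depth map at the
    sub-pixel feature location (u, v); returns None near the border."""
    h, w = depth_map.shape
    x0, y0 = int(np.floor(u)), int(np.floor(v))
    x1, y1 = x0 + 1, y0 + 1
    if x0 < 0 or y0 < 0 or x1 >= w or y1 >= h:
        return None
    du, dv = u - x0, v - y0
    # Weighted average of the four surrounding depth samples.
    return ((1 - du) * (1 - dv) * depth_map[y0, x0] +
            du       * (1 - dv) * depth_map[y0, x1] +
            (1 - du) * dv       * depth_map[y1, x0] +
            du       * dv       * depth_map[y1, x1])

def backproject(u, v, z, fx, fy, cx, cy):
    """Lift pixel (u, v) with interpolated depth z to a 3D point in the
    camera frame, given pinhole intrinsics (fx, fy, cx, cy)."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
```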

