Classification of 3D Road Objects Using Machine Learning

  • Received : 2018.11.19
  • Accepted : 2018.12.10
  • Published : 2018.12.31

Abstract

Fully autonomous driving based on sensors alone is limited when the sensors are occluded by sudden changes in the surrounding environment or by large features such as heavy vehicles. To overcome this limitation, a precise road map is used in addition to the on-board sensors. This study segments and classifies road objects using 3D point cloud data acquired by a terrestrial mobile mapping system and provided by the National Geographic Information Institute. The original 3D point cloud data were pre-processed, and a filtering technique was selected to separate ground and non-ground points. Road objects corresponding to lanes, street lights, and safety fences were then initially segmented, and the segmented objects were classified with a support vector machine (SVM), a type of machine learning. The training data for the supervised classification used only geometric features derived from the eigenvalues of each segmented road object together with its height information. The overall classification accuracy was 87% and the kappa coefficient was 0.795. Classification accuracy is expected to improve if additional attributes, beyond the geometric features, are used to classify road objects in future work.

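The abstract states that only eigenvalue-derived geometric features and height information were used to train the SVM, but does not specify the software used. The following is a minimal sketch of that workflow, assuming NumPy and scikit-learn; the exact feature definitions, the RBF kernel, and the synthetic stand-in objects are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (not the authors' code): eigenvalue-based geometric features
# plus height per segmented object, classified with a support vector machine.
# Assumes NumPy and scikit-learn; the feature set, kernel choice, and synthetic
# stand-in objects are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def object_features(points):
    """points: (N, 3) array of x, y, z coordinates of one segmented object."""
    cov = np.cov(points.T)                                # 3 x 3 covariance matrix
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]   # eigenvalues, descending
    s = l1 + l2 + l3
    l1, l2, l3 = l1 / s, l2 / s, l3 / s                   # normalized eigenvalues
    linearity = (l1 - l2) / l1                            # high for elongated objects (fences)
    planarity = (l2 - l3) / l1                            # high for flat objects (lane markings)
    scattering = l3 / l1                                  # high for volumetric objects
    height = points[:, 2].max() - points[:, 2].min()      # vertical extent of the object
    return [linearity, planarity, scattering, height]

def synthetic_object(scale, rng, n=200):
    """Hypothetical stand-in for a segmented road object: an anisotropic Gaussian blob."""
    return rng.normal(size=(n, 3)) * scale

rng = np.random.default_rng(0)
shapes = {0: [3.0, 0.3, 0.02],   # 0: flat and long, lane-marking-like
          1: [0.2, 0.2, 4.0],    # 1: tall and narrow, street-light-like
          2: [5.0, 0.15, 0.5]}   # 2: elongated, safety-fence-like
X, y = [], []
for label, scale in shapes.items():
    for _ in range(20):                                   # 20 synthetic objects per class
        X.append(object_features(synthetic_object(scale, rng)))
        y.append(label)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # kernel choice is an assumption
clf.fit(np.array(X), np.array(y))
print(clf.predict(np.array(X)[:5]))                       # predicted class labels
```

In practice the per-object feature vectors would come from the segmented MMS point cloud rather than synthetic blobs, and the trained classifier would be evaluated with a confusion matrix to obtain the overall accuracy and kappa coefficient reported above.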

Keywords

Fig. 1. Research flow chart
Fig. 2. Standard normal distribution (Wikipedia, 2005)
Fig. 3. Structure of quadtree and octree (Apple developer, 2018)
Fig. 4. Connected component labeling in 2D space (Wikipedia, 2010)
Fig. 5. PCFA
Fig. 6. Geometric elements of 3D point cloud data using principal component analysis
Fig. 7. Support vector machine
Fig. 8. Data index
Fig. 9. Representation of 3D point cloud data
Fig. 10. Data preprocessing
Fig. 11. Result of PCFA
Fig. 12. Segmentation of road plane objects
Fig. 13. Segmentation of road facility objects
Fig. 14. Pre-processing and ground classification
Fig. 15. Segmentation of road objects
Fig. 16. Result of classification
Fig. 17. Mis-classified objects

Table 1. Parameter values for PCFA (unit: m)
Table 2. Training dataset example
Table 3. Confusion matrix of SVM
Table 4. Confusion matrix of object classification

References

  1. Apple developer. (2018), Spatial and logical arrangement of an example octree, Apple, URL: https://developer.apple.com/documentation/gameplaykit/gkoctree (last date accessed: 10 November 2018).
  2. Axelsson, P. (2000), DEM generation from laser scanner data using adaptive TIN models, International Archives of Photogrammetry and Remote Sensing, 16-22 July, Amsterdam, the Netherlands, Vol. 33, Part B4, pp. 110-117.
  3. Caputo, M., Denker, K., Franz, M.O., Laube, P., and Umlauf, G. (2014), Support vector machines for classification of geometric primitives in point clouds, Curves and Surfaces, Vol. 9213, pp. 80-95.
  4. Chang, Y., Habib, A., Lee, D.C., and Yom, J.H. (2008), Automatic classification of LIDAR data into ground and non-ground points, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 3-11 July, Beijing, China, Vol. 37, Part B4, pp. 457-462.
  5. Han, S.H. (2016), Introduction to Photogrammetry and Remote Sensing, Goomibook, Seoul.
  6. Hong, S.P. and Kim, E.M. (2017), Object segmentation of laser data using terrestrial mobile mapping system, Proceedings of Journal of Korean Society for Geospatial Information System, Korean Society for Geospatial Information Science, 18-19 May, Jeonju, Korea, pp. 197-198.
  7. Hong, S.P., Seo, H.D., and Kim, E.M. (2018), Road object classification using a terrestrial laser data, Proceedings of Journal of Korean Society for Geospatial Information System, Korean Society for Geospatial Information Science, 1-2 November, Jeju, Korea, pp. 199-200.
  8. Jeong, J.H. and Lee, I.P. (2016), Classification of mobile LIDAR data acquired from urban roads based on eigenvalue ratios and support vector machine, Journal of the Korean Cadastre Information Association, Vol. 18, No. 2, pp. 195-206. (in Korean with English abstract)
  9. Kim, E.M. and Cho, D.Y. (2012), Comprehensive comparisons among LIDAR filtering algorithms for the classification of ground and non-ground points, Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, Vol. 30, No. 1, pp. 39-48. (in Korean with English abstract) https://doi.org/10.7848/ksgpc.2012.30.1.039
  10. Lalonde, J.F., Vandapel, N., Huber, D.F., and Hebert, M. (2006), Natural terrain classification using three-dimensional LIDAR data for ground robot mobility, Journal of Field Robotics, Vol. 23, Issue 1, pp. 839-861. https://doi.org/10.1002/rob.20134
  11. Lee, G.W. and Son, H.U. (2016), Geo-Spatial Information System, Goomibook, Seoul.
  12. Lee, J.H. and Lee, D.C. (2010), LIDAR data segmentation using aerial images for building modeling, Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, Vol. 28, No. 1, pp. 47-55. (in Korean with English abstract)
  13. Lee, S.J., Park, J.Y., and Kim, E.M. (2014), Development of automated model of tree extraction using aerial LIDAR data, Journal of the Korea Academia-Industrial cooperation Society, Vol. 15, No. 5, pp. 3213-3219. (in Korean with English abstract) https://doi.org/10.5762/KAIS.2014.15.5.3213
  14. Lehtomaki, M., Jaakkola, A., Hyyppa, J., Lampinen, J., Kaartinen, H., Kukko, A., Puttonen, E., and Hyyppa, H. (2015), Object classification and recognition from mobile laser scanning point clouds in a road environment, IEEE Transactions on Geoscience and Remote Sensing, Vol. 54, No. 2, pp. 1226-1239. https://doi.org/10.1109/TGRS.2015.2476502
  15. NGII. (2015), A Study on the Construction of Precision Road Map for the Support of Autonomous Vehicle, Research report, National Geographic Information Institute, Korea, pp. 23-93.
  16. Park, S., Kim, K.J., Lee, J.S., and Lee, S.R. (2011), Red tide prediction using neural network and SVM, The Institute of Electronics Engineers of Korea-Signal Processing, Vol. 48. No. 5, pp. 39-45. (in Korean with English abstract)
  17. Rusu, R.B. and Cousins, S. (2011), 3D is here: Point Cloud Library (PCL), IEEE International Conference on Robotics and Automation, 9-13 May, Shanghai, China, pp. 1-4.
  18. So, J.H. and Moon, Y.J. (2018), Plan for autonomous cooperation driving safety and infrastructure implementation, The Journal of The Korean Institute of Communication Sciences, Vol. 35, No. 5, pp. 37-43.
  19. Sun, Y., Wang, C., Li, J., Zhang, Z., Zai, D., Huang, P., and Wen, C. (2016), Automated segmentation of LIDAR point clouds for building rooftop extraction, IEEE International Geoscience and Remote Sensing Symposium, 10-15 July, Beijing, China, pp. 1472-1475.
  20. Wikipedia. (2005), Normal distribution curve that illustrates standard deviations, Wikimedia Foundation, Inc., URL: https://en.wikipedia.org/wiki/Standard_deviation (last date accessed: 10 November 2018).
  21. Wikipedia. (2010), Result of connected region labeling using two-pass raster scan, Wikimedia Foundation, Inc., URL: https://en.wikipedia.org/wiki/Connected-component_labeling (last date accessed: 10 November 2018).
  22. Yoo, H.H., Kim, E.M., and Chung, D.K. (2005), Assessment of classification accuracy of ground and non-ground points from LIDAR data, Journal of The Korean Society of Civil Engineers, Vol. 25, No. 6D, pp. 929-935. (in Korean with English abstract)
  23. Zhang, K. and Whitman, D. (2005), Comparison of three algorithms for filtering airborne LIDAR data, Photogrammetric Engineering and Remote Sensing, Vol. 71, No. 3, pp. 313-324. https://doi.org/10.14358/PERS.71.3.313
  24. Zhang, W., Qi, J., Wan, P., Wang, H., Xie, D., Wang, X., and Yan, G. (2016), An easy to use airborne LIDAR data filtering method based on cloth simulation, Remote Sensing, Vol. 8, No. 6, pp. 501-522. https://doi.org/10.3390/rs8060501

Cited by

  1. Object classification and change detection in point clouds using deep learning vol.50, pp.2, 2018, https://doi.org/10.22640/lxsiri.2020.50.2.37
  2. Automatic construction of deep learning training data based on a mobile mapping system for precise road map production vol.39, pp.3, 2021, https://doi.org/10.7848/ksgpc.2021.39.3.133