Calculation of Tree Height and Canopy Crown from Drone Images Using Segmentation

  • Lim, Ye Seul (Dept. of Smart ICT Convergence, Konkuk University) ;
  • La, Phu Hien (Dept. of Advanced Technology Fusion, Konkuk University) ;
  • Park, Jong Soo (Korea Asset Management Corporation) ;
  • Lee, Mi Hee (Dept. of Advanced Technology Fusion, Konkuk University) ;
  • Pyeon, Mu Wook (Dept. of Civil Engineering, Social Echo-Tech Institute, Konkuk University) ;
  • Kim, Jee-In (Dept. of Smart ICT Convergence, Social Echo-Tech institute, Konkuk University)
  • Received : 2015.12.09
  • Accepted : 2015.12.29
  • Published : 2015.12.31

Abstract

Drone imaging, which is more cost-effective and controllable than airborne LiDAR, requires only a low-cost camera to capture color images. From the overlapped color images, we produced two high-resolution digital surface models over different test areas. After segmentation, we identified individual trees according to the method proposed by La et al. (2015) and computed the tree height and canopy crown size. Compared with the field measurements, the computed tree heights in test area 1 (coniferous trees) were accurate, while those in test area 2 (deciduous coniferous trees) were underestimated. In test area 1, the RMSE of the tree height was 0.84 m and that of the canopy crown width was 1.51 m; in test area 2, the RMSE of the tree height was 2.45 m and that of the canopy crown width was 1.53 m. The experimental results validate the use of drone images for the extraction of tree structure.

1. Introduction

Increasing numbers of studies on forests and green spaces are being pursued to reduce carbon dioxide levels in the atmosphere, with the aim of mitigating global warming (Lee et al., 2014). However, because a forest area is not easily accessible and contains a large number of trees, an economical and accurate method is needed to acquire tree information such as tree heights and diameters (Chang et al., 2006). Methods for the automatic extraction of the tree area and height can be grouped by source type as follows: (1) spectral data from satellite images or aerial photographs, (2) airborne LiDAR (Light Detection and Ranging) data, which provide information about the vertical structure of a forest, and (3) a combination of (1) and (2) (Chang et al., 2012; Chang et al., 2006; Lee and Ru, 2012). Tree extraction with the NDVI (Normalized Difference Vegetation Index) has also been studied, but because spectral reflectance tends to correlate more with tree leaf density than with tree height, it is difficult to obtain individual tree heights (Chopping et al., 2008). Therefore, high-resolution DSM (Digital Surface Model) generation, which supports individual tree identification and the computation of each tree's height, is important for the extraction of tree attributes (Zarco-Tejada et al., 2014).

Standard methodologies for individual tree detection are based on photogrammetric data and, more recently, on LiDAR data. Both require expensive sensors, well-trained personnel, and precise technology to obtain accurate results (Zarco-Tejada et al., 2014). Further, these data usually cover a large area, so using these techniques to investigate a small area or to acquire data at very high resolution (image spatial resolution finer than 10 cm, or LiDAR point density of more than 10 points/m²) is expensive. Meanwhile, advances in UAV (Unmanned Aerial Vehicle) technology and data processing have made it feasible to obtain very high-resolution imagery and 3D (three-dimensional) data (Kattenborn et al., 2014). In fact, recent studies have demonstrated the capability of UAVs with respect to forest inventory (Hung et al., 2012; Zarco-Tejada et al., 2014). However, these studies have used only spectral data and height data; this might reduce the accuracy of information extraction.

In this study, we obtained color images using a low-cost camera mounted on a remote-controlled drone and then generated a high-resolution ortho-image and DSM in the overlapped area. An nDSM (Normalized Digital Surface Model) was then generated by subtracting the available DTM (Digital Terrain Model) from the DSM. The individual trees were extracted on the basis of the segmentation of the fused image, which was generated by combining the nDSM and the color image. Next, the tree heights and crown widths were computed, and these results were compared with the field measurements.

 

2. Data and Method

2.1 Test area and images

For the experiment, aerial images were taken by a drone camera around the Sanghuh Memorial Library at Konkuk University on December 15, 2015. The drone used (DJI Phantom 3 Professional) had four propellers, a camera, a GPS (Global Positioning System) receiver, and a gimbal (Fig. 1(a)), and came with a dedicated remote controller. The camera used for the experiment can take 12M-pixel still images and 4K (3840 × 2160) video. The test areas contained two different tree species: the species in test area 1 was Picea abies of Pinaceae, and the species in test area 2 was Metasequoia glyptostroboides (M. glyptostroboides) of Taxodiaceae (Figs. 1(b) and (c)).

Fig. 1. Photographing platform and mosaic images of test areas 1 and 2

The photographing conditions and accuracy are provided in the Pix4D quality report. The flight height was 45 m and 55 m, and the spatial resolution was 0.018 m and 0.022 m in test areas 1 and 2, respectively. The drone was manually controlled, and the frame image size was 4000 × 3000 pixels. The photographed area was 0.0088 km² for test area 1 and 0.0152 km² for test area 2. Furthermore, we ensured that the same areas were photographed more than five times for DSM production.

In this study, an ortho-mosaic image and DSM were generated using the Pix4D software in fully automatic mode. The imagery was synchronized using the GPS position and the triggering time recorded for each image. Only absolute GPS coordinates were used for the generation of the ortho-mosaics and DSMs; no GCP (Ground Control Point) was used. The derived ortho-image and DSM were resampled to 2 cm. Specific information on the drone-image processing conducted in Pix4D is presented in Table 1.

Table 1. Summary of drone-image processing using Pix4D

For the images taken by the drone, the exterior orientation parameters were determined by bundle block adjustment. Noise/smoothing filtering was applied in the process of automatically producing 3D heights over the overlapped areas. Further, the DSM was produced by inverse distance weighting.

2.2 Extraction of tree structure

In this study, we used the method described in La et al. (2015) to extract individual tree crowns from the high-resolution DSM and ortho-image mosaic. The DSM data were generated using 30 (test area 1) and 43 (test area 2) frame images. The overall experimental flowchart is illustrated in Fig. 2:

  1. The RGB (Red-Green-Blue) ortho-image and the DSM were derived by analyzing stereo-images taken by the drone using the Pix4D software.
  2. The tree area was extracted on the basis of the classification of the RGB ortho-image.
  3. The nDSM was generated by subtracting the available DTM from the DSM. The DSM was automatically generated from the drone-image analysis conducted in Pix4D, while the DTM, built from LiDAR data, was already available for this study. The subtraction was performed using the spatial modeler of ERDAS IMAGINE.
  4. The RGB ortho-image and the nDSM were combined through layer stacking.
  5. Segmentation, as available in the ERDAS IMAGINE software, was carried out on these integrated data.
  6. The tree position, tree height, and crown diameter were extracted per segment.
  7. The experimental results were compared to the field data in order to assess accuracy.
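The nDSM computation in step 3 is a per-pixel raster subtraction. The sketch below illustrates it with NumPy; the array values and the nodata convention are hypothetical, not data from this study (the study itself used the ERDAS IMAGINE spatial modeler):

```python
import numpy as np

def compute_ndsm(dsm, dtm, nodata=-9999.0):
    """nDSM = DSM - DTM: per-pixel surface height above ground.

    Both rasters must be co-registered on the same grid.
    """
    dsm = np.asarray(dsm, dtype=float)
    dtm = np.asarray(dtm, dtype=float)
    ndsm = dsm - dtm
    valid = (dsm != nodata) & (dtm != nodata)
    ndsm[~valid] = nodata
    # Small negative heights are DSM/DTM noise; clamp them to ground level.
    ndsm[valid & (ndsm < 0)] = 0.0
    return ndsm

# Hypothetical 3x3 patch (metres): left column is bare ground, centre a tree
dsm = np.array([[12.0, 30.5, 31.0],
                [12.1, 29.8, 30.2],
                [11.9, 12.2, 12.1]])
dtm = np.full((3, 3), 12.0)
print(compute_ndsm(dsm, dtm))
```

In a real workflow the two rasters would first be resampled to a common grid, as was done here when the ortho-image and DSM were resampled to 2 cm.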

Fig. 2. Experimental flowchart

Segmentation is a method of partitioning raster images into several segments on the basis of pixel values and locations: pixels that are spatially connected and have similar values are grouped into a single segment. Image segmentation methods have been applied to conventional aerial photography for the identification of individual tree crowns (Gougeon and Leckie, 2003; Suárez et al., 2005). In our study, the “Lambda Schedule Segmentation” available in ERDAS IMAGINE was used. This applies a bottom-up merging algorithm and considers the spectral content as well as the segment’s texture, size, and shape in merging decisions. The result is a thematic image in which the pixel values represent the class IDs of contiguous raster objects (ERDAS IMAGINE Help, 2015).
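ERDAS IMAGINE's Lambda Schedule Segmentation is a proprietary region-merging algorithm, but the core idea of grouping spatially connected, similar pixels can be illustrated with a much simpler stand-in: connected-component labeling of above-threshold nDSM pixels. The threshold and arrays below are illustrative only:

```python
import numpy as np
from scipy import ndimage

def crown_segments(ndsm, min_height=2.0):
    """Group contiguous above-threshold nDSM pixels into candidate segments.

    A toy stand-in for region-merging segmentation: pixels taller than
    min_height that touch each other (4-connectivity) share a segment ID.
    """
    mask = ndsm > min_height
    labels, n = ndimage.label(mask)  # 0 = background, 1..n = segments
    return labels, n

ndsm = np.array([[0, 5, 5, 0, 0],
                 [0, 5, 0, 0, 7],
                 [0, 0, 0, 7, 7]], dtype=float)
labels, n = crown_segments(ndsm)
print(n)  # → 2 (two separate crowns)
```

Unlike this sketch, the real algorithm also weighs spectral similarity, texture, size, and shape when deciding whether to merge neighbouring regions.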

 

3. Results and Analysis

3.1 Individual tree identification

Fig. 3(a) shows the DSM (left) established from the drone images using Pix4D, and the segmentation result (right) overlaid on the RGB ortho-image in test area 1. Similarly, Fig. 3(b) shows the DSM (left) and the segmentation result (right) overlaid on the RGB ortho-image in test area 2.

Fig. 3. Segmentation results of test areas 1 and 2

Eleven of the thirteen trees were clearly identified (marked with red circles), as shown in Fig. 4(a), and two trees (marked with yellow circles) were not recognized. One additional tree that was not targeted was identified (marked with a purple circle), as shown in Fig. 4(b). Unidentified tree A had a small canopy crown compared to the trees on either side, as shown in Fig. 4(c); in particular, we believe that a tree that narrows so sharply towards the treetop cannot be recognized. Tree B, although it had a relatively wide canopy crown, was not recognized because the center of its treetop is less distinct than those of the other trees, as shown in Fig. 4(d). In contrast, the tree marked with a purple circle had a small canopy crown but could be identified because the width of its treetop canopy was distinct.

Fig. 4. Tree identification results of test area 1

Ten trees were identified in test area 2, as shown in Fig. 5(a). Nine trees, marked with red and purple circles, were identified, as shown in Fig. 5(b). The tree marked with a yellow circle, originally two trees, was detected as a single object. One tree, which had relatively few leaves compared to the trees on either side, as shown in Fig. 5(c), could not be identified.

Fig. 5. Tree identification results of test area 2

3.2 Tree height

After performing the segmentation process, the resultant images were converted into shape files. The polygon parameters including the position of the centroid and the polygon area were derived by using ESRI ArcGIS. This information was used for estimating the individual tree parameters. The location of the tree crown was represented by the centroid of each tree segment. The individual tree height was determined from the highest value of the nDSM within the segmented polygon (Hyyppa et al., 2001; La et al., 2015).
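The per-segment height rule described above (the highest nDSM value within each segmented polygon) can be sketched as follows; the segment IDs and height values are hypothetical, not data from this study:

```python
import numpy as np
from scipy import ndimage

def tree_heights(ndsm, segments):
    """Individual tree height = maximum nDSM value within each segment."""
    ids = np.unique(segments)
    ids = ids[ids > 0]  # 0 = background
    # ndimage.maximum returns the max nDSM value inside each labelled segment
    return dict(zip(ids.tolist(), ndimage.maximum(ndsm, labels=segments, index=ids)))

# Two hypothetical segments rasterized on a 3x3 grid
segments = np.array([[1, 1, 0],
                     [1, 0, 2],
                     [0, 2, 2]])
ndsm = np.array([[9.2, 10.1, 0.0],
                 [8.7, 0.0, 26.4],
                 [0.0, 25.0, 24.8]])
print(tree_heights(ndsm, segments))  # tree 1 ≈ 10.1 m, tree 2 ≈ 26.4 m
```

In the study itself the segments were converted to shapefile polygons and the maximum was taken within each polygon; the raster formulation here is an equivalent simplification.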

Tree height measurement methods are classified into triangle similarity, triangulation, and distance measurement. Commonly used tools include the Weise hypsometer, transit, TS (Total Station), Haga hypsometer, and Suunto hypsometer. In this study, we measured the tree height using a TS, which has relatively high accuracy and applies the triangulation principle.

In this experiment, we compared the tree heights determined using the drone’s frame images to the reference data (11 trees) acquired using a TS. Table 2 presents the tree height measured by the drone, the tree height measured by TS, and the difference between these values; Fig. 6 illustrates these values graphically. As shown in Fig. 6(a), the tree heights measured by the drone and by TS did not differ significantly overall. However, Tree Nos. 1 and 5 showed considerable differences (Fig. 6(b)). An error in measuring the height of Tree No. 1 by TS was considered possible because this tree stood slightly behind the other trees. Tree No. 5 looked as tall as Tree No. 6 by eye, but was identified as taller than Tree No. 6 by the drone. This may have been influenced by Tree B (Fig. 4) right next to it: because the drone-based height is taken as the highest point within the crown segment, the height of Tree B adjacent to Tree No. 5 may have been measured instead. The difference in the average values measured using the TS and the drone was only about 0.5 m. Further, R² was 0.91 and the RMSE (Root Mean Square Error) was 0.84 m for test area 1. Thus, the tree heights measured by TS and by the drone did not show a significant difference.
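The accuracy metrics quoted throughout this section (RMSE and R²) can be computed from paired drone-derived and TS-measured heights as below; the sample heights are hypothetical, not values from Table 2:

```python
import numpy as np

def rmse(pred, obs):
    """Root mean square error between predictions and observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def r_squared(pred, obs):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical heights (m): drone-derived vs. total-station reference
drone = [14.2, 15.0, 13.8, 16.1]
ts    = [14.5, 14.6, 14.0, 16.0]
print(round(rmse(drone, ts), 2))       # → 0.27
print(round(r_squared(drone, ts), 2))  # → 0.86
```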

Table 2. Tree heights in test area 1 (unit: m) (Drone: Computation results from drone images, TS: Field measurements using TS, Difference: Absolute value of the difference between the value measured by TS and that by the drone)

Fig. 6. Tree heights obtained using drone and TS, and their difference, in test area 1

Table 3 presents the tree height measured by the drone, that measured by TS, and the difference between them. Fig. 7(a) illustrates the values given in Table 3 graphically, and Fig. 7(b) shows the difference between the tree height measured by the drone and that by TS. In Fig. 7, Tree No. 2 in particular shows a considerable difference. While Tree No. 2 looked far taller than Tree No. 1 even by eye, the heights of Tree Nos. 1 and 2 as measured by the drone did not differ significantly, indicating that the drone-derived heights contained errors. In test area 2, the tree heights computed using the drone’s frame images were underestimated compared to the reference data (six trees): the average value obtained using TS was 26.85 m, and that obtained using the drone’s frame images was 24.78 m. The difference between these two averages was larger than that of test area 1. This is caused by the difference between the species distributed in the two areas. While the coniferous trees (Picea abies) of test area 1 were not affected by the season, a large number of leaves in test area 2 had already dropped because of seasonal effects. Therefore, the matching points in this area were not sufficient to extract the tree height precisely. R² was 0.85 and the RMSE was 2.77 m in this test area.

Table 3. Tree heights in test area 2 (unit: m)

Fig. 7. Tree heights obtained using drone and TS, and their difference, in test area 2

3.3 Canopy crown width

The diameter of the crown can be calculated from the area of the corresponding segment (Hyyppa et al., 2001; La et al., 2015), as shown in Eq. (1):

D = 2√(A/π)  (1)

where D denotes the diameter of an individual tree, and A represents the area of the segment.
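Eq. (1) treats each segment as a circle of equal area and returns the diameter of that circle. A minimal implementation:

```python
import math

def crown_diameter(area_m2):
    """Circle-equivalent crown diameter from segment area: D = 2 * sqrt(A / pi)."""
    return 2.0 * math.sqrt(area_m2 / math.pi)

# A segment of ~12.57 m^2 corresponds to a crown diameter of ~4 m
print(round(crown_diameter(12.566), 2))  # → 4.0
```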

To filter out the incorrectly estimated tree parameters, the height and diameter thresholds were applied. On the basis of the derived values, tree locations, and crown diameters, the tree crowns were reconstructed.

Table 4 presents the crown width measured by the drone, that measured with the tape measure, and the difference between the two values. Fig. 8(a) shows the graph generated using the values presented in Table 4. Fig. 8(b) illustrates the difference between the values measured with the tapeline and those measured by the drone.

Table 4. Crown width of test area 1 (unit: m) (Drone: Computation results from drone images, Tapeline: Crown width measured with a tapeline, Difference: Absolute value of the difference between the value measured with the tapeline and that by the drone)

Fig. 8. Computed crown width, field measurement, and difference between the two

The canopy crown width is a measure of the linear distance across the projected surface of the tree canopy. Because the canopy is mostly oval or irregular in shape, the average of the minimum and maximum widths is used as the canopy crown width. In this study, the reference canopy crown width was acquired with a measuring tape using the same method. Table 5 presents the crown width measured by the drone, that measured with the tape measure, and the difference between the two values. Fig. 9(a) shows the graph generated using the values presented in Table 5, and Fig. 9(b) illustrates the difference between the values measured with the tapeline and those by the drone. In the accuracy assessment, the canopy crown widths of test areas 1 and 2 were underestimated compared to the reference data. This can be attributed to the fact that the canopy crown has a conical shape and the drone captures only its upper part. Further, overlap between canopy crowns may have led to underestimation in the extraction process. The RMSE of test area 1 was 1.51 m, and that of test area 2 was 1.54 m.

Table 5. Crown width of test area 2 (unit: m)

Fig. 9. Computed crown width, field measurement, and difference between the two

 

4. Conclusion

Using the on-board digital camera of a remotely controlled drone, we computed the number, heights, and canopy widths of trees. In the experiments over the test areas of the Konkuk University campus, the computed results were close to the field measurements. The average difference between the tree height measured by TS and that by the drone was 0.53 m in test area 1 and 2.07 m in test area 2. The average difference between the crown width measured in the field and that by the drone was 1.27 m in test area 1 and 1.33 m in test area 2; the crown widths did not show a significant difference. In particular, the tree heights measured in test area 1 did not show a significant difference, at 0.53 m. Unlike test area 1, M. glyptostroboides in test area 2, a deciduous coniferous forest, was not properly identified because a large number of leaves had fallen during the winter when the images were taken, that is, because of the seasonal effect. Taking such seasonal influence into consideration, the results of this paper suggest a high possibility of measuring tree heights using drone images.

Some errors in tree identification were observed in the DSM segmentation, which showed that the segmentation result was sensitive to the input parameters. If the characteristics of the tree extraction area had been roughly known in advance, errors could have been decreased by introducing suitable initial values. We therefore expect the proposed method and the corresponding results to contribute to the estimation of biomass quantity and carbon emissions.

References

  1. Chang, A.J., Kim, Y.I., Lee, B.K., and Yu, K.Y. (2006), Estimation of individual tree and tree height using color aerial photograph and LiDAR data, Korean Journal of Remote Sensing, Vol. 22, No. 6, pp. 543-551. https://doi.org/10.7780/kjrs.2006.22.6.543
  2. Chang, A.J., Kim, Y.M., Kim, Y.I., Lee, B.K., and Eo, Y.D. (2012), Estimation of canopy cover in forest using KOMPSAT-2 satellite images, Journal of the Korean Society for Geospatial Information System, Vol. 20, No. 1, pp. 83-91. https://doi.org/10.7319/kogsis.2012.20.1.083
  3. Chopping, M., Moisen, G., Su, L., Laliberte, A., Rango, A., Martonchik, J.V., and Peters, D.P. (2008), Large area mapping of southwestern forest crown cover, canopy height, and biomass using MISR, Remote Sensing of Environment, Vol. 112, No. 5, pp. 2051-2063. https://doi.org/10.1016/j.rse.2007.07.024
  4. Hexagon Geospatial (2015), ERDAS IMAGINE Help, http://www.hexagongeospatial.com/support/documentation (last date accessed: 06 December 2015).
  5. Gougeon, F.A. and Leckie, D.G. (2003), Forest Information Extraction from High Spatial Resolution Images Using an Individual Tree Crown Approach, Canadian Forest Service. Information Report BC-X-396.
  6. Hung, C., Bryson, M., and Sukkarieh, S. (2012), Multi-class predictive template for tree crown detection, ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 68, pp. 170-183. https://doi.org/10.1016/j.isprsjprs.2012.01.009
  7. Hyyppa, J., Kelle, O., Lehikoinen, M., and Inkinen, M. (2001), A segmentation-based method to retrieve stem volume estimates from 3-D tree height models produced by laser scanners, IEEE Transactions on Geoscience and Remote Sensing, Vol. 39, No. 5, pp. 969-975.
  8. Kattenborn, T., Sperlich, M., Bataua, K., and Koch, B. (2014). Automatic single palm tree detection in plantations using UAV-based photogrammetric point clouds, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, ISPRS, 5-7 September, Zurich, Switzerland, Vol. XL-3, pp. 139-144. https://doi.org/10.5194/isprsarchives-XL-3-139-2014
  9. La, H.P., Eo, Y.D., Chang, A.J., and Kim, C.J. (2015), Extraction of individual tree crown using hyperspectral image and LiDAR data, KSCE Journal of Civil Engineering, Vol. 19, No. 4, pp. 1078-1087. https://doi.org/10.1007/s12205-013-1178-z
  10. Lee, H.J. and Ru, J.H. (2012), Application of LiDAR data & high-resolution satellite image for calculate forest biomass, Journal of the Korean Society for Geospatial Information System, Vol. 20, No. 1, pp. 53-63. https://doi.org/10.7319/kogsis.2012.20.1.053
  11. Lee, S.J., Park, J.Y., and Kim, E.M. (2014), Development of automated model of tree extraction using aerial LiDAR data, Journal of the Korea Academia- Industrial Cooperation Society, Vol. 15, No. 5, pp. 3213-3219. https://doi.org/10.5762/KAIS.2014.15.5.3213
  12. Suárez, J.C., Ontiveros, C., Smith, S., and Snape, S. (2005), Use of airborne LiDAR and aerial photography in the estimation of individual tree heights in forestry, Computers and Geosciences, Vol. 31, No. 2, pp. 253-262. https://doi.org/10.1016/j.cageo.2004.09.015
  13. Zarco-Tejada, P.J., Diaz-Varela, R., Angileri, V., and Loudjani, P. (2014), Tree height quantification using very high resolution imagery acquired from an unmanned aerial vehicle (UAV) and automatic 3D photo-reconstruction methods, European Journal of Agronomy, Vol. 55, pp. 89-99. https://doi.org/10.1016/j.eja.2014.01.004
