FORTLS is used for processing point cloud data derived from terrestrial-based technologies such as Terrestrial Laser Scanning (TLS) or Simultaneous Localization and Mapping (SLAM). Point cloud data must be provided as .las or .laz files. The first obligatory step is the normalization of the point cloud, carried out with the normalize function. The normalized point clouds serve as input data for the tree detection functions tree.detection.single.scan, tree.detection.multi.scan and tree.detection.several.plots. The function tree.detection.single.scan detects trees from normalized TLS single-scan data, and tree.detection.multi.scan from normalized TLS multi-scan (or SLAM) data. If data from more than one plot are to be analysed automatically, the function tree.detection.several.plots should be used; it includes both the normalization and the tree detection functions and executes them on each input plot sequentially.
The aim of the normalization process is to obtain coordinates relative to the plot's center and the ground level. In this process, the functions readLAS, clip_circle, classify_ground, grid_terrain and normalize_height from the lidR package are used internally (Roussel et al., 2020). The following steps are executed: the .las/.laz file is read, the point cloud is clipped to the plot radius around the plot center, ground points are classified with the CSF algorithm, a digital terrain model (DTM) is interpolated, and point heights are normalized relative to that DTM.
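A minimal sketch of this internal pipeline, assembled directly from the lidR functions named above, is shown below. The parameter values are illustrative assumptions taken from the example call further down; this is not the exact FORTLS implementation.

# Sketch of the internal normalization steps using the lidR functions named above;
# parameter values are illustrative assumptions, not the FORTLS defaults per se.
library(lidR)

las <- readLAS(file.path(dir.data, "PinusRadiata.laz"))                # 1. read the point cloud
las <- clip_circle(las, xcenter = 0, ycenter = 0, radius = 10)         # 2. clip to the plot radius
las <- classify_ground(las, algorithm = csf(cloth_resolution = 0.5))   # 3. classify ground points (CSF)
dtm <- grid_terrain(las, res = 0.2, algorithm = knnidw())              # 4. interpolate the DTM
las <- normalize_height(las, dtm)                                      # 5. heights relative to ground level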
The following figure shows the normalized point cloud that is used as example data below. The plot function from the lidR package is used to generate the figure.
# Read the example point cloud and display it coloured by its RGB values
singleLAS <- lidR::readLAS(paste(dir.data, "PinusRadiata.laz", sep = "/"))
lidR::plot(singleLAS, color = "RGB")
The normalize function is applied as follows:
pcd.single <- normalize(las = "PinusRadiata.laz",
normalized = NULL,
x.center = 0, y.center = 0,
max.dist = 10, min.height = NULL, max.height = NULL,
algorithm.dtm = "knnidw", res.dtm = 0.2,
csf = list(cloth_resolution = 0.5),
RGB = TRUE,
scan.approach = "single",
id = NULL, file = "single.txt",
dir.data = dir.data, save.result = FALSE, dir.result = NULL)
The name of the .las or .laz file containing the point cloud data is passed to the las argument and must include the .las/.laz extension. Optionally, the plot identification number (id) and the file name (file) can be defined. Both are set to NULL by default, which assigns 1 to the plot identification number and 1.txt (the same name as the identification number) to the reduced point cloud saved in the working directory specified in dir.result.
The directory of the input .las/.laz files and the directory for the output file can be specified in dir.data and dir.result respectively. If not specified, the current working directory is used. The output .txt files containing the reduced point clouds will be saved unless otherwise specified in save.result (save.result = TRUE by default).
If the point cloud in the input file was already normalized, the argument normalized can be set to TRUE (the default setting is normalized = NULL). As a result, part of the internal normalization process is skipped. Furthermore, the scanning approach applied for data collection must be specified in scan.approach, with "single" (the default) indicating the TLS single-scan approach and "multi" indicating TLS multi-scan and SLAM point clouds.
The planimetric coordinates \(x\) and \(y\) of the plot center are by default x.center = 0 and y.center = 0. If this does not coincide with the point cloud data, the coordinates of the plot center must be specified in x.center and y.center.
Furthermore, the size of the point cloud can be reduced with the arguments max.dist, min.height and max.height. If the maximum horizontal distance in meters to the plot center (max.dist) is set, points that are further away are discarded. Similarly, the minimum and maximum heights in meters (min.height and max.height respectively) define which points are discarded: those below the minimum height and those above the maximum height relative to the ground level. The default value for all three arguments is NULL, so no points are discarded from the point cloud after normalization.
In order to generate the DTM, two different algorithms can be applied, specified by algorithm.dtm. Spatial interpolation based on a k-nearest neighbor approach with inverse-distance weighting (knnidw) is selected by default. The second method is Delaunay triangulation (tin). The resolution of the DTM (res.dtm) is set to 0.2 m by default but can be adjusted manually.
To adjust the CSF algorithm, a list of parameters (e.g. the cloth resolution, which is set to 0.5 by default) can be passed in csf.
When the point clouds are colorized, the RGB values can be used to improve the normalization and tree detection process (RGB). The colors serve to distinguish leaf points from ground and stem points by means of the Green Leaf Algorithm (GLA, Louhaichi et al., 2001). If the GLA is to be applied to remove some points from the point cloud (i.e. leaf points), this must be indicated by RGB = TRUE.
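For reference, the GLA index follows Louhaichi et al. (2001); a minimal sketch of how it can be computed from the R, G and B columns is given below. It reproduces the GLA values in the example table further down, but it is not the FORTLS implementation itself.

# Green Leaf Algorithm (Louhaichi et al., 2001): higher values indicate green
# (leaf) points, negative values indicate ground and stem points.
gla <- function(R, G, B) (2 * G - R - B) / (2 * G + R + B)

gla(R = 43264, G = 23552, B = 28416)  # about -0.207, as in the first row of the table below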
The normalize function generates the data frame shown below. Each row corresponds to one point of the point cloud of the input data. The columns id, file and point indicate the plot identification number, the file name and the point number respectively. The following columns contain the normalized Cartesian, cylindrical and spherical coordinates: x (distance on the x axis in m), y (distance on the y axis in m), z (height relative to the ground level in m), rho (horizontal distance in m), phi (angle in rad), r (radial distance in m) and theta (polar angle in rad). The column slope gives the slope of the terrain in rad. If the GLA was used, the column GLA shows the results of that algorithm. Furthermore, a selection probability is assigned to each point by applying the PCP algorithm (prob), and the column prob.selec indicates the selected points (1) and the discarded points (0).
id | file | point | x | y | z | rho | phi | r | theta | slope | R | G | B | GLA | prob | prob.selec |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | single.txt | 1 | 1.259 | -9.688 | 0.460 | 9.769464 | 3.012362 | 9.780288 | 0.0470507 | 0.1441573 | 43264 | 23552 | 28416 | -0.2068966 | 0.0858728 | 0 |
1 | single.txt | 2 | 1.259 | -9.688 | 0.466 | 9.769464 | 3.012362 | 9.780572 | 0.0476635 | 0.1441573 | 41728 | 23296 | 27648 | -0.1964680 | 0.0858778 | 0 |
1 | single.txt | 3 | 1.265 | -9.687 | 0.460 | 9.769247 | 3.011740 | 9.780071 | 0.0470518 | 0.1441573 | 39168 | 20992 | 25344 | -0.2115385 | 0.0858690 | 0 |
1 | single.txt | 4 | 1.265 | -9.687 | 0.466 | 9.769247 | 3.011740 | 9.780355 | 0.0476646 | 0.1441573 | 38912 | 21248 | 25600 | -0.2057416 | 0.0858740 | 0 |
1 | single.txt | 5 | 1.277 | -9.686 | 0.460 | 9.769817 | 3.010509 | 9.780640 | 0.0470490 | 0.1441573 | 38400 | 31232 | 33024 | -0.0669216 | 0.0858790 | 0 |
1 | single.txt | 6 | 1.277 | -9.686 | 0.466 | 9.769817 | 3.010509 | 9.780924 | 0.0476618 | 0.1441573 | 34304 | 28672 | 30976 | -0.0647182 | 0.0858840 | 1 |
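The cylindrical and spherical coordinates in the table can be reproduced from x, y and z; a minimal sketch follows. The angle conventions are inferred from the example rows above and should be treated as assumptions rather than a definitive description of the internals.

# Derive cylindrical/spherical coordinates from the normalized Cartesian ones.
# phi appears to be measured from the positive y axis and wrapped to [0, 2*pi);
# theta appears to be the elevation angle above the horizontal plane.
x <- 1.259; y <- -9.688; z <- 0.460      # first row of the example table
rho   <- sqrt(x^2 + y^2)                 # horizontal distance: 9.769
phi   <- atan2(x, y) %% (2 * pi)         # angle: 3.012
r     <- sqrt(x^2 + y^2 + z^2)           # radial distance: 9.780
theta <- atan2(z, rho)                   # polar angle: 0.047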
The .txt file saved to the directory indicated by dir.result (if save.result = TRUE) contains a data frame similar to that shown above. However, it only includes the reduced point cloud, i.e. only the selected points (prob.selec = 1). The data frame is saved without row names as a .txt file using the vroom_write function of the vroom package.
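If save.result = TRUE was used, the saved file can be read back for later processing; a minimal sketch, assuming the file name single.txt from the call above was written to dir.result:

# Read the reduced point cloud saved by normalize(); the file name is taken from
# the "file" argument of the example call above (an assumption for this sketch).
library(vroom)
pcd.reduced <- vroom(file.path(dir.result, "single.txt"))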
The tree detection functions include algorithms to detect as many trees as possible in the point clouds. Additionally, the diameter at 1.3 m above ground level (diameter at breast height, \(dbh\)) is estimated and the coordinates of the tree’s center are calculated for each detected tree. Depending on the TLS approach, different tree detection functions should be used.
When the single-scan approach was used to collect the data, the function tree.detection.single.scan can be applied as follows:
tls.resolution = list(point.dist = 6.34, tls.dist = 10)
tree.list.single.tls <- tree.detection.single.scan(data = pcd.single,
dbh.min = 4, dbh.max = 200, h.min = 1.3,
ncr.threshold = 0.1,
tls.resolution = tls.resolution,
d.top = NULL,
plot.attributes = NULL,
breaks = 1.3, stem.range = NULL, stem.section = c(1,5),
save.result = FALSE, dir.result = NULL)
The normalized and reduced point cloud, i.e. the output of the normalize function, is the input data frame for this function (data). The different arguments that can be specified are explained below.
With dbh.min and dbh.max, the range of possible tree diameters can be specified. Hence, only clusters of points with a diameter larger than dbh.min and smaller than dbh.max will be considered as possible trees. Additionally, h.min defines the minimum height a point cluster must reach to be considered a tree. If not specified manually, the values are set to dbh.min = 4 and dbh.max = 200 (values in cm) and h.min = 1.3 (value in m).
The resolution of the TLS scan (tls.resolution) can be defined either by the aperture angle or by the distance between two consecutive points. The aperture angle is determined by the horizontal and vertical aperture angles (horizontal.angle and vertical.angle). When choosing the angle to define the TLS resolution, both elements must be part of the list required in tls.resolution = list(horizontal.angle, vertical.angle). The second option determines the resolution from the distance between two consecutive points (point.dist) at a certain distance from the TLS device (tls.dist), also given as a list, as shown in the example above and in the sketch below.
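A minimal sketch of the two equivalent ways of building the tls.resolution list; the numeric values and their units are illustrative assumptions, not recommendations:

# Option 1: resolution given as horizontal and vertical aperture angles
tls.resolution <- list(horizontal.angle = 0.017, vertical.angle = 0.017)

# Option 2: resolution given as the distance between two consecutive points
# (point.dist) measured at a known distance from the device (tls.dist)
tls.resolution <- list(point.dist = 6.34, tls.dist = 10)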
In plot.attributes, a data frame with attributes at plot level (e.g. strata) can be supplied as additional information. This data frame must contain a column named id coinciding with that used in the id argument of the normalize function. If there are strata, the column specifying the strata must be named stratum (numeric) for use by other functions (e.g. estimation.plot.size or metrics.variables). If this argument is not specified, it is set to NULL by default and the function does not add any plot attributes. A minimal example of such a data frame is sketched below.
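Assuming two plots and hypothetical stratum codes:

# Hypothetical plot-level attributes: the id column must match the id used in
# normalize(); stratum is required only if strata are used by later functions.
plot.attributes <- data.frame(id = c(1, 2), stratum = c(1, 2))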
In order to distinguish stem points from points belonging to thin branches or foliage, the local surface variation, also known as the normal change rate (NCR), is calculated for each point. This is a quantitative measure of the curvature feature (Pauly et al., 2002). For each point, the NCR index is estimated in a local neighborhood with a radius of 5 cm. This radius is considered suitable for stem separation in forests (Ma et al., 2015; Xia et al., 2015). Higher NCR values indicate more curved surfaces, e.g. branches and foliage. Therefore, a threshold (ncr.threshold) is established, which can be modified manually. By default it is set to 0.1, following Zhang et al. (2019), meaning that points with an NCR value above that threshold are discarded.
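For illustration, local surface variation can be computed as the smallest eigenvalue of the neighborhood covariance matrix divided by the sum of the eigenvalues (Pauly et al., 2002). A minimal sketch for a single point neighborhood follows; this shows the underlying definition, not the FORTLS implementation.

# Local surface variation (NCR) for one point's neighborhood: ratio of the
# smallest eigenvalue of the 3D covariance matrix to the sum of all three.
# Values near 0 indicate a flat surface (stem), larger values a curved one.
ncr <- function(neigh) {            # neigh: n x 3 matrix of x, y, z coordinates
  ev <- eigen(cov(neigh), symmetric = TRUE, only.values = TRUE)$values
  min(ev) / sum(ev)
}

# Toy example: points scattered tightly around a plane give a low NCR value
set.seed(1)
flat <- cbind(runif(50), runif(50), rnorm(50, sd = 0.001))
ncr(flat)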
In order to improve tree detection, the point cloud is reduced by removing parts of it that contain no trees. The argument stem.section serves to identify the part of the point cloud, i.e. a range of the coordinate \(z\), which contains fewer bushes, branches or other disruptive points. Hence, a range of the \(z\) coordinate, and therefore a belt-like area, is selected, either by defining the range manually or by an internal algorithm. This belt-like area predominantly contains the stems of the trees. Within this horizontal belt, point clusters with higher density are chosen, which are presumed to be the tree stems. By applying a circular buffer around these stems, vertical cylinders are created that contain the stems. In the subsequent algorithms, only these vertical cylindrical parts of the point cloud are used to detect trees.
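As an illustration of the belt selection only (the density-based stem search inside the belt is not reproduced here), the manual variant stem.section = c(1, 5) corresponds to keeping points between 1 m and 5 m above ground:

# Keep only the belt of points between 1 m and 5 m above ground level,
# mirroring stem.section = c(1, 5) from the example call above.
belt <- pcd.single[pcd.single$z >= 1 & pcd.single$z <= 5, ]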
After the cylinders have been selected from the point cloud, breaks defines the heights (in m) of the horizontal slices on which the tree detection algorithms are applied. If not otherwise specified, slices are taken every 0.3 m, starting at a height of 0.4 m and continuing up to the maximum height. The slices have an extension of 0.1 m (height of the slice +/- 5 cm). On each slice the following algorithms are applied:
- Points with an NCR value above the threshold (ncr.threshold) are removed.
- The remaining points are clustered with the DBSCAN algorithm (Ester et al., 1996); the dbscan function of the dbscan package is used to perform the clustering. The radius of the epsilon neighborhood (eps) is defined as the minimum distance between two consecutive points at the furthest distance from the plot center in the respective horizontal slice.
As explained above, these algorithms are applied on all horizontal slices (defined by breaks). Thus, tree sections are identified at different heights. Sections belonging to the same tree are then joined by applying the DBSCAN algorithm to the horizontal projection of the different sections (see the sketch below). Thereafter, tree attributes can be estimated.
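A minimal sketch of the clustering step using the dbscan package; the eps value here is an arbitrary assumption, whereas FORTLS derives it from the scan resolution as described above:

# Cluster the x-y projection of one horizontal slice into candidate stem sections.
library(dbscan)

slice <- pcd.single[pcd.single$z >= 1.25 & pcd.single$z <= 1.35, ]  # 1.3 m slice, +/- 5 cm
clusters <- dbscan(as.matrix(slice[, c("x", "y")]), eps = 0.05, minPts = 10)
table(clusters$cluster)                                             # 0 = noise, 1..k = stem candidates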
The diameter of the detected trees (\(dbh\)) is obtained at 1.3 m as twice the radius. If a tree is not detected in the section at 1.3 m, its \(dbh\) is estimated by fitting a linear taper function with the radius as the response variable and the section heights as the explanatory variable. This makes it possible to estimate the radius at 1.3 m and thereby calculate the \(dbh\), as sketched below.
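A minimal sketch of such a fallback estimate, assuming a small hypothetical data frame of section heights and fitted radii for one tree:

# Hypothetical section radii (m) detected at several heights (m) for one tree
sections <- data.frame(h = c(0.4, 0.7, 1.0, 1.6, 1.9),
                       radius = c(0.118, 0.115, 0.112, 0.106, 0.103))

taper <- lm(radius ~ h, data = sections)                  # linear taper function
dbh <- 2 * predict(taper, newdata = data.frame(h = 1.3))  # dbh (m) at breast height
dbh * 100                                                 # in cm, as reported by FORTLS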
The argument d.top defines the top stem diameter (in cm), which is used for the calculation of the commercial stem volume. If this argument is not specified, the commercial stem volume (v.com) is not calculated.
id | file | tree | x | y | phi | h.dist | dbh | h | v | SS.max | sinuosity | n.pts | n.pts.red | n.pts.est | n.pts.red.est | partial.occlusion |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 1.txt | 1 | 781876.2 | 183714.3 | 2.395829 | 0.0909626 | 22.1215 | 15.651 | 0.2926833 | 0.273423 | 1.007558 | 1351 | 676.3333 | 1436 | 723 | 1 |
The data frame shown above is the output of the tree.detection.single.scan function. Each row represents a detected tree (consecutively numbered in the column tree). The columns id and file display the plot identification number and the file name respectively, matching the columns in the normalize output table. The coordinates of the detected trees are given as the Cartesian coordinates of the tree's center (x and y, in m), the azimuthal angles of the center (phi in rad), the left border (phi.left in rad) and the right border (phi.right in rad), and the horizontal distance from the tree's center to the plot's center (h.dist in m).
Furthermore, the tree attributes dbh (diameter at breast height in cm), h (total height in m) and v (tree stem volume in m\(^3\)) are estimated. If d.top was given as an argument, the volume of the stem from the ground to the height at which the stem reaches the diameter given in d.top is also estimated (commercial stem volume, v.com, in m\(^3\)).
For each tree, the number of points in the normal section slice (1.3 m +/- 0.05 m) of the original point cloud and of the reduced point cloud (n.pts and n.pts.red respectively) are calculated, together with their estimated counterparts (n.pts.est and n.pts.red.est respectively). The column partial.occlusion indicates whether the detected tree is partially occluded (1) or not (0).
The data frame is saved as a .csv file without row names, using the write.csv function from the utils package.
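As a simple example of post-processing the tree list, the basal area of each detected tree can be derived from the dbh column; this is a standard conversion, not a FORTLS function:

# Basal area (m^2) per detected tree from dbh in cm: g = pi * (dbh / 200)^2
tree.list.single.tls$g <- pi * (tree.list.single.tls$dbh / 200)^2
sum(tree.list.single.tls$g)   # total basal area of the detected trees in the plot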
When multiple scans were performed in the same sampling plot (multi-scan approach) or SLAM devices were used, the function tree.detection.multi.scan can be applied as shown below. Additionally, the normalize function call must be adjusted by specifying scan.approach = "multi".
pcd.multi <- normalize(las = "PiceaAbies.laz",
x.center = 0, y.center = 0,
scan.approach = "multi", file = "multi.txt",
dir.data = dir.data, save.result = FALSE)
tree.list.multi.tls <- tree.detection.multi.scan(data = pcd.multi[pcd.multi$prob.selec == 1, ],
dbh.min = 4, dbh.max = 200, h.min = 1.3,
slice = 0.15,
ncr.threshold = 0.1,
tls.precision = 0.05,
breaks = NULL, stem.section = c(1,5),
d.top = NULL,
plot.attributes = NULL,
save.result = FALSE, dir.result = NULL)
The function tree.detection.multi.scan takes the same arguments as the function tree.detection.single.scan, which are described in "Data from TLS single-scan approach". However, instead of specifying the resolution, the precision of the TLS (in m) can be defined in tls.precision. If not defined, the default value is 0.03 m. The procedure remains the same, and the output data frame contains all the columns explained above:
id | file | tree | x | y | phi | h.dist | dbh | h | h.com | v | v.com | SS.max | sinuosity | n.pts | n.pts.red | n.pts.est | n.pts.red.est | partial.occlusion |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
133 | 133.txt | 1 | -0.6255135 | 2.230320 | 1.903864 | 2.596521 | 47.72769 | 29.794 | 22.59629 | 2.490724 | 2.390859 | 0.1141185 | 20.478916 | 274.33333 | 274.33333 | 306.6935 | 306.6935 | 1 |
133 | 133.txt | 2 | 1.6590828 | -3.401872 | 5.136634 | 3.487652 | 55.07259 | 29.429 | 23.76329 | 3.278033 | 3.199423 | 0.3156073 | 10.807046 | 337.66667 | 337.66667 | 353.8912 | 353.8912 | 1 |
133 | 133.txt | 3 | 2.8728250 | 6.563325 | 1.198616 | 7.285568 | 58.00902 | 33.992 | 27.92669 | 4.168106 | 4.083952 | 0.2681409 | 12.553299 | 369.66667 | 369.66667 | 372.7604 | 372.7604 | 1 |
133 | 133.txt | 4 | 4.9299154 | -6.044229 | 5.390092 | 7.492374 | 70.45516 | 24.724 | 21.52856 | 4.557866 | 4.513530 | 0.2316154 | 14.397739 | 24.33333 | 24.33333 | 452.7381 | 452.7381 | 1 |
133 | 133.txt | 5 | -4.6368417 | -6.521025 | 4.055099 | 7.954993 | 41.33213 | 30.237 | 21.05922 | 1.894102 | 1.766764 | 0.3832313 | 9.073311 | 690.66667 | 690.66667 | 265.5963 | 265.5963 | 1 |
133 | 133.txt | 6 | -0.1890250 | -8.495225 | 4.662564 | 8.282057 | 38.60144 | 23.131 | 15.41635 | 1.286238 | 1.179200 | NA | NA | 581.00000 | 581.00000 | 248.0492 | 248.0492 | 1 |
The following figure shows the same point cloud ("PiceaAbies.laz") as above. The trees that were detected by the tree.detection.multi.scan function are marked with a red belt at 1.3 m.
If data from multiple plots are to be analysed, the function tree.detection.several.plots can be used. This function conducts both the normalization and the tree detection processes for each plot automatically. The result tables (as explained above) are stored directly and separately for each plot. Hence, if an error occurs in one plot, the results of the previously analysed plots are preserved.
In this function, the arguments for both the normalize and the tree detection functions must be specified as explained above. The function is applied as follows:
tls.resolution = list(point.dist = 7.67, tls.dist = 10)
tree.list.tls <- tree.detection.several.plots(las.list = c("PinusSylvestris1.laz", "PinusSylvestris2.laz"),
id = NULL, file = NULL,
scan.approach = "single",
x.center = 0, y.center = 0,
max.dist = 10, min.height = 0.1, max.height = NULL,
algorithm.dtm = "knnidw", res.dtm = 0.2,
csf = list(cloth_resolution = 0.5),
dbh.min = 7, dbh.max = 200, h.min = 1.3,
tls.resolution = tls.resolution,
ncr.threshold = 0.05,
breaks = 1.3,
stem.section = c(0.5, 4),
d.top = NULL, plot.attributes = NULL,
dir.data = dir.data, save.result = FALSE, dir.result = NULL)
The names of the .las files have to be provided as a character vector in las.list. Optionally, vectors with the plot identification numbers and the file names can be specified in id and file. If not specified, the plots are numbered consecutively from 1 to the number of plots (id), and their respective id is used as the file name ("id.txt"). The other arguments can be specified as explained in the sections above.
The input files are analysed successively. After finishing the analysis of one plot, the reduced point cloud (as a .txt file) and the tree list (as a .csv file) are saved to the directory indicated in dir.result.
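Once all plots have been processed, the per-plot tree lists can be read back and combined; a minimal sketch, assuming the saved files follow the default plot ids ("1.csv", "2.csv") in dir.result:

# Combine the per-plot tree lists saved by tree.detection.several.plots().
# The file names below are assumptions based on the default plot ids.
files <- file.path(dir.result, c("1.csv", "2.csv"))
tree.list.all <- do.call(rbind, lapply(files, read.csv))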
Roussel, J.R., Auty, D., Coops, N.C., Tompalski, P., Goodbody, T.R.H., Sanchez Meador, A., Bourdon, J.F., de Boissieu, F., Achim, A. (2020). lidR: an R package for analysis of Airborne Laser Scanning (ALS) data. Remote Sens. Environ. 251, 112061. https://doi.org/10.1016/j.rse.2020.112061
Zhang, W., Qi, J., Wan, P., Wang, H., Xie, D., Wang, X., Yan, G. (2016). An easy-to-use airborne LiDAR data filtering method based on cloth simulation. Remote Sens. 8(6), 501. https://doi.org/10.3390/rs8060501
Molina Valero, J.A., Ginzo Villamayor, M.J., Novo Pérez, M.A., Álvarez-González, J.G., Pérez-Cruzado, C. (2019). Estimación del área basimétrica en masas maduras de Pinus sylvestris en base a una única medición del escáner láser terrestre (TLS). Cuad. Soc. Esp. Cienc. For. 45(3), 97-116. https://doi.org/10.31167/csecfv0i45.19887
Louhaichi, M., Borman, M.M., Johnson, D.E. (2001). Spatially located platform and aerial photography for documentation of grazing impacts on wheat. Geocarto International 16(1), 65-70. https://doi.org/10.1080/10106040108542184
Pauly, M., Gross, M., Kobbelt, L.P. (2002). Efficient simplification of point-sampled surfaces. In IEEE Conference on Visualization (pp. 163-170), Boston, USA. https://doi.org/10.1109/VISUAL.2002.1183771
Ma, L., Zheng, G., Eitel, J.U., Moskal, L.M., He, W., Huang, H. (2015). Improved salient feature-based approach for automatically separating photosynthetic and nonphotosynthetic components within terrestrial lidar point cloud data of forest canopies. IEEE Trans. Geosci. Remote Sens. 54(2), 679-696. https://doi.org/10.1109/TGRS.2015.2459716
Xia, S., Wang, C., Pan, F., Xi, X., Zeng, H., Liu, H. (2015). Detecting stems in dense and homogeneous forest using single-scan TLS. Forests 6(11), 3923-3945. https://doi.org/10.3390/f6113923
Zhang, W., Wan, P., Wang, T., Cai, S., Chen, Y., Jin, X., Yan, G. (2019). A novel approach for the detection of standing tree stems from plot-level terrestrial laser scanning data. Remote Sens. 11(2), 211. https://doi.org/10.3390/rs11020211
Ester, M., Kriegel, H.P., Sander, J., Xu, X. (1996). A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD-96 Proceedings (pp. 226-231).