

  • Research
  • Open Access

Research on 3D building information extraction and image post-processing based on vehicle LIDAR

EURASIP Journal on Image and Video Processing 2018, 2018:121

  • Received: 22 July 2018
  • Accepted: 11 October 2018
  • Published:


Taking the facade point cloud data of buildings as the object, this paper proposes a 3D reconstruction method. Using a simple clustering method, the vertices of planes with similar normal vector components are classified as point sets on the same plane. The directional clustering method is used to partition the points of each plane, and the least squares method is used to fit the data points. The outer boundary of the building facade is established, and the 3D coordinates of each corner point of the facade are then obtained. By iteratively adjusting the angle threshold and the iteration count, the key points of maximum slope change are found. Taking regular buildings in real blocks as an example, the model is compared with the experimental results. Comparative analysis experiments further demonstrate the effectiveness of the proposed method, which constructs the 3D model of the building more accurately.


  • Vehicle LIDAR
  • Point cloud data filtering
  • Least square method
  • Reconstruction of buildings

1 Introduction

The earliest methods for building model construction mainly used camera technology to extract the main structural information of a building from stereoscopic images. Advances in science and technology have promoted the use of LIDAR technology to extract building characteristic parameters; this technique has the advantages of high precision and fast acquisition speed.

The traditional automatic or semi-automatic reconstruction of 3D cities is mainly based on image modeling, which follows the principle of stereo vision: the surfaces of buildings are reconstructed from aerial and ground images. This stereo reconstruction process aims to restore realistic 3D coordinates from two-dimensional images, a basic task of photogrammetry [1] and computer vision [2]. The data pre-processing of a LIDAR system can be divided into two types: point cloud data filtering and building facade segmentation [3, 4]. The point cloud data are obtained by scanning urban buildings with vehicle LIDAR and include buildings, trees, roadside obstacles, street lights, and so on. Because the obtained point cloud data are unordered and carry no object attributes, it is hard to distinguish buildings from other objects, and therefore difficult to obtain useful building point cloud information from a large volume of point cloud data [5].

As for the feature extraction of roof structure, Vosselman and Dijkman use the traditional 3D Hough transform to extract the roof plane, which has a high computational complexity determined by the number of roof points and the discretization of two angles [6]. Elaksher and Bethel assume that each roof has only one slope and that all eaves are parallel or perpendicular to the main direction of the house, reducing the 3D Hough transform to two dimensions [7]. Following the hypothesis-verification approach, Hu proposes a new set of algorithms, including controlled search, an enhanced Hough transform, and a serial link technique, which automatically represent the boundary of the house as a rectangle or regular polygon. Because it is very difficult to rebuild the roofs of complex buildings, many algorithms focus on detecting the extent of houses. Most algorithms also need to distinguish between buildings and vegetation by means of auxiliary information, such as aerial images and plans.

Based on the special attributes of LIDAR point cloud data, a filtering method for point cloud data is established in this paper. At the same time, by setting thresholds for the corresponding parameters, facade segmentation of the filtered building point cloud is completed. Then, the 3D model of the building is reconstructed from the extracted facade point cloud.

2 Extraction of building point cloud from LIDAR point cloud data

The echo signal information obtained by vehicle LIDAR equipment mainly includes the three-dimensional coordinates of scanning points, echo intensity, echo frequency, and system parameters. The data recorded by different LIDAR systems differ considerably, but in practical applications the commonly used information is the point cloud geometric data (coordinates), laser intensity data, and laser echo data of objects returned by laser pulses [8].

2.1 Geometric data

This is the 3D coordinate data calculated by the laser rangefinder. It records the 3D spatial information of all terrain points in the whole area, and the geodetic coordinate conversion of the whole surveyed area is completed by coordinate solution and transformation. This part of the data is the main LIDAR product and its core data.

2.2 Laser intensity data

Laser intensity data reflect the response characteristics of surface objects to laser signals, and the intensity measurement methods used by different LIDAR systems differ considerably. In theory, laser intensity information best reflects the type of ground object, and these data can be analyzed to complete the classification. In practice, however, the intensity of the laser echo is related not only to the characteristics of the reflecting medium but also to the incidence angle of the laser, atmospheric absorption during pulse propagation, and so on. These disturbing factors are difficult to capture in accurate mathematical models, which makes it difficult to classify ground objects from intensity data alone.

2.3 Laser echo data

Due to the penetration of the laser, different objects produce different echo counts and intensities during scanning. When the laser pulse hits the flat top of a building or bare ground, only one echo is produced, and the echo signal is single. When the pulse irradiates vegetation, it can penetrate the canopy and form multiple echoes.

2.4 Spectral data

Laser radar can directly obtain the 3D coordinates of the target point, which compensates for the lack of height information in two-dimensional data such as optical images. Although LIDAR data have their own advantages in extracting spatial position information, the spectral information contained in image data also plays an important role in object recognition.

The most important task of building reconstruction is to extract useful data from the LIDAR point cloud. Point cloud filtering and classification algorithms based on geometric and attribute features are usually used to extract the building point cloud. The specific steps are shown in Fig. 1 [9, 10].
Fig. 1

Extraction process of building point cloud data

3 Method

The plane (facade) of a building facing the direction of vehicle travel is not always planar; it may be curved, concave, or convex, and the vehicle's heading at a given moment produces different relative obliquities [11]. The methods for rebuilding a building are as follows.

3.1 Facade laser point clustering of complex buildings

The ridge of a house is a relatively complicated structural form. In general, the ridge line of a house should be parallel to the horizontal plane, but the calculated results do not necessarily satisfy this. There are two ways to solve the problem: one is to modify the building model when the whole model is constructed; the other is to add a constraint between the two planes when estimating the roof planes and to obtain the two best perpendicular feet simultaneously by joint estimation.

An irregular triangulation network (TIN) stores the proximity between discrete points in space. The TIN mathematical model divides the obtained discrete data into several triangles; each point on these surfaces is the vertex of a triangle. Scattered irregular elevation points can be collected into a random point set P, with each point in P associated with its own elevation value, so that a spatial TIN model can be built [12]. The constructed TIN model is shown in Fig. 2a. For a triangular facet ΔABC, shown in Fig. 2b, we know the three-dimensional coordinates of the three vertices and can compute the normal vector \( \overrightarrow{v}=\overrightarrow{b}\times \overrightarrow{a} \) of ΔABC, drawn from the center of gravity of ΔABC, as shown in Fig. 2b. In complex buildings, the normal vectors have similar directions on planes with the same or similar slope, while the normal vectors of the triangulated facets on planes of different slope differ greatly.
Fig. 2

TIN model and surface normal vector. a TIN model. b Surface normal vector
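The per-facet normal \( \overrightarrow{v}=\overrightarrow{b}\times \overrightarrow{a} \) described above follows directly from the vertex coordinates. A minimal numpy sketch (function names are ours, not from the paper):

```python
import numpy as np

def triangle_normal(A, B, C):
    """Unit normal of triangular facet ABC via the cross product of two edge vectors."""
    a = np.asarray(B, dtype=float) - np.asarray(A, dtype=float)  # edge vector a
    b = np.asarray(C, dtype=float) - np.asarray(A, dtype=float)  # edge vector b
    v = np.cross(b, a)  # v = b x a, as in the text
    return v / np.linalg.norm(v)

def triangle_centroid(A, B, C):
    """Center of gravity of the facet, the anchor point of the normal in Fig. 2b."""
    return (np.asarray(A, dtype=float) + np.asarray(B) + np.asarray(C)) / 3.0

# Two coplanar facets (both in the plane z = 0) share the same normal direction,
# which is exactly the property used to group facets of equal slope.
n1 = triangle_normal([0, 0, 0], [1, 0, 0], [0, 1, 0])
n2 = triangle_normal([1, 1, 0], [2, 1, 0], [1, 2, 0])
```

Facets on planes of different slope would yield clearly different normal directions, which the clustering step below exploits.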

When normal vectors are assigned to the 3D model, the image produced by the rendering engine becomes three-dimensional. However, the model lacks texture and therefore still lacks realism. Texture is an important surface attribute of a 3D model; if real photographs are used as the texture of the 3D model, its realism is greatly enhanced.

Using a simple clustering method, the vertices of planes with similar normal vector components are classified as point sets on the same plane. For a regular herringbone facade, the elevation points are clustered into two classes by taking the mean values of the normal vectors' x and y components as centers. By intersecting the resulting point sets of each component's clustering, the comprehensive clustering result of the LIDAR points in the x and y components is obtained, as shown in Fig. 3.
Fig. 3

Cluster distribution and fitting of plane point
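The two-class clustering of normal components can be sketched as a simple two-center iteration; this is a hedged illustration of the idea (the paper does not give an algorithm listing, and the initialization choice here is ours):

```python
import numpy as np

def cluster_normals_xy(normals, iters=10):
    """Split facet normals into two planar groups by their (x, y) components.
    Minimal two-center clustering: each normal joins the nearer center,
    and each center is updated to the mean of its members."""
    xy = np.asarray(normals, dtype=float)[:, :2]
    # Initialize the two centers from the extremes of the x component (an assumption).
    centers = xy[[np.argmin(xy[:, 0]), np.argmax(xy[:, 0])]]
    for _ in range(iters):
        d = np.linalg.norm(xy[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = xy[labels == k].mean(axis=0)
    return labels

# Normals of a herringbone (gabled) facade: two opposite roof slopes.
normals = [(0.60, 0.10, 0.79), (0.61, 0.09, 0.78),
           (-0.60, -0.10, 0.79), (-0.59, -0.11, 0.80)]
labels = cluster_normals_xy(normals)
```

Facets belonging to the same slope end up with the same label; intersecting the x- and y-component results, as the text describes, then gives the comprehensive clustering.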

3.2 Plane fitting of vertical laser points

Just like straight-line fitting, the plane that best fits n points in space can be obtained by the least squares method. That is, the plane z = f(x, y) fitted to the n points (xi, yi, zi) (i = 1, 2, …, n) should minimize Σ[zi − f(xi, yi)]². Plane point set I and plane point set II can be fitted to plane I and plane II, respectively, by least squares fitting.
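For a plane of the form z = ax + by + c, the minimization above is a linear least squares problem; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c, minimizing sum (z_i - f(x_i, y_i))^2."""
    P = np.asarray(points, dtype=float)
    # Design matrix: one row [x_i, y_i, 1] per point.
    A = np.column_stack([P[:, 0], P[:, 1], np.ones(len(P))])
    coef, *_ = np.linalg.lstsq(A, P[:, 2], rcond=None)
    return coef  # (a, b, c)

# Noise-free points on the plane z = 2x - y + 3 are recovered exactly.
pts = [(0, 0, 3), (1, 0, 5), (0, 1, 2), (1, 1, 4), (2, 1, 6)]
a, b, c = fit_plane(pts)
```

Applying this separately to plane point sets I and II yields the fitted planes I and II mentioned in the text.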

In the plane z = f(x, y), the initial contour of the facade plane can be calculated from the convex hull of the point set, from which the boundary of the plane can be obtained [13]. Consider the point set S1 = {p1, p2, …, pn} on the plane; the vertices of its convex hull are calculated using the x and y coordinates of each point. The basic method is: (1) through the point p1 with the smallest y coordinate in S1 (a vertex of the convex hull), draw a ray l parallel to the positive direction of the x axis; (2) rotate l counterclockwise around p1 until it meets a second point p2 in S1, creating the segment p1p2, which is one of the edges of the convex hull, and then rotate l counterclockwise around p2; (3) continue rotating until the ray returns to its initial position at p1; the closed polygon obtained is the required convex hull.
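The rotating-ray procedure above is the classical gift-wrapping (Jarvis march) construction; a self-contained sketch in plain Python (points are (x, y) tuples):

```python
def convex_hull(points):
    """Gift-wrapping (Jarvis march): start at the lowest-y point and walk
    counterclockwise, at each step keeping the most clockwise candidate edge."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return pts

    def cross(o, a, b):
        # > 0 if b lies counterclockwise (left) of ray o->a, < 0 if clockwise.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    start = min(pts, key=lambda p: (p[1], p[0]))  # smallest y coordinate
    hull, p = [], start
    while True:
        hull.append(p)
        q = pts[0] if pts[0] != p else pts[1]
        for r in pts:
            if r == p:
                continue
            if cross(p, q, r) < 0:  # r is clockwise of p->q: take it instead
                q = r
        p = q
        if p == start:
            break
    return hull

# A square with an interior point: only the four corners form the hull.
hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
```

The returned vertices are ordered counterclockwise from the lowest point, matching the rotation direction described in steps (1)-(3).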

3.3 Determination of the outer boundary of the facade of complex buildings

The steps to track the boundary of the facade plane are as follows: first, find the external boundary of the facade; then, find the intersection line between adjacent planes; finally, establish the boundaries of each plane.

When calculating the boundary of the facade plane, the intersection line of the facade planes is calculated first, and the minimal rectangle outside the intersection line that can contain the convex hull is calculated from the geometric relation of the intersection line; its boundary is the boundary of the building facade plane. The intersection line l between two adjacent planes is the boundary of each building facade plane. Given plane I: A1x + B1y + C1z = 1 and plane II: A2x + B2y + C2z = 1, the direction vector of the intersection line l of planes I and II can be expressed in coordinates as (B1C2 − B2C1, C1A2 − C2A1, A1B2 − A2B1), and any point on the intersection can be represented as:
$$ \left\{\begin{array}{l}x={x}_0\\ {}y=\frac{C_1{A}_2-{C}_2{A}_1}{B_1{C}_2-{B}_2{C}_1}{x}_0-\frac{C_1-{C}_2}{B_1{C}_2-{B}_2{C}_1}\\ {}z=\frac{A_1{B}_2-{A}_2{B}_1}{B_1{C}_2-{B}_2{C}_1}{x}_0+\frac{B_1-{B}_2}{B_1{C}_2-{B}_2{C}_1}\end{array}\right.\kern2em (1) $$
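As a quick numeric check of the intersection-line construction, the direction vector is the cross product of the two plane normals, and one point on the line solves both plane equations; a small sketch (function name is ours):

```python
import numpy as np

def intersection_line(plane1, plane2):
    """Intersection of planes A1x+B1y+C1z=1 and A2x+B2y+C2z=1.
    Returns the direction vector (B1C2-B2C1, C1A2-C2A1, A1B2-A2B1), i.e. the
    cross product of the two normals, and one point on the line (the
    minimum-norm solution of the two plane equations)."""
    n1, n2 = np.asarray(plane1, dtype=float), np.asarray(plane2, dtype=float)
    direction = np.cross(n1, n2)
    point, *_ = np.linalg.lstsq(np.vstack([n1, n2]), np.ones(2), rcond=None)
    return direction, point

# The planes x = 1 and y = 1 meet in the vertical line x = y = 1.
d, p = intersection_line([1, 0, 0], [0, 1, 0])
```

Any other point on the line is then p + t·d, which is what Eq. (1) parameterizes with x0.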
The boundary corners of the corresponding facade can then be obtained from Eq. (1). Using the above method, according to the spatial position of each point in the TIN model, triangles with large point spacing are removed. The initial boundary points are obtained by selecting the boundary edges formed in the remaining triangular network. Among these initial boundary points, the points where the slope of the boundary direction changes greatly are inflection points. By connecting the inflection points, a regular polygon is obtained whose shape is basically the same as that of the actual building. However, in order to make all LIDAR points fall inside the polygon formed by the inflection points, we need to obtain the extension points of the boundary. The facade boundary formed by the extension points is the final extracted boundary of the building facade plane. The facade boundary extraction method for building LIDAR point cloud data is divided into four parts [14]:
1. Establishment of triangular networks: the point clouds on the facade of a building are combined to form a triangular network perpendicular to the horizontal plane, and triangles with overlong sides are removed.

2. Filter initial boundary points: in the remaining triangulation, a boundary edge belongs to only one triangle, while a non-boundary edge is shared by two triangles. The initial boundary points can therefore be separated by this property, as shown in Fig. 4. As can be seen from the figure, a polygon with concave angles is obtained directly from the irregular triangular network; the boundary of the convex hull is shown as the thick black line in Fig. 4. It can also be seen that this thick black line does not truly represent the facade boundary of the building. In this case, triangles whose side lengths are obviously larger than the average point spacing are removed from the irregular triangulation, and the initial boundary points are extracted using the spatial position relations of the points in the remaining triangulation. As shown in the upper left part of Fig. 4, some sides of the irregular triangulation are too long for the facade polygon; such triangles can be removed by setting a side-length threshold. In this paper, the threshold is set to 1.5 times the average point spacing, and triangles with sides longer than this threshold are removed. The treated facade is shown in Fig. 5.

3. Filter inflection points: among the initial boundary points, the points with significant slope change are determined by the angle between the two vectors formed by a point and its two adjacent points and are extracted as the inflection points of the polygon. These inflection points are filtered according to the magnitude of the angle between the two vectors: a point is called a key point when this angle exceeds a certain threshold.

Fig. 4

Irregular triangular network

Fig. 5

Initial boundary line and boundary point of elevation

Figure 6 shows the geometric relationship of the three adjacent point vectors, where Pneighbor1, Pcenter, and Pneighbor2 are initial boundary points. Pneighbor1 and Pneighbor2 are the two boundary points adjacent to the boundary point Pcenter, and θ is the angle of variation of the two boundary edges at Pcenter, i.e., the angle between the two vectors. The larger the angle value, the greater the slope change of the corresponding boundary; this property is used to judge whether a boundary point is a key point among the initial boundary points. By changing the angle limit (threshold) and the iteration count, the key points of maximum slope change are found. In practical applications, the initial angle threshold is often set to 10°, the angle threshold increment to 1°, and the number of iterations to 30; that is, when the angle threshold reaches 40°, it is a good threshold. The inflection points (marked in Fig. 7) are obtained by continuous iteration; the building facade boundary composed of these inflection points clearly expresses the contour of the facade.
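The angle test of Fig. 6 can be sketched as follows; here we measure the turning angle at each boundary point (0° on a straight run, large at a corner) and keep points above the final 40° threshold, a simplification of the iterative scheme described above (function names and the sample boundary are ours):

```python
import numpy as np

def turning_angle(p_prev, p_center, p_next):
    """Deviation (degrees) of the boundary at p_center from a straight run:
    0 when the three points are collinear, large at a sharp corner."""
    u = np.asarray(p_center, dtype=float) - np.asarray(p_prev, dtype=float)
    v = np.asarray(p_next, dtype=float) - np.asarray(p_center, dtype=float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def inflection_points(boundary, threshold_deg=40.0):
    """Indices of closed-boundary points whose turning angle exceeds the
    threshold (the text reaches 40 degrees after 30 iterations from 10)."""
    n = len(boundary)
    return [i for i in range(n)
            if turning_angle(boundary[i - 1], boundary[i],
                             boundary[(i + 1) % n]) > threshold_deg]

# A rectangle sampled with one midpoint per side: only the 4 corners survive.
boundary = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
corners = inflection_points(boundary)
```

Raising the threshold step by step, as in the text, progressively discards points on nearly straight runs and retains only the corners of the facade polygon.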
4. Extended borders: it can be seen from Fig. 7 that some laser points fall outside the contour of the building facade, so the boundary must be extended so that all laser data points fall inside the facade boundary contour. The idea of boundary expansion is to find, for each side of the polygon, the point with the largest distance to that side by calculating point-to-line distances. The implementation steps are: first, calculate the point Pmaxi (marked in the diagram) in the set {p1, p2, …, pm}, which is the point of maximum distance from the interior points to the boundary line. Then judge whether Pmaxi lies on the inner or outer side of the polygon formed by the building facade. If Pmaxi is outside the polygon, it is the extension point of the boundary line; if Pmaxi is inside the polygon, we calculate the point Pmaxnewi with maximum distance to the new line through Pmaxi whose direction vector is the same as that of the boundary line; this point is the extension point of the boundary line. The boundary obtained by these steps is shown as the black boundary in Fig. 7, which is the facade plane of the reconstructed building.

Fig. 6

Relation of adjacent vectors

Fig. 7

The turning point of the boundary
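The core geometric operation of the boundary-extension step (part 4 above) is the point-to-line distance used to select Pmaxi; a minimal sketch for one polygon edge (the function name is ours):

```python
import numpy as np

def farthest_point_from_edge(points, edge_a, edge_b):
    """Boundary-extension helper: the point with the largest perpendicular
    distance to the line through edge_a and edge_b (the P_maxi of the text)."""
    a = np.asarray(edge_a, dtype=float)
    b = np.asarray(edge_b, dtype=float)
    d = b - a
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal of the edge
    dists = [abs(np.dot(np.asarray(p, dtype=float) - a, n)) for p in points]
    i = int(np.argmax(dists))
    return points[i], dists[i]

# Points near the edge y = 0: the farthest one drives the boundary outward.
pts = [(0.5, 0.1), (1.0, 0.4), (1.5, 0.2)]
p_max, dist = farthest_point_from_edge(pts, (0, 0), (2, 0))
```

Shifting the edge outward through the selected point (keeping the edge direction, as the text describes) then guarantees that all laser points fall inside the extended boundary.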

3.4 Building model establishment

As shown in Fig. 8, a regular building with a herringbone facade has ten vertices. Since the walls of the building are vertical to the ground and the structure is symmetrical, vertices 3, 4, 5, 6 and 7, 8, 9, 10 share the same x and y values in pairs. Vertices 1 and 2 have the same elevation; vertices 3, 4, 5, and 6 have the same elevation; and vertices 7, 8, 9, and 10 have the same elevation [15]. Thus, the coordinates of the points are (x1, y1, z1), (x2, y2, z1), (x3, y3, z3), (x4, y4, z3), (x5, y5, z3), (x6, y6, z3), (x3, y3, z7), (x4, y4, z7), (x5, y5, z7), (x6, y6, z7). A total of 15 parameters are required, where z1 denotes the ridge elevation of the building facade, z3 the height of the eaves, and z7 the elevation of the ground. The parameters x1 to x6, y1 to y6, z1, and z3 can be obtained from the above plane fitting and boundary determination; z7 is the height of the point cloud around the building. The 3D model of the building can be established once the 3D coordinates of these corner points are determined.
Fig. 8

Herringbone elevation model
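The 15-parameter coordinate sharing above can be made explicit in code; a sketch that assembles the ten corner points (function name and example values are ours):

```python
def herringbone_vertices(x, y, z1, z3, z7):
    """Build the ten corner points of the symmetric herringbone model from the
    15 parameters of the text: x1..x6, y1..y6, ridge height z1, eave height z3,
    ground height z7. Coordinate sharing follows Fig. 8."""
    x1, x2, x3, x4, x5, x6 = x
    y1, y2, y3, y4, y5, y6 = y
    return [
        (x1, y1, z1), (x2, y2, z1),                              # ridge: vertices 1-2
        (x3, y3, z3), (x4, y4, z3), (x5, y5, z3), (x6, y6, z3),  # eaves: vertices 3-6
        (x3, y3, z7), (x4, y4, z7), (x5, y5, z7), (x6, y6, z7),  # ground: vertices 7-10
    ]

# Hypothetical fitted values: a 4 m x 2 m footprint, 5 m ridge, 3 m eaves.
v = herringbone_vertices((0, 4, 0, 4, 4, 0), (1, 1, 0, 0, 2, 2), 5.0, 3.0, 0.0)
```

Only the 15 inputs vary; all ten 3D corner points, and hence the whole model, follow from them.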

The research objects of this paper are mainly regular buildings. The facade of a building is mainly composed of a flat wall and some detailed features (such as windows), so the essence of segmenting the facade point cloud is to extract planar slices from the point cloud data. When dealing with the point cloud data of a building facade, the main task is the partition of the wall, windows, and other important planar objects contained in the data.

The vehicle laser scanning system acquires spatial point cloud data by scanning objects line by line. The laser scanning points are arranged in a two-dimensional grid, and each grid node corresponds to a three-dimensional spatial point. This organization is similar to that of a two-dimensional image, so such point cloud data are also called a range image. For the 3D reconstruction of vehicle point cloud data, most existing feature extraction methods are based on the range image, using image processing ideas to extract point cloud features. Li Bijun and others obtained the two-dimensional plane features of the building facade through overall matching correction of continuously scanned laser survey sections and then recalculated the original measurement data according to the correction information, obtaining outer-plane contour information that reflects the geometric characteristics of the building facade.

After extracting the line segment point sets from the original point cloud data, the influence of discrete points is eliminated. The remaining points mainly include ground points and building facade points, as well as some line segment point sets formed by reflections from other ground objects. In order to distinguish different objects by line segment sets, corresponding attributes must be assigned to them: (1) the slope of each line segment is obtained by fitting the points in the segment, where m = 1, …, M and M is the number of line segments on the scan line; (2) the coordinate \( \left({X}_m^{\prime },{Y}_m^{\prime },{Z}_m^{\prime}\right) \) of a line segment is the mean of the coordinates of its start and end points. The features of these line segments are analyzed to classify them accurately. Setting d0 and h0 as the height threshold and distance threshold, respectively, the elevation of ground points on the scan line is generally small, satisfying \( \left({Z}_m^{\prime }-\min \left({Z}^{\prime}\right)\right)<{d}_0 \), while the building facade is far from the scanning center, satisfying \( \left|{Y}_m^{\prime }-\max \left({Y}^{\prime}\right)\right|<{h}_0 \).
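The two threshold tests can be combined into a simple per-segment classifier; a hedged sketch (the function name, label strings, and sample values are ours, and the paper does not specify how ties between the two conditions are resolved, so ground is checked first here):

```python
import numpy as np

def classify_segments(seg_coords, d0, h0):
    """Label each scan-line segment by its mean coordinate (X'_m, Y'_m, Z'_m):
    'ground' if its height is within d0 of the lowest segment, 'facade' if its
    range is within h0 of the farthest segment, otherwise 'other'."""
    c = np.asarray(seg_coords, dtype=float)
    z_min = c[:, 2].min()   # min(Z')
    y_max = c[:, 1].max()   # max(Y')
    labels = []
    for X, Y, Z in c:
        if Z - z_min < d0:          # (Z'_m - min(Z')) < d0
            labels.append('ground')
        elif abs(Y - y_max) < h0:   # |Y'_m - max(Y')| < h0
            labels.append('facade')
        else:
            labels.append('other')
    return labels

# Hypothetical segments: low ones near the scanner (ground), tall distant ones (facade).
segs = [(0, 2.0, 0.1), (0, 3.0, 0.2), (0, 9.8, 4.0), (0, 10.0, 6.0), (0, 5.0, 2.5)]
labels = classify_segments(segs, d0=0.5, h0=0.5)
```

Segments satisfying neither condition (vegetation, roadside obstacles) are left for further analysis.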

4 Discussion and experimental results of building reconstruction

For the identified data, the size, position, and arrangement of the facade elements can be obtained. According to these relationships, the facade elements are classified and the structural information is extracted, so that a fine 3D model of the building facade can be obtained.

The Optech ALTM1210 LIDAR test system was used to collect data on two main buildings in the block. The average point density collected by the instrument is 0.67 points/m². The data record the three-dimensional position information fed back from each laser pulse signal and its last return; the measured block contains several buildings and some nearby vegetation.

GPS and other direct geographic positioning devices were not used in the acquisition of the ground LIDAR data. Therefore, in order to facilitate later data registration, the site layout of the ground LIDAR stations was briefly recorded during scanning, and a schematic spatial position relationship between the surrounding buildings and the stations was obtained by sketching the outlines of the surrounding buildings.

The obtained LIDAR point cloud data are shown in Fig. 9. The building on the left is marked “1” and the building on the right is marked “2”. Figure 10 shows the reconstructed building.
Fig. 9

LIDAR raw point cloud data

Fig. 10

Reconstruction of buildings. a Building 1. b Building 2

For the irregular discrete data in Fig. 9, before reconstructing the 3D buildings, we extract and separate the main point clouds using the methods described earlier in this paper, fit the extracted building facade point clouds, and rebuild the buildings with the preceding method. Comparative analysis experiments further demonstrate the validity and feasibility of the method described in this paper, and the proposed reconstruction method constructs the 3D model of the building more rapidly. A drawback of vehicle LIDAR is that it does not provide good point cloud data for roofs.

5 Conclusion

In this paper, using a simple clustering method, the vertices of planes with similar normal vector components are classified as point sets on the same plane. The directional clustering method is used to partition the points of each plane, and the least squares method is used to fit the data points. We conclude that:
1. Using the simple clustering method, the vertices of planes with similar normal vector components are classified as point sets on the same plane.

2. By using the directional clustering method to partition the points of each plane and fitting the data points by the least squares method, the outer boundary of the building facade is established, and the three-dimensional coordinates of each corner point of the facade are then obtained. The initial contour of the facade plane is calculated using the convex hull of the point set.

3. In the irregular triangulation networks, triangles whose sides are obviously longer than the average point spacing are removed, and the initial boundary points are extracted from the remaining triangulation using the spatial position relations of the points. By changing the angle limit (threshold) and iteration count, the key points of maximum slope change are found. The idea of boundary expansion is to find the point with the largest distance to each side of the polygon by calculating point-to-line distances.

  4. 4.

    Through comparative analysis experiments, the effectiveness of the proposed method is further demonstrated and the 3D model of the building can be constructed more accurately.




Acknowledgements

The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.

About the authors

1. Cang Peng, 30 years old, doctoral candidate, studying at Changchun University of Science and Technology. His main research covers electro-mechanical system control theory and technology, and image processing.

2. Yu Zhenglin, corresponding author, 46 years old, professor, working at Changchun University of Science and Technology. His main research covers electro-mechanical system control theory and technology, and image processing.


Funding

This work was supported by the National Science Foundation of China (51204100).

Availability of data and materials

The data can be provided by the authors.

Authors’ contributions

CP was responsible for the design and experiment of the manuscript. YZ was responsible for writing and revising the manuscript. Both authors read and approved the final manuscript.

Competing interests

We confirm that the manuscript has not been published before and is not under consideration by another journal. The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

College of Mechanical and Electric Engineering, Changchun University of Science and Technology, Changchun, 130022, China


  1. M. Pierzchala, P. Giguere, R. Astrup, Mapping forests using an unmanned ground vehicle with 3D LiDAR and graph-SLAM. Computers 145, 217–225 (2018)
  2. B. Brown, S. Rusinkiewicz, Global non-rigid alignment of 3-D scans. ACM Transactions on Graphics (Proc. SIGGRAPH) 26(3), 185–188 (2007)
  3. X.F. Huang, H. Li, X. Wang, F. Zhang, An overview of airborne LIDAR data filtering methods. Acta Geodaetica et Cartographica Sinica 38(5), 465–469 (2009)
  4. K. Zhang, S.C. Chen, D. Whitman, et al., A progressive morphological filter for removing nonground measurement from LIDAR data. IEEE Trans. Geosci. Remote Sens. 41(4), 872–882 (2003)
  5. X.G. Lin, X.G. Ning, H.Y. Duan, J.X. Zhang, A single-stage LIDAR point cloud clustering method for stratified random sampling. Science of Surveying and Mapping 42(4), 10–16 (2017)
  6. H. Razali, N. Ouarti, Lidarbox: a fast and accurate method for object proposals via lidar point clouds for autonomous vehicles, in IEEE International Conference on Image Processing (IEEE, 2017), pp. 1352–1356
  7. A. Elaksher, J. Bethel, Building extraction using LIDAR data, in ACSM-ASPRS Annual Conference Proceedings (Washington DC, 2002)
  8. M. Tan, B. Wang, Z. Wu, et al., Weakly supervised metric learning for traffic sign recognition in a LIDAR-equipped vehicle. IEEE Transactions on Intelligent Transportation Systems 17(5), 1415–1427 (2016)
  9. J.H. Mao, Y.J. Liu, P.G. Cheng, X.H. Li, Q.H. Zeng, J. Xia, Feature extraction with LIDAR data and aerial images, in Proceedings of the 14th International Conference on Geoinformatics, SPIE 6419(64190), 1–8 (2006)
  10. S.S. Zhu, H.W. Wang, Remote Sensing Image Processing and Application (Science Press, Beijing, 2006)
  11. C. Zhao, B.M. Zhang, X.W. Chen, et al., A building extraction method based on LIDAR point cloud. Bulletin of Surveying and Mapping (2), 35–39 (2017)
  12. J.Y. Rau, A line-based 3D roof model reconstruction algorithm: tin-merging and reshaping. ISPRS Annals of Photogrammetry 1–3, 287–292 (2012)
  13. P.D. Zhou, Computational Geometry: Design and Analysis of Algorithms (University Press, Beijing, 2005)
  14. D. Mongus, N. Lukač, B. Žalik, Ground and building extraction from LIDAR data based on differential morphological profiles and locally fitted surfaces. ISPRS Journal of Photogrammetry & Remote Sensing 93(7), 145–156 (2014)
  15. Q.H. Zeng, Airborne laser radar point cloud data processing and building 3D reconstruction, Ph.D. thesis (Shanghai University, 2009)


© The Author(s). 2018