AHEAD OF PRINT
Journal details
License
Format
Journal
eISSN
2444-8656
First published
01 Jan 2016
Publication timeframe
2 times per year
Languages
English
Open Access

3D Modeling System of Indoor Environment Art Landscape Design under Statistical Equation

Published online: 15 Jul 2022
Volume & Issue: AHEAD OF PRINT
Page range: -
Received: 17 Feb 2022
Accepted: 21 Apr 2022
Introduction

In recent years, drone aerial photography has been used increasingly in forestry. UAV oblique photography is an emerging and effective means of quickly obtaining 3D models of a survey area. It breaks through the limitation of traditional aerial survey cameras, which shoot only from a vertical angle [1], by collecting aerial images with spatial information from one vertical and several oblique angles simultaneously. The resulting imagery truly reflects the appearance, position, and height of objects and makes up for the low realism of traditional manual modeling, so the technique is gradually being applied to rapid 3D modeling. This study proposes a 3D modeling method for indoor environmental art landscapes that combines UAV oblique photography with 3D laser scanning, realizing the complementary advantages of the two technologies.

Field data collection
Field data collection equipment
UAV tilt camera system

We use a micro-UAV tilt photography system: an electrically powered six-rotor drone equipped with a miniature five-lens AP5600 tilt camera [2]. The system can carry out oblique photography over areas of up to 5 km², with a data resolution of up to 2 cm, meeting the requirements of 1:500 surveying and mapping accuracy. The technical parameters are shown in Table 1.

Table 1. Related technical parameters of the tilt camera

Technical indicator Parameter
Number of sensors 5
Sensor size 23.2 mm × 15.4 mm
Pixel physical size 4.25 μm
Lens focal length 20 mm
Side-view lens tilt angle 45°
Image resolution 5472 × 3648
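As a quick sanity check (our own arithmetic, not code from the paper), the standard pinhole relation GSD = H·p/f connects the quoted 2 cm resolution to the flying height implied by the Table 1 parameters; the function name is ours:

```python
# Sketch: relate ground sample distance (GSD) to flying height for the
# tilt-camera parameters in Table 1, using the pinhole relation GSD = H*p/f.

def flying_height_for_gsd(gsd_m: float, focal_m: float, pixel_m: float) -> float:
    """Flying height H (metres) that yields the requested GSD."""
    return gsd_m * focal_m / pixel_m

# Table 1 values: 20 mm focal length, 4.25 um pixel pitch, 2 cm GSD.
H = flying_height_for_gsd(gsd_m=0.02, focal_m=0.020, pixel_m=4.25e-6)
print(f"Flying height for 2 cm GSD: {H:.1f} m")  # ~94.1 m
```

This suggests the quoted 2 cm resolution corresponds to a flying height on the order of 90–100 m for this camera.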
Backpack 3D Laser Scanner

The HERON backpack scanner is a high-precision laser SLAM (simultaneous localization and mapping) measurement system composed of a laser scanner, an IMU, a high-performance computer, a lithium battery, and a rigid backpack frame [3]. Its scanning accuracy reaches the centimeter level, and it can simultaneously build centimeter-level two-dimensional and three-dimensional maps. The system supports large-scale scene modeling, measurement, mapping, spatial analysis, and other functions, making it high-precision, high-efficiency, low-cost 3D scanning and measurement equipment for integrated indoor and outdoor use. Its technical parameters are shown in Table 2.

Table 2. Related technical parameters of the HERON backpack 3D laser scanning system

Technical indicator Parameter
Scan distance 100 m
Scan angle −10° to +30°
Absolute accuracy 5 cm
Relative accuracy 1 cm
Absolute resolution 2 cm
Initialization time 30 s
Battery life 3–4 h
Obtaining 3D laser point cloud on the ground

The data collection and processing workflow includes initial position estimation, pose optimization, segmentation-map and trajectory updating, and fusion solution (Figure 1). The IMU provides accelerations and angular rates, from which a preliminary estimate of the system's position and attitude is obtained [4]. During operation, the equipment jointly optimizes newly acquired poses with the existing results to obtain accurate trajectory data. The system then performs loop-closure detection and adjustment on the generated trajectory and laser scan data to obtain the 3D point cloud. Figure 2 shows a tree point cloud collected by the backpack 3D laser scanner.
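The IMU step of this pipeline can be illustrated with a minimal dead-reckoning sketch (our own illustration, not the HERON firmware): body-frame acceleration and yaw rate are integrated to give the rough pose prior that scan matching later refines. All names here are ours.

```python
import numpy as np

def imu_dead_reckon(p, v, theta, accel, omega, dt):
    """One 2D dead-reckoning step: returns updated position, velocity, heading.

    p, v      -- 2D position and velocity in the world frame
    theta     -- current heading (rad)
    accel     -- 2D acceleration measured in the body frame
    omega     -- yaw rate (rad/s)
    """
    theta_new = theta + omega * dt                      # integrate yaw rate
    # rotate body-frame acceleration into the world frame
    c, s = np.cos(theta), np.sin(theta)
    a_world = np.array([c * accel[0] - s * accel[1],
                        s * accel[0] + c * accel[1]])
    p_new = p + v * dt + 0.5 * a_world * dt**2          # position update
    v_new = v + a_world * dt                            # velocity update
    return p_new, v_new, theta_new
```

Integrating such steps alone drifts quickly, which is exactly why the pipeline re-optimizes poses against the laser scans and applies loop closure.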

Figure 1

Backpack 3D laser scanning process

Figure 2

Tree point cloud acquired by backpack 3D laser scanner

Fusion modeling of laser point cloud and dense point cloud
Technical route design

The UAV oblique camera system captures oblique images from multiple angles and extracts many feature points from the multi-view imagery, which aerial-triangulation densification and matching algorithms turn into a dense point cloud. The backpack 3D laser scanning system scans the ground at a certain angle and directly measures the 3D surface coordinates of ground objects, producing a 3D point cloud. Both methods generate large amounts of point cloud data [5], which makes it possible to fuse the point cloud obtained by oblique photography with the 3D laser point cloud. UAV oblique photography can quickly acquire three-dimensional and texture information of ground objects over a large area, but the accuracy of the resulting ground objects is low. The laser point cloud from the backpack 3D laser scanning system makes up for this insufficiency: it accurately captures ground objects and the occluded areas under the forest canopy. This study therefore adopts a 3D modeling method that combines UAV oblique photography with backpack 3D laser scanning. The combination improves model accuracy and addresses the typical defects of oblique-photography 3D modeling: holes in the canopy, occluded ground objects under the canopy, and local smearing of the model near the ground. The result is high-precision 3D modeling. The modeling process is shown in Figure 3.

Figure 3

3D scene modeling process

Oblique image to generate a dense point cloud

We linearize the image-point coordinates in the image plane to obtain an approximate solution, taking the photo as the basic unit of the adjustment calculation [6]. After measuring the pixel coordinates of the control points on each photo, we estimate the block network, which determines approximate values of the exterior orientation elements and of the encrypted (tie) point coordinates for each photo in the block. According to the collinearity condition, we then list error equations separately for control points and tie points, carry out a unified block adjustment, and solve for the exterior orientation elements of each photo and the ground coordinates of the tie points.

Suppose S is the photographic center, with coordinates (X_S, Y_S, Z_S) in the world coordinate system, and M is a point in space with world coordinates (X, Y, Z). Let m be the image of M on the photo [7], with image-plane coordinates (x, y, −f) and image-space auxiliary coordinates (X_m, Y_m, Z_m). The three points S, m, M are collinear, which gives:

$$\frac{X_m}{X - X_S} = \frac{Y_m}{Y - Y_S} = \frac{Z_m}{Z - Z_S} = \lambda \tag{1}$$

According to the relationship between the image-plane coordinates and the image-space auxiliary coordinate system, the collinearity equations can be derived.
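As an illustration (our own sketch, not code from the paper), the collinearity relation can be used to project a world point into the image plane. The φ–ω–κ rotation convention below is one common choice among several, and the function names are ours:

```python
import numpy as np

def rotation_matrix(phi, omega, kappa):
    """World-to-image rotation built from phi, omega, kappa.

    One common angle convention; photogrammetry textbooks differ on order."""
    Rphi = np.array([[np.cos(phi), 0, -np.sin(phi)],
                     [0, 1, 0],
                     [np.sin(phi), 0, np.cos(phi)]])
    Romega = np.array([[1, 0, 0],
                       [0, np.cos(omega), -np.sin(omega)],
                       [0, np.sin(omega), np.cos(omega)]])
    Rkappa = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                       [np.sin(kappa), np.cos(kappa), 0],
                       [0, 0, 1]])
    return Rphi @ Romega @ Rkappa

def collinearity_project(ground_pt, cam_center, R, f):
    """Project a ground point to image-plane coordinates (x, y)
    via the collinearity condition: S, m, M lie on one ray."""
    d = R.T @ (np.asarray(ground_pt, float) - np.asarray(cam_center, float))
    x = -f * d[0] / d[2]
    y = -f * d[1] / d[2]
    return x, y
```

For a nadir view (R the identity) from 100 m with f = 20 mm, a point offset 10 m on the ground projects 2 mm off the principal point, matching the scale one would expect.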

x_0, y_0, f are the image-plane coordinates of the principal point and the principal distance of the camera; together they are the interior orientation elements of the image. Linearizing the collinearity equations gives:

$$F_x = F_{x0} + \frac{\partial F_x}{\partial X_S}d_{X_S} + \frac{\partial F_x}{\partial Y_S}d_{Y_S} + \frac{\partial F_x}{\partial Z_S}d_{Z_S} + \frac{\partial F_x}{\partial \varphi}d_{\varphi} + \frac{\partial F_x}{\partial \omega}d_{\omega} + \frac{\partial F_x}{\partial \kappa}d_{\kappa} + \frac{\partial F_x}{\partial X}d_X + \frac{\partial F_x}{\partial Y}d_Y + \frac{\partial F_x}{\partial Z}d_Z \tag{4}$$

$$F_y = F_{y0} + \frac{\partial F_y}{\partial X_S}d_{X_S} + \frac{\partial F_y}{\partial Y_S}d_{Y_S} + \frac{\partial F_y}{\partial Z_S}d_{Z_S} + \frac{\partial F_y}{\partial \varphi}d_{\varphi} + \frac{\partial F_y}{\partial \omega}d_{\omega} + \frac{\partial F_y}{\partial \kappa}d_{\kappa} + \frac{\partial F_y}{\partial X}d_X + \frac{\partial F_y}{\partial Y}d_Y + \frac{\partial F_y}{\partial Z}d_Z \tag{5}$$

In formulas (4) and (5), F_{x0} and F_{y0} are the approximate values of the collinearity functions; d_{X_S}, d_{Y_S}, d_{Z_S}, d_{\varphi}, d_{\omega}, d_{\kappa} are the corrections to the exterior orientation elements; and d_X, d_Y, d_Z are the coordinate corrections of the point to be determined. According to formula (3), the error equations in matrix form are:

$$\begin{bmatrix} V_x \\ V_y \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} & a_{15} & a_{16} \\ a_{21} & a_{22} & a_{23} & a_{24} & a_{25} & a_{26} \end{bmatrix} \begin{bmatrix} d_{X_S} \\ d_{Y_S} \\ d_{Z_S} \\ d_{\varphi} \\ d_{\omega} \\ d_{\kappa} \end{bmatrix} + \begin{bmatrix} -a_{11} & -a_{12} & -a_{13} \\ -a_{21} & -a_{22} & -a_{23} \end{bmatrix} \begin{bmatrix} d_X \\ d_Y \\ d_Z \end{bmatrix} - \begin{bmatrix} l_x \\ l_y \end{bmatrix} \tag{6}$$

We introduce weights and write the error equations for the tie points, assigning them a weight of 1. We also write the error equations for the control points together with virtual (dummy) error equations, to which we assign the weight P. The normal equations, established by minimizing ΣPVV, are:

$$\begin{bmatrix} A^T P A & A^T P B \\ B^T P A & B^T P B \end{bmatrix} \begin{bmatrix} t \\ X \end{bmatrix} - \begin{bmatrix} A^T P L \\ B^T P L \end{bmatrix} = 0 \tag{7}$$
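A minimal numerical sketch of solving such normal equations (the design matrices A, B, the weight matrix P, and the observation vector L below are random stand-ins, not data from the paper):

```python
import numpy as np

# Sketch of the weighted normal-equation solve:
# [A^T P A  A^T P B] [t]   [A^T P L]
# [B^T P A  B^T P B] [X] = [B^T P L]
rng = np.random.default_rng(0)
A = rng.standard_normal((12, 6))     # partials w.r.t. exterior orientation
B = rng.standard_normal((12, 3))     # partials w.r.t. ground coordinates
L = rng.standard_normal(12)          # observation residual vector
P = np.eye(12)                       # weight matrix (identity here)

AB = np.hstack([A, B])
N = AB.T @ P @ AB                    # full normal matrix, blocks as above
u = AB.T @ P @ L                     # right-hand side [A^T P L; B^T P L]
corrections = np.linalg.solve(N, u)  # stacked solution [t; X]
t, X = corrections[:6], corrections[6:]
```

In a real bundle block the normal matrix is sparse and is reduced block-wise rather than solved densely, but the algebra is the same.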

According to formula (7), we can solve for the corrections to the exterior orientation elements and to the point coordinates. Aerial triangulation is a key step of 3D modeling once the oblique images are acquired; it simultaneously corrects and orients the imagery. The aerial-triangulation operation accurately matches same-name (tie) points across multiple images [8], so that every image point carries spatial geometric information with localization properties and scalability. Oblique photography adds four oblique viewing angles to the traditional vertical lens, allowing more overlapping combinations between images from different viewing angles. In this study, the forward and side overlap were both set to 75%, which yields more tie-point constraints between oblique images and improves the precision and reliability of the aerial triangulation. After obtaining the oblique images, we use the Smart3D automatic modeling software to match the same-name points of all images with a high-precision image matching algorithm, while extracting additional feature points from the images to form a dense point cloud (Figure 4).
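The 75% overlap translates directly into photo spacing. A small illustrative calculation (our own arithmetic, using the Table 1 resolution and a 2 cm GSD):

```python
# Sketch: how 75% forward overlap determines the distance between exposures.
gsd = 0.02                                # m/pixel
width_px, height_px = 5472, 3648          # Table 1 image resolution
overlap = 0.75                            # forward/side overlap used here

footprint_along = height_px * gsd         # ground length covered along-track
base = footprint_along * (1 - overlap)    # distance between exposures
print(f"Along-track footprint: {footprint_along:.2f} m, "
      f"photo base: {base:.2f} m")        # footprint ≈ 72.96 m, base ≈ 18.24 m
```

So at this overlap each ground point appears in at least four along-track exposures, which is what supplies the redundant tie-point constraints mentioned above.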

Figure 4

Point cloud generated from oblique imagery

Two kinds of point cloud registration and fusion

Automatic image matching extracts many dense point clouds from the aerial imagery. These point clouds must be registered and fused with the 3D laser point clouds acquired by the 3D scanner, so we register the point clouds before data fusion; the goal is to bring the two point cloud data sets into a consistent frame [9]. The ICP algorithm is a high-level registration method based on free-form surfaces whose main purpose is to find the rotation and translation parameters [10]. The algorithm takes the coordinate system of one point cloud as the global coordinate system and rotates and translates the other until the overlapping parts of the two clouds coincide. It finds the closest corresponding points between the two clouds and iteratively computes the optimal coordinate transformation by least squares: a rotation matrix R and a translation vector t that minimize the error function. The basic process is as follows: 1) match each point of the oblique-camera point cloud to the closest point of the scan point cloud; 2) solve for the rigid-body transformation (rotation matrix and translation vector) that minimizes the average distance to the corresponding closest points; 3) apply the transformation matrix to the target point cloud; 4) iterate until the maximum number of iterations is reached or the error falls below the threshold.
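Steps 1–4 can be sketched as follows (a minimal ICP illustration of our own: brute-force nearest neighbours and an SVD rigid-transform solve; a production pipeline would use a k-d tree, subsampling, and outlier rejection, and assumes the clouds are roughly pre-aligned):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iters=30, tol=1e-8):
    """Align source onto target; returns the transformed source cloud."""
    src = source.copy()
    prev_err = np.inf
    for _ in range(iters):
        # 1) match each source point to its nearest target point
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        nearest = target[d2.argmin(axis=1)]
        # 2) solve for the rigid transform minimising the mean distance
        R, t = best_rigid_transform(src, nearest)
        # 3) apply it to the moving cloud
        src = src @ R.T + t
        # 4) stop when the error no longer improves
        err = np.linalg.norm(src - nearest, axis=1).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src
```

With clouds as large and dense as those in this study, the brute-force distance matrix above is the part that must be replaced by a spatial index.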

We registered and fused the many point clouds obtained by UAV oblique photography with the point cloud from the backpack 3D laser scanner so that the two point cloud data sets are consistent. Since the accuracy of the point cloud from the HERON backpack 3D laser scanning system is much higher than that obtained by oblique photography, the 3D laser point cloud serves as the benchmark in the registration and fusion. Using a combination of the ICP (nearest-neighbor) algorithm and manual registration, the overall registration error of the point cloud is 2.37 mm, yielding high-precision fused point cloud data [11]. The registration accuracies are listed in Table 3. In Figure 5, the blue part of the point cloud is the source point cloud from 3D laser scanning and the red part is the target point cloud obtained by drone oblique photography. The algorithm matches each point with the nearest point of the target point cloud and then performs the matching calculation. The point cloud density and data volume in this study are relatively large, and the interior contains fine structure.

Table 3. 3D point cloud registration accuracy

N Name Error between point clouds/mm Aggregated points/% Confidence/%
1 F001 1.17 28 100
2 F002 1.20 42 100
3 F003 1.63 39 100
4 F004 2.14 28 100
5 F005 1.67 8 100
6 F006 1.99 32 100
7 F007 8.41 4 100
Overall point cloud error: 2.37 mm

Figure 5

Registered 3D point cloud

3D scene model construction

We use the 3D laser scanner to supplement the 3D point cloud data that the drone cannot capture, and we register and fuse the two source point clouds before constructing the 3D model. This effectively solves the problems of model holes and blurring. Figure 6 shows the rendering of the 3D model of the forest scenic scene constructed after point cloud registration and fusion [12]. The model views from different angles show that the detailed outline of the 3D model established after fusing the laser point cloud is clear, and its features intuitively reflect the real structural state of the ground objects. With this algorithm the model quality is greatly improved; at the same time, manual participation in the 3D modeling process is significantly reduced and modeling efficiency is greatly increased.

Figure 6

3D scene model effect

Discussion

Although 3D landscape modeling based on UAV oblique photography is low-cost and efficient, it relies on ground-control-point adjustment to guarantee modeling accuracy. Because the GPS signal under the canopy is poor and the canopies are densely interconnected, it is difficult to find ground-control-point marks and set control points, so the accuracy of oblique photography results cannot be guaranteed. Our method uses the backpack 3D laser scanner to collect the ground point cloud, which provides precise ground control for drone oblique photography. This eliminates the ground-control-point acquisition process and improves modeling efficiency.

The UAV oblique photography system collects multi-view images synchronously from different perspectives with its mounted multi-lens camera, obtaining rich, high-resolution texture information of the top and side views of the features. The 3D model established in this paper therefore has strong visual realism.

However, oblique photography alone cannot obtain the indoor environmental art landscape information under the canopy and so cannot guarantee the integrity of the scene model. Conversely, although backpack 3D laser scanning cannot obtain complete canopy information, it can obtain complete and accurate landscape information under the canopy. Combining the two technologies achieves their complementary advantages, reduces the cost of 3D modeling of indoor environment art landscapes, and improves the accuracy of the scene model.

We use the ICP algorithm to register and fuse the two point clouds and build a 3D model with clear details and outlines, in which the structure of near-ground objects is distinct. The model directly reflects the real structural state of the objects and effectively solves the problems of model holes and local smearing.

Conclusion

This paper first uses bundle adjustment and a projection-difference optimization algorithm for key-point extraction, automatic tie-point matching, and dense matching of the images obtained by oblique photography, processing a large number of dense point clouds. In the experiment, the backpack 3D laser scanning system supplemented the collection of the landscape under the canopy, yielding a 3D laser scan point cloud. We use the ICP algorithm to register and fuse the point clouds obtained by the two devices and construct a 3D scene model of the scenic forest area. The 3D scene modeling experiments show an overall point cloud registration error of 2.37 mm, which fully meets the accuracy requirements of 3D scene modeling of indoor environment art landscapes. The 3D model effectively solves the problems of model holes and blurring at the bottoms of trees, and its detailed outline clearly and intuitively reflects the real structure of the ground objects.

This method eliminates the field control-point acquisition step of oblique photography and compensates for the shortcomings of each device. It realizes the complementary advantages of the two technologies, reduces the cost of 3D modeling of indoor environment art landscapes, and improves modeling efficiency. This has practical significance for accurate forest measurement and refined management.



References

1. Zhang, Y., Li, L., & Liu, B. The Discussion on Interior Design Mode Based on 3D Virtual Vision Technology. Journal of Advanced Computational Intelligence and Intelligent Informatics, 2019; 23(3): 390–395. doi:10.20965/jaciii.2019.p0390
2. Raaphorst, K., Roeleveld, G., Duchhart, I., Van der Knaap, W., & Van den Brink, A. Reading landscape design representations as an interplay of validity, readability and interactivity: a framework for visual content analysis. Visual Communication, 2020; 19(2): 163–197. doi:10.1177/1470357218779103
3. Shelke, Y., & Chakraborty, C. Augmented reality and virtual reality transforming spinal imaging landscape: a feasibility study. IEEE Computer Graphics and Applications, 2020; 41(3): 124–138. doi:10.1109/MCG.2020.3000359
4. Şmuleac, A., Herbei, M., Popescu, G., Popescu, T., Popescu, C. A., Barliba, C., & Şmuleac, L. 3D Modeling of Patrimonium Objectives Using Laser Technology. Bulletin of University of Agricultural Sciences and Veterinary Medicine Cluj-Napoca. Horticulture, 2019; 76(1): 106–113.
5. Huang, J., Lucash, M. S., Scheller, R. M., & Klippel, A. Walking through the forests of the future: using data-driven virtual reality to visualize forests under climate change. International Journal of Geographical Information Science, 2021; 35(6): 1155–1178. doi:10.1080/13658816.2020.1830997
6. Jalandoni, A., & May, S. K. How 3D models (photogrammetry) of rock art can improve recording veracity: a case study from Kakadu National Park, Australia. Australian Archaeology, 2020; 86(2): 137–146. doi:10.1080/03122417.2020.1769005
7. Hu, X., Li, J., & Aram. Research on style control in planning and designing small towns. Applied Mathematics and Nonlinear Sciences, 2021; 6(1): 57–64. doi:10.2478/amns.2020.2.00077
8. Gençoğlu, M., & Agarwal, P. Use of Quantum Differential Equations in Sonic Processes. Applied Mathematics and Nonlinear Sciences, 2021; 6(1): 21–28. doi:10.2478/amns.2020.2.00003
9. Portnova, T. Information Technologies in Art Monuments Educational Management and the New Cultural Environment for Art Historians. TEM Journal, 2019; 8(1): 189–194.
10. Edler, D. Where spatial visualization meets landscape research and "pinballology": Examples of landscape construction in pinball games. KN-Journal of Cartography and Geographic Information, 2020; 70(2): 55–69. doi:10.1007/s42489-020-00044-1
11. Cai, Z., Fang, C., Zhang, Q., & Chen, F. Joint development of cultural heritage protection and tourism: the case of Mount Lushan cultural landscape heritage site. Heritage Science, 2021; 9(1): 1–16. doi:10.1186/s40494-021-00613-1
12. Song, M. J. The application of digital fabrication technologies to the art and design curriculum in a teacher preparation program: a case study. International Journal of Technology and Design Education, 2020; 30(4): 687–707. doi:10.1007/s10798-019-09524-6
