
Research on Digital Modeling and Optimization of Virtual Reality Scene



INTRODUCTION

Virtual reality technology [1,2] can create an environment similar to real society and enable people to interact with the virtual environment through multidimensional information. A campus roaming system built on geographic information, virtual reality, multimedia, broadband, and other technologies combines attribute data with geospatial information to construct a realistic virtual campus environment. Users can access the campus landscape through computer networks and roam, search, and query in the virtual campus environment from a terminal computer. This paper builds a virtual campus roaming system with functions such as panorama display, fixed-path roaming, interactive roaming, collision detection and response, and roaming system interaction, covering OpenGL primitive creation, rendering, and viewpoint transformation. The Z-buffer algorithm is used to optimize the 3D scenes and improve modeling efficiency.

INTRODUCTION OF VIRTUAL CAMPUS ROAMING SYSTEM AND RELATED TECHNOLOGIES
OpenGL technology

OpenGL (Open Graphics Library), also known as a graphics programming interface, is a three-dimensional graphics processing library and a cross-language, cross-platform programming interface. It is a powerful and convenient low-level graphics library. OpenGL is characterized by good rendering quality, high performance, stability, industry-standard status, reliability, flexibility, extensibility, and ease of use. It can be integrated into windowing systems such as those of Unix and Windows. OpenGL exposes its functionality as C functions, with which developers can conveniently render complete 3D graphics. This system takes Visual C++ as the development platform, uses OpenGL to establish the 3D scenes, and uses 3DS MAX to build the building models and realize virtual roaming [4].

Virtual reality

Virtual reality [5,6] combines applications from multiple fields and is characterized by autonomy, interactivity, and perceptibility. It is an advanced simulation technology in which users can experience a virtual world after it has been created. It mainly involves a simulated environment, perception, natural skills, and sensing devices. The computer simulates the environment and, with special equipment, immerses the user in the corresponding process so that interaction between the user and the environment can be realized.

Virtual reality technology based on OpenGL

Virtual reality technology based on OpenGL [7,8] realizes modeling with the theory of computer graphics. In this project, the aim is to achieve virtual campus roaming and interaction. This paper mainly discusses the design and implementation of a virtual campus roaming system based on the VC++ development platform and OpenGL. The sky is drawn with the Background node of the skybox algorithm, and terrain rendering is implemented with the LOD algorithm. On this basis, the virtual campus roaming system is realized. The system development process is shown in figure 1.

Figure 1. System development process

DESIGN OF ROAMING SYSTEM

The construction of the virtual scene can be divided into three steps. The first step is to lay out the virtual campus according to the campus distribution map and the specific plans of the buildings and environmental objects. In the second step, each individual entity in the campus is modeled separately. Landscape objects such as terrain and sky are continuously distributed in space, whereas objects such as buildings, trees, and streetlights are discrete entities that exist independently. The third step is to build the individual entity scenes and then integrate them into a complete virtual campus scene.

In this design, OpenGL is used for modeling and 3D texture mapping is used to draw the scenes. Discrete images collected by cameras and processed with Photoshop and other tools are stitched into panoramic images after calculation and processing. Finally, interactive control is implemented in the VC++ 6.0 environment to realize the virtual campus panoramic roaming system. The basic function modules of the system are shown in figure 2.

Figure 2. Basic function modules of the system

Data collection and collation

In realizing the virtual campus, complete 3D spatial data and image data are indispensable for constructing the scene. In building the terrain model, digital photogrammetry is usually adopted. Based on the basic principles of photogrammetry, simulation and digital models are generated by image processing and image matching with photographs as the original data, then converted into DEM format, and the resulting DEM and digital map are loaded into the corresponding database. The data processing flow is shown in figure 3.

Figure 3. Data processing flow

For buildings, the heights of different components are usually determined according to the architectural design drawings, from which the geometric features of a building's three-dimensional model can be further extracted. Where the roof requires less demanding three-dimensional reconstruction, the vector data can be obtained by scanning and digitizing existing maps. If only two-dimensional vector data are available, the height has to be inferred from the number of floors.

For texture data, the different reflection characteristics of scene surfaces can be represented by texture images. Texture mapping here maps part of an image onto an image fragment; the mapping relates the image colors to image coordinate positions, and the RGBA colors of the fragment can be further modified in this process. To draw pixels with the current texture, texture coordinates must be specified for each vertex before it is drawn, which only requires calling the glTexCoord2d(s, t) function, where s and t are the coordinates generated for the 2D texture. For all textures, regardless of their size, the coordinate (0,0) corresponds to one corner of the texture and (1,1) to the diagonally opposite corner (in OpenGL, (0,0) is the lower-left corner), so each texture coordinate should be a number between 0 and 1.

The texture map implementation code is as follows:

namespace OGL
{
  class CCylinder
  {
    float m_Radia;          // cylinder radius
    float m_Height;         // cylinder height
    int m_Slice;            // number of segments the cylinder is divided into
    CGLTexture* m_pTexture; // panoramic texture applied to the cylinder

  public:
    CCylinder() : m_pTexture(NULL)
    {
      m_Radia = 1280 - 128;
      m_Height = 1024 + 1024;
      m_Slice = 12;
      char CylinderTex[255] = "Scene1.JPG";
      LoadCylinderTexture(CylinderTex);
    }

    void LoadCylinderTexture(char* CylinderTex)
    {
      if (m_pTexture != NULL) delete m_pTexture;
      m_pTexture = new CGLTexture(CylinderTex);
    }

    ~CCylinder()
    {
      delete m_pTexture;
    }
  };
}
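To show how the texture coordinates described above are actually paired with vertices, the following is a minimal sketch of drawing such a textured cylinder in immediate-mode OpenGL. The function DrawTexturedCylinder and its parameters are illustrative additions and assume the panorama has already been loaded into an OpenGL texture ID; this is not part of the system's original code.

#include <GL/gl.h>
#include <math.h>

// Illustrative helper (not part of the original class): wraps a panoramic texture
// once around the side of a cylinder by pairing texture coordinates with vertices.
void DrawTexturedCylinder(GLuint textureId, float radius, float height, int slices)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, textureId);

    const float PI = 3.14159265f;
    glBegin(GL_QUAD_STRIP);
    for (int i = 0; i <= slices; ++i)
    {
        float s = (float)i / slices;        // texture s coordinate in [0,1]
        float a = s * 2.0f * PI;            // angle around the cylinder axis
        float x = radius * cosf(a);
        float z = radius * sinf(a);
        glTexCoord2d(s, 1.0); glVertex3f(x,  height * 0.5f, z);  // top ring
        glTexCoord2d(s, 0.0); glVertex3f(x, -height * 0.5f, z);  // bottom ring
    }
    glEnd();
}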

Three-dimensional model construction and visualization

In the development of this system, combining 3d Max software with OpenGL reduces the complexity of the system construction. The key to building the virtual campus model is to analyze the morphological characteristics of the scene. According to their state characteristics, the scene models can be divided into dynamic entity models and static entity models.

After the corresponding texture pictures and information data have been obtained, the scene models are built according to the composition of the virtual campus system. The three-dimensional model of the system is shown in figure 4.

Figure 4. System 3D model

Construction of static entity model of virtual campus

Static entities on campus include teaching buildings, the library, the playground, roads, greenery, and so on. Static modeling mainly consists of geometric modeling of an entity's own shape and physical modeling of properties caused by its external environment. The physical modeling of a static entity mainly concerns the different surface textures produced by the external environment, while the geometric model describes the entity's own shape. In the geometric modeling of static entities, depending on the complexity of the collected model data, 3d Max can be used directly to build regular static entity models; for irregular models, AutoCAD is first used to trace the model contour [9] and then 3d Max is used to build the model.

Construction of dynamic physical model of virtual campus

In constructing the campus system model, the moving parts of buildings may change the internal structure of the original model organization. Therefore, degree-of-freedom nodes linked to the moving parts should be added to the model file produced by the modeling, with the corresponding position coordinates set. The moving parts of the model are then analyzed on the basis of these degrees of freedom so as to determine their motion relations.

As an important part of the virtual campus system model, the modeling process of the dynamic entity model [10] is as follows. The first step is to use the modeling tool 3d Max to build both the static and the dynamic parts of the campus system model; in building the dynamic model, degrees of freedom are added. The second step is to enhance the animation display effect of the dynamic entity model. The last step is to display the animation effect of the dynamic model within the virtual campus system model.

Design of sky scene

The sky in this system is realized with the Background node of the skybox algorithm [11]. It adds a blue-sky-and-white-cloud effect to the virtual campus scene, which improves the realism of the three-dimensional virtual campus. In this node, the skyColor and skyAngle field values are used to control the color of the sky. The skyColor field specifies the color of the sky of the stereo space background, whose values are combinations of red, green, and blue. The skyAngle field specifies the angles at which the corresponding colors are applied on the stereo space background. All the sky colors are set so as to produce a smooth transition and gradient, which makes the sky look more realistic.

The skybox is realized by using a rectangular box whose faces carry sky perspective maps, generating the corresponding cube. First, create a cube that is large enough, then paste the different sky textures representing the weather effects onto the different faces of the cube so that the sky does not look flat and rough, and use a light blue sky to blend with the background. In this way a high degree of fidelity in the simulation is achieved. The images used as the sky background should be in BMP (bitmap) format. The image size is set according to the size parameter; the size has strict requirements and should preferably be a power of two. The save path and name are set, and when rendering it must be ensured that the four side faces connect seamlessly with the adjacent images.
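As a rough illustration of the idea described above, the following is a minimal sketch of drawing one face of such a skybox in immediate-mode OpenGL, assuming the six face textures have already been loaded into OpenGL texture IDs. The function name and parameters are hypothetical and introduced only for illustration.

#include <GL/gl.h>

// Draw one face of the skybox cube (here the face at z = -size); the other five
// faces follow the same pattern with their own textures and vertex positions.
void DrawSkyboxFace(GLuint faceTexture, float size)
{
    glDepthMask(GL_FALSE);                  // keep the sky behind every other object
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, faceTexture);

    glBegin(GL_QUADS);
    glTexCoord2d(0.0, 0.0); glVertex3f(-size, -size, -size);
    glTexCoord2d(1.0, 0.0); glVertex3f( size, -size, -size);
    glTexCoord2d(1.0, 1.0); glVertex3f( size,  size, -size);
    glTexCoord2d(0.0, 1.0); glVertex3f(-size,  size, -size);
    glEnd();

    glDepthMask(GL_TRUE);
}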

The optimization design of three-dimensional model

Building geometry model optimization

In the virtual campus system, the models are built according to a classification of the school buildings' appearance and structure. The same model is reused for buildings with the same appearance or structure, and relatively complex models are simplified through a splitting method before modeling is carried out.

In building modeling, the geometric model includes both the roof and the walls. The walls can be realized by means of images or line segments, while the roof contains all the vertex coordinates. Because different buildings have different roof shapes, a single general model cannot express every roof; the relevant information must be derived from the coordinate data according to the corresponding rules, and all buildings need to be classified according to the shape characteristics of their roofs. The geometric model data therefore differ completely from building to building.

Texture mapping of 3d virtual campus buildings

Line segments are usually used to model building windows, describing and rendering the window frames. Because there are many buildings on the campus, modeling them one by one takes too much time and affects the system's speed, so the windows on the surfaces of different buildings are modeled with texture mapping instead. The specific measure is to process the texture data of all buildings in the school, convert them to the corresponding format, and apply the corresponding mapping to all faces.

Blanking process

In realistic graphics generation, the scene model needs hidden-surface removal (blanking) to improve the loading efficiency of the model. Scene blanking takes a given viewpoint and line of sight and determines which object surfaces in the scene are visible and which are hidden by occlusion. Blanking can be regarded as a sorting problem, and the efficiency of the sorting affects the efficiency of the blanking algorithm.

When a 3D object is observed from a certain viewpoint, only some of the points, lines, and faces distributed on the object's surface can be seen, while other parts are occluded by them. To display 3D objects correctly, the points and lines that are not visible on the surface must be eliminated after the line of sight has been determined. This method is called a blanking algorithm.

The back-face culling algorithm, the Z-buffer algorithm, the painter's algorithm, and the scan-line algorithm are the most common of the blanking algorithms studied so far.

The Z-buffer algorithm mainly compares the depths of the surfaces projected onto each pixel of the projection plane. It does not need the geometric data of the whole scene at once, so it is the simplest of the image-space blanking algorithms. For a given pixel, the Z buffer stores the surface currently nearest to it; if a new surface is closer to the viewpoint than the buffered depth, the new surface is saved instead.

The pseudocode of the Z-buffer algorithm is as follows:

for (x = 0; x < xmax; x++)
    for (y = 0; y < ymax; y++)
    {
        set the depth value of element (x, y) of the Z buffer to -1, the minimum depth value;
        set the color of element (x, y) of the frame buffer to the background color;
    }
for (each polygon in the scene)
    for (each pixel (x, y) covered by the polygon's projection)
    {
        calculate the depth d of the polygon at pixel (x, y);
        if (d > the value of the Z buffer at (x, y))    // the new surface is closer to the viewpoint
        {
            set the depth value of element (x, y) of the Z buffer to d;
            set the color of element (x, y) of the frame buffer to the current polygon's color;
        }
    }
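For concreteness, the following is a minimal, self-contained C++ sketch of the same idea, assuming a simple buffer of fixed size; the names ZBuffer and Shade and the packed color type are illustrative and not part of the system's actual code.

#include <vector>
#include <limits>

// A minimal Z-buffer sketch: depth[] keeps the closest depth seen at each pixel
// (larger value = closer to the viewpoint, matching the pseudocode above),
// and color[] keeps the color of the surface that produced that depth.
struct ZBuffer
{
    int width, height;
    std::vector<float> depth;
    std::vector<unsigned int> color;   // packed RGBA, illustrative

    ZBuffer(int w, int h, unsigned int background)
        : width(w), height(h),
          depth(w * h, -std::numeric_limits<float>::infinity()),
          color(w * h, background) {}

    // Called once per pixel covered by a polygon's projection.
    void Shade(int x, int y, float d, unsigned int polygonColor)
    {
        int idx = y * width + x;
        if (d > depth[idx])            // the new surface is closer: overwrite
        {
            depth[idx] = d;
            color[idx] = polygonColor;
        }
    }
};

Rasterization of each polygon would call Shade for every covered pixel; whichever surface is nearest at a pixel ends up in color[].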

The LOD algorithm is used to optimize the polygon detail models of the campus scene, the library, and so on. The level of detail of a spatial model is chosen according to its distance from the viewer, and the LOD node provides this function: different distances correspond to different levels of detail. Nearby geometry is drawn with full geometric detail, while geometry in the distance is greatly simplified and drawn with coarse detail, so close shots are rendered finely and distant views coarsely. A comparison of the test results of the system before and after optimization is shown in table 1.

TABLE 1. COMPARISON OF TEST RESULTS BEFORE AND AFTER OPTIMIZATION
(the first three columns are the optimization measures applied)

DEF/USE reuse | Visibility limit set | LOD used | Initial loading speed (s) | Degree of fluency
false         | false                | false    | 11.2                      | not fluent
true          | false                | false    | 10.1                      | slightly fluent
false         | true                 | false    | 8.9                       | average
false         | false                | true     | 7.6                       | average
true          | true                 | true     | 6.5                       | fluent
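To illustrate the distance-based switching that an LOD node performs, here is a minimal C++ sketch; the Model type, the three detail levels, and the threshold distances are hypothetical values chosen for illustration, not the system's actual parameters.

#include <cmath>

// Hypothetical model handle offering three versions of decreasing detail.
struct Model
{
    void DrawHighDetail()   { /* render the full-detail mesh */ }
    void DrawMediumDetail() { /* render a simplified mesh */ }
    void DrawLowDetail()    { /* render a coarse placeholder mesh */ }
};

// Choose the level of detail from the distance between viewer and model,
// mimicking what an LOD node does with its range values (thresholds are illustrative).
void DrawWithLOD(Model& m, float vx, float vy, float vz, float mx, float my, float mz)
{
    float dx = vx - mx, dy = vy - my, dz = vz - mz;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);

    if (dist < 50.0f)       m.DrawHighDetail();    // close shot: full detail
    else if (dist < 200.0f) m.DrawMediumDetail();  // middle distance
    else                    m.DrawLowDetail();     // distant view: coarse detail
}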
THE REALIZATION OF VIRTUAL CAMPUS ROAMING SYSTEM INTERACTIVITY

One of the most important features of virtual reality technology is real-time interactivity, which is the core of virtual campus construction. After the 3D scene has been built, it needs to be debugged interactively. The system realizes virtual campus roaming on the computer through real-time interactive control and external input devices such as the mouse and keyboard.

Implementation of interactive roaming

In interactive roaming, the system is mainly controlled by the "NavigationInfo" node and the "Viewpoint" node. The browsing speed and mode are set by the "NavigationInfo" node, and the position and orientation used when browsing a scene in the campus are set by the "Viewpoint" node. Direct interaction with the campus scene is realized with the help of external devices such as the keyboard and mouse, as shown in figure 5.

Figure 5. Direct interaction function diagram

Control of interactive roaming

In realizing this system, the objects in the scene must be traversed and redrawn whenever the user's viewing angle changes. The specific methods include transforming the three-dimensional objects in the scene or changing the interactive scene image through a viewing transformation.

The "gluLookAt" function is the method of view transformation proposed in OpenGL, which encapsulates the command of rotation and translation. It changes the matrix through eye position, reference and vector, and maps the target point to the corresponding z-axis. The origin point belongs to its observation point. In the process of projection matrix, the scene is mapped to the center of the relevant visual area and the upward vector is mapped to the y-axis. But the upward vector is not necessarily balanced between the line of sight to the line of reference in the process.

The interactive roaming control model is as follows:

Initial model

static t3DModel g_3DModel[2];

CLoad3DS* m_3DS = new CLoad3DS;
m_3DS->Init("neicum2.3DS", 0);
glLoadName(0);

m_3DS->show3DS(0, g_3DModel[0].Position.x, g_3DModel[0].Position.y, g_3DModel[0].Position.z, 0.6, g_3DModel[0].longitude, g_3DModel[0].latitude);

Selection function

hits = glRenderMode(GL_RENDER);
if (hits <= 0)
    return -1;
return selectBuf[(hits - 1) * 4 + 3];

Translation function

g_3DModel[hits].Position.x += m_xTranslation;
g_3DModel[hits].Position.y += m_yTranslation;

Rotation function

g_3DModel[hits].longitude -= theta;
g_3DModel[hits].latitude += phi;

Realization of collision detection function

Collision detection technology [12] mainly applies to objects inside the scene. As the global "Camera" moves through the scene, it is equivalent to the observer roaming in the scene. If there is no collision detection, the observer passes straight through objects when moving inside some scenes, which does not conform to reality, so collision detection technology needs to be introduced.

This module mainly uses the bounding-box algorithm [13] to judge the object regions in the three-dimensional scene. Bounding boxes are established to determine which objects in the scene need collision detection. When a collision occurs at the bounding-box level, a further judgment is made as to whether a collision occurs at the triangle level; the bounding-box levels are divided by region, and the final detection is usually carried out at the triangle level. Compared with common detection algorithms, the accuracy is usually higher and the performance loss caused by collision detection is reduced. The specific implementation proceeds as follows. First, the position of the object is calculated in each frame according to its laws of motion or the user's input, without yet considering collisions. Then the triangles in the scene are checked in a loop; each iteration of the loop performs the following steps (see the sketch after this list).

1. Find the plane containing the current triangle, called plane S.

2. Judge the positions of the object in the previous frame and the current frame and their relation to plane S. If the object was in front of the plane in the previous frame and has reached the plane in this frame, carry out the check in the next step.

3. The object moving from one side of the plane to the other means that it has passed through the plane, but the plane is unbounded, so this alone does not indicate a collision with the triangle; the test must be restricted to the region bounded by the triangle's three edges. Through the three edges, planes PS1, PS2, and PS3 are erected perpendicular to S with their normals pointing inward, and the object's position is judged against them. If the object lies inside the corresponding planes, go to step 4; otherwise go to step 5.

4. After the object collides with the triangle, correct its position so that it moves along S.

5. Take the next triangle as the current triangle and go back to step 1 when it is confirmed that the object has not collided.
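The following is a minimal sketch of the plane and edge tests at the heart of steps 1-3, treating the moving object as a single point and using simple Vec3 and Triangle types defined here for illustration; it is a simplified illustration of the idea, not the system's actual collision code.

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y,
                                             a.z * b.x - a.x * b.z,
                                             a.x * b.y - a.y * b.x }; }
static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Triangle { Vec3 a, b, c; };

// Returns true if the point moving from prev (previous frame) to curr (current frame)
// crosses the triangle's plane S and the crossing point lies inside the triangle's edges.
bool CrossesTriangle(const Triangle& tri, Vec3 prev, Vec3 curr)
{
    // Plane S of the triangle: normal n through vertex tri.a.
    Vec3 n = cross(sub(tri.b, tri.a), sub(tri.c, tri.a));

    float dPrev = dot(n, sub(prev, tri.a));   // signed distance of previous position
    float dCurr = dot(n, sub(curr, tri.a));   // signed distance of current position
    if (dPrev * dCurr > 0.0f)                 // both positions on the same side: no crossing
        return false;
    float denom = dPrev - dCurr;
    if (std::fabs(denom) < 1e-8f)             // moving parallel to (or lying in) the plane
        return false;

    // Intersection point of the motion segment with plane S.
    float t = dPrev / denom;
    Vec3 p = { prev.x + t * (curr.x - prev.x),
               prev.y + t * (curr.y - prev.y),
               prev.z + t * (curr.z - prev.z) };

    // Edge tests: p must lie on the interior side of the planes erected on the three edges.
    Vec3 verts[3] = { tri.a, tri.b, tri.c };
    for (int i = 0; i < 3; ++i)
    {
        Vec3 e0 = verts[i];
        Vec3 e1 = verts[(i + 1) % 3];
        Vec3 opposite = verts[(i + 2) % 3];
        Vec3 edgeNormal = cross(sub(e1, e0), n);            // perpendicular to the edge, in plane S
        float sideP  = dot(edgeNormal, sub(p, e0));         // which side of the edge p is on
        float sideIn = dot(edgeNormal, sub(opposite, e0));  // the triangle's interior side
        if (sideP * sideIn < 0.0f)
            return false;                                   // p lies outside this edge
    }
    return true;
}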

CONCLUSION

Based on virtual reality technology, this paper introduces the development of an OpenGL virtual campus roaming system and further studies the basic algorithms for developing such a system. It also introduces and realizes the modeling of the scene models and the interactive roaming functions, completing the design of the 3D virtual campus roaming system.

Testing shows that the optimized virtual campus roaming system runs smoothly. The virtual campus panoramic roaming system presents the 3D landscape realistically, and users can roam in the 3D virtual campus scene with the help of external devices such as the mouse and keyboard, realizing interactive operation of the campus in virtual form.
