Influence analysis of piano music immersion virtual reality cooperation based on mapping equation

Published: 15 Jul 2022
Volume & Issue: AHEAD OF PRINT
Page range: -
Received: 13 Mar 2022
Accepted: 12 May 2022
Introduction

Virtual reality technology is a technical mode based on a simulation system. On the basis of simulating the real world, it injects the design elements of a three-dimensional world to form a virtual world. With the support of virtual reality technology, such a virtual world concretizes abstract objects and thereby improves people's experience when observing or using them. As a technical means of integrating diverse information, virtual reality technology couples three-dimensional dynamic information with entity behavior, guiding users to immerse themselves in the simulated environment.

Based on the real-time dynamic information generated in the virtual and real dimensions, visual perception is engaged and the differences between the real world and the virtual world are weakened; the sensory effect of the virtual world then arises spontaneously, forming an artistic form in which the virtual and the real are integrated. Before studying how to apply virtual reality technology to art design, we must analyze the immersion effect. The immersion effect mentioned here is a specific definition of the immersion mode. From the perspectives of psychology and physiology, the reason why virtual reality technology can be fascinating and bring people's thoughts and consciousness completely into the virtual world created by artists depends on the immersion the user can achieve in that world. If the artistic atmosphere of the virtual world is perceived purely rationally, within the time and space dimensions of the objective world, the immersion effect is difficult to achieve. What really attracts the user's thinking and consciousness in the virtual world is the appeal of the art design itself together with the artistic atmosphere created by virtual reality technology [1], so the question of how to guide the user's creation toward the real world hardly arises. We should therefore consider the artistic expression of virtual reality technology from the user's perspective: users perceive it in the two dimensions of psychology and physiology. Hence, from the standpoint of use, the more immersed a user is in the virtual world, the higher the degree of experience and the more obvious the effect of artistic expression.

The three characteristics of virtual reality technology are imagination, interaction and immersion. The three are relatively independent yet harmonious, forming different immersion dimensions. In art design, only by giving play to the imagination space and improving the interaction effect can the immersion effect of art works be increased; this immersion effect is also the most fascinating expression of art works realized through virtual reality technology. Within virtual reality technology, the emergence of VR video has improved the user's immersion experience [2]. The fundamental factor is that physiological immersion visually brings users into a specific virtual scene and produces psychological resonance with it, resulting in corresponding psychological immersion. Most studies have analyzed immersive communication. This immersive mode based on user experience is an unprecedented form of expression for art design; applied to different fields, it also improves the internal expression and value of art design. Under the influence of immersive communication, virtual reality technology extends to different immersion dimensions and stimulates the immersion effect of art works from the two dimensions of physiology and psychology.

Liu J pointed out that this basic skill may take several months to develop, but most review systems cannot solve the problem of students' lack of interest and motivation. This issue is particularly important for children who are introduced to music education through school courses or parents rather than by their own wish. Virtual reality technology can create perceptual and cognitive connections between the instrument (keyboard), the instructions (notes) and the music (sound). The connection between visual effects and physical keys enables users to quickly play specific tunes, so it is possible to improve the learning experience and increase motivation. After consulting the literature, the author found that many excellent computer music education systems have been developed [3]. Chakrabarty N and others developed their system together with two music teachers. The application uses a standard MIDI interface to connect the piano to the computer and obtain performance data. MIDI was selected because it conveys a great deal of performance-related information, including the speed and strength with which notes are played and the use of the pedals [4]. Muhammed and others developed Piano Forte, which focuses on the interpretation of music rather than the teaching of basic skills. Muhammed pointed out that music is neither notes printed on paper nor a motor skill executing computer instructions; music is an art form, and computers cannot teach or analyze emotional content. The system introduces more advanced analysis functions, such as articulation accuracy and chord synchronization, which describe precisely how each note is played. For example, staccato indicates that a note is separated from adjacent notes, while legato indicates a smooth transition between notes without pause. Synchronization refers to whether the notes in a chord are played at the same time and whether notes of the same length are played evenly. These characteristics form the basis of advanced musical performance ability. In terms of the technology used, Piano Forte adopts a hardware setup similar to Piano Tutor [5].

Based on the current research, this paper presents a VR image recognition method based on a mapping equation. Firstly, the main colors of the target image are obtained through feature extraction, and the extracted results are smoothed. Through the two steps of color space conversion and tone reverse mapping, the morphological transformation of the image is realized to obtain a transition image. A K-means clustering matching operation is performed on the transition image, and the optimal energy equation of color migration is obtained by hierarchical migration and global migration. The reverse-mapping color migration algorithm is then integrated into the virtual reality application. A virtual piano is developed with the HTC Vive suite and a Leap Motion sensor fixed on the helmet as the hardware platform, and Unity3D with the related SteamVR and Leap Motion plug-ins as the software platform. The virtual piano forms a virtual keyboard from Cube components. By scripting the response functions for a Cube component being approached, pressed or released, the performance events of the virtual piano keyboard are recorded, and the sound of the virtual piano is realized with the help of MIDI.

Mapping equation based on VR image
Main color extraction of VR image

In order to realize color migration between two uncorrelated images, a color channel must be established in lαβ space, which in turn requires a relationship between the color features of the color image and the shape image. First, the main color descriptors of the two images are extracted. Let the main color descriptor of the color image be F_y, with the expression:

$$F_y = \{p_a, c_a, v_a\}, \quad a = 1, 2, \ldots, N \tag{1}$$

where N represents the number of main colors in the color image, and p_a is the percentage of the a-th main color among the image pixels, calculated as the ratio of the number of pixels of that main color to the total number of pixels in the image. In formula (1), c_a represents the vector of the a-th main color, and v_a represents the variance of the pixels near the main color vector. Since the numbers of main colors extracted from the color image and the shape image are the same, which is convenient for mapping and matching, the distance between the main colors of the two images must be calculated, as shown in equation (2):

$$d(c^s, c^r) = \min \sum_{a=1}^{N} d_c\left(c^s_{\varphi(a)}, c^r_a\right) \tag{2}$$

where c^s and c^r represent the main color vectors of the color image and the shape image respectively, and φ(a) belongs to a set of permutations over N. Using the obtained main color distances, the color image and the shape image are segmented into main color regions, and features are extracted within each region. According to the main color distance results, the color histograms of the two images are calculated in RGB space, parts belonging to the same main color are merged, and the whole merging process is divided into multiple steps [6]. Color feature extraction is then carried out in each segmented region. Let all pixels in a segmented region be represented by the set SR_a, which contains n pixels. The average brightness value in the segmented region can then be calculated by equation (3):

$$f_1(a) = \frac{1}{n} \sum_{(x, y) \in SR_a} I(x, y) \tag{3}$$

In equation (3), I(x, y) represents the brightness of the pixel at position (x, y) in the image, from which the brightness feature is extracted. Similarly, features such as transparency and chromaticity can be extracted from the image.
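To make the descriptor concrete, the following minimal Python sketch clusters an image into N main colors and computes p_a, c_a, v_a of equation (1) and the per-region mean brightness of equation (3). The function names, the use of scikit-learn's KMeans for this step, and the luminance proxy are our assumptions, not the paper's implementation.

```python
# Hedged sketch of the main color descriptor F_y = {p_a, c_a, v_a} and the
# region brightness f_1(a); names and clustering choice are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def main_color_descriptor(image, n_colors=5):
    """image: H x W x 3 float array; returns [(p_a, c_a, v_a), ...] and labels."""
    pixels = image.reshape(-1, 3)
    km = KMeans(n_clusters=n_colors, n_init=10).fit(pixels)
    descriptor = []
    for a in range(n_colors):
        mask = km.labels_ == a
        p_a = mask.mean()                      # pixel share of main color a
        c_a = km.cluster_centers_[a]           # main color vector c_a
        v_a = pixels[mask].var(axis=0)         # variance around c_a
        descriptor.append((p_a, c_a, v_a))
    return descriptor, km.labels_

def region_mean_brightness(image, labels, a):
    """f_1(a): average brightness over region SR_a, cf. equation (3)."""
    intensity = image.reshape(-1, 3).mean(axis=1)   # simple proxy for I(x, y)
    return intensity[labels == a].mean()
```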

Source image smoothing

Boundary smoothing is performed on the source image to reduce errors caused by segmentation; a bilateral filter is used to smooth color and spatial information at the same time:

$$D\left(A(x, y), A'(x', y')\right) = b \times e^{-\frac{\sqrt{(x - x')^2 + (y - y')^2}}{\sigma}} + (1 - b) \times e^{-\frac{\left|A(x, y)\right| - \left|A'(x', y')\right|}{\sigma}} \tag{4}$$

In the formula, parameter b represents the balanced color coefficient of the image, and σ is the dynamic selection coefficient of the spatial structure; b and σ are generally taken as constants. The spatial smoothing of the source image ensures a smooth transition of the region boundary colors after color migration.
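A direct, unoptimized reading of equation (4) can be sketched as follows; the values of b, σ and the neighborhood radius are assumed for illustration, and A is a 2D brightness array.

```python
# Bilateral weight D(...) from equation (4) and a naive smoothing pass;
# parameter values are assumptions, not from the paper.
import numpy as np

def weight(x, y, i, j, A, b=0.5, sigma=10.0):
    spatial = np.exp(-np.hypot(x - i, y - j) / sigma)               # position term
    range_ = np.exp(-abs(float(A[x, y]) - float(A[i, j])) / sigma)  # color term
    return b * spatial + (1.0 - b) * range_

def bilateral_smooth(A, radius=2, b=0.5, sigma=10.0):
    """Weighted neighborhood average using the weight above."""
    H, W = A.shape
    out = np.empty_like(A, dtype=float)
    for x in range(H):
        for y in range(W):
            acc = wsum = 0.0
            for i in range(max(0, x - radius), min(H, x + radius + 1)):
                for j in range(max(0, y - radius), min(W, y + radius + 1)):
                    w = weight(x, y, i, j, A, b, sigma)
                    acc, wsum = acc + w * A[i, j], wsum + w
            out[x, y] = acc / wsum
    return out
```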

Color space conversion

Generally, images are stored in RGB format, the standard color space for images, in which R represents red, G represents green and B represents blue; other colors are obtained by superimposing these three components. In the process of color migration, however, the target image and the source image must be converted from RGB color space to lαβ color space. Because the color transfer algorithm between images involves reverse mapping, cross-mapping easily occurs in RGB color space, whereas the correlation between the three channels in lαβ space is small, so channel crossing does not occur when mapping in this space [7]. Since an image cannot be converted directly from RGB to lαβ space, it is first converted to LMS color space through equation (5):

$$\begin{bmatrix} L \\ M \\ S \end{bmatrix} = \begin{bmatrix} 0.1478 & 0.5470 & 0.0240 \\ 0.2498 & 0.3677 & 0.0365 \\ 0.0642 & 0.1745 & 0.9854 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \tag{5}$$

Then the image in LMS space can be converted into lαβ space through equation (6):

$$\begin{bmatrix} l \\ \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{3}} & 0 & 0 \\ 0 & \frac{1}{\sqrt{6}} & 0 \\ 0 & 0 & \frac{1}{\sqrt{2}} \end{bmatrix} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & -2 \\ 1 & -1 & 0 \end{bmatrix} \begin{bmatrix} L \\ M \\ S \end{bmatrix} \tag{6}$$

In lαβ color space, l represents the gray-scale information, while α and β carry the color information.
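The two-step conversion might be sketched as below. The RGB-to-LMS coefficients are those printed in equation (5); applying a base-10 logarithm to LMS before the second matrix follows the common Reinhard-style formulation of this transform and is our assumption here.

```python
# Sketch of the RGB -> LMS -> l-alpha-beta conversion of equations (5)-(6).
# The log10 step is assumed from the usual formulation of this transform.
import numpy as np

RGB2LMS = np.array([[0.1478, 0.5470, 0.0240],
                    [0.2498, 0.3677, 0.0365],
                    [0.0642, 0.1745, 0.9854]])

LMS2LAB = np.diag([1 / np.sqrt(3), 1 / np.sqrt(6), 1 / np.sqrt(2)]) @ \
          np.array([[1, 1, 1],
                    [1, 1, -2],
                    [1, -1, 0]])

def rgb_to_lab(image):
    """Convert an H x W x 3 RGB image into l-alpha-beta space."""
    lms = image @ RGB2LMS.T
    log_lms = np.log10(np.clip(lms, 1e-6, None))   # avoid log(0)
    return log_lms @ LMS2LAB.T
```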

Tone reverse mapping

In order to ensure the accuracy of color migration, the tone pixels in the image must be reverse mapped; the processing flow is shown in Figure 1.

Figure 1

Flow chart of tone reverse mapping

As the figure shows, the color must be edited and defined before reverse mapping. Let the initial color feature of pixel i in the color image be a_c; after the color feature transformation it becomes a'_c. The editing function producing a'_c can be expressed by equation (7):

$$a'_c = f(a_c, e_c) \tag{7}$$

where e_c represents the color editing parameter of pixel i. The color editing intensities are represented by g_c and w_c respectively, each taking values between 0 and 1. The frame color editing of the color migration algorithm can then be regarded as a mapping process [8].

Taking the color brightness between images as an example, the average logarithmic brightness is used as the scale factor for the feature value, and the input brightness is linearly scaled:

$$f_c = \frac{\alpha}{F_{avg}} F_0 \tag{8}$$

In equation (8), F_0 is the input brightness value and F_avg is the logarithmic average brightness. The reverse mapping formula for the brightness component of color migration between images can then be expressed as:

$$F_{out} = \frac{f_c^{\frac{1}{\gamma}} - F_{\min}^{\frac{1}{\gamma}}}{F_{\max}^{\frac{1}{\gamma}} - F_{\min}^{\frac{1}{\gamma}}} \tag{9}$$

where F_out is the result of reverse mapping and γ is the reverse mapping coefficient, often taken as 2.2.
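Equations (8) and (9) can be combined into one short function. The key value α, and the choice of taking F_min and F_max over the scaled brightness f_c, are our assumptions; γ = 2.2 follows the text.

```python
# Hedged sketch of the tone reverse mapping of equations (8)-(9).
import numpy as np

def reverse_map(F0, alpha=0.18, gamma=2.2):
    """F0: array of input brightness values; returns normalized F_out."""
    F0 = np.clip(F0, 1e-6, None)
    F_avg = np.exp(np.mean(np.log(F0)))        # logarithmic average brightness
    f_c = alpha / F_avg * F0                   # equation (8)
    g = 1.0 / gamma
    f_min, f_max = f_c.min() ** g, f_c.max() ** g
    return (f_c ** g - f_min) / (f_max - f_min)   # equation (9)
```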

According to the result of tone reverse mapping, the K-means clustering method is used to cluster pixels of the same type, and pixels of corresponding regions in the two images are matched to define the color migration range. First, K pixels are selected as cluster centers in the color image; then the distance between each pixel and each cluster center is calculated, and every pixel is assigned to its nearest cluster center until all pixels in the image have been assigned. Regions are then allocated according to the clustering results: the closest regions are matched first according to the reverse mapping method, and the remaining regions of the shape image and the color image are matched in turn, so as to achieve the best matching result.
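A greedy, closest-first version of this matching step can be sketched as follows, under our own simplifications: K-means on raw pixels, and the Euclidean distance between main colors standing in for d_c in equation (2).

```python
# Illustrative region matching: cluster both images, then pair regions
# closest-first by main-color distance. Simplified, not the authors' code.
import numpy as np
from sklearn.cluster import KMeans

def match_regions(color_img, shape_img, k=5):
    """Return (shape_region, color_region) index pairs, closest pair first."""
    c = KMeans(n_clusters=k, n_init=10).fit(color_img.reshape(-1, 3)).cluster_centers_
    s = KMeans(n_clusters=k, n_init=10).fit(shape_img.reshape(-1, 3)).cluster_centers_
    dist = np.linalg.norm(s[:, None] - c[None, :], axis=2)   # k x k distances
    pairs = []
    for _ in range(k):
        i, j = np.unravel_index(np.argmin(dist), dist.shape)
        pairs.append((i, j))
        dist[i, :] = np.inf    # each region is matched exactly once
        dist[:, j] = np.inf
    return pairs
```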

VR piano design based on Unity3D and Leap Motion
VR hardware platform construction

This paper uses HTC Vive virtual reality suite as the hardware platform of virtual piano. A complete set of HTC Vive virtual reality equipment is composed of a helmet, two handles and two locators.

Vive uses the Lighthouse indoor positioning technology. The two rows of LED lights in each locator emit scanning beams six times per second, sweeping the 15 × 15 feet positioning space alternately in the horizontal and vertical directions. The HTC Vive helmet and handles carry more than 70 photosensitive sensors. When a beam sweep begins, the helmet starts counting; after a sensor receives the scanning beam, its precise position relative to the locator is calculated from the sensor position and the laser arrival time. With the help of this positioning technology, users can walk around the virtual piano within the detection range, and even play the piano from the other side of the keyboard, further increasing the sense of space and immersion.

Normally, HTC Vive menus are operated through the handles. However, holding a handle is not convenient for playing the virtual piano, and after putting on the helmet the user cannot see the surrounding environment to pick a handle up. Therefore, the virtual piano developed in this paper does not use the handles; instead, functions such as timbre setting and rhythm adjustment are controlled through gesture recognition, with the Leap Motion sensor mounted directly on the Vive helmet.

The HTC Vive helmet and the Leap Motion sensor are connected to the computer through HDMI and USB interfaces respectively. The configuration of the computer and its discrete graphics card is shown in Table 1.

Table 1. Computer configuration of the virtual piano

GPU: NVIDIA GeForce GTX 1060
CPU: Intel(R) Xeon(R) E5-2630
RAM: 16.0 GB
Video output: HDMI 1.4
USB port: USB 3.0
Operating system: Windows 7
Construction of VR piano software development platform

Unity3D is a professional game engine developed by Unity Technologies that allows developers to realize game ideas and 3D interactive content with ease. Unity3D provides rendering resources such as physical simulation, normal mapping, screen-space ambient occlusion and dynamic shadows. Compared with other game development tools, it has two main advantages: a highly visual workflow and broad cross-platform support. With the visual workflow, scene layout editing, resource binding and scripting of interactive objects can be carried out conveniently. In terms of deployment targets, Unity3D spans 21 platforms, including Windows, Mac, Wii, iPhone, WebGL, Windows Phone 8 and Android. Unity3D is usually used for ordinary 3D game development; to develop virtual reality projects, the SteamVR SDK plug-in is needed, which can be downloaded for free from the official Unity3D or HTC Vive website [9-10].

From ordinary 3D game development to virtual reality development, the key step is converting ordinary cameras to VR cameras, which is achieved by setting up prefabs. A prefab is composed of parent and child objects. For example, the parent object [CameraRig] is the standard SteamVR rig, comprising the VR camera, the two handles and the locators. When creating a scene, the prefab is dragged into the editor to turn it into a VR scene. The child objects include the left hand, the right hand and the head-mounted display; under the left hand there is a handle model, which in an actual project can be replaced by a custom model. Prefabs play an important role in project development and are required for the piano keyboard modeling and hand modeling in the next section.

Virtual piano keyboard design

An ordinary piano consists of 88 black and white keys arranged in semitone steps, divided from left to right into bass, midrange and treble areas. For simplicity, the virtual piano developed in this paper provides the 7 tones do, re, mi, fa, sol, la and si in each of the bass, midrange and treble areas, giving a total of 21 white keys and 15 black semitone keys. The piano interface in Unity3D uses a Canvas, the component that carries all UI elements, bound to the piano key object in the Hierarchy panel. There are three Canvas rendering modes: Screen Space - Overlay, World Space and Screen Space - Camera. In Screen Space - Overlay mode, the canvas fills the entire screen space and all UI elements under it are placed on the top layer of the screen, where they are not obscured by any object and are visible without a camera. In World Space mode, the canvas is located in the game scene like a 3D object, and its position and size can be set freely.

This paper adopts the Screen Space - Camera mode. It is similar to Screen Space - Overlay in that the canvas fills the whole screen space and automatically resizes to match the screen if its size changes. The difference is that in this mode the canvas is placed in front of the camera: it looks as if drawn on a plane at a fixed distance from the camera, and all UI elements are rendered by that camera. The canvas can also specify which camera renders it, so the camera settings affect the UI picture. Using this mode to display the piano model on the UI gives a better result than the other two modes.

The canvas zooms dynamically according to its distance from the camera and the size of the camera's view frustum. Objects closer to the camera than the canvas are displayed in front of it, while objects farther away are occluded. The plane distance between the canvas and the camera is set to 100, the reference resolution is set, and in UI Scale Mode the Scale With Screen Size option with the Match Width Or Height scaling method is selected to adapt to different resolutions, finally achieving the desired piano effect [11].
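The paper does not list its Unity scripts, so the following Python sketch only illustrates the keyboard's data model: the 21 white and 15 black keys across the three areas mapped to MIDI note numbers, plus a stand-in for the Cube component's press/release response that would emit MIDI note-on/note-off events. The base note and velocity are assumed values.

```python
# Hypothetical keyboard data model for the 36-key virtual piano; base_note
# (C3 as the bass-area "do") and velocity are assumptions.
WHITE_OFFSETS = [0, 2, 4, 5, 7, 9, 11]   # do re mi fa sol la si
BLACK_OFFSETS = [1, 3, 6, 8, 10]         # five semitone keys per octave

def build_keyboard(base_note=48):
    keys = []
    for octave in range(3):              # bass, midrange, treble areas
        root = base_note + 12 * octave
        keys += [("white", root + o) for o in WHITE_OFFSETS]
        keys += [("black", root + o) for o in BLACK_OFFSETS]
    return keys

def on_key_event(note, pressed, velocity=90):
    """Stand-in for the Cube component's press/release script response."""
    kind = "note_on" if pressed else "note_off"
    print(f"{kind} note={note} velocity={velocity}")   # forward to a MIDI synth

keys = build_keyboard()
assert sum(1 for kind, _ in keys if kind == "white") == 21
assert sum(1 for kind, _ in keys if kind == "black") == 15
```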

Gesture extraction

This section uses sliding-window segmentation to extract single gesture actions. Sliding-window data segmentation cuts a fixed-length window from the continuous motion signal; the window slides along the time axis and all data inside it are averaged. Treating this average as the average fingertip speed within the window, if the average speed is very small and below a certain threshold, it is considered that no gesture movement occurs in the window's time period; if the average speed peaks, a gesture movement occurs in the window's time period and the window just covers a single gesture. The instantaneous fingertip speed and the average speed over the whole window are calculated by the following formulas:

$$v_i = \frac{\sqrt{(x_{t+1} - x_t)^2 + (y_{t+1} - y_t)^2 + (z_{t+1} - z_t)^2}}{\Delta t} \tag{10}$$

$$V_t = \arg\max_t \left[ \frac{1}{k} \sum_{i=t}^{t+k-1} v_i \right] \tag{11}$$

In formulas (10) and (11), v_i represents the three-dimensional instantaneous speed of the i-th sample point. Since the time interval between two adjacent sampling points is fixed, Δt = 1, and k is the window width. Because the Leap Motion sampling rate is 100 fps and a typical gesture lasts no more than 1 s, k = 100 is taken. The minimum threshold is set to 1; if the average speed in the window is below this threshold, it is considered that no gesture movement occurs [12].
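With the stated parameters (k = 100 frames, threshold 1), the segmentation can be sketched as below; the function and variable names are illustrative.

```python
# Sliding-window gesture segmentation following equations (10)-(11).
import numpy as np

def segment_gestures(positions, k=100, threshold=1.0):
    """positions: N x 3 fingertip samples (delta_t = 1 frame).

    Returns (start, end) frame ranges whose windowed mean speed exceeds
    the threshold, i.e. the spans covering single gestures.
    """
    v = np.linalg.norm(np.diff(positions, axis=0), axis=1)   # equation (10)
    means = np.convolve(v, np.ones(k) / k, mode="valid")     # window averages
    active = means > threshold                               # equation (11) test
    segments, start = [], None
    for t, is_active in enumerate(active):
        if is_active and start is None:
            start = t
        elif not is_active and start is not None:
            segments.append((start, min(t + k - 1, len(v))))
            start = None
    if start is not None:
        segments.append((start, len(v)))
    return segments
```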

Figure 2 shows a schematic diagram of segmentation and extraction of finger click action. As can be seen from the figure, through the sliding window segmentation technology, the window can be accurately positioned at each click action time, so as to segment and extract a single click action.

Figure 2

Schematic diagram of sliding window of click action

Experimental results and methods

Gesture recognition is divided into two stages: training/modeling and real-time recognition. First, data for each preset gesture are collected repeatedly, then filtered and smoothed. The gesture actions are extracted frame by frame and encoded by vector quantization, and each gesture action is modeled with a hidden Markov model, i.e. the three main parameters of the hidden Markov model are obtained: the initial probabilities, the transition probabilities and the observation probabilities. These parameters take the vector-encoded observation sequences discussed in the previous section as input [13].

In the gesture recognition stage, after filtering and smoothing the current input gesture, the gesture action is extracted frame by frame, and the velocity direction vectors are encoded into an observation sequence that is fed to each trained gesture hidden Markov model. The probability of the best state sequence of the current gesture under each model is obtained with the Viterbi algorithm, and the model with the largest output probability identifies the corresponding gesture action.
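A compact log-space Viterbi scorer illustrates this decision rule; the parameter shapes and the models dictionary are illustrative, not the authors' code.

```python
# Log-space Viterbi scoring: each gesture class g has trained parameters
# (pi, A, B); the class whose best state path scores highest wins.
import numpy as np

def viterbi_log_prob(obs, pi, A, B):
    """Best-path log-probability of observation sequence `obs`.

    pi: (S,) initial probabilities; A: (S, S) transition matrix;
    B: (S, M) observation probabilities over M vector-quantized codes.
    """
    logd = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        logd = np.max(logd[:, None] + np.log(A), axis=0) + np.log(B[:, o])
    return logd.max()

def classify_gesture(obs, models):
    """models: dict name -> (pi, A, B); returns the best-scoring gesture."""
    return max(models, key=lambda g: viterbi_log_prob(obs, *models[g]))
```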

400 groups of data were collected for the four gestures, of which 300 groups were used for training and 100 groups for recognition. The HMM-based gesture recognition results are shown in Figure 3; the recognition rate reaches more than 88% [14-15].

Figure 3

Dynamic gesture recognition rate

Conclusion

Through reverse mapping, a better color migration algorithm is obtained, and the salient color features of the color image and the shape image are presented in the final composite image. Taking the HTC Vive kit and a Leap Motion sensor fixed on the helmet as the hardware platform, and Unity3D with the related SteamVR and Leap Motion plug-ins as the software platform, a virtual piano was developed. Hidden Markov models are used to model, train and recognize gesture movements; the four kinds of gestures related to virtual piano attribute setting are recognized with a rate of more than 88%. The immersion of the 3D virtual piano developed in this paper is much higher than that of a 2D virtual piano displayed on an ordinary computer screen; moreover, it can be played naturally with both hands, and the sense of comfort and fluency is much stronger than for a 2D virtual piano played with mouse and keyboard.


References

[1] Yan Z, Lv Z. The Influence of Immersive Virtual Reality Systems on Online Social Application. Applied Sciences, 2020, 10(15):5058. doi:10.3390/app10155058

[2] Izzati R R, Muntiah N S, Hidayah N. An Analysis of Factor That Influence the Interests in Behaviors of Using Accounting Information Systems Based on E-Commerce. Jurnal AKSI (Akuntansi dan Sistem Informasi), 2020, 5(1):1-5. doi:10.32486/aksi.v5i1.424

[3] Liu J, Chen Y. Research on Scene Fusion and Interaction Method Based on Virtual Reality Technology. Journal of Physics: Conference Series, 2021, 1827(1):012010. doi:10.1088/1742-6596/1827/1/012010

[4] Chakrabarty N, Assi G S, Sumit K, et al. Influence of music on sensorimotor coordination and concentration among drivers in an Indian city. Indian Journal of Social Psychiatry, 2020, 36(1):64. doi:10.4103/ijsp.ijsp_94_18

[5] Muhammed N, et al. Effect of Leap Motion-based 3D Immersive Virtual Reality Usage on Upper Extremity Function in Ischemic Stroke Patients. Arquivos de Neuro-Psiquiatria, 2019, 77(10):681-688. doi:10.1590/0004-282x20190129

[6] Lu J, Liao X. Influence Analysis of Rock Mechanical Parameters on the TBM Penetrating Rate. Stavební obzor - Civil Engineering Journal, 2019, 28(1):13-19. doi:10.14311/CEJ.2019.01.0002

[7] Wang S, Zhang W, Wang H, et al. How Does Income Inequality Influence Environmental Regulation in the Context of Corruption? A Panel Threshold Analysis Based on Chinese Provincial Data. International Journal of Environmental Research and Public Health, 2021, 18(15):8050. doi:10.3390/ijerph18158050

[8] Yao L, Shang T. Analysis of influence factors of embedded carbon in China's textile and garment export based on SDA model. IOP Conference Series: Earth and Environmental Science, 2021, 675(1):012132. doi:10.1088/1755-1315/675/1/012132

[9] Chaudhary S, Ninsawat S, Nakamura T. Influence of Altitude and Image Overlap on Minimum Mapping Size of Chemical in Non-Destructive Trace Detection Using Hyperspectral Remote Sensing. Applied Sciences, 2021, 11(6):2586. doi:10.3390/app11062586

[10] Song Y, Liu E. The influence evaluation of municipal government Website in Guangxi Zhuang Autonomous Region based on link analysis. E3S Web of Conferences, 2021, 233:01161. doi:10.1051/e3sconf/202123301161

[11] Khalyavkin A, Makeev S, Salamekh A, et al. Analysis of boundary conditions in design circuits influence on the shaft line operating state. E3S Web of Conferences, 2020, 217:03006. doi:10.1051/e3sconf/202021703006

[12] Durán A. On a model for internal waves in rotating fluids. Applied Mathematics and Nonlinear Sciences, 2018, 3(2):627-648. doi:10.2478/AMNS.2018.2.00048

[13] Brzeziński D W. Review of numerical methods for NumILPT with computational accuracy assessment for fractional calculus. Applied Mathematics and Nonlinear Sciences, 2018, 3(2):487-502. doi:10.2478/AMNS.2018.2.00038

[14] Nizami A R, et al. Walk Polynomial: A New Graph Invariant. Applied Mathematics and Nonlinear Sciences, 2018, 3(1):321-330. doi:10.21042/AMNS.2018.1.00025
