
The design of an optimisation algorithm for musical performance guided by constructivist theory in musical theatre

  
Feb 05, 2025


Introduction

Voice and dance are not only important elements of musical theatre performance but also carry significant value in the history of art. Drawing on traditional concepts and ancient forms of artistic expression, a musical realises a vivid presentation of its story by combining voice and dance, coordinating body movement with the music, and thereby revealing the true meaning and charm of the art form [1-4].

The integration of voice and dance enriches the performance forms of musical theatre: vocal interpretation can be subtly adjusted and optimised to fit the characteristics of different characters and then embodied in physical movement. This approach raises the quality of musical performances, fully expresses the artistic connotation of the genre, and arouses resonance and attention among the public [5-6]. To achieve a seamless integration of voice and dance in musical performance, work should begin with the construction and cultivation of talent, laying a solid foundation so that progress can be accelerated [7-8].

In recent years, with the continuous improvement of China's economic development, people's pursuit of spiritual enjoyment has deepened their understanding of art, and the growing discernment of audiences has, to a certain extent, placed higher demands on the professionalism of art workers. China's art practitioners should therefore continue to improve their professionalism and ability through training, education, and further study [9-11]. As a form of theatrical performance, musical theatre occupies a pivotal position in Chinese literary and artistic performance. Although musical theatre influences China's economic development in some respects, continued exploration and research are still required to realise the integration of dance and vocal music [12-13].

With the rapid development of science and technology and the advance of globalisation, music performance education in colleges and universities is undergoing in-depth reform and innovation. As an important branch of the arts, the teaching of music performance majors has been transformed from traditional to modern approaches. In particular, the introduction of information-technology-based teaching methods has brought revolutionary changes, promoting the cultivation of high-quality music performance talent and better-quality music performance works [14-16].

Musical theatre is a theatrical genre that originated in Britain and, after decades of development, gradually spread to China. It is a form of expression that integrates drama, dance, and vocal music, and it places high demands on performers' stage presence and musical literacy. The fusion of vocal music and dance enables vivid character shaping and portrayal, advances the storyline, draws out the performers' genuine feelings, and creates resonance with the audience. Literature [17] analyses the brain's neural responses during the learning of music and the performing arts, helping stakeholders understand the logic of musical and theatrical performance. Literature [18] objectively analyses the practice of teaching classical singing based on the principles of kinesiology and points out that this method establishes an effective link between music performance education, teaching quality, and the assessment of teaching performance. Literature [19] describes the history of music performance and the changes in its paradigms, and analyses interdisciplinary approaches to music research. Literature [20] combined physiological and cognitive research methods to examine resting-state cardiac activity data of opera performers, and the final analysis showed a correlation between performers' cognitive levels and opera performance stress. Literature [21] proposes using research findings from CHARM's Mazurka Project, integrating textual considerations into the rhythms of opera performance, to support the performance and creation of opera performers, and validates the effectiveness of the approach through performer feedback. Literature [22] analyses the role of data mining technology in the artistic positioning and market selection of opera performances, and the results show that data mining effectively assists the precise positioning of opera performances. Literature [23] examines the sustainable development path of the opera industry from a life-cycle perspective, combining a literature review with in-depth interviews to identify the factors affecting sustainable development, making a positive contribution to the sustainable development of the opera industry.

The informatisation of music performance teaching is a process of continuous exploration, development, and innovation. With the continuous development of information technology and the deepening of educational reform, informatised teaching of music performance majors in colleges and universities will gain broader development space. Literature [24] discusses opera as a form of artistic expression and its artistic characteristics, focusing on the thinking and principles of characterisation in opera performance, which helps opera singers deepen their understanding and grasp of a role and thereby achieve a higher level of performance. Literature [25] used a qualitative research method to examine the study and rehearsal of opera performance, comprehensively showing how knowledge and skills develop during the transitional stage of opera performers' careers. Literature [26] investigated the effect of online teaching practice for music performance learning based on the constructivist teaching paradigm, contributing to the informational network construction of music performance teaching. Literature [27] proposes a special training programme for vocal performance, covering online learning, skill practice, and artistic skill development; surveys show that the programme won unanimous praise from teachers and students. Literature [28] examined the development of Cantonese opera teaching in Guangdong and Hong Kong, finding that Cantonese opera is disseminated mainly through school music education, community education, and higher education. The study fills some gaps in the teaching of Cantonese opera.

A panoramic camera is set up at each of six salient positions on the musical performance stage to collect lighting colour data from the performance scene. The data are divided into a training set and a validation set at a ratio of 8:2, and a colour evaluation function is determined, consisting of three parts: a colour harmony factor, a colour emotion factor, and a colour contrast factor. Combined with keyframes and light intensity, a music lighting colour mixing template is developed to better serve the music performance. Relevant parameters are determined and target samples selected to explore the optimisation effect on musical performance based on constructivism and genetic algorithms.

Optimising music performance guided by constructivist theory
Constructivist Theory
Basic Connotation

Constructivism, also known as structuralism, posits that children develop their cognitive structures by gradually building knowledge about the external world while interacting with their surroundings [29-30]. This interaction involves two basic processes: "assimilation" and "accommodation". Assimilation is the process of absorbing relevant information from the external environment and incorporating it into the child's existing cognitive structure (also known as a "schema"), i.e., the process by which an individual integrates information provided by external stimuli into their original cognitive structure. Accommodation is the process of restructuring the child's cognitive structure when the original structure cannot assimilate the information provided by a changed environment, i.e., the process in which an individual's cognitive structure is altered under the influence of external stimuli.

Components

Scenario setting. The broader context in which the complete learning experience (what is learnt, how it is learnt, and how it is used) exists is the external scenario that facilitates students' understanding and construction of meaning and connects knowledge, skills, and experience. It comprises not only the physical and conceptual structure of the problem but also the social context in which the intention of the activity and the problem are embedded. Creating a context that helps the performer construct meaning is the most important aspect of the music performance process.

Collaboration. Teamwork should run through the whole music performance process. Collaboration among performers plays an important role in collecting and analysing learning materials, formulating and validating hypotheses, providing self-feedback on the learning process, evaluating learning outcomes, and finally constructing meaning. Collaboration is, in essence, negotiation, of which there are two main types: self-negotiation and mutual negotiation. Self-negotiation is repeatedly deliberating with oneself about what makes more sense; mutual negotiation is the discussion and debate within the performance team.

Communication. Communication is the most basic link in the collaborative process. Members of the performance team must communicate to discuss how to complete the required performance tasks and achieve the goal of constructing meaning, and how to obtain more guidance and help from the teacher or others. In fact, collaborative performance is itself a communication process in which the entire team shares each performer's ideas. Communication is a crucial tool for advancing each performer's learning.

Meaning construction. Meaning construction refers to a unique understanding of the nature and laws of things and of the intrinsic connections between them. In constructivist teaching practice, meaning construction is the digestion and comprehension of the performer's skills and theoretical knowledge of music performance, a process of transforming the knowledge imparted by the teacher, and the display of the final results of the performance process. Helping performers construct meaning during music rehearsal means helping them achieve a deeper understanding of the nature and laws of the things reflected in the current learning content and of the intrinsic connections between those things and others.

Genetic Algorithm Theory
Basic Ideas

As an evolutionary algorithm, the genetic algorithm is based on Darwin's theory of evolution and Mendel's theory of heredity [31-32]. According to Darwin's theory of evolution, organisms in nature, in their continuous struggle for survival, produce new individuals through heredity and mutation; those able to adapt to the environment are retained while those that cannot are eliminated, following the law of "survival of the fittest". According to Mendel's theory of heredity, heredity exists in the form of genes on chromosomes, which act as the codes controlling individual traits: each gene has a specific locus and determines a particular trait. Through crossbreeding and genetic mutation, offspring better adapted to the environment are produced, and through natural selection the better-adapted individuals are retained while the less well-adapted are eliminated.

The flowchart of the genetic algorithm is depicted in Figure 1. Given a real problem, the genetic algorithm first encodes the problem as "chromosomes" and, following "survival of the fittest", selects chromosomes for replication, a process known as regeneration. Crossover, mutation, and other operations then produce new chromosomes with greater adaptive capacity. This process is repeated until the most adaptable individual, i.e., the optimal solution, emerges.

Figure 1.

Flowchart of genetic algorithm
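
As a minimal illustration of the regeneration, crossover, and mutation loop in Figure 1, the sketch below evolves binary chromosomes with roulette-wheel selection, one-point crossover, and bit-flip mutation on a toy "one-max" fitness. All parameter values and the fitness function are illustrative assumptions, not settings from this paper.

```python
import random

def evolve(fitness, n_bits=16, pop_size=30, generations=50,
           p_cross=0.8, p_mut=0.01):
    """Minimal genetic algorithm: regeneration (selection), crossover, mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(c) for c in pop]
        total = sum(scores)

        def pick():  # roulette-wheel selection: "survival of the fittest"
            r, acc = random.uniform(0, total), 0.0
            for c, s in zip(pop, scores):
                acc += s
                if acc >= r:
                    return c
            return pop[-1]

        children = []
        while len(children) < pop_size:
            a, b = pick()[:], pick()[:]
            if random.random() < p_cross:            # one-point crossover
                cut = random.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for c in (a, b):                         # bit-flip mutation
                for i in range(n_bits):
                    if random.random() < p_mut:
                        c[i] ^= 1
                children.append(c)
        pop = children[:pop_size]
    return max(pop, key=fitness)

# Toy fitness ("one-max"): count of 1-bits; the small offset keeps totals > 0
best = evolve(lambda c: sum(c) + 1e-9)
```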

Underlying theory

The underlying theory of genetic algorithms focuses on analysing the convergence of the algorithm, i.e., calculating the probability that the population converges to the global optimal solution. Broadly, it can be divided into stochastic model theory and evolutionary dynamics theory.

Stochastic model theory

This theory is based on stochastic processes: if the coding space or population is finite, the search process can be represented by a Markov chain model, noting that the model is discrete-time. The search process is then analysed with existing stochastic process theory.

Evolutionary Dynamics Theory

Unlike stochastic model theory, evolutionary dynamics theory is based on schema theory, which comprises the schema theorem and the building block hypothesis. The schema theorem, proposed by Professor Holland, ensures that the number of better schemata increases exponentially and is the basic theorem of evolutionary dynamics theory. The building block hypothesis, on the other hand, mainly describes the recombination capability of genetic algorithms.

When a genetic algorithm is used to solve a problem, the schema theorem and the building block hypothesis together help ensure that a globally optimal solution can be found and also allow a range of analyses of evolutionary behaviour.

Optimisation of musical theatre performance effects

In this study, the optimisation of musical theatre performance effects takes the scenario setting of constructivist theory as its starting point and proceeds in three steps. First, a panoramic camera is set up at each of six salient locations on the musical performance stage to capture colour images of the lighting in the performance scene; the images from each panoramic camera form a panoramic cube image. Next, each generated panoramic image is evaluated with a constructed colour evaluation function. Finally, a genetic algorithm optimises the colour evaluation function to derive its fitness value, and the musicnn model classifies the music labels so that the lighting can serve various types of stage performance scenes, thereby optimising the visual effects of musical stage performances.

Stage image acquisition

To obtain a harmonious lighting configuration for a musical performance stage, colour images of the lighting must be collected from different angles in the scene. Considering the usual movement paths of actors on the stage, this study sets up several panoramic cameras around those paths to capture lighting colour images, collecting 3,000 sets of stage image data in total. The blue trajectory represents the regular movement path of an actor on the stage; six cameras are set up around the stage, each capturing images from six viewpoints (up, down, left, right, front, and back) that are assembled into a panoramic image cube.

Colour evaluation function

This study evaluates the lighting colour images in the scene and provides a harmonious colour scheme for the light colours. Each light colour in the scene is represented as a colour vector containing RGB colour information, $V_i = (R_i, G_i, B_i)$, and the full colour vector of the scene is defined as $V = (V_1, V_2, \ldots, V_n)$.

In this study, a colour evaluation function is constructed with the colour vectors in the scene as evaluation parameters and three factors as evaluation criteria: the colour harmony factor $E_h$, the colour emotion factor $E_m$, and the colour contrast factor $E_c$. The colour evaluation function can be expressed as: $$E(V) = \lambda_h E_h(V) + \lambda_m E_m(V) + \lambda_c E_c(V) \tag{1}$$

Here $\lambda_h$, $\lambda_m$, $\lambda_c$ are normalisation parameters for the colour evaluation factors, set to 100, 5, and 1 respectively; the much larger $\lambda_h$ strengthens the weight of colour harmony in the overall colour scheme. The three colour evaluation factors are defined as follows:

Colour harmony factor Eh

In this study a hue-template approach is used to evaluate the colour harmony of the 3-D scene: the colour harmony of each face of each panoramic cube image is evaluated and aggregated into a weighted colour harmony formula: $$E_h(V) = \sum_i \lambda_i \sum_{j=1}^{6} \lambda_j E_{ht}(f_{ij}(V)) \tag{2}$$

$\lambda_i$ in Eq. (2) represents the weight of each panoramic camera cube image and $\lambda_j$ the weight of each face within a cube image. In this study, the weights of the four main cameras are set to 0.2 and the weights of the two corner cameras to 0.1; for each cube image, the top and bottom faces are weighted 0.1 and the front, back, left, and right faces 0.2. The colour harmony factor of each face of each cube image is defined as: $$E_{ht}(f_{ij}(V)) = \arg\min_{m,\alpha} \sum_p \left\| H(p) - E_{T_m(\alpha)}(p) \right\| \cdot \frac{S(p)}{180} \tag{3}$$

In Eq. (3), $H$ is the hue channel, $S$ the saturation channel, and the norm denotes the distance on the hue wheel; $T_m(\alpha)$ is the $m$th colour harmony template of Matsuda, rotated by angle $\alpha$.
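
To make Eq. (3) concrete, the sketch below scores one cube-map face against hue templates: pixels whose hue falls inside a template sector cost nothing, and pixels outside contribute their saturation-weighted angular distance. Matsuda's real set has eight templates; only two illustrative sectors, the coarse 5-degree search over $\alpha$, and the image format are assumptions here.

```python
import numpy as np

# Illustrative sector templates (centre offset, width) in degrees;
# Matsuda's actual template set is larger.
TEMPLATES = {
    "i": [(0.0, 18.0)],                  # one narrow sector
    "I": [(0.0, 18.0), (180.0, 18.0)],   # two opposite narrow sectors
}

def hue_dist(h, centre, width):
    """Angular distance (deg) from hue h to a sector; 0 if inside it."""
    d = np.abs((h - centre + 180.0) % 360.0 - 180.0)
    return np.maximum(d - width / 2.0, 0.0)

def harmony_score(H, S):
    """E_ht of Eq. (3): minimum over templates m and rotations alpha of
    the saturation-weighted hue distance (H in degrees, S in [0, 1])."""
    best = np.inf
    for sectors in TEMPLATES.values():
        for alpha in range(0, 360, 5):           # coarse search over alpha
            d = np.min([hue_dist(H, c + alpha, w) for c, w in sectors],
                       axis=0)
            best = min(best, float(np.sum(d * S / 180.0)))
    return best

# Toy face: random hues and saturations for a 64x64 cube-map face
rng = np.random.default_rng(0)
score = harmony_score(rng.uniform(0, 360, (64, 64)),
                      rng.uniform(0, 1, (64, 64)))
```

The full $E_h$ of Eq. (2) would then weight such per-face scores by the face weights (0.1/0.2) and camera weights (0.1/0.2) and sum them.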

Colour emotion factor Em

This study proposes a colour emotion factor based on colour-emotion components extracted with the PCA algorithm; the six poles of the three emotion dimensions (active/passive, heavy/light, warm/cool) are denoted $\psi_1, \psi_2, \psi_3, \psi_4, \psi_5, \psi_6$. Combining the emotion evaluation method with the music classification labels of this study, the factor can be expressed as: $$E_m = E_m^{def} \sum_{i=0}^{2} E_m^i \tag{4}$$

$E_m^{def}$ in Eq. (4) represents the user's desired light colour emotion, preset so that it shapes the overall effect of the lighting colour setting, and $E_m^i$ is the emotion defined by $\psi_i$: $$E_m^i = \begin{cases} (2-\psi_i)^2/16, & \text{for active, heavy and warm} \\ (\psi_i+2)^2/16, & \text{for passive, light and cool} \end{cases} \tag{5}$$
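
A direct transcription of Eqs. (4) and (5) follows. The $\psi_i$ range of $[-2, 2]$ is an assumption (it is consistent with the $/16$ normalisation, since $(2-(-2))^2/16 = 1$), and all input values are illustrative.

```python
def emotion_term(psi, pole):
    """E_m^i of Eq. (5); psi is a colour-emotion score, assumed in [-2, 2]."""
    if pole in ("active", "heavy", "warm"):
        return (2.0 - psi) ** 2 / 16.0
    return (psi + 2.0) ** 2 / 16.0          # passive, light, cool

def emotion_factor(psis, poles, e_def=1.0):
    """E_m of Eq. (4): the desired emotion e_def times the summed terms
    over the three dimensions (activity, weight, heat)."""
    return e_def * sum(emotion_term(p, t) for p, t in zip(psis, poles))

# Hypothetical scores for a warm, active, light lighting target
em = emotion_factor([1.2, -0.4, 1.8], ["active", "light", "warm"])
```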

Colour contrast factor Ec

To optimise the lighting colours in a scene, colour contrast is an important evaluation factor, and many different colour contrast measures have been defined in the literature. Using the global lights in this study, the colour contrast factor is defined as: $$E_c = \sum_{m,n} \omega_{mn}\left(1 - \left(\left\| L_m - L_n \right\| / 100\right)^2\right) \tag{6}$$

$L_m$ and $L_n$ in Eq. (6) are two different light colours in the scene, and $\omega_{mn}$ represents the intersection area of the two light ranges.
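
Below is a direct transcription of Eq. (6) together with the weighted sum of Eq. (1). The colour space for $\|L_m - L_n\|$ is assumed to be a perceptual space such as CIELAB (hence the division by 100); the overlap areas would come from the light-range geometry, not computed here.

```python
import numpy as np

def contrast_factor(colours, overlap):
    """E_c of Eq. (6): colours is an (n, 3) array of light colours,
    overlap[m][n] the intersection area of light ranges m and n."""
    e, n = 0.0, len(colours)
    for m in range(n):
        for k in range(m + 1, n):
            d = np.linalg.norm(colours[m] - colours[k])
            e += overlap[m][k] * (1.0 - (d / 100.0) ** 2)
    return e

def colour_evaluation(e_h, e_m, e_c, l_h=100.0, l_m=5.0, l_c=1.0):
    """E(V) of Eq. (1) with the paper's normalisation weights 100, 5, 1."""
    return l_h * e_h + l_m * e_m + l_c * e_c
```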

Genetic Algorithm Optimisation

The method used in this study differs from the traditional genetic algorithm in that it starts by initialising the population at the design points. The fitness values of the population are calculated as in the traditional method, and enough feature points are collected to construct the optimisation model, which is then applied in the subsequent evolutionary operations.

In this study, the genetic algorithm model serves as our optimisation model; it predicts an unsampled point $x$ as a global trend function $f^T(x)\beta$ plus a Gaussian process term $G(x)$, namely: $$y(x) = f^T(x)\beta + G(x), \quad x \in \mathbb{R}^m \tag{7}$$

In Eq. (7), $f(x) = [f_0(x), \ldots, f_{p-1}(x)]^T \in \mathbb{R}^p$ is defined by a series of basic regression functions, $\beta = [\beta_0, \ldots, \beta_{p-1}]^T \in \mathbb{R}^p$ is the corresponding coefficient vector, and $m$ is the dimension of the design variable; the model is optimised by the genetic algorithm to derive the fitness value of the colour evaluation function.
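
Equation (7) has the form of a trend-plus-Gaussian-process surrogate. The sketch below illustrates that reading with scikit-learn, under the assumptions that the basis is simply $f(x) = [1, x]$ and that the sampled design points and scores are toy data, not the paper's.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy design points x in [0, 2000] and their evaluation scores y
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 2000.0, (40, 1))
y = 0.002 * X[:, 0] + 5.0 * np.sin(X[:, 0] / 200.0)

# Global trend f^T(x) beta with assumed basis f(x) = [1, x], by least squares
F = np.column_stack([np.ones(len(X)), X[:, 0]])
beta, *_ = np.linalg.lstsq(F, y, rcond=None)

# Gaussian process term G(x), fitted on the residuals of the trend
gp = GaussianProcessRegressor(kernel=RBF(length_scale=200.0))
gp.fit(X, y - F @ beta)

def predict(x):
    """y(x) = f^T(x) beta + G(x) at an unsampled point x (Eq. (7))."""
    return np.array([1.0, x]) @ beta + gp.predict([[x]])[0]

print(predict(1000.0))
```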

Key frames

To set keyframes during playback of the whole music file, this study outputs 50 labels for each music clip, each with a corresponding output ranking, finds the "fast" and "slow" labels, and then sets the keyframes. The calculation is defined as: $$T_k = T_{avg} + \eta_s p_s - \eta_f p_f + \sum_{i=1}^{k-1} T_i \tag{8}$$

$T_k$ in Eq. (8) is the $k$th keyframe in the sequence running from the beginning to the end of the music, with $k \in \{1, 2, \ldots, N\}$. $T_{avg}$ is the average time interval between keyframes, calculated from the total duration of the music and the total number of music clips. $p_s$ and $p_f$ are the user-defined speeds of the light changes, which control the light rate at fast and slow tempo, and $\eta_s$ and $\eta_f$ are the weights of the labels "slow" and "fast". The label weights are defined as: $$\eta_j = r_j \Big/ \sum_{j=1}^{N} j \tag{9}$$

$r_j$ in Eq. (9) is the output ranking of the label among all $N$ output labels; the higher a label ranks, the more weight it carries. The colour of the light is obtained by interpolating between keyframes.
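
A sketch of Eqs. (8) and (9) follows. Eq. (8) is read here as accumulating per-keyframe intervals (the average spacing shifted by the "slow" and "fast" label weights) into absolute keyframe timestamps; that reading, and all the example values, are assumptions.

```python
from itertools import accumulate

def label_weight(rank, n_labels=50):
    """eta_j of Eq. (9): the label's output rank divided by 1 + 2 + ... + N."""
    return rank / (n_labels * (n_labels + 1) / 2)

def keyframe_times(n_keys, t_avg, rank_slow, rank_fast, p_s, p_f):
    """T_k of Eq. (8): each interval is the average spacing shifted by the
    'slow'/'fast' label weights; timestamps accumulate the intervals."""
    shift = label_weight(rank_slow) * p_s - label_weight(rank_fast) * p_f
    intervals = [t_avg + shift] * n_keys
    return list(accumulate(intervals))

# Hypothetical clip: 20 keyframes, 6 s average spacing,
# 'slow' ranked 40th and 'fast' ranked 12th among 50 labels
times = keyframe_times(20, 6.0, rank_slow=40, rank_fast=12, p_s=2.0, p_f=2.0)
```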

Light intensity

To set the light intensity for the lights of the whole scene, this study uses the output weights of "loud" and "quiet" among the 50 classification labels of the output music. The calculation is as follows: $$I_k = I_{def} + \eta_l f_l - \eta_q f_q \tag{10}$$

$I_k$ in Eq. (10) represents the light intensity of the $k$th keyframe, $I_{def}$ the user-defined overall colour intensity tone, $\eta_l$ and $\eta_q$ the weights of the labels "loud" and "quiet" in the whole output label sequence, and $f_l$ and $f_q$ the user-defined light intensity weights, which can be manipulated to give more weight to bright or subdued light.
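
Eq. (10) is a one-line computation; a direct transcription (the parameter names are illustrative):

```python
def light_intensity(i_def, eta_l, eta_q, f_l, f_q):
    """I_k of Eq. (10): the base tone i_def raised by the 'loud'
    label weight and lowered by the 'quiet' label weight."""
    return i_def + eta_l * f_l - eta_q * f_q
```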

Music Lighting Color Mixing Templates

In this study, the music lighting colour mixing template is defined by the output music labels, which are divided into two basic colour templates: a cool template and a warm template. The cool template covers the labels "classical", "techno", "beats", and "ambient"; the warm template covers "rock", "electronic", "pop", and "metal". A different colour calculation is defined for each template:

The cool colour template includes cool colours such as blue and purple, calculated as: $$\begin{cases} C_1 = (\lambda_1/255, 1, 0) \\ C_2 = (0, 1, \lambda_2/255) \\ C_3 = (0, \lambda_3/255, 1) \\ C_4 = (\lambda_4/255, 0, 1) \end{cases} \tag{11}$$

In Eq. (11), with $\lambda_1 \in [0, 194]$, $\lambda_2 \in [0, 255]$, $\lambda_3 \in [0, 255]$, $\lambda_4 \in [0, 102]$, each expression represents an RGB colour, and the cool light colour is randomly selected from $C_1, C_2, C_3, C_4$. The warm colour template includes warm colours such as red and orange, calculated as: $$\begin{cases} W_1 = (1, \varepsilon_1/255, 0) \\ W_2 = (1, 0, \varepsilon_2/255) \end{cases} \tag{12}$$

With $\varepsilon_1 \in [0, 255]$ and $\varepsilon_2 \in [0, 220]$ in Eq. (12), the warm light colour is randomly selected from $\{W_1, W_2\}$.

In this study, each music clip is classified by the musicnn model, which outputs the top $N$ classification labels for the clip. The light colour for music clip $k$ can then be calculated as: $$L_k = \sum_{j=1}^{N} \eta_j C_j \tag{13}$$

$\eta_j$ in Eq. (13) represents the weight of each music clip label, and $C_j \in \{C_c, C_w\}$ is the cool or warm template colour determined by the $j$th label. Given each music colour template $L_k$ and the harmonic light colour $H_k$, the mixed light colour is defined as: $$M_k = (L_k - H_k)F + H_k \tag{14}$$

$F$ in Eq. (14) represents the mixing coefficient of the two colours, $F \in [0, 1]$.
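
The sketch below strings Eqs. (11) through (14) together: sample a colour from the cool or warm template according to a clip's tag, take the label-weighted sum of Eq. (13), and blend with the harmony colour via Eq. (14). The example tags, weights, and harmony colour are illustrative assumptions.

```python
import random

COOL_TAGS = {"classical", "techno", "beats", "ambient"}

def cool_colour():
    """Eq. (11): one of C1..C4 with its free component drawn from its range."""
    return random.choice([
        (random.uniform(0, 194) / 255, 1.0, 0.0),   # C1
        (0.0, 1.0, random.uniform(0, 255) / 255),   # C2
        (0.0, random.uniform(0, 255) / 255, 1.0),   # C3
        (random.uniform(0, 102) / 255, 0.0, 1.0),   # C4
    ])

def warm_colour():
    """Eq. (12): W1 or W2."""
    return random.choice([
        (1.0, random.uniform(0, 255) / 255, 0.0),   # W1
        (1.0, 0.0, random.uniform(0, 220) / 255),   # W2
    ])

def clip_colour(labels):
    """L_k of Eq. (13): labels is a list of (tag, eta_j) pairs."""
    total = [0.0, 0.0, 0.0]
    for tag, eta in labels:
        c = cool_colour() if tag in COOL_TAGS else warm_colour()
        total = [t + eta * ci for t, ci in zip(total, c)]
    return tuple(total)

def mixed_colour(l_k, h_k, f=0.5):
    """M_k of Eq. (14): blend the template colour with the harmony colour."""
    return tuple((l - h) * f + h for l, h in zip(l_k, h_k))

l_k = clip_colour([("techno", 0.6), ("pop", 0.4)])
m_k = mixed_colour(l_k, h_k=(0.8, 0.7, 0.5), f=0.3)
```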

Example Analysis of Optimisation of Musical Theatre Performance Effect
Colour Emotion Factor and Lighting Intensity Analysis
Colour emotion factor analysis

The coloured light of lamps is often divided into two categories: cool light and warm light. Cool light is mainly blue, while warm light is yellowish or reddish. Appropriate use of coloured light not only produces beautiful effects but also brings out different artistic atmospheres and the ideas a work seeks to express. Based on the colour factors described in the previous section, the colour emotion factors of musical performances were analysed statistically, as shown in Table 1. Among them, the red emotion factor occurs most often, with a count of 579; red corresponds to the emotions of passion, love, vitality, and positivity. The second is orange, with a count of 483, corresponding to bright, spirited, carefree, and excited emotions; the remaining colours follow similarly. A musical theatre performance is bound to have a tonal keynote, such as red light to highlight the warmth of fire, or blue light to set off quiet, gentle, and deep feelings; once this keynote is set, the other lights are arranged around it. In other words, each show's palette should take one basic colour as the keynote, supplemented by other colours, reflecting a sense of ebb and flow: the matching of hues should be just right, and the warm-cool flow of hues should change appropriately with colour saturation. Choosing the right lighting colours enhances the drama and atmosphere of the stage and achieves better stage expression.

Table 1. Music stage lighting colour emotion types

Serial number | Colour | Affective connotation | Quantity
1 | Red | Passion, love, energy, positivity | 579
2 | Orange | Bright, spirited, carefree, excited | 483
3 | Yellow | Happy, cheerful, bright, intelligent | 259
4 | Green | Peace, calm, soundness, freshness | 135
5 | Blue | Calm, honest, broad, harmonious | 122
6 | Purple | Mysterious, happy, elegant, romantic | 192
Key Frame Lighting Intensity Analysis

Taking the musical "Cats" as an example, the show is divided into 20 keyframes, each with its own "loud" and "quiet" label weights; by calculating the stage light intensity of each frame, the stage lighting effect can be adjusted intelligently. The values of $\eta_l$, $\eta_q$, $f_l$, $f_q$, and $I_{def}$ are all taken from source dataset A. To explore the keyframe lighting intensity of the musical, the keyframe lighting intensities of "Cats" are computed with Eq. (10), as shown in Table 2.

Table 2. Keyframe lighting intensity analysis

Keyframe | Idef | ηl | ηq | fl | fq | Ik
1 | 34 | 0.023 | 0.037 | 10 | 16 | 33.638
2 | 39 | 0.062 | 0.058 | 27 | 25 | 39.224
3 | 38 | 0.046 | 0.065 | 20 | 28 | 37.1
4 | 28 | 0.068 | 0.065 | 30 | 28 | 28.22
5 | 34 | 0.027 | 0.035 | 12 | 15 | 33.799
6 | 22 | 0.048 | 0.042 | 21 | 18 | 22.252
7 | 20 | 0.064 | 0.054 | 28 | 23 | 20.55
8 | 33 | 0.034 | 0.049 | 15 | 21 | 32.481
9 | 28 | 0.055 | 0.042 | 24 | 18 | 28.564
10 | 39 | 0.030 | 0.047 | 13 | 20 | 38.45
11 | 31 | 0.037 | 0.030 | 16 | 13 | 31.202
12 | 28 | 0.066 | 0.065 | 29 | 28 | 28.094
13 | 29 | 0.059 | 0.068 | 26 | 29 | 28.562
14 | 40 | 0.053 | 0.063 | 23 | 27 | 39.518
15 | 34 | 0.057 | 0.042 | 25 | 18 | 34.669
16 | 31 | 0.046 | 0.042 | 20 | 18 | 31.164
17 | 29 | 0.062 | 0.068 | 27 | 29 | 28.702
18 | 21 | 0.057 | 0.033 | 25 | 14 | 21.963
19 | 30 | 0.062 | 0.040 | 27 | 17 | 30.994
20 | 30 | 0.046 | 0.056 | 20 | 24 | 29.576
Mean | 30.9 | 0.0501 | 0.05005 | 21.9 | 21.45 | 30.9361
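
As a quick check, applying Eq. (10) to the first row of Table 2 reproduces the tabulated intensity:

```python
i_def, eta_l, eta_q, f_l, f_q = 34, 0.023, 0.037, 10, 16
i_1 = i_def + eta_l * f_l - eta_q * f_q    # Eq. (10)
print(round(i_1, 3))                        # 33.638, matching Table 2
```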

In conclusion, stage lighting is an important component of music performance and a key factor affecting its quality. Appropriate use of stage lighting can strengthen the impression left by a performance, giving rise to lingering reminiscence and reflection. A truly perfect musical performance relies not only on the actors but also on the full range of choreographic techniques, among which the sensible use of stage lighting is essential. Stage lighting should be used to create atmosphere and emotion in service of the music performance, so that the stage art carries greater expressive force and attraction.

Colour evaluation function optimisation and music label classification
Optimisation of the colour evaluation function

To optimise the colour evaluation function, this paper uses a genetic algorithm to search for its optimal parameters. The main idea of the genetic algorithm in optimisation problems is to use probability as a guide within a given search space: an initial population is set, and the crossover operator is introduced to focus the search on the parameter regions expected to best fit the fitness function, which overcomes the excessive search space and computational complexity of general algorithms. To maintain population diversity, the mutation operator is then introduced. The parameter selection by ten-fold cross-validation is depicted in Fig. 2, and the experimental analysis of the colour evaluation function optimisation is carried out with the libsvm-mat toolbox in MATLAB.

Figure 2.

Ten fold cross-validation parameter selection

Step 1: Initialisation. Set the parameters $f(x)$ and $G(x)$ from Eq. (7); the value range of $f(x)$ is [0, 2000] and the value range of $G(x)$ is [0, 2000].

Step 2: Chromosome coding design; binary coding is used.

Step 3: Calculate the fitness value in the cross-validation sense using MATLAB.

Step 4: Design the selection operator, crossover operator, and mutation operator.

Step 5: Abort condition. If the condition is satisfied, output the fitness value of the colour evaluation function; otherwise, return to Step 4.
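
For Step 2, a binary chromosome segment can be decoded into the real-valued parameter ranges of Step 1. A minimal sketch; the 16-bit-per-parameter length is an assumption, and the cross-validation fitness of Step 3 is not reproduced here.

```python
def decode(bits, lo=0.0, hi=2000.0):
    """Map a binary gene segment to a real value in [lo, hi]."""
    value = int("".join(str(b) for b in bits), 2)
    return lo + (hi - lo) * value / (2 ** len(bits) - 1)

# 32-bit chromosome: the first 16 bits encode f, the last 16 encode G
chromosome = [0, 1] * 16
f_param = decode(chromosome[:16])   # parameter for f(x), in [0, 2000]
g_param = decode(chromosome[16:])   # parameter for G(x), in [0, 2000]
```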

The fitness value of the colour evaluation function is obtained by using ten-fold cross-validation analysis:

Best $f = 2^{-9.83}$, best $G = 2^{-9.93}$, fitness $= 0.897$

The higher the fitness value of the colour evaluation function, the better the convergence of the evaluation function and the better the resulting stage lighting effect.

Classification of music labels

Using the musicnn model introduced in subsection 2.3.6, music labels are classified and analysed on music dataset A, which contains 3,000 sets of data divided into training samples (2,400 sets) and test samples (600 sets) at a ratio of 8:2; the data are normalised to the interval [0, 1] during preprocessing. The training set confusion matrix is shown in Fig. 3 and the test set confusion matrix in Fig. 4. The results show that music label classification with the musicnn model achieves an accuracy of 97.58% on the training set and 95.50% on the test set, greatly improving the efficiency of music label classification. Relying on the musicnn model and diversified equipment, lighting effects matched to the performed content can be created for the audience, integrating artistic emotion and lighting effects into a whole, creating a unique mood, and incorporating more modern colours. Musical theatre stage lighting can suspend the performance at a given moment, prompting the audience to truly feel an emotional outburst on the stage.

Figure 3.

The training set confusion matrix

Figure 4.

Test set confusion matrix
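
For reference, the public musicnn package exposes a top-tags interface that could produce the per-clip label rankings used above. A sketch, assuming the released MTT_musicnn model and a hypothetical local file name, neither of which is specified by this paper:

```python
from musicnn.tagger import top_tags

# Rank the top 50 tags for one hypothetical clip; the model name follows
# the public musicnn release (github.com/jordipons/musicnn).
tags = top_tags('clip_001.mp3', model='MTT_musicnn', topN=50)
```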

Conclusion

This paper is guided by constructivist theory in optimising musical theatre performance. Stage image data are acquired with panoramic cameras and a colour evaluation function is determined; a genetic algorithm is then used to optimise the function and formulate the music colour mixing model. Among the six colour emotion factors, the red factor occurs most often, with a count of 579, and most readily conveys the emotional values of passion, love, vitality, and positivity to the audience. The mean lighting intensity of the target musical stage is 30.9361, and proper use of stage lighting creates atmosphere, heightens emotion, and enhances the musical performance. In addition, the fitness value of the colour evaluation function obtained with the genetic algorithm is 0.897, while the musicnn model classifies the music label training and test sets with accuracies of 97.58% and 95.50%, respectively. The fusion of genetic algorithms with musical stage lighting allows the audience to truly feel the emotional effect of the musical stage, achieving the goal of optimising musical performance.
