Research on the Optimization of Personalized Learning Paths and Teaching Practice Strategies of Deep Reinforcement Learning for Dance Choreographers

  
Introduction

With the development of the times and the improvement of people's living standards, dance has become an important form of culture and art, and more and more people are beginning to learn and appreciate it. In colleges and universities, choreography has likewise become a popular major [1-4]. Dance choreography is a comprehensive art discipline that requires students to have solid basic skills and rich dance knowledge, and to be able to skillfully apply choreographic techniques in creation and performance. However, the sustainable development of choreography programs in colleges and universities and the cultivation of professional talent cannot be separated from personalized professional teaching methods [5-8].

Dance choreography is an art form covering a variety of elements; it not only requires students to undergo skill training and performance, but also calls for deeply exploring and cultivating individual characteristics. Therefore, the teaching of choreography requires not only a grasp of what students have in common, but also attention to and cultivation of individuality [9-12]. With the development and change of education, more and more educators and students are inclined toward personalized learning, and personalized curriculum design and implementation has become a very important educational task. For a course such as choreography, individualized design and implementation are especially challenging [13-16].

Educators need to understand students' backgrounds, interests, learning styles, and physical conditions, and design appropriate choreography programs and activities based on students' needs to meet their learning and developmental requirements. In addition, when designing personalized dance choreography courses, educators need to emphasize the importance of students' comprehensive literacy [17-20]. Choreography requires a great deal of physical training, cultural awareness, and aesthetic ability, and educators need to design challenging training and practice of choreographic works according to students' physical conditions and abilities [21-23] in order to help students improve their choreographic level and quality. Deep reinforcement learning can play an optimizing role for personalized learning and thereby promote the development of teaching practice [24-25].

Deep reinforcement learning combines the advantages of deep learning and reinforcement learning to provide an efficient decision-optimization method. In this paper, we first review the basic principles of reinforcement learning and deep learning, and design an adaptive learning path recommendation model for dance choreographers based on a reinforcement learning algorithm that combines value and policy. Learning goal features and domain knowledge features are added to the model, and an LSTM and a Transformer are used to predict the learner's cognitive state and knowledge point coverage, respectively, while changes in the difficulty of the learning content are also taken into account. The learner's state, actions and reward values are modeled mathematically within the Actor-Critic framework, and the D3QN algorithm is used to implement the choreography content recommendation function. In addition, the effect of learning path optimization is tested through experiments, and the practical effect of the new teaching strategy is verified through t-tests.

Deep reinforcement learning algorithms
Reinforcement learning algorithm

Reinforcement learning studies how an agent can maximize the rewards it obtains in a complex, uncertain environment. A reinforcement learning system consists of two parts: the agent and the environment. After the agent observes a state $s_t$ in the environment, it uses that state to output an action $a_t$. The environment then outputs the next state $s_{t+1}$ and the reward $r_{t+1}$ of the current action based on the action taken by the agent. The purpose of the agent is to acquire as many rewards as possible from the environment, so that it can continuously optimize its behavioral trajectory and ultimately learn the optimal behavioral policy [26].

A policy is the model by which an agent chooses its next action. The agent decides its subsequent actions according to a certain policy. Policies can be categorized into two types: stochastic and deterministic.

The stochastic policy is commonly represented by $\pi(a|s)$ in reinforcement learning, as in Equation (1): $\pi(a|s) = P(a_t = a \mid s_t = s)$

The deterministic policy is as in Equation (2): $a^* = \arg\max_a \pi(a|s)$

The value function is used to evaluate the goodness of the current state, which lies in the influence of the current state on the rewards brought by subsequent actions. The larger the value of the value function, the more considerable the future reward, and the more favorable the agent's current state is to obtaining future reward. For all $s \in S$, the value function is defined as shown in Equation (3): $V_\pi(s) = E_\pi\left[G_t \mid s_t = s\right] = E_\pi\left[\sum_{k=0}^{\infty} \gamma^{k} r_{t+k+1} \mid s_t = s\right]$

where $\gamma$ is the discount factor and the subscript $\pi$ on the expectation indicates that it is taken under policy $\pi$; its value indicates the level of reward that can be expected when policy $\pi$ is adopted. $G_t$ denotes the discounted return, represented as in Equation (4): $G_t = E\left[\sum_{k=0}^{\infty} \gamma^{k} r_{t+k+1}\right]$

The next state the agent is in is determined by the combination of its current state and the action it takes at this moment, a process that involves two key elements: the state transition probability and the reward function. Define the transition probability of taking action $a$ from state $s$ to state $s'$ at moment $t$ as $P_{ss'}^{a}$; for all $s' \in S$, $s \in S$, $a \in A(s)$, the state transition probability is as in Equation (5): $P_{ss'}^{a} = P(s_{t+1} = s' \mid s_t = s, a_t = a)$

The reward function, on the other hand, defines the extent to which the system is rewarded for performing an action in a particular state, as in Equation (6): $R(s,a) = E[r_{t+1} \mid s_t = s, a_t = a]$
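To make these definitions concrete, the short sketch below (not taken from the paper; the two-state MDP, its transition probabilities and its rewards are invented for illustration) computes a discounted return $G_t$ and evaluates $V_\pi$ for a uniform random policy by iterative policy evaluation.

```python
import numpy as np

# Toy illustration (not from the paper): discounted return G_t and
# tabular policy evaluation for an invented 2-state, 2-action MDP.
gamma = 0.9                      # discount factor
rewards = [1.0, 0.0, 2.0, 1.0]   # r_{t+1}, r_{t+2}, ...

# G_t = sum_k gamma^k * r_{t+k+1}
G_t = sum(gamma ** k * r for k, r in enumerate(rewards))
print(f"discounted return G_t = {G_t:.3f}")

# V_pi(s) = sum_a pi(a|s) * [R(s,a) + gamma * sum_s' P(s'|s,a) * V(s')]
P = {  # P[s][a] = list of (next_state, probability)
    0: {0: [(0, 0.7), (1, 0.3)], 1: [(1, 1.0)]},
    1: {0: [(0, 1.0)], 1: [(1, 1.0)]},
}
R = {0: {0: 0.5, 1: 1.0}, 1: {0: 0.0, 1: 2.0}}   # R(s, a)
pi = {0: [0.5, 0.5], 1: [0.5, 0.5]}              # uniform stochastic policy pi(a|s)

V = np.zeros(2)
for _ in range(100):             # iterative policy evaluation until (near) convergence
    V = np.array([
        sum(pi[s][a] * (R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a]))
            for a in (0, 1))
        for s in (0, 1)
    ])
print("V_pi =", V.round(3))
```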

Deep reinforcement learning algorithm

Deep reinforcement learning integrates the ability of deep learning to parse complex data with the intelligent decision-making capability of reinforcement learning; it can generate optimal decision responses directly from multidimensional input information, constructing a seamless end-to-end decision control system. In this system, the agent feeds the information generated by interacting with the environment into the network as a means of accumulating experience and driving iterative updating of the decision network parameters, with a view to learning the optimal decision policy [27].

Value-based deep reinforcement learning algorithms

The DQN algorithm is a classic value-based deep reinforcement learning algorithm. DQN combines a convolutional neural network with the Q-learning algorithm of traditional reinforcement learning and uses an experience replay mechanism: within each time step $t$, the transition sample $(s_t, a_t, r_t, s_{t+1})$ generated by the agent's interaction with the environment is stored in the replay memory $D$, where $s_t$ represents the current state, $a_t$ the action taken in that state, $r_t$ the immediate reward obtained for executing $a_t$ in $s_t$, and $s_{t+1}$ the next state resulting from $s_t$ under that action. In the training phase, small batches of transition samples are randomly selected from $D$ at certain intervals, and the stochastic gradient descent algorithm is then used to learn iteratively from these samples in order to adjust the network parameters $\theta$.

The DQN model uses a deep convolutional neural network to approximate the optimal action-value function, as in Equation (7), where $\theta_t$ are the initial network parameters: $Q(s, a, \theta_t) \approx Q^*(s, a) = \max_\pi E\left[r_t + \gamma r_{t+1} + \gamma^{2} r_{t+2} + \cdots \mid s_t = s, a_t = a, \pi\right]$

In addition to using a deep convolutional network to approximate the current value function, the DQN model uses a separate network $Q(s, a, \theta_t^-)$ to generate the target Q values, where $\theta_t^-$ are the delayed (target) network parameters. That is, $Q(s, a, \theta_t)$ is the output of the estimation network and $Q(s, a, \theta_t^-)$ is the output of the target network. The parameters of the estimation network are updated in real time; after a number of steps they are copied into the target network, which then remains fixed while the estimation network continues to be updated in real time. The DQN network parameters are updated using the TD (temporal difference) error with the following loss: $L(\theta_t) = E_{s,a,r,s'}\left[\left(Y_t - Q(s, a \mid \theta_t)\right)^{2}\right]$

At network initialization $\theta_t^- = \theta_t$, where $Y_t$ is an approximate representation of the optimization objective of the value function: $Y_t = r + \gamma \max_{a_{t+1}} Q(s_{t+1}, a_{t+1} \mid \theta_t^-)$

Taking the partial derivative with respect to the parameters $\theta_t$, for use in SGD, gives the gradient formula: $\nabla_{\theta_t} L(\theta_t) = E_{s,a,r,s'}\left[\left(Y_t - Q(s, a \mid \theta_t)\right) \nabla_{\theta_t} Q(s, a \mid \theta_t)\right]$

A significant drawback of the DQN algorithm is the Q-value overestimation problem: the agent selects the action with the highest Q-value according to a greedy strategy, but that action is not necessarily optimal [28].
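As an illustration of the value-based update described above, the following sketch (written in PyTorch, with network sizes, dimensions and the random mini-batch all assumed rather than taken from the paper) builds an estimation network and a target network, forms the TD target $Y_t$ and computes the squared-error loss $L(\theta_t)$.

```python
import torch
import torch.nn as nn

# Minimal DQN loss sketch; state_dim, n_actions and the fake batch are assumptions.
state_dim, n_actions, gamma = 8, 4, 0.99

def mlp():
    return nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                         nn.Linear(64, n_actions))

q_net = mlp()                                    # estimation network Q(s, a | theta)
target_net = mlp()                               # target network Q(s, a | theta^-)
target_net.load_state_dict(q_net.state_dict())   # theta^- <- theta at initialization

# A fake mini-batch (s, a, r, s') standing in for samples from the replay memory D.
s  = torch.randn(32, state_dim)
a  = torch.randint(0, n_actions, (32,))
r  = torch.randn(32)
s2 = torch.randn(32, state_dim)

with torch.no_grad():                            # Y_t = r + gamma * max_a' Q(s', a' | theta^-)
    y = r + gamma * target_net(s2).max(dim=1).values

q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a | theta)
loss = nn.functional.mse_loss(q_sa, y)                 # (Y_t - Q)^2 averaged over the batch
loss.backward()                                  # an SGD step on theta would follow
```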

Policy-based deep reinforcement learning algorithms

Policy-based methods are suitable for continuous or high-dimensional action spaces and have the advantages of simple policy parameterization and fast convergence.

The REINFORCE algorithm is a typical policy-based deep reinforcement learning algorithm. It is based on the idea of gradient ascent to maximize long-term returns by directly updating the parameters of the policy function. The core idea of the REINFORCE algorithm is to use the policy function to define the probability distribution of choosing an action in a given state, and then compute the gradient based on the trajectories obtained from sampling, and ultimately use the gradient ascent method to update the parameters of the policy function.

The advantage of the REINFORCE algorithm is that it can directly optimize the policy function without needing to estimate a value function, making it suitable for problems in both discrete and continuous action spaces. However, the REINFORCE algorithm also has some disadvantages, such as low sampling efficiency and high variance [29].
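The sketch below illustrates one REINFORCE update in PyTorch; the environment interaction is replaced by random placeholder trajectories and all sizes are assumptions. The log-probabilities of the sampled actions are weighted by the discounted returns and the policy parameters are updated by gradient ascent.

```python
import torch
import torch.nn as nn

# Minimal REINFORCE sketch with invented dimensions and placeholder trajectory data.
state_dim, n_actions, gamma = 4, 2, 0.99
policy = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(),
                       nn.Linear(32, n_actions))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

# One sampled trajectory (states, actions, rewards); here random placeholders.
states  = torch.randn(20, state_dim)
actions = torch.randint(0, n_actions, (20,))
rewards = torch.rand(20)

# Discounted returns G_t computed backwards along the trajectory.
returns, G = [], 0.0
for r in reversed(rewards.tolist()):
    G = r + gamma * G
    returns.append(G)
returns = torch.tensor(list(reversed(returns)))

log_probs = torch.distributions.Categorical(logits=policy(states)).log_prob(actions)
loss = -(log_probs * returns).mean()   # ascend E[G_t * grad log pi(a_t|s_t)]
opt.zero_grad(); loss.backward(); opt.step()
```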

Algorithms based on the combination of value and strategy

The Actor-Critic algorithm is a reinforcement learning algorithm that combines a policy gradient approach and a value function approach. It consists of two parts:

The Actor is responsible for tuning the parameters $\theta$ of the policy $\pi_\theta(a|s)$.

The Critic uses a parameter vector $w$ to estimate the value function $Q_w(s_t, a_t) \approx Q_\pi(s_t, a_t)$ with a policy evaluation algorithm such as temporal difference learning.

The Actor network can be described as a network that computes the probabilities of all available actions and selects the one with the highest output value, while the Critic network evaluates the selected action by estimating the value of the new state resulting from executing that action [30].

The Deterministic Policy Gradient (DPG) algorithm is a common Actor-Critic algorithm; DPG models the policy as a deterministic policy $\mu(s)$. Deterministic policies are a special case of stochastic policies, where the objective function of the target policy is averaged over the state distribution of the behavioral policy: $J(\mu_\theta) = \int_S \rho^{\mu}(s) \int_A \pi_\theta(s, a)\, r(s, a)\, da\, ds = \int_S \rho^{\mu}(s)\, r(s, \mu_\theta(s))\, ds = E_{s \sim \rho^{\mu}}\left[r(s, \mu_\theta(s))\right]$

Compared with the stochastic policy gradient, the deterministic policy gradient removes the integral over actions; the gradient only integrates over states, so there is no need for importance sampling over actions, which greatly improves efficiency. The gradient then becomes Equation (12): $\nabla_\theta J(\mu_\theta) = \int_S \rho^{\mu}(s) \nabla_\theta Q^{\mu}(s, \mu_\theta(s))\, ds = \int_S \rho^{\mu}(s) \nabla_\theta \mu_\theta(s) \nabla_a Q^{\mu}(s, a)\big|_{a=\mu_\theta(s)}\, ds = E_{s \sim \rho^{\mu}}\left[\nabla_\theta \mu_\theta(s) \nabla_a Q^{\mu}(s, a)\big|_{a=\mu_\theta(s)}\right]$
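A minimal actor update following the deterministic policy gradient, sketched in PyTorch under assumed toy dimensions (the critic here is untrained and merely stands in for $Q^{\mu}$), is shown below; the chain rule $\nabla_\theta \mu_\theta(s)\,\nabla_a Q^{\mu}(s,a)$ is realized simply by back-propagating $-Q(s, \mu_\theta(s))$ through the actor.

```python
import torch
import torch.nn as nn

# DDPG-style actor update sketch; all sizes and networks are invented placeholders.
state_dim, action_dim = 6, 2
actor  = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(),
                       nn.Linear(32, action_dim), nn.Tanh())          # mu_theta(s)
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 32), nn.ReLU(),
                       nn.Linear(32, 1))                              # Q_w(s, a)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)

s = torch.randn(64, state_dim)          # batch of states drawn from rho^mu
a = actor(s)                            # a = mu_theta(s)
q = critic(torch.cat([s, a], dim=1))    # Q_w(s, mu_theta(s))

# grad_theta J = E[ grad_theta mu_theta(s) * grad_a Q(s, a)|a=mu_theta(s) ]
actor_loss = -q.mean()                  # ascend Q, i.e. descend -Q
actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```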

Personalized learning path optimization model for choreographers
Adaptive learning path recommendation model construction

In this study, the ALPRM (adaptive learning path recommendation model) shown in Fig. 1 is constructed based on the deep reinforcement learning framework; the model consists of two layers: dynamic learning environment characterization and adaptive learning path recommendation.

In the dynamic learning environment characterization layer, the core dynamic features in the learner personality traits and domain knowledge features are extracted to characterize the dynamic learning environment.

In the adaptive learning path recommendation layer, the main components of the MDP are redefined: the "state" is defined as the representation model of the dynamic learning environment, the "action space" as the set of candidate learning objects, and the "return value" as a function of the difficulty feature of the relevant learning object. The dynamic environment feature variables are used to train the policy network for deep reinforcement learning, and finally the trained model is used to recommend the learning object that best fits the learner's current learning state.

Figure 1. ALPRM diagram integrating domain knowledge features

Characterization and computation of dynamic learning environments
Characterization of dynamic learning environments

In this study, learning target features and domain knowledge features are added to characterize the dynamic learning environment as $State_t = [e_t, Target, p(K^t), p'(K^t), Dif_t]$. The specific components of $State_t$ are: $e_t$ denotes the current learning object; $Target$ is the target knowledge point concept (the learning target can be formulated by the teacher or freely decided by the learner according to his or her own situation before learning begins); $p(K^t)$ is the cognitive state of the learner at moment $t$; $p'(K^t)$ is the predicted knowledge point concept coverage of the next step at moment $t$; and $Dif_t$ is the difficulty value of the learning object.
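For illustration only, the sketch below assembles such a state vector under assumed encodings (one-hot identifiers for $e_t$ and $Target$, random placeholder values for the predicted features); the paper does not prescribe this exact encoding.

```python
import numpy as np

# Sketch of assembling State_t = [e_t, Target, p(K^t), p'(K^t), Dif_t]; encodings assumed.
num_k = 10                            # number of knowledge point concepts (assumed)
e_t      = np.eye(num_k)[3]           # one-hot id of the current learning object
target   = np.eye(num_k)[7]           # one-hot target knowledge point concept
p_k      = np.random.rand(num_k)      # placeholder LSTM mastery estimates p(K^t)
p_k_next = np.random.rand(num_k)      # placeholder Transformer coverage p'(K^t)
dif_t    = np.array([0.4])            # current difficulty value

state_t = np.concatenate([e_t, target, p_k, p_k_next, dif_t])
print(state_t.shape)                  # feature vector fed to the policy network
```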

Calculation of the eigenvalues of the dynamic learning environment

In this study, the LSTM model was used to predict the cognitive state of the learner, the Transformer model was used to predict the conceptual coverage of the next knowledge point, and the dynamic difficulty value of the learning object was calculated based on the cognitive state of the learner.

Suppose a course $C$ contains $|K|$ knowledge point concepts, denoted as $K = \{k_1, k_2, \cdots, k_{|K|}\}$. The learners are denoted as $S = \{s_1, s_2, s_3, \cdots, s_{|S|}\}$, the exercise bank is denoted as $EB = \{e_1(k), e_2(k), \cdots, e_{len}(k)\}$, and an exercise is denoted as $e_j(k) = [e_j(k_1), e_j(k_2), e_j(k_3), \cdots, e_j(k_{|K|})]$, where $e_j(k_i)$ takes the value 0 or 1 (0 means the question does not contain the $i$th knowledge point concept, and 1 means it does). The historical answer record of a learner $s_i$ is denoted as $X_{s_i} = \{x_1, x_2, x_3, \cdots, x_{|s_i|}\}$, and the answer of learner $s_i$ to exercise $e_j$ at moment $t$ is denoted as: $x_t = \{(s_i, e_j(k), a_t) \mid s_i \in S, e_j(k) \in EB\}$

The LSTM model is used to predict learners' mastery of knowledge concepts and to track their cognitive state. The input to the LSTM model is $x_t = \{(s_i, e_j(k), a_t) \mid s_i \in S, e_j(k) \in EB\}$, the one-hot encoding of the knowledge point concepts of exercise $e_j$ is denoted by $\phi(K_t)$, and $a_t$ takes the value 0 or 1 (0 and 1 denoting an incorrect and a correct answer, respectively). The output of the model, $h_t$, is a vector whose length equals $|K|$; each of its components represents the probability of answering the corresponding knowledge point concept correctly. In this study, the model is trained with a loss function constructed from binary cross-entropy, and the optimized loss function for a single learner is denoted as: $L_s = \sum_{t=0}^{T} l_b\left(h_t \cdot \phi(K_{t+1}), a_{t+1}\right)$

where $\cdot$ denotes the dot product and $l_b$ denotes the binary cross-entropy loss.

When training of the LSTM model is finished, the historical answer records of a learner are input and the output of the model is the learner's mastery of all knowledge point concepts of the course, denoted as $p(K^t) = \left[p(k_1^t), p(k_2^t), p(k_3^t), \cdots, p(k_{|K|}^t)\right]$.
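A minimal knowledge-tracing sketch in the spirit of this LSTM predictor is given below; the sizes, the random interaction data and the simplification that each exercise covers a single concept are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Knowledge-tracing sketch: sizes, data and the one-concept-per-exercise
# simplification are all assumptions for illustration.
num_k = 10                                                  # |K| knowledge point concepts
lstm = nn.LSTM(input_size=2 * num_k, hidden_size=64, batch_first=True)
head = nn.Sequential(nn.Linear(64, num_k), nn.Sigmoid())    # h_t -> per-concept mastery

x = torch.randint(0, 2, (1, 30, 2 * num_k)).float()         # one learner, 30 interactions
next_concept = torch.randint(0, num_k, (1, 30))             # concept of the next exercise
phi_next = nn.functional.one_hot(next_concept, num_k).float()   # phi(K_{t+1})
a_next = torch.randint(0, 2, (1, 30)).float()               # correctness a_{t+1}

h, _ = lstm(x)                                              # hidden states h_t
p = head(h)                                                 # mastery probabilities p(k_i^t)
pred = (p * phi_next).sum(dim=-1)                           # h_t . phi(K_{t+1})
loss = nn.functional.binary_cross_entropy(pred, a_next)     # L_s (binary cross-entropy)
loss.backward()
```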

The Transformer model is used to predict the knowledge point concept coverage, in order to pinpoint the knowledge point concepts that the learner should learn next. Positional coding is embedded in the input of the Transformer model to characterize the sequential information in the historical learning record, and the input of the model is denoted as: $\varepsilon(t) = E_{e_t, k_t} + P_t$

where $E_{e_t, k_t}$ is the embedding vector connecting $e_t$ and $k_t$, and $P_t$ denotes the positional embedding.

The Transformer model uses a decoder to predict the knowledge point concept coverage of the next question. The decoder is connected to the encoder through a self-attention mechanism, and the output of the model, $v_t$, is finally obtained through a fully connected neural network. At training moment $t$, $v_t$ denotes the probability of occurrence of all knowledge point concepts in the course, i.e., of $K_{t+1}$. The model is obtained by minimizing the loss function $L_{ss}$, denoted as: $L_{ss} = \sum_{t=0}^{T} l_b\left(v_t \cdot \phi(k_{t+1}), 1\right)$

When training of the Transformer model is finished and a learner's exercise record is input, the output of the model is the probability of occurrence of all knowledge point concepts in the course, denoted as $c(K^t) = \left[c(k_1^t), c(k_2^t), c(k_3^t), \cdots, c(k_{|K|}^t)\right]$. In order to discover new knowledge point concepts that still have to be learned, a weighting variable equal in length to the number of knowledge point concepts is applied to the output of the Transformer model, denoted as $\omega(K^t) = \left[\omega(k_1^t), \omega(k_2^t), \omega(k_3^t), \cdots, \omega(k_{|K|}^t)\right]$. The computation of $\omega(k_i^t)$ is as follows: $\omega(k_i^t) = \begin{cases} 1 - \dfrac{r_i}{c_i}, & c_i > 0 \\ 1, & c_i = 0 \end{cases}$

where $r_i$ is the number of times knowledge point concept $k_i$ is answered correctly and $c_i$ is the number of times $k_i$ occurs. Using $p'(K^t) = c(K^t) \cdot \omega(K^t)$, the final knowledge point concept coverage for the next question is $p'(K^t) = \left[p'(k_1), p'(k_2), p'(k_3), \cdots, p'(k_{|K|})\right]$.
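The re-weighting step can be sketched as follows with invented counts and coverage probabilities (NumPy, illustration only): concepts that have always been answered correctly are down-weighted, while unseen concepts keep their full predicted coverage.

```python
import numpy as np

# Coverage re-weighting sketch; the counts and Transformer outputs are invented.
c_occ  = np.array([5, 3, 0, 8, 2])               # c_i: times concept k_i occurred
r_corr = np.array([4, 1, 0, 8, 0])               # r_i: times k_i answered correctly
c_prob = np.array([0.3, 0.2, 0.1, 0.25, 0.15])   # c(k_i^t) predicted by the Transformer

# omega(k_i) = 1 - r_i / c_i if c_i > 0, else 1 (np.maximum avoids division by zero)
omega = np.where(c_occ > 0, 1.0 - r_corr / np.maximum(c_occ, 1), 1.0)
p_next = c_prob * omega                          # p'(K^t) = c(K^t) * omega(K^t)
print(omega.round(2), p_next.round(3))
```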

Difficulty is a core factor to consider when recommending study materials for choreography. This study uses Equation (18) and Equation (19) to calculate the difficulty of an exercise: $R_e(K) = \prod_{i=1}^{n}\left(p(k_i^t) \mid e(k_i) = 1\right)$

$R_e(K)$ is the probability of answering the exercise correctly, and $p(k_i^t)$ is the degree of mastery of each knowledge concept contained in the exercise. The exercise difficulty value $Dif_t$ at moment $t$ is: $Dif_t = 1 - R_e(K)$
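A small numerical sketch of this difficulty calculation is shown below; the mastery values and exercise mask are invented, and the product form of $R_e(K)$ follows the reconstruction of Equation (18) above.

```python
import numpy as np

# Difficulty sketch: R_e(K) over the concepts the exercise contains, Dif_t = 1 - R_e.
mastery = np.array([0.9, 0.6, 0.8, 0.4, 0.7])   # p(k_i^t) for all course concepts (invented)
e_mask  = np.array([1, 0, 1, 0, 1])             # e(k_i): concepts covered by exercise e_j

R_e = np.prod(mastery[e_mask == 1])             # probability of answering correctly
dif_t = 1.0 - R_e                               # Dif_t
print(f"R_e = {R_e:.3f}, Dif_t = {dif_t:.3f}")
```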

Deep Reinforcement Learning Recommendation Algorithm

The Actor-Critic components designed in this paper include: states, actions, and return values.

State: this study considers the dynamic learning environment as the state of the Actor-Critic, characterized as $State_t = [e_t, Target, p(K^t), p'(K^t), Dif_t]$.

Action: the policy network is a pre-trained neural network model which accepts the learning environment state $State_t$, samples from the action space $C_t$ according to the saved model parameters $\theta$, predicts and outputs the exercise that best fits the current learning environment, and finally updates the learning environment to $State_{t+1}$. In this study, the knowledge point concept coverage predicted for the next step is obtained from the dynamic learning environment, and the relevant exercises are retrieved from the exercise bank to form a candidate exercise set $C_t$, which is calculated as follows: $C_t = \left\{e_j(k_i) \mid e_j \in EB,\ k_i \in p'(K^t)\right\}$

where $e_j(k_i)$ are the candidate exercises at moment $t$, $j$ is the number of exercises, and $p'(K^t)$ is the knowledge concept coverage predicted above.

Reward value: this study refines the calculation of the reward value by giving a certain reward both at each step of the agent's exploration and at the end of the exploration. The reward value function is designed as follows: $\begin{cases} R_{State_t} = \alpha \cdot R_{Step_t} + \beta \cdot R_{CSEAL}, & \alpha, \beta \in \{0, 1\} \\ R_{Step_t} = 1 - \left|\delta - Dif_t\right| \end{cases}$

$R_{CSEAL}$ is the reward value designed for the complete path, and $R_{Step_t}$ is the reward value given at each step. In the early stage of the agent's exploration, this study sets $\alpha = 1$, $\beta = 0$, so $R_{State_t} = R_{Step_t}$, which represents the reward given to the agent at each step of exploration. Here $\delta$ is the difficulty of the exercise desired by the learner; the smaller the value of $\left|\delta - Dif_t\right|$, the better the exercise meets the learner's needs. When the agent completes the exploration, this study sets $\alpha = 0$, $\beta = 1$, so $R_{State_t} = R_{CSEAL}$, which represents the return value of the whole adaptive learning path obtained by the agent on reaching the target knowledge point concept.
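The switching behaviour of this reward can be sketched in a few lines (illustrative values only; $R_{CSEAL}$ is passed in as a placeholder since its exact design function is not detailed here):

```python
# Sketch of the step/terminal reward switch (alpha, beta in {0,1}); delta is the
# learner's desired difficulty, dif_t the recommended exercise's difficulty.
def reward(dif_t, delta, terminal, r_cseal=0.0):
    alpha, beta = (0, 1) if terminal else (1, 0)
    r_step = 1.0 - abs(delta - dif_t)        # R_Step_t = 1 - |delta - Dif_t|
    return alpha * r_step + beta * r_cseal   # R_State_t

print(reward(dif_t=0.35, delta=0.4, terminal=False))                 # intermediate step
print(reward(dif_t=0.35, delta=0.4, terminal=True, r_cseal=5.0))     # end of the path
```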

The D3QN algorithm is used to implement the choreography study material recommendation function. Two Q networks are set up: the evaluation network $Q$ is used to select the exercise corresponding to the maximum return value in state $State_{t+1}$, and the target network $Q'$ is then used to calculate the real return value of that exercise, so as to obtain the target value. The interaction of the two networks effectively avoids the algorithm's "overestimation" problem. Here $\theta$ and $\theta'$ represent the parameters of the evaluation network and the target network, respectively. The target value is calculated as follows: $y_t = R_{t+1} + \gamma Q'\left(State_{t+1}, \arg\max_a Q(State_{t+1}, a, \theta), \theta'\right)$

Here $\arg\max_a Q(State_{t+1}, a, \theta)$ denotes that in state $State_{t+1}$ the evaluation network $Q$ selects the exercise with the largest payoff value based on its parameters $\theta$, and the value of this selected exercise is then computed by the target network $Q'$ to obtain the final true payoff value $y_t$. On the basis of the computed $y_t$, the mean squared error loss is computed and the parameters $\theta$ are updated by back-propagation. The formula is as follows: $Loss = \frac{1}{m}\sum_{t=1}^{m}\left(y_t - Q(State_t, C_t, \theta)\right)^{2}$
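The double-network target and loss can be sketched as follows (PyTorch, with assumed state and action-space sizes and random transitions; the real model would act over the candidate exercise set $C_t$ rather than an abstract action index):

```python
import torch
import torch.nn as nn

# Double-DQN style target sketch; dimensions and the fake batch are assumptions.
state_dim, n_actions, gamma = 16, 50, 0.9
q_eval   = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
q_target = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
q_target.load_state_dict(q_eval.state_dict())

s, s2 = torch.randn(32, state_dim), torch.randn(32, state_dim)   # State_t, State_{t+1}
a     = torch.randint(0, n_actions, (32,))                       # recommended exercises
r     = torch.rand(32)                                           # R_{t+1}

with torch.no_grad():
    a_star = q_eval(s2).argmax(dim=1)                   # argmax_a Q(State_{t+1}, a, theta)
    y = r + gamma * q_target(s2).gather(1, a_star.unsqueeze(1)).squeeze(1)   # target y_t

q_sa = q_eval(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(State_t, a, theta)
loss = ((y - q_sa) ** 2).mean()                         # mean squared error Loss
loss.backward()                                         # back-propagate to update theta
```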

After the algorithm is run for many iterations, the policy network is trained. When all the variables in the dynamic learning environment model constructed above are input into the neural network, the corresponding exercises can be output.

Personalized Learning Path Optimization and Teaching Practice Strategies
Learning path recommendation optimization experiment
Experimental program

Considering the characteristics of personalized learning path recommendation and the current research status, this paper adopts the “Dance Choreography” learning platform constructed with JSP+MySQL technology as the experimental object, and analyzes the experimental effect of the constructed personalized learning path recommendation model.

The “Dance Choreography” e-learning platform consists of four modules, namely, learning navigation, learning resources, problem solving and exploration, and learning interaction module. The knowledge items in the modules are categorized according to the chapters of the knowledge points. Among them, the resource navigation module consists of learning objectives, knowledge tree, key points and difficulties. The learning resources module consists of videos, e-lessons, and textbooks. The Problem Solving and Inquiry module consists of example problem analysis, exercises, and quizzes. The learning interaction module consists of discussion forums.

In order to improve the system's extraction and processing of learners' access paths, the original log data need to be further processed: first, the knowledge items under each learning module are redefined, as shown in Table 1.

Table 1. Knowledge item mapping

Learning module     | Knowledge item                 | Mapping
Topic selection     | Overall design                 | K1
Topic selection     | Material selection and design  | K2
Movement design     | Structure design               | K3
Movement design     | Movement arrangement           | K4
Stage presentation  | Music selection                | K5
Stage presentation  | Creation and performance       | K6
Stage presentation  | Stage composition              | K7
Movement foundation | Basic techniques               | K8
Movement foundation | Basic dance steps              | K9
Movement foundation | Common dance poses             | K10
Collection of experimental data

In this paper, a web page data collector was used to capture the learning data of 80 users from the logs of the web platform. Considering that the dance choreography learning materials limit the range of learning modules selected by users, the experiment chooses "Practice of Dance Choreography", which has a comprehensive distribution of learning modules, as the experimental collection area. The number of visits to knowledge item nodes, the learning paths, and the test scores of learning users are obtained. Among them, the node access volume refers to each learning user's clicks on, and dwell time for, each knowledge item node, as shown in Table 2.

Table 2. Learners' node traffic

Knowledge item | Clicks | Duration (seconds)
K1  | 103 | 41856
K2  | 97  | 28965
K3  | 106 | 39604
K4  | 471 | 299521
K5  | 434 | 253799
K6  | 240 | 217802
K7  | 366 | 342194
K8  | 584 | 344995
K9  | 317 | 208394
K10 | 281 | 120122
Experimental data processing

Based on the similar-learner model building method, 8 groups of similar user clusters were established for the 80 learning users, and the recommendation list TopN-1 was calculated. The parameters were then initialized and calculated according to the ant colony algorithm, mainly including the user and learning-style similarity value $C_j$, the heuristic information value $\eta_{ij}$ obtained from the learning user's cognitive level and the difficulty $d_j$ of the learning material, the optimization information value $\tau_{ij}^{new}$ from the learning user's evaluation, the pheromone factor $\alpha$, and the heuristic function factor $\beta$. The parameters were calculated as: $\eta_{ij} = 0.5$, $\tau_{ij}^{new} = 0.5$, $\alpha = 3.5$, $\beta = 5$. The calculation yields the maximum-probability knowledge item recommendation TopN-2, and the personalized recommendation path is obtained after the ordered merger of TopN-1 and TopN-2, as shown in Table 3.
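For readers unfamiliar with the ant colony selection rule, the sketch below shows the standard probability computation $p_{ij} \propto \tau_{ij}^{\alpha}\,\eta_{ij}^{\beta}$ with the factors reported above; the heuristic values are invented for illustration, and the actual TopN-2 percentages in Table 3 come from the full experiment, not from this toy computation.

```python
import numpy as np

# Standard ant-colony selection rule (illustrative; eta values are invented here).
alpha, beta = 3.5, 5.0
tau = np.array([0.5, 0.5, 0.5, 0.5])     # pheromone tau_ij (initialised to 0.5)
eta = np.array([0.5, 0.7, 0.4, 0.6])     # heuristic eta_ij (cognitive level vs. difficulty)

weights = (tau ** alpha) * (eta ** beta)
probs = weights / weights.sum()          # selection probabilities per knowledge item
ranking = np.argsort(-probs)             # knowledge items ranked for TopN-2
print(probs.round(3), ranking)
```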

Table 3. Learning paths and personalized recommendations of similar user groups

Learning level | Similar user group | Similar learning path | TopN-2
90-100 | A1 | K1, K3, K2, K4, K5, K6, K7, K8 | K9 (70%)
90-100 | A2 | K1, K2, K3, K5, K6, K4, K7, K8 | K10 (13%)
80-90  | B1 | K2, K4, K7, K6, K9, K10, K8    | K3 (71%)
80-90  | B2 | K2, K3, K4, K5, K7, K9         | K6 (80%), K8 (45%)
70-80  | C1 | K5, K4, K6, K8, K9             | K3 (60%)
70-80  | C2 | K4, K6, K7, K10, K8            | K2 (50%), K3 (65%)
60-70  | D1 | K5, K4, K10, K7                | K2 (55%), K3 (35%), K9 (60%)
60-70  | D2 | K6, K9, K10                    | K3 (51%), K4 (63%), K8 (72%), K9 (43%)
Experimental evaluation indicators

Starting from the target requirements of personalized learning path recommendation, two performance indicators are introduced: learning efficiency and the effectiveness of guidance control against learning disorientation (the "learning maze" problem). Learning efficiency indicates the rate of improvement in learning performance after a period of continuous use of the personalized learning path recommendation. Guidance-control effectiveness is measured by the increase in the knowledge item check-in rate after learning users adopt the personalized learning path recommendation program compared with before. The higher the concentration of knowledge item check-ins, the better the problem of students getting lost during online learning is alleviated.

To this end, a definition of knowledge quantity is first introduced: an e-learning platform is a network system composed of multiple knowledge nodes, each of which consists of $n$ knowledge items. Knowledge nodes and knowledge items constitute a knowledge quantity, denoted $KI$. For a certain knowledge point $t$, the knowledge items that a student has checked in constitute a knowledge quantity $KI(t)$, and the personalized recommended knowledge items adopted by the learning user constitute a knowledge quantity $KI(t, s)$. Therefore, the effectiveness of guidance control against learning disorientation can be denoted as: $Ef(U_{cur}) = \left(\sum_{t \in U_{cur}} KI(t) + \sum_{t \in U_{cur}} KI(t, s)\right) \times \frac{U_{all}}{KI_{all} \times U_{cur}}$

$U_{cur} = \{t_1, t_2, \cdots, t_L\}$ denotes the set of learned knowledge points, $U_{all}$ is the set of all knowledge points in the learning platform, and $KI_{all}$ represents the total amount of knowledge in the whole learning platform. As the formula shows, the more recommended knowledge items an individual adopts when learning a certain knowledge point, the more effectively the learning disorientation problem is solved.

Analysis of experimental results

Five dance choreography learning users are randomly selected from each of the eight similar user groups for personalized learning path recommendation, and the misguidance control rate of the personalized learning path recommendation is obtained for each group, as shown in Table 4.

Table 4. Misguidance control rate for personalized learning paths

Learning level | Similar user group | Average misguidance control rate
90-100 | A1 | 2.3
90-100 | A2 | 1.1
80-90  | B1 | 6.0
80-90  | B2 | 6.9
70-80  | C1 | 11.5
70-80  | C2 | 11.7
60-70  | D1 | 15.4
60-70  | D2 | 16.7

To compare the density of choreography knowledge item check-ins before and after personalized learning path recommendation, Figures 2 and 3 show the check-in density before and after the recommendation, respectively, and Figure 4 shows the achievement trend after the recommendation.

Figure 2. The knowledge item check-in density before recommendation

Figure 3. The knowledge item check-in density after recommendation

Figure 4. The development trend of dance performance after recommendation

The data show that the personalized learning path recommendation provides a certain degree of guidance and control over choreography learners' study: after accepting the recommended choreography learning content, learners' check-in density is significantly higher than before the recommendation. After receiving the path recommendation, the learners' dance choreography performance improved, and the improvement was especially significant for learners whose dance choreography level was between 60-70 and 70-80 points.

Effectiveness of teaching strategies in practice

In order to test whether the teaching strategy of using deep reinforcement learning for personalized recommendation of dance choreography learning paths is practically effective, two classes of dance majors at a university were randomly selected as an experimental class (N=43) and a control class (N=45). The new teaching strategy was applied in the experimental class, while the control class kept the original teaching strategy. Before the experiment began, the two classes were pretested and compared in terms of their dance choreography performance, and no significant difference was found between the pretest scores of the two classes (p>0.05); the two classes were therefore considered homogeneous and fulfilled the requirements of the experiment.

Post-measurement data

The post-test data are the end-of-semester exam results at the end of the teaching experiment on optimized and recommended dance choreography learning paths. The students drew lots to choose their dance choreography test questions and completed the choreographed work on the spot, and each student's choreography exercise was scored separately by the dance instructor of the preschool education major. The post-test scores obtained are shown in Table 5; the average score of the experimental class was 0.54 points higher than that of the control class, and a further t-test analysis was carried out to test whether the difference was statistically significant.

Table 5. Students' dance choreography performance

Items | Class A (experimental) | Class B (control)
Overall design | 7.27 | 6.93
Material selection and design | 6.88 | 6.45
Music selection | 6.85 | 6.40
Structure design | 6.60 | 5.79
Movement arrangement | 6.48 | 5.88
Stage composition | 6.82 | 6.38
Basic dance steps | 6.64 | 6.19
Common dance poses | 6.82 | 6.10
Basic techniques | 6.79 | 6.26
Creation and performance | 6.52 | 5.88
Mean | 6.77 | 6.23
T-test results for experimental classes

The pre- and post-test score data of the experimental class were entered into SPSS 19.0 for a paired samples t-test, and the results are shown in Table 6. The obtained Sig. value is 0.000, which is smaller than the significance level of 0.05, so the null hypothesis that there is no significant difference between the overall means represented by the two samples can be rejected. In other words, the paired samples t-test of the achievement data measured before and after implementing the personalized learning path recommendation-oriented dance choreography teaching experiment shows a significant difference for the students in the experimental class, with the post-test scores clearly higher than the pre-test scores.

The result of paired sample t test

Pre-test data (N=43) Post-test data (N=43) t P(sig.)
Mean 6.23 6.77 13.871 0.000*

From this, it can be judged that the experimental class students’ performance in the dance choreography course improved more significantly at the end of the experiment applying the new teaching strategy than before the experiment began.
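For reference, an equivalent paired-samples t-test can be reproduced outside SPSS; the sketch below uses SciPy with placeholder score arrays, since the real per-student scores are not published in the paper.

```python
# Illustrative only (the paper used SPSS 19.0): a paired-samples t-test in Python
# with hypothetical pre/post score arrays of the reported sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre  = rng.normal(6.23, 0.5, size=43)   # placeholder pre-test scores (N=43)
post = rng.normal(6.77, 0.5, size=43)   # placeholder post-test scores (N=43)

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")   # p < 0.05 rejects H0 of equal means
```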

The t-test results for each sub-item of the experimental class are shown in Table 7. The p-values obtained from the paired samples t-tests of the pre- and post-test scores of each sub-item for the students in experimental class A were all less than the critical value of 0.05.

Table 7. T-test results for each item of the experimental class

Items | Pre-test | Post-test | P (sig.)
Overall design | 6.95 | 7.27 | 0.007
Material selection and design | 6.23 | 6.88 | 0.000
Music selection | 6.51 | 6.85 | 0.013
Structure design | 5.91 | 6.60 | 0.000
Movement arrangement | 5.97 | 6.48 | 0.000
Stage composition | 6.41 | 6.82 | 0.001
Basic dance steps | 6.38 | 6.64 | 0.000
Common dance poses | 6.02 | 6.82 | 0.000
Basic techniques | 5.92 | 6.79 | 0.000
Creation and performance | 6.01 | 6.52 | 0.000

After the implementation of the personalized learning path recommendation-oriented choreography teaching experiment, the experimental class students' choreography scores (post-test data) in the three weak areas of "structure design", "movement arrangement" and "basic techniques" increased significantly compared with their scores before the experiment (pre-test data), by 11.68%, 8.54% and 16.69%, respectively. This means that the experimental class students' weak links in dance choreography were effectively improved in this learning-path-recommendation teaching experiment, and their dance choreography scores improved significantly compared with those before the experiment started.

T-test results for control classes

The pre-test and post-test score data of the control class B students were entered into SPSS 19.0 for paired samples t-tests. The results for the control class on the pre-test and post-test of the experiment are shown in Table 8.

Table 8. Paired samples t-test results for the control class

Items | Pre-test | Post-test | P (sig.)
Overall design | 6.81 | 7.09 | 0.264
Material selection and design | 6.30 | 6.31 | 0.452
Music selection | 6.75 | 6.33 | 0.276
Structure design | 5.50 | 5.63 | 0.166
Movement arrangement | 5.91 | 5.68 | 0.395
Stage composition | 6.42 | 6.33 | 0.381
Basic dance steps | 6.24 | 6.60 | 0.213
Common dance poses | 5.76 | 6.21 | 0.460
Basic techniques | 6.73 | 6.50 | 0.509
Creation and performance | 5.90 | 6.02 | 0.275
Mean | 6.232 | 6.27 | 0.541

The Sig. value of the t-test for the overall mean score is 0.541 > 0.05, so the null hypothesis that there is no significant difference in the overall level of dance choreography before and after the experiment in the control class is not rejected; that is, although the control class students' dance choreography performance increased slightly after the traditional mode of teaching, the change is not statistically significant. In the t-tests for each sub-dimension of choreography achievement, the Sig. values for all sub-dimensions were greater than 0.05, indicating that the differences between pre- and post-test scores for each sub-dimension were not significant.

It can be inferred that the students' dance choreography level did not improve significantly after the control class was taught with traditional teaching strategies, and that the teaching effect was not as good as that of the personalized learning path optimization and recommendation teaching strategy proposed in this paper.

Conclusion

In this study, a personalized learning path recommendation model is designed based on a deep reinforcement learning algorithm, which can dynamically provide students with appropriate choreography learning content recommendations according to the learning environment. The results show that the density of dance choreography knowledge item check-ins increases significantly after recommendation using the method of this paper, and the learning users' dance choreography scores show an upward trend when they learn with this method. In the comparative teaching practice, the post-test performance of the experimental class using the new teaching strategy is 0.54 points higher than that of the control class (p<0.05), and all sub-items of dance choreography improve significantly. Among them, the optimization effects on "structure design", "movement arrangement" and "basic techniques" were the most obvious, with improvement rates of 11.68%, 8.54% and 16.69%, respectively, while the choreography level of the control class did not improve significantly. Accordingly, this paper concludes that the proposed deep reinforcement learning approach can effectively optimize personalized learning paths for dance choreography and has a reliable effect in teaching practice.
