Research on English Learning Content Rendering and Interactive Application Based on Multimedia Technology
Published online: 27 Feb 2025
Received: 20 Oct 2024
Accepted: 21 Jan 2025
DOI: https://doi.org/10.2478/amns-2025-0138
Keywords
© 2025 Xiaokai Duan, published by Sciendo
This work is licensed under the Creative Commons Attribution 4.0 International License.
In the context of rapid advancements in technology and increasing global interconnectedness, English learning has evolved into a multifaceted domain that integrates linguistic proficiency with digital literacy and cultural understanding. Multimedia technology, as an essential tool in modern education, has brought transformative opportunities to English learning by enhancing content rendering and fostering interactive applications[1,2]. English learning plays a pivotal role in global education systems, particularly in higher education, where the demand for skilled, English-proficient graduates continues to grow. With the advent of multimedia technology, traditional learning paradigms have shifted toward more immersive, engaging, and personalized experiences. The integration of audio-visual elements, interactive platforms, and data-driven methodologies has opened new horizons for language acquisition, allowing learners to practice in simulated real-world scenarios, gain instant feedback, and tailor learning to individual needs[3,4,5].
China, in particular, has witnessed significant reforms in English learning since its recognition as a core component of comprehensive higher education. Over the past decades, the focus on traditional, rigid curricula has given way to interdisciplinary approaches, blending English education with visual communication, information technology, and cognitive sciences[6,7]. Despite these developments, significant gaps persist in curriculum design, resource allocation, and the deployment of modern teaching methodologies, which have impeded the full realization of multimedia's potential in English learning[8].
Despite its potential, multimedia-based English learning faces several critical challenges. First, current educational models often fail to integrate multimedia tools cohesively, resulting in fragmented learning experiences, and curricula frequently prioritize theoretical knowledge over practical skills, neglecting opportunities for meaningful application through interactive media[9,10]. Second, with increasing reliance on digital platforms, data privacy has emerged as a significant barrier: students' personal data is often inadequately protected, and platforms lack transparent mechanisms for managing and safeguarding this information. Third, while advanced multimedia tools are available, their classroom implementation is limited by technical barriers, insufficient teacher training, and a lack of institutional support. Finally, the design of multimedia content often fails to consider cultural nuances and local contexts, making it less effective at fostering deep engagement among learners from diverse backgrounds.
Existing research on multimedia-based English learning has primarily focused on the technological dimensions, such as the development of interactive software, gamification techniques, and AI-driven personalization. However, these studies often overlook the pedagogical implications and the cognitive-emotional interplay involved in learning processes[11].
Moreover, while privacy-preserving algorithms like DPAdaMod have emerged to address data security concerns, their application remains largely confined to theoretical domains, with limited empirical validation in educational settings. Many current models also struggle to balance privacy and learning outcomes, leading to trade-offs in either data security or educational quality[12,13].
This study addresses these gaps by proposing a comprehensive framework that integrates multimedia technologies with curriculum design and data privacy considerations to enhance English learning experiences. Key contributions include:
Leveraging multimedia tools, this research introduces a curriculum framework that emphasizes interactivity, real-world applications, and learner-centered approaches. This includes the use of simulation environments, adaptive content rendering, and gamified assessments.
By adapting the DPAdaMod algorithm, this study ensures the secure processing of student data without compromising educational effectiveness. The proposed method reduces privacy loss by approximately 34%, striking an optimal balance between accuracy and security.
Drawing from cognitive science, education theory, and technology studies, this research provides a holistic understanding of the factors influencing English learning behavior.
The integration of multimedia technology in English learning is not merely an enhancement but a paradigm shift. By addressing systemic challenges and leveraging cutting-edge tools, this research contributes to the development of dynamic, secure, and inclusive educational models. Future work will expand on these findings, exploring emerging technologies like virtual reality, blockchain for secure data sharing, and advanced AI models to further revolutionize language education.
The basic idea of the differential privacy optimization algorithm is to first use gradient clipping to limit the influence of each gradient on the output of the neural network, and then add noise to the training gradients to protect the privacy of the training data set. The privacy loss of the differential privacy optimization algorithm is closely tied to the number of training iterations: the more iterations the model runs, the greater the accumulated privacy loss and the risk of data privacy leakage, so more noise must be introduced to protect the data, which in turn degrades model performance. Accelerating model convergence is therefore an effective way to trade off model privacy against accuracy.
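The clip-then-add-noise idea can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and the toy gradients are hypothetical. Each per-example gradient is clipped to an L2 bound, the clipped gradients are summed, and Gaussian noise scaled to the clipping bound (the sensitivity) is added before averaging:

```python
import math
import random

def dp_noisy_gradient(per_example_grads, clip_norm, noise_multiplier, rng=None):
    """Clip each per-example gradient to L2 norm <= clip_norm, sum them,
    add Gaussian noise scaled to the clipping bound, then average."""
    rng = rng or random.Random(0)
    dim = len(per_example_grads[0])
    summed = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(v * v for v in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0  # clip, never amplify
        for i, v in enumerate(g):
            summed[i] += v * scale
    sigma = noise_multiplier * clip_norm  # noise calibrated to sensitivity
    n = len(per_example_grads)
    return [(s + rng.gauss(0.0, sigma)) / n for s in summed]

grads = [[3.0, 4.0], [0.1, -0.2]]  # first gradient has norm 5, gets clipped to 1
update = dp_noisy_gradient(grads, clip_norm=1.0, noise_multiplier=0.0)
print(update)  # noise disabled here, so the result is the clipped average, ~[0.35, 0.3]
```

With `noise_multiplier=0.0` the computation is deterministic, which makes the clipping behavior easy to inspect; in actual DP training the multiplier is strictly positive.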
Suppose the data collected by node 1 and node 2 are denoted by X and Y, where m denotes the number of data points collected by each node, as shown in Equation (1):

X = (x_1, x_2, ..., x_m),  Y = (y_1, y_2, ..., y_m)  (1)

The Mahalanobis distance of the vector X collected by a node is expressed by Equation (2), and the Mahalanobis distance between the two nodes is calculated by Equation (3):

D_M(X) = sqrt( (X − μ)^T Σ^(−1) (X − μ) )  (2)

D_M(X, Y) = sqrt( (X − Y)^T Σ^(−1) (X − Y) )  (3)

where Σ denotes the covariance matrix of the data and μ its mean vector.

The cosine similarity of the two vectors is calculated from Equation (4):

cos(X, Y) = (X · Y) / (‖X‖ ‖Y‖)  (4)

The vector form of Laplace noise offers higher data availability than independently generated values (Equation (5)). Applying Bayes' theorem then yields the posterior density of the data; discarding the terms not related to the data gives the final estimate.
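A minimal sketch of the node-distance measures just described, assuming a diagonal covariance matrix for simplicity (the general Mahalanobis distance uses the full inverse covariance matrix; function names are illustrative):

```python
import math

def mahalanobis(x, y, cov_diag):
    """Mahalanobis distance between two vectors for a *diagonal*
    covariance matrix: sqrt(sum((x_i - y_i)^2 / sigma_i^2))."""
    return math.sqrt(sum((a - b) ** 2 / s for a, b, s in zip(x, y, cov_diag)))

def cosine(x, y):
    """Cosine similarity: dot(x, y) / (||x|| * ||y||)."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

X, Y = [1.0, 2.0, 3.0], [2.0, 2.0, 1.0]
m = mahalanobis(X, Y, [1.0, 1.0, 1.0])  # unit covariance reduces to Euclidean distance
c = cosine(X, Y)
print(m, c)
```

With unit variances the Mahalanobis distance collapses to the ordinary Euclidean distance, which is a quick sanity check on the formula.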
As shown in Figure 1, when the learning rate is set too high, large weight updates occur, the training loss curve exhibits severe oscillation, and the model struggles to converge. When the learning rate is very low, the weights update slowly, it is difficult to reach the optimum quickly, and the number of iterations increases, which inflates the privacy budget.

Effect of learning rate on gradient update
Therefore, when the initial learning rate is chosen poorly and cannot be updated adaptively according to the actual training situation, more privacy budget is ultimately required to find the model's optimum, which is detrimental to the trade-off between model privacy and accuracy.
The differential privacy optimization algorithm is built on a neural network optimization algorithm, and the privacy loss accounting of differential privacy is closely related to the number of iterations. This section designs an optimization algorithm that satisfies the definition of differential privacy, the DPAdaMod algorithm, based on the AdaMod algorithm. It brings several benefits to scenarios where differential privacy protects the training dataset of a neural network: it adaptively updates the learning rate, accelerates model convergence, reduces the number of optimization iterations, and thereby reduces privacy loss. At the same accuracy, DPAdaMod requires fewer training iterations than non-adaptive differentially private optimization algorithms thanks to its adaptive learning-rate updates, i.e., DPAdaMod incurs less privacy loss. Figure 2 shows the optimization diagram of the DPAdaMod algorithm.

Optimization diagram of DPAdaMod algorithm
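The adaptive-learning-rate mechanism AdaMod adds on top of Adam can be sketched as below. This is a simplified scalar-list version for illustration only, not the authors' DPAdaMod implementation; it omits the clipping and noise steps and uses the standard AdaMod hyperparameters (β1, β2, β3):

```python
import math

def adamod_step(params, grads, state, lr=0.01, b1=0.9, b2=0.999, b3=0.999, eps=1e-8):
    """One AdaMod update: the Adam per-parameter step size is bounded by a
    long-term exponential moving average, preventing extreme learning rates."""
    state["t"] += 1
    t = state["t"]
    out = []
    for i, (p, g) in enumerate(zip(params, grads)):
        m = state["m"][i] = b1 * state["m"][i] + (1 - b1) * g      # first moment
        v = state["v"][i] = b2 * state["v"][i] + (1 - b2) * g * g  # second moment
        m_hat = m / (1 - b1 ** t)
        v_hat = v / (1 - b2 ** t)
        eta = lr / (math.sqrt(v_hat) + eps)       # Adam per-parameter step size
        s = state["s"][i] = b3 * state["s"][i] + (1 - b3) * eta
        eta = min(eta, s)                         # AdaMod: clip by the running average
        out.append(p - eta * m_hat)
    return out

state = {"t": 0, "m": [0.0], "v": [0.0], "s": [0.0]}
p = [1.0]
for _ in range(3):
    p = adamod_step(p, [0.5], state)
print(p)  # moves slightly below 1.0: early steps are damped by the bound
```

Because the bound `s` starts at zero and grows gradually, AdaMod has a built-in warm-up effect, which is what damps the early oscillation described in Figure 1.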
Experimental simulations were performed via MATLAB on a machine equipped with a 3.4 GHz processor and 8 GB of RAM. Unless otherwise stated, the default parameters are summarized in Table 1.
Parameter | Default |
---|---|
| 0.9 |
| 0.01 |
| 10^−6 |
| 5000 |
ε | 0.1 |
Q | 100 |
R | 10 |
L | 288 |
Table 2 compares the privacy budget boundary values for the same number of training rounds under three accounting methods: simple composition, the strong composition theorem, and the moments accountant. As Table 2 shows, the privacy budget boundary of the model is tighter when the moments accountant is used for privacy accounting, i.e., the moments accountant mechanism allows more training rounds at the same privacy level, which lets the model train further. From the earlier analysis of the three composition mechanisms: simple composition considers only the range of the privacy loss values, strong composition considers the range of the privacy loss values and their first-order moments, and the moments accountant considers all moments of the privacy loss, so the privacy loss boundary calculated by the moments accountant is the tightest.
Method | Privacy budget boundary value |
---|---|
Simple composition mechanism | (kε, kδ) |
Strong composition mechanism | (ε·sqrt(2k·ln(1/δ′)) + kε(e^ε − 1), kδ + δ′) |
Moments accountant mechanism | (O(ε·sqrt(k)), δ) |
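The difference in growth rates can be checked numerically. The sketch below compares simple composition (linear in the number of rounds k) with the standard advanced/strong composition bound; the moments accountant requires numerically bounding moment-generating functions, so it is omitted here. Function names and the example values are illustrative:

```python
import math

def simple_composition(eps, k):
    """Basic composition: per-round epsilons add up linearly."""
    return k * eps

def strong_composition(eps, k, delta_prime=1e-5):
    """Advanced composition theorem: grows roughly like sqrt(k) for small eps."""
    return math.sqrt(2 * k * math.log(1 / delta_prime)) * eps + k * eps * (math.exp(eps) - 1)

eps, k = 0.1, 1000
print(simple_composition(eps, k))   # linear growth: ~100
print(strong_composition(eps, k))   # much smaller in this regime (~26)
```

The gap widens further under the moments accountant, which is why it permits more training rounds at the same privacy level.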
The experimental simulation environment in this paper is shown in Table 3:
Name | Version/model |
---|---|
GPU | GeForce GTX 1650 (8GB) |
CPU | Intel Core i7 |
Python | 3.8.5 |
PyTorch | 1.7.1 (GPU version) |
The role of gradient clipping in the differential privacy optimization algorithm is to bound the sensitivity of the model parameters and provide the basis for the noise-addition step of the differential privacy algorithm. Therefore, in the differential privacy optimization algorithm, the updated gradient is first clipped and then noise is added, as shown in Figure 4.

Differential privacy optimization algorithm optimization
Each layer of a neural network has a different role and function, so the number of parameters in the updated gradients differs from layer to layer. Using a fixed clipping threshold to clip every gradient equally ignores the fact that different layers of the neural network have different gradient norms. In contrast, the layered gradient clipping method accounts for this: different clipping thresholds are set for different layers, i.e., gradients in the same layer are clipped with the same threshold, and gradients in different layers are clipped with different thresholds, as shown in Figure 5.

Differential privacy optimization algorithm based on hierarchical gradient clipping
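A hypothetical sketch of the layered scheme (the per-layer thresholds and toy gradients are illustrative, not values from the paper): each layer's gradient is clipped against its own bound rather than one global bound:

```python
import math

def clip_per_layer(layer_grads, layer_thresholds):
    """Clip each layer's gradient with its own L2 threshold, instead of a
    single global bound for the whole network."""
    clipped = []
    for g, c in zip(layer_grads, layer_thresholds):
        norm = math.sqrt(sum(v * v for v in g))
        scale = min(1.0, c / norm) if norm > 0 else 1.0
        clipped.append([v * scale for v in g])
    return clipped

grads = [[6.0, 8.0], [0.3, 0.4]]       # layer norms: 10 and 0.5
out = clip_per_layer(grads, [1.0, 1.0])
print(out)  # first layer rescaled to norm 1, second layer left untouched
```

Per-layer thresholds keep small-norm layers from being drowned by noise calibrated to the largest layer's sensitivity.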
All factors affecting privacy concerns are categorized as antecedents, including privacy experience, personality differences, demographic variables, and cultural climate; meanwhile, the various changes caused by privacy concerns are categorized as outcomes, such as privacy behaviors and privacy trust. The whole framework is called the APCO model, see Figure 6.

APCO model optimization
Behavior is the final outcome, and privacy concern is analyzed as a factor influencing behavior. Therefore, this study focuses on the privacy concern-outcome path in the APCO model and replaces the outcome with behavior, exploring from the behavior itself the various factors that influence it. There are two types of privacy behavior: privacy disclosure and privacy protection. Privacy disclosure refers to the behavioral intention to disclose, while privacy protection emphasizes the protective measures taken after personal information has been disclosed. In this study, the path in the APCO model concerning risk-benefit calculation is selected according to privacy calculus theory, as shown in Figure 7.

Privacy Concern - Protection Behavior Model Optimization
To further analyze the effect of the hyperbolic discount factor θ on data availability, θ = {0.01, 0.1, 1, 10, 100} was chosen to analyze the data availability of the θ-HDPRF algorithm. The average relative errors of the θ-HDPRF algorithm under different hyperbolic discount factors θ are given in Figure 9 and Table 4. As shown in Figure 9, the average relative error of the θ-HDPRF algorithm decreases with increasing θ when ε = 0.01. The θ-HDPRF algorithm follows the same pattern at the other privacy levels. Therefore, a larger θ provides higher data availability for the θ-HDPRF algorithm.

Effect of hyperbolic discount factor on the average relative error
θ | ε = 0.01 | ε = 0.03 | ε = 0.05 | ε = 0.07 | ε = 0.09 |
---|---|---|---|---|---|
0.01 | 9.5906 | 3.1894 | 3.1364 | 1.6202 | 1.3371 |
0.1 | 4.0158 | 2.2728 | 1.2470 | 1.0315 | 0.8132 |
1 | 2.9822 | 1.3099 | 0.8551 | 0.5707 | 0.3469 |
10 | 0.9605 | 0.2263 | 0.2120 | 0.2106 | 0.1672 |
100 | 0.2730 | 0.1590 | 0.1488 | 0.1346 | 0.1347 |
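The claimed trend, that the average relative error falls as θ grows at every privacy level, can be checked mechanically against Table 4:

```python
# Average relative errors from Table 4, keyed by theta; columns are the
# privacy levels epsilon = 0.01, 0.03, 0.05, 0.07, 0.09
table4 = {
    0.01: [9.5906, 3.1894, 3.1364, 1.6202, 1.3371],
    0.1:  [4.0158, 2.2728, 1.2470, 1.0315, 0.8132],
    1:    [2.9822, 1.3099, 0.8551, 0.5707, 0.3469],
    10:   [0.9605, 0.2263, 0.2120, 0.2106, 0.1672],
    100:  [0.2730, 0.1590, 0.1488, 0.1346, 0.1347],
}

thetas = sorted(table4)
# For every column, each step up in theta must not increase the error
monotone = all(
    table4[a][c] >= table4[b][c]
    for c in range(5)
    for a, b in zip(thetas, thetas[1:])
)
print(monotone)  # True
```

This confirms the within-column pattern the text describes; note the error is not necessarily monotone across privacy levels within a row.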
In summary, the above analysis revealed an inconsistency between students' data privacy concerns and their data privacy protective behaviors, though one that differs from the general privacy paradox phenomenon. Used as moderating variables, privacy concern and reward seeking both influenced students' data privacy protective behaviors. The law of students' physical and mental development constrains the curriculum of higher education in China; for visual communication English teaching, this highlights that the curriculum should follow the basic order of foundation courses first, then major courses.
This section compares the spatial and temporal complexity of the ResNet-18 and ResNet-WN-18 neural networks. The comparison reveals that ResNet-WN-18 is not only better suited to differential privacy scenarios but also has lower complexity. The reason is that ResNet-WN-18 replaces the batch normalization layers of the original ResNet-18 with weight normalization layers; weight normalization is a reparameterization of the existing weights and introduces no new parameters, whereas batch normalization computes the statistical features of each layer's input data in order to normalize it, which inevitably adds complexity. The spatial complexity is the total number of parameters of the model. As Table 5 shows, the spatial complexity of ResNet-WN-18 is smaller than that of ResNet-18.
Layer name | ResNet-18 | ResNet-WN-18 |
---|---|---|
Convolution | 1728 | 1728 |
Normalization layer | 128 | 0 |
Layer 1 | 147968 | 147456 |
Layer 2 | 517120 | 516096 |
Layer 3 | 2066432 | 2064384 |
Layer 4 | 8261632 | 8257536 |
Linear | 5130 | 5130 |
Total | 11000138 | 10992330 |
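The first rows of Table 5 can be reproduced by simple parameter counting (the helper names below are illustrative; layers 2-4 also contain 1x1 downsampling convolutions, which are omitted here for brevity):

```python
def conv_params(k, c_in, c_out, bias=False):
    """Parameters in a k x k convolution: k*k*c_in*c_out (+ optional bias)."""
    return k * k * c_in * c_out + (c_out if bias else 0)

def bn_params(channels):
    """BatchNorm learns a scale and a shift per channel: 2*C parameters."""
    return 2 * channels

# First 3x3 convolution of ResNet-18: 3 input channels, 64 filters
assert conv_params(3, 3, 64) == 1728   # matches the "Convolution" row of Table 5
assert bn_params(64) == 128            # the BN layer removed in ResNet-WN-18

# Layer 1: two basic blocks = four 3x3 convs at 64 channels (+ 4 BN layers)
layer1_wn = 4 * conv_params(3, 64, 64)           # weight-normalized variant
layer1_bn = layer1_wn + 4 * bn_params(64)        # original BN variant
print(layer1_wn, layer1_bn)  # 147456 147968, matching Table 5
```

The per-layer difference is exactly the 2C batch-norm parameters per normalization layer, which is where ResNet-WN-18's smaller total comes from.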
Since the structures of ResNet-WN-18 and ResNet-18 differ only in the weight normalization layer versus the batch normalization layer, with the rest of the structure identical, the time complexity difference between the two networks can be derived by simply comparing the number of floating-point operations in the weight normalization layer and the batch normalization layer, as shown in Table 6.
Batch normalization operation | Floating point number of operations |
---|---|
Moreover, the accuracy of the DPAdaMod algorithm is higher than that of the DPAdam algorithm at high, medium, and low privacy levels. The moment-limiting feature of DPAdaMod effectively prevents the learning rate from becoming too large, makes each adaptive learning-rate update finer-grained, and ensures that the neural network maintains good performance in the noise-added scenario, as shown in Figure 10.

Comparison of algorithm accuracy under different privacy levels
To further verify that the DPAdaMod algorithm better balances privacy and accuracy, Table 7 shows the privacy loss of the three algorithms, DPSGD, DPAdam, and DPAdaMod, when they reach accuracies of 88%, 90%, 92%, and 94%. From the table, it can be seen that the privacy loss of DPAdam is on average about 22% lower than that of DPSGD, the privacy loss of DPAdaMod is on average about 34% lower than that of DPSGD, and the privacy loss of DPAdaMod is on average about 13% lower than that of DPAdam. This result demonstrates that, at the same accuracy, DPAdaMod requires fewer model iterations and incurs lower privacy loss than the other two algorithms.
Data set | Accuracy | DPSGD privacy loss | DPAdam privacy loss | DPAdaMod privacy loss |
---|---|---|---|---|
MNIST (δ = 10^−5) | 88.00% | 0.71 | 0.615 | 0.56 |
| 90.00% | 1.28 | 1.09 | 0.921 |
| 92.00% | 1.78 | 1.32 | 1.23 |
| 94.00% | 5.73 | 3.68 | 2.98 |
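The average reductions can be recomputed directly from Table 7. The sketch below uses one plausible averaging convention (mean of per-row relative reductions); the source may average differently, which would account for the small gap between the ~32% computed here and the reported ~34%:

```python
# Privacy loss per target accuracy (88%, 90%, 92%, 94%), from Table 7
dpsgd    = [0.71, 1.28, 1.78, 5.73]
dpadam   = [0.615, 1.09, 1.32, 3.68]
dpadamod = [0.56, 0.921, 1.23, 2.98]

def avg_reduction(base, other):
    """Mean relative reduction in privacy loss versus a baseline."""
    return sum((b - o) / b for b, o in zip(base, other)) / len(base)

print(round(100 * avg_reduction(dpsgd, dpadam), 1))    # ~22, matching the text
print(round(100 * avg_reduction(dpsgd, dpadamod), 1))  # ~32, close to the reported 34
```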
According to Figure 11, the performance tends to increase with increasing batch size, peaks at L = 600, and eventually decreases. The results are consistent with the above analysis, i.e., both too high and too low sampling ratios lead to a decrease in performance.

Impact of Batch Size on the Accuracy of DPAdaMod Algorithm
The influence of the knowledge factor on the curriculum is mainly reflected in the horizontal expansion of subject areas and the vertical extension of the knowledge structure. On the one hand, as a product of the information age, visual communication design is no longer limited to the field of design but has expanded into communication studies, media, science, and marketing, a broader scope than in the past, which creates an objective requirement to offer more subjects in professional training. On the other hand, the knowledge structure has also developed, most notably from a two-dimensional static space into a multi-dimensional dynamic space, so the content of the curriculum must be updated accordingly. Overall, knowledge factors shape both the breadth and the depth of the curriculum.
The WN-INIT-DP algorithm achieves the trade-off between model privacy and accuracy in two ways: by enhancing the stability of the neural network structure and by reducing privacy loss. Therefore, this section first verifies, in differential privacy scenarios, the effect of weight normalization and of the initialization methods on shallow neural networks, and then verifies how the weight normalization layer combined with the corresponding initialization method trades off privacy against accuracy for deep residual neural networks. Table 8 verifies that the weight-matrix scale-invariance property of the normalization method enhances the stability of the neural network.
Model | Weight noise level = 0 | 0.001 | 0.1 | 1 | 2 |
---|---|---|---|---|---|
LeNet-5 | 99.20% | 98.72% | 98.01% | Nonconvergence | Nonconvergence |
BN-LeNet-5 | 99.20% | 99.17% | 99.17% | 99.14% | 99.08% |
WN-LeNet-5 | 99.16% | 99.15% | 99.15% | 99.12% | 99.07% |
Since the batch normalization method involves the statistical properties of the data, additional methods would be needed to protect the privacy of this statistical information. Therefore, Table 9 only compares the accuracy of the LeNet-5 neural network trained with the DPSGD algorithm with and without the weight normalization layer. As Table 9 shows, the accuracy of LeNet-5 with the weight normalization layer is higher than without it, at both high and low privacy levels. Between ε = 7 and ε = 0.1, the accuracy of the LeNet-5 neural network differs by 10.02%, while the accuracy of the WN-LeNet-5 neural network differs by only 5.95%. The experiments show that, as the privacy level increases, the accuracy of LeNet-5 with the weight normalization layer decreases less than that of LeNet-5 without it, i.e., the weight scale-invariance property of weight normalization enhances the stability of the neural network structure and mitigates the effect of the differential privacy algorithm on accuracy.
Algorithm | ε = 7 | ε = 3 | ε = 1 | ε = 0.5 | ε = 0.1 |
---|---|---|---|---|---|
DPSGD | 93.12% | 92.65% | 91.15% | 90.05% | 83.10% |
WN-DPSGD | 94.63% | 93.06% | 92.77% | 91.63% | 88.68% |
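The scale-invariance property these experiments rely on follows directly from the weight normalization reparameterization w = g·v/‖v‖: rescaling the direction vector v leaves the effective weight unchanged. A toy sketch (not the paper's code; the function name is illustrative):

```python
import math

def weight_norm(v, g):
    """Weight normalization: w = g * v / ||v||. The direction vector v can
    be rescaled arbitrarily without changing the effective weight w."""
    norm = math.sqrt(sum(x * x for x in v))
    return [g * x / norm for x in v]

v = [3.0, 4.0]
w1 = weight_norm(v, g=2.0)
w2 = weight_norm([10.0 * x for x in v], g=2.0)  # rescale v by a factor of 10
print(w1, w2)  # identical effective weights: scale invariance
```

This is why perturbations to the weight scale (e.g. injected noise) affect WN-LeNet-5 far less than plain LeNet-5 in Tables 8 and 9.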
As can be seen from Figure 12, with an initialization method the neural network starts training at a higher accuracy, and the model converges faster than without one. Fewer training iterations are needed to reach a given accuracy, and the He initialization method performs better than the Xavier initialization method.

Accuracy Comparison of Different Initialization Methods
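The two initialization rules compared in Figure 12 differ only in how they scale the variance of the initial weights; a small sketch of the standard formulas (not the paper's code, and the fan sizes are example values):

```python
import math

def xavier_std(fan_in, fan_out):
    """Xavier/Glorot initialization: std = sqrt(2 / (fan_in + fan_out)),
    designed to keep variance stable for symmetric activations."""
    return math.sqrt(2.0 / (fan_in + fan_out))

def he_std(fan_in):
    """He initialization: std = sqrt(2 / fan_in), the extra factor of 2
    compensating for ReLU zeroing half of the activations."""
    return math.sqrt(2.0 / fan_in)

fan_in, fan_out = 512, 512
print(xavier_std(fan_in, fan_out))  # ~0.0442
print(he_std(fan_in))               # 0.0625, larger to offset ReLU
```

For ReLU networks such as those used here, He's larger scale preserves signal variance through depth, which is consistent with its faster convergence in Figure 12.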
To sum up, the general courses of college visual communication English teaching are mostly based on history and theory, the basic courses mostly on modeling training and morphological composition, and the professional courses mostly on the design specifications, forms, contents, and media applications of modern visual communication design in practice. On the whole, the course content focuses on the field of modern visual communication design and is highly practical.
This study investigates the integration of multimedia technology into English learning, focusing on curriculum design, interactive applications, and data privacy. By addressing key challenges such as resource constraints, outdated content, and students' insufficient data privacy awareness, we propose a comprehensive framework that bridges the gap between traditional learning methodologies and modern technological advancements.
The introduction of the DPAdaMod algorithm represents a significant contribution to the field. This privacy-preserving optimization algorithm not only accelerates model convergence but also achieves a balance between privacy protection and learning accuracy, reducing privacy loss by 34% compared to traditional methods. Simulation experiments validate the efficacy of DPAdaMod in optimizing neural network training while ensuring robust data protection.
Additionally, the study underscores the importance of curriculum reform in enhancing the quality of English learning. By incorporating personalized learning strategies supported by multimedia technologies, this research aligns curriculum development with the evolving demands of digital education and global market trends. The analysis of privacy concerns and protective behaviors further enriches the understanding of student interactions with educational platforms, paving the way for more secure and user-centric learning environments.