Open Access

Research on English Learning Content Rendering and Interactive Application Based on Multimedia Technology

  
Feb 27, 2025


Figure 1. Effect of learning rate on gradient update
Figure 2. Optimization diagram of the DPAdaMod algorithm
Figure 4. Differential privacy optimization algorithm optimization
Figure 5. Differential privacy optimization algorithm based on hierarchical gradient pruning
Figure 6. APCO model optimization
Figure 7. Privacy concern-protection behavior model optimization
Figure 9. Effect of hyperbolic discount factor on the average relative error
Figure 10. Comparison of algorithm accuracy under different privacy levels
Figure 11. Impact of batch size on the accuracy of the DPAdaMod algorithm
Figure 12. Accuracy comparison of different initialization methods

Comparison of LeNet-5 and WN-LeNet-5 neural network accuracy

Algorithm    ε = 7     ε = 3     ε = 1     ε = 0.5   ε = 0.1
DPSGD        93.12%    92.65%    91.15%    90.05%    83.10%
WN-DPSGD     94.63%    93.06%    92.77%    91.63%    88.68%
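
For context, the DPSGD update behind these numbers is per-example gradient clipping followed by Gaussian noise addition; a minimal PyTorch sketch (the clipping bound C and noise multiplier sigma are illustrative placeholders, not the paper's settings):

```python
import torch

def dpsgd_step(per_example_grads, lr, C=1.0, sigma=1.0):
    """One DPSGD update: clip each example's gradient to L2 norm C,
    add Gaussian noise calibrated to C, then average.

    per_example_grads: tensor of shape (batch_size, num_params).
    Returns the noisy update direction (illustrative sketch only).
    """
    B, D = per_example_grads.shape
    # Per-example clipping: g_i <- g_i / max(1, ||g_i||_2 / C)
    norms = per_example_grads.norm(dim=1, keepdim=True)
    clipped = per_example_grads / torch.clamp(norms / C, min=1.0)
    # Sum clipped gradients, add noise scaled by sigma * C, average
    noisy_sum = clipped.sum(dim=0) + torch.normal(0.0, sigma * C, size=(D,))
    return -lr * noisy_sum / B
```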

Privacy loss bounds of different composition mechanisms

Method                          Privacy loss bound
Basic composition mechanism     $(O(qT\varepsilon),\; qT\delta)$-DP
Strong composition mechanism    $(O(q\varepsilon\sqrt{T\log(1/\delta)}),\; qT\delta)$-DP
Moments accountant mechanism    $(O(q\varepsilon\sqrt{T}),\; \delta)$-DP
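
A quick numeric comparison of the three asymptotic bounds, treating the big-O constants as 1 (an assumption for illustration only):

```python
import math

def composition_bounds(q, eps, T, delta):
    """Evaluate the three privacy-loss bounds with big-O constants
    taken as 1 (illustrative assumption only)."""
    basic   = q * T * eps                                   # basic composition
    strong  = q * eps * math.sqrt(T * math.log(1 / delta))  # strong composition
    moments = q * eps * math.sqrt(T)                        # moments accountant
    return basic, strong, moments

# Example: sampling rate q=0.01, per-step eps=0.1, T=10000 steps, delta=1e-5
print(composition_bounds(0.01, 0.1, 10_000, 1e-5))
```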

Comparison of parameters between ResNet-WN-18 and ResNet-18

Layer name             ResNet-18    ResNet-WN-18
Convolution            1728         1728
Normalization layer    128          0
Layer 1                147968       147456
Layer 2                517120       516096
Layer 3                2066432      2064384
Layer 4                8261632      8257536
Linear                 5130         5130
Total                  11000138     10992330
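
The gap comes from dropping batch normalization's learnable affine parameters (γ, β). A rough way to reproduce this kind of count with torchvision (a sketch: the WN variant here simply strips BatchNorm2d, which is one plausible reading of "ResNet-WN-18", and the totals will differ from the table's because torchvision's ImageNet stem uses a 7×7 first convolution while the table's 1728 implies a 3×3 stem):

```python
import torch.nn as nn
from torchvision.models import resnet18

def count_params(model):
    return sum(p.numel() for p in model.parameters())

def strip_batchnorm(module):
    """Replace every BatchNorm2d with Identity, removing its learnable
    affine parameters (gamma, beta) and so shrinking the count."""
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name, nn.Identity())
        else:
            strip_batchnorm(child)
    return module

bn_model = resnet18(num_classes=10)          # Linear: 512*10+10 = 5130, as in the table
wn_model = strip_batchnorm(resnet18(num_classes=10))
print(count_params(bn_model), count_params(wn_model))
```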

Comparison of neural network accuracy at different weight noise levels

Model         Weight noise level
              0         0.001     0.1       1                 2
LeNet-5       99.20%    98.72%    98.01%    Non-convergent    Non-convergent
BN-LeNet-5    99.20%    99.17%    99.17%    99.14%            99.08%
WN-LeNet-5    99.16%    99.15%    99.15%    99.12%            99.07%
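
A sketch of how such robustness numbers can be obtained: perturb every weight with zero-mean Gaussian noise at the given level and re-evaluate (the model and data loader are placeholders, and treating the noise level as a standard deviation is an assumption):

```python
import torch

@torch.no_grad()
def evaluate_with_weight_noise(model, loader, noise_std, device="cpu"):
    """Add N(0, noise_std^2) noise to every parameter, measure test
    accuracy, then restore the original weights."""
    originals = [p.detach().clone() for p in model.parameters()]
    for p in model.parameters():
        p.add_(torch.randn_like(p) * noise_std)
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    for p, orig in zip(model.parameters(), originals):
        p.copy_(orig)  # undo the perturbation
    return correct / total
```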

Default parameters

Parameter    Default
η            0.9
θ            0.01
ζ            10⁻⁶
τ            5000
—            0.1
Q            100
R            10
L            288

Experimental simulation environment

Name       Version/model
GPU        GeForce GTX 1650 (8GB)
CPU        Intel Core i7
Python     3.8.5
PyTorch    1.7.1 (GPU version)

Effect of hyperbolic discount factor on average relative error

θ        ε = 0.01   ε = 0.03   ε = 0.05   ε = 0.07   ε = 0.09
0.01     9.5906     3.1894     3.1364     1.6202     1.3371
0.1      4.0158     2.2728     1.2470     1.0315     0.8132
1        2.9822     1.3099     0.8551     0.5707     0.3469
10       0.9605     0.2263     0.2120     0.2106     0.1672
100      0.2730     0.1590     0.1488     0.1346     0.1347
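
A common form of hyperbolic discounting is $w(k) = 1/(1+\theta k)$, where a larger θ down-weights distant timestamps more aggressively; the budget-allocation rule below is an illustrative assumption, not necessarily the paper's exact scheme:

```python
def hyperbolic_weights(theta, horizon):
    """Hyperbolic discount weights w(k) = 1 / (1 + theta * k):
    larger theta discounts distant timestamps faster."""
    return [1.0 / (1.0 + theta * k) for k in range(horizon)]

def allocate_budget(total_eps, theta, horizon):
    """Split a total privacy budget across `horizon` steps in
    proportion to the hyperbolic weights (illustrative rule only)."""
    w = hyperbolic_weights(theta, horizon)
    s = sum(w)
    return [total_eps * wk / s for wk in w]

print(allocate_budget(0.05, theta=10, horizon=5))
```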

Floating-point operation counts of the batch normalization operation

Batch normalization operation                                          Number of floating-point operations
$\mu_j = \frac{1}{m}\sum_{i=1}^{m} Z_j^{(i)}$                          $H_{l-1} \times W_{l-1} \times (B-1)$ additions
$\sigma_j^2 = \frac{1}{m}\sum_{i=1}^{m} (Z_j^{(i)} - \mu_j)^2$         $(B \times H_{l-1} \times W_{l-1}) + H_{l-1} \times W_{l-1} \times (B-1)$ additions; $B \times H_{l-1} \times W_{l-1} + 1$ multiplications
$\gamma_k \frac{X_k - \mu_k I}{\sqrt{\sigma_k^2 + \varepsilon}} + \beta_k I$    $H_{l-1} \times W_{l-1}$ additions; $H_{l-1} \times W_{l-1}$ multiplications
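
These counts follow directly from the batch size B and the feature-map size $H_{l-1} \times W_{l-1}$; a small helper that encodes exactly the table's formulas (per channel):

```python
def batchnorm_flops(B, H, W):
    """Per-channel floating-point operation counts for one batch-norm
    pass, following the table's formulas (B = batch size, H/W =
    feature-map height/width of layer l-1)."""
    mean_adds = H * W * (B - 1)               # mean: sum of B values per position
    var_adds  = (B * H * W) + H * W * (B - 1) # variance: subtractions plus sum
    var_muls  = B * H * W + 1                 # variance: squares plus 1/m scaling
    norm_adds = H * W                         # normalize-and-shift additions
    norm_muls = H * W                         # normalize-and-shift multiplications
    return {"mean_adds": mean_adds, "var_adds": var_adds,
            "var_muls": var_muls, "norm_adds": norm_adds,
            "norm_muls": norm_muls}

print(batchnorm_flops(B=64, H=28, W=28))
```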

Comparison of algorithm privacy loss

Data set   Accuracy   δ        Privacy loss (DPSGD)   Privacy loss (DPAdam)   Privacy loss (DPAdaMod)
MNIST      88.00%     10⁻⁵     0.71                   0.615                   0.56
MNIST      90.00%     10⁻⁵     1.28                   1.09                    0.921
MNIST      92.00%     10⁻⁵     1.78                   1.32                    1.23
MNIST      94.00%     10⁻⁵     5.73                   3.68                    2.98
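
In practice, ε at a fixed δ is obtained from a privacy accountant rather than a closed form; a sketch assuming Opacus's RDPAccountant API (the noise multiplier, sampling rate, and step count are placeholder values):

```python
from opacus.accountants import RDPAccountant

accountant = RDPAccountant()
# Record T training steps, each sampling a fraction q of the data
# with Gaussian noise multiplier sigma (placeholder values).
for _ in range(10_000):
    accountant.step(noise_multiplier=1.1, sample_rate=0.01)

print(accountant.get_epsilon(delta=1e-5))
```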