Open Access

Optimization and Improvement of BP Decoding Algorithm for Polar Codes Based on Deep Learning

Aug 16, 2023

Figure 1. Structure of polar code
Figure 2. Multi-layer structure of deep neural network
Figure 3. Block diagram of neural network based decoder system
Figure 4. Performance of different network structures at N=8
Figure 5. Performance of different network structures at N=16
Figure 6. Performance of different network structures at N=32
Figure 7. Structure diagram of the proposed MLP-BP
Figure 8. Interaction of BP and DNN blocks
Figure 9. Change of MLP-BP training loss value when N=128
Figure 10. Evolution of MLP-BP and BP BER when N=128
Figure 11. BER performance comparison of two decoding methods at N=32
Figure 12. BER performance comparison of two decoding methods at N=128

Decoding Time Delay

Algorithm              BP     MLP-BP
Decoding time delay    380    72

Proposed MLP-BP decoding algorithm

1: Input: y_0, y_1, ..., y_{N-1}
2: Output: u_0, u_1, ..., u_{N-1}
3: Initialization: initialize LLR(y_j) using equation (2)
4: for iter ← 1 to iter_max do
5:   for i ← n+1 to n_NND do
6:     update L_{i,j}^{iter} using equation (3)
7:   end for
8:   after reaching stage n_NND, use the sub-block network NND_sub to compute u_sub
9:   re-encode u_sub to obtain x_sub
10:  if x_sub passes the CRC check then
11:    compute R_{n_NND,sub}^{iter} using equation (7)
12:  end if
13:  Retransmission
14:  for i ← n_NND to n do
15:    update R_{i+1,j}^{iter} using equation (3)
16:  end for
17: end for
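
For readability, the Python sketch below mirrors the control flow of this listing; it is not the paper's implementation. The helper names (nnd_subblock, re_encode, crc_ok, sub_idx) are hypothetical stand-ins for the trained sub-network, the polar re-encoder, the CRC check, and the sub-block indexing, and the simple message relays marked as placeholders take the place of equations (2), (3), and (7), which are not reproduced in this excerpt.

```python
import numpy as np


def mlp_bp_iterations(llr, nnd_subblock, re_encode, crc_ok, sub_idx,
                      n_nnd, iter_max=40):
    """Control-flow sketch of the listing above; helper names are hypothetical.

    llr          -- channel LLRs LLR(y_j), length N (initialization, step 3)
    nnd_subblock -- trained DNN sub-decoder: stage-n_NND messages -> u_sub (step 8)
    re_encode    -- polar re-encoding of u_sub into x_sub, 0/1 bits (step 9)
    crc_ok       -- CRC check on x_sub (step 10)
    sub_idx      -- column indices of the NND sub-block in the factor graph
    n_nnd        -- stage at which the NND replaces the remaining BP stages
    """
    N = len(llr)
    n = int(np.log2(N))
    L = np.zeros((n + 1, N))        # left-propagating messages L_{i,j}
    R = np.zeros((n + 1, N))        # right-propagating messages R_{i,j}
    L[n] = llr                      # step 3: initialize with LLR(y_j)
    u_sub = None

    for _ in range(iter_max):                       # step 4
        for i in range(n, n_nnd, -1):               # steps 5-7: leftward pass
            # Placeholder relay; the paper updates L_{i,j} with equation (3).
            L[i - 1] = L[i] + R[i - 1]
        u_sub = nnd_subblock(L[n_nnd, sub_idx])     # step 8: DNN sub-decoder
        x_sub = re_encode(u_sub)                    # step 9: re-encode estimate
        if crc_ok(x_sub):                           # step 10: CRC check passes
            # Step 11: feed the verified sub-block back as confident R
            # messages (the role of equation (7)); a large LLR is used here.
            R[n_nnd, sub_idx] = (1.0 - 2.0 * np.asarray(x_sub)) * 1e3
        for i in range(n_nnd, n):                   # steps 14-16: rightward pass
            # Placeholder relay; the paper updates R_{i+1,j} with equation (3).
            R[i + 1] = R[i] + L[i + 1]
    # The final estimates u_0..u_{N-1} are formed from u_sub and the converged
    # messages; that combination step is omitted here as in the listing.
    return u_sub, L, R
```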

Network Structure

Code length    32-16-8    128-64-32    512-256-128
N=8            1024       11752        169846
N=16           1352       13488        174992
N=32           1352       13488        174992

Parameter Setting

Setting                  Value
Test platform            TensorFlow
Encoding                 Polar(32,16), (64,128)
Signal-to-noise ratio    1~5 dB
Loss function            Cross-entropy loss
Optimizer                Adam

Parameter Settings

Parameter             Value
Code length           8, 16, 32
Code rate             0.5
Batch size            512
Learning rate         0.001
Training set size     10^6
Epochs                10^3
Network structure     32-16-8, 128-64-32, 512-256-128
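
Under the settings listed in the two parameter tables above, the DNN decoder can be set up roughly as follows. This is a minimal sketch, not the authors' code: the information set info_idx, the SNR-to-noise conversion, and the reduced training-set size and epoch count are assumptions made for brevity; only the layer widths (128-64-32, shown here for N=16, K=8), optimizer, loss, batch size, and learning rate come from the tables.

```python
import numpy as np
import tensorflow as tf

# Assumed setup for illustration: N = 16, K = 8, hidden layers 128-64-32,
# Adam with learning rate 0.001, batch size 512, binary cross-entropy.
# The information set `info_idx` is a placeholder, not the set used in the paper.
N, K = 16, 8
info_idx = np.arange(N - K, N)

def polar_transform(u):
    # x = u * F^{kron n} mod 2 with F = [[1, 0], [1, 1]]; bit-reversal omitted.
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = F
    while G.shape[0] < u.shape[1]:
        G = np.kron(G, F)
    return (u @ G) % 2

def make_batch(size, snr_db):
    msg = np.random.randint(0, 2, (size, K))
    u = np.zeros((size, N), dtype=np.uint8)
    u[:, info_idx] = msg
    x = polar_transform(u)
    s = 1.0 - 2.0 * x                                    # BPSK mapping
    sigma = np.sqrt(1.0 / (2.0 * 10 ** (snr_db / 10)))   # illustrative noise std
    y = s + sigma * np.random.randn(size, N)
    llr = 2.0 * y / sigma ** 2                           # channel LLRs fed to the network
    return llr.astype(np.float32), msg.astype(np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(K, activation="sigmoid"),      # one output per information bit
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])

x_train, y_train = make_batch(2 ** 16, snr_db=3.0)       # paper uses a 10^6-sample set
model.fit(x_train, y_train, batch_size=512, epochs=5)    # paper trains for ~10^3 epochs
```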

Polar(32,16) Divided into Four Parts

Partition   Information bits            Relative location   Code rate
[0–7]       None                        None                0
[8–15]      {11,12,13,14}               {3,5,6,7}           0.5
[16–23]     {19,21,22,23}               {3,5,6,7}           0.5
[24–31]     {24,25,26,27,28,29,30,31}   {0,1,2,3,4,5,6,7}   1

Polar(32,16) Divided into Two Parts

Partition   Information bits                          Code rate
[0–15]      {11,12,14,15}                             0.25
[16–31]     {19,21,22,23,24,25,26,27,28,29,30,31}     0.75
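
The relative-location and code-rate columns in these two tables follow directly from where each information bit falls inside its sub-block. The small helper below (names are illustrative, not from the paper) reproduces that bookkeeping; the example information set is the one listed in the two-part table.

```python
def partition_stats(info_bits, block_size, num_parts):
    """For each equal-length partition of a length-`block_size` polar code,
    list the contained information bits, their positions relative to the
    partition start, and the partition's local code rate."""
    part_len = block_size // num_parts
    rows = []
    for p in range(num_parts):
        lo, hi = p * part_len, (p + 1) * part_len
        local = sorted(b for b in info_bits if lo <= b < hi)
        rows.append({
            "partition": (lo, hi - 1),
            "info_bits": local,
            "relative": [b - lo for b in local],
            "rate": len(local) / part_len,
        })
    return rows


# Information set of Polar(32,16) as given in the two-part table above.
info_bits = {11, 12, 14, 15, 19, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31}
for row in partition_stats(info_bits, block_size=32, num_parts=2):
    print(row)   # partitions [0-15] and [16-31] with rates 0.25 and 0.75
```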