Designing Convolutional Neural Network Architecture Using Genetic Algorithms
22 February 2021
About this article
Published online: 22 February 2021
Pages: 26–35
DOI: https://doi.org/10.21307/ijanmc-2021-024
© 2021 Ashray Bhandare et al., published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Figure 1. An example of CNN architecture [10]

Parameters of the genetic operations

| Parameter | Value |
|---|---|
| Tournament selection size | 2 |
| Crossover probability | 50% |
| Mutation probability | 80% |
| Genes mutated | 10% |
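Read as a configuration, these values map directly onto the three standard GA operators: size-2 tournament selection, recombination applied to half of the parent pairs, and a mutation pass that fires on 80% of offspring and resamples about 10% of their genes. The sketch below is a minimal plain-Python illustration under that reading; the paper does not publish its implementation, so the function names and the single-point crossover choice are assumptions.

```python
import random

# GA settings taken from the table above; names are illustrative.
TOURNAMENT_SIZE = 2        # individuals compared per selection
CROSSOVER_PROB = 0.50      # chance a parent pair is recombined
MUTATION_PROB = 0.80       # chance an offspring is mutated at all
GENE_MUTATION_RATE = 0.10  # fraction of genes resampled when mutation fires

def tournament_select(population, fitnesses):
    """Return a copy of the fitter of TOURNAMENT_SIZE randomly drawn individuals."""
    contenders = random.sample(range(len(population)), TOURNAMENT_SIZE)
    winner = max(contenders, key=lambda i: fitnesses[i])
    return list(population[winner])

def crossover(a, b):
    """Single-point crossover, applied with probability CROSSOVER_PROB."""
    if random.random() < CROSSOVER_PROB and len(a) > 1:
        cut = random.randrange(1, len(a))
        a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
    return a, b

def mutate(genome, gene_ranges):
    """With probability MUTATION_PROB, resample ~10% of the genes
    uniformly within each gene's (low, high) range."""
    if random.random() < MUTATION_PROB:
        for i, (low, high) in enumerate(gene_ranges):
            if random.random() < GENE_MUTATION_RATE:
                genome[i] = random.randint(low, high)
    return genome
```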
Highest fitness values obtained during each of the 10 experiments

| Exp. No. | Highest Fitness Value |
|---|---|
| 1 | 0.984499992943 |
| 2 | 0.973899998105 |
| 3 | 0.988800008184 |
| 4 | 0.991900001359 |
| 5 | 0.947799991965 |
| 6 | 0.949000005102 |
| 7 | 0.983099997652 |
| 8 | 0.979799999475 |
| 9 | 0.956399999567 |
| 10 | 0.972350000068 |
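Summarizing the table (values copied verbatim): experiment 4 gives the best fitness, about 0.9919, and the mean over the ten runs is about 0.9728. A few lines of Python reproduce this:

```python
# Highest fitness per experiment, copied from the table above.
fitness = [
    0.984499992943, 0.973899998105, 0.988800008184, 0.991900001359,
    0.947799991965, 0.949000005102, 0.983099997652, 0.979799999475,
    0.956399999567, 0.972350000068,
]

best = max(fitness)
print(f"best = {best:.4f} (experiment {fitness.index(best) + 1})")
print(f"mean = {sum(fitness) / len(fitness):.4f}")  # ~0.9728
```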
The various CNN hyperparameters and their search ranges

| Hyperparameter | Range |
|---|---|
| No. of epochs | 0 – 127 |
| Batch size | 0 – 256 |
| No. of convolutional layers | 0 – 8 |
| No. of filters at each conv layer | 0 – 64 |
| Filter size at each conv layer | 0 – 8 |
| Activation at each conv layer | sigmoid, tanh, relu, linear |
| Max-pooling layer after each conv layer | true, false |
| Pool size at each max-pooling layer | 0 – 8 |
| No. of feed-forward hidden layers | 0 – 8 |
| No. of hidden neurons at each feed-forward layer | 0 – 64 |
| Activation at each feed-forward layer | sigmoid, tanh, softmax, relu |
| Optimizer | Adagrad, Adadelta, RMSprop, SGD |
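Each row of this table can be read as one gene of an individual's genome. As a hedged sketch of how a decoded genome might become a trainable network, the snippet below builds a compiled Keras model from a genome dictionary. The authors do not publish their code, so the genome layout, the 28×28×1 (MNIST-shaped) input, and the simplification of per-layer genes to a single shared value are all assumptions; lower bounds are bumped above 0 where 0 would be degenerate.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Example genome within the table's ranges (illustrative only; a real
# genome would come from the GA, with per-layer genes varying per layer).
genome = {
    "conv_layers": 3,
    "filters": 32,               # 0–64 in the table; 0 is degenerate
    "filter_size": 3,            # 0–8 in the table
    "conv_activation": "relu",   # sigmoid | tanh | relu | linear
    "maxpool": True,             # pooling layer after each conv layer
    "pool_size": 2,              # 0–8 in the table
    "dense_layers": 2,
    "dense_units": 64,
    "dense_activation": "relu",  # sigmoid | tanh | softmax | relu
    "optimizer": "rmsprop",      # adagrad | adadelta | rmsprop | sgd
}

def build_model(g, input_shape=(28, 28, 1), n_classes=10):
    """Decode a genome dict into a compiled Keras CNN."""
    model = keras.Sequential([keras.Input(shape=input_shape)])
    for _ in range(g["conv_layers"]):
        model.add(layers.Conv2D(g["filters"], g["filter_size"],
                                padding="same",
                                activation=g["conv_activation"]))
        if g["maxpool"]:
            # padding="same" keeps deep stacks from shrinking below 1x1
            model.add(layers.MaxPooling2D(g["pool_size"], padding="same"))
    model.add(layers.Flatten())
    for _ in range(g["dense_layers"]):
        model.add(layers.Dense(g["dense_units"],
                               activation=g["dense_activation"]))
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer=g["optimizer"],
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_model(genome)
model.summary()
```

Under this reading, the GA's fitness function is simply the validation accuracy of the model that `build_model` returns after training for the genome's epoch and batch-size genes.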