ChatGPT-Powered IoT Devices Using Data Regularization for an Efficient Management System
24 Feb 2025
About this article
Article category: Article
Published online: 24 Feb 2025
Pages: 179–191
Received: 06 Oct 2024
Accepted: 04 Nov 2024
DOI: https://doi.org/10.2478/jsiot-2024-0020
Keywords
© 2024 Shilpa Patil et al., published by Sciendo
This work is licensed under the Creative Commons Attribution 4.0 International License.
Mathematical Formulas for the Evaluation Metrics' Computation
Sl. No. | Evaluation Metric | Mathematical Expression
---|---|---
01 | Accuracy | (TP + TN) / (TP + TN + FP + FN)
02 | Sensitivity (Recall) | TP / (TP + FN)
03 | Specificity | TN / (TN + FP)
04 | Precision | TP / (TP + FP)
05 | F1-Score | 2 × (Precision × Recall) / (Precision + Recall)

Here TP, TN, FP, and FN denote the numbers of true positives, true negatives, false positives, and false negatives, respectively.
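All five metrics in the table derive from the same confusion-matrix counts, so a short sketch makes the computation explicit. The counts in the usage example below are hypothetical values for illustration, not results reported in the paper.

```python
# Minimal sketch of the evaluation metrics in the table above,
# computed from confusion-matrix counts (TP, TN, FP, FN).

def evaluation_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Return accuracy, sensitivity (recall), specificity, precision and F1-score."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)          # recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1_score = 2 * precision * sensitivity / (precision + sensitivity)
    return {
        "accuracy": accuracy,
        "sensitivity": sensitivity,
        "specificity": specificity,
        "precision": precision,
        "f1_score": f1_score,
    }

if __name__ == "__main__":
    # Hypothetical counts, for illustration only.
    print(evaluation_metrics(tp=90, tn=85, fp=10, fn=15))
```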
Specification of the FECG Datasets
Sl. No. | Recording Characterization | Specification
---|---|---
1 | Recording Period | 38 to 41 weeks of gestation
2 | Signals from Maternal Abdomen | 4
3 | Type of Electrodes | Ag-AgCl electrodes
4 | Bandwidth | 1 Hz – 150 Hz
5 | Filtering Type | Digital filtering
6 | Sampling Rate | 1 kHz
7 | Resolution | 16 bits
8 | Total Number of Datasets | 5089
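As a worked illustration of the recording specification above, the sketch below applies a 1–150 Hz zero-phase digital band-pass filter to a 1 kHz, four-channel abdominal signal using SciPy. The synthetic input and the filter order are assumptions made for demonstration; the paper's actual filtering chain may differ.

```python
# Minimal sketch, assuming a 4-channel abdominal recording sampled at 1 kHz
# (as in the table) that is band-limited to 1-150 Hz with a digital filter.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000                 # sampling rate, Hz (Table: 1 kHz)
LOW, HIGH = 1.0, 150.0    # pass band, Hz (Table: 1 Hz - 150 Hz)

def bandpass_fecg(signal: np.ndarray, fs: int = FS) -> np.ndarray:
    """Apply a zero-phase Butterworth band-pass filter (1-150 Hz)."""
    b, a = butter(N=4, Wn=[LOW, HIGH], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal, axis=-1)

if __name__ == "__main__":
    # Synthetic stand-in for the four maternal-abdomen channels (10 s each).
    raw = np.random.randn(4, 10 * FS).astype(np.float64)
    filtered = bandpass_fecg(raw)
    print(filtered.shape)   # (4, 10000)
```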
Parameters of the T5 Model
Parameter | Description | Value
---|---|---
Model Size | Model variant, which determines the number of parameters | T5-Small
Input Length | Maximum sequence length for input text | 512 tokens
Output Length | Maximum sequence length for output text | 128 tokens
Vocabulary Size | Size of the token vocabulary (default 32,000 for T5) | 32,000
Number of Layers | Number of encoder and decoder layers in the model | 6 encoder, 6 decoder
Hidden Size | Size of the hidden representation in the encoder/decoder (e.g., 512 for T5-Small) | 512
Feed-Forward Size | Size of the feed-forward network in each transformer block | 2048
Number of Attention Heads | Number of attention heads in the self-attention mechanism | 8
Dropout Rate | Dropout probability applied to attention weights and feed-forward layers | 0.1
Positional Embeddings | Fixed sinusoidal embeddings used for positional information | Yes
Optimizer | Algorithm used for optimization (e.g., Adam, Adafactor) | Adafactor
Learning Rate | Initial learning rate for training | 0.001
Training Steps | Total number of training steps | ~10,000
Batch Size | Number of samples processed in one forward/backward pass | 32
Weight Initialization | Method for initializing model weights (e.g., Xavier initialization) | Xavier initialization
Pre-trained Tasks | Text-to-text tasks the model has been pre-trained on (e.g., translation, summarization) | Summarization, classification
Fine-tuning Tasks | Downstream tasks for which the model can be fine-tuned | FHR classification, FECG signal processing
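The configuration listed above maps directly onto the Hugging Face transformers T5 classes. The sketch below is an illustrative assumption of how a model with these hyper-parameters could be instantiated; it is not the authors' training script, and the prompt text, optimizer call, and FHR/FECG fine-tuning loop are hypothetical or omitted.

```python
# Minimal sketch (not the authors' code): a T5-Small-sized model built with
# the hyper-parameters from the table, using Hugging Face transformers.
from transformers import T5Config, T5ForConditionalGeneration, T5Tokenizer

config = T5Config(
    vocab_size=32_000,      # Table: vocabulary size
    d_model=512,            # hidden size
    d_ff=2048,              # feed-forward size per block
    num_layers=6,           # encoder layers
    num_decoder_layers=6,   # decoder layers
    num_heads=8,            # attention heads
    dropout_rate=0.1,       # dropout on attention / feed-forward layers
)
model = T5ForConditionalGeneration(config)
print(f"Trainable parameters: {model.num_parameters():,}")

# Tokenization respects the 512-token input budget from the table
# (output generation would be capped at 128 tokens).
tokenizer = T5Tokenizer.from_pretrained("t5-small")  # standard T5 vocabulary
inputs = tokenizer(
    "classify fetal heart rate: <serialized FECG features>",  # hypothetical prompt format
    max_length=512,
    truncation=True,
    return_tensors="pt",
)
print(inputs["input_ids"].shape)

# Optimizer per the table (Adafactor, lr = 0.001); a fixed learning rate
# requires disabling Adafactor's relative-step schedule, e.g.:
# from transformers.optimization import Adafactor
# optimizer = Adafactor(model.parameters(), lr=1e-3,
#                       scale_parameter=False, relative_step=False)
```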