Open Access

ChatGPT-Powered IoT Devices Using Data Regularization for Efficient Management Systems

24 February 2025

Figure 1: Proposed Methodology

Figure 2: Sample Multi-Channel FECG Datasets Utilised for Training and Testing the Module

Figure 3: T5 Architecture

Figure 4: Performance Metrics Compared with Other Models

Figure 5: Comparative Assessment of Distinct Models

Mathematical Formulas for the Evaluation Metrics’ Computation

Sl. No. | Evaluation Metric    | Mathematical Expression
01      | Accuracy             | (TP + TN) / (TP + TN + FP + FN)
02      | Sensitivity (Recall) | TP / (TP + FN) × 100
03      | Specificity          | TN / (TN + FP)
04      | Precision            | TP / (TP + FP)
05      | F1-Score             | 2 × (Precision × Recall) / (Precision + Recall)
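
For reference, the metrics above follow directly from the confusion-matrix counts. The short Python sketch below (function and variable names are illustrative, not taken from the paper) computes the five metrics exactly as tabulated:

```python
def evaluation_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute the tabulated evaluation metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)                  # sensitivity
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1_score = 2 * precision * recall / (precision + recall)
    return {
        "accuracy": accuracy,
        "sensitivity_percent": recall * 100,  # the table expresses sensitivity as a percentage
        "specificity": specificity,
        "precision": precision,
        "f1_score": f1_score,
    }

# Example: 90 true positives, 80 true negatives, 10 false positives, 20 false negatives
print(evaluation_metrics(tp=90, tn=80, fp=10, fn=20))
```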

Specification of the FECG Datasets

Sl. No. | Recording Characterization    | Specification
1       | Recording Period              | 38 to 41 weeks of gestation
2       | Signals from Maternal Abdomen | 4
3       | Type of Electrodes            | Ag-AgCl electrodes
4       | Bandwidth                     | 1 Hz to 150 Hz
5       | Filtering Type                | Digital filtering
6       | Sampling Rate                 | 1 kHz
7       | Resolution                    | 16 bits
8       | Total Number of Datasets      | 5089
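
The specification lists four abdominal channels sampled at 1 kHz with a 1 Hz to 150 Hz bandwidth obtained by digital filtering, but does not name the filter design. The sketch below assumes a zero-phase Butterworth band-pass in SciPy as one plausible implementation; the filter type, order, and channel array are assumptions for illustration only:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS_HZ = 1000.0               # sampling rate from the dataset specification (1 kHz)
LOW_HZ, HIGH_HZ = 1.0, 150.0  # bandwidth from the dataset specification

def bandpass_fecg(abdominal_signals: np.ndarray, order: int = 4) -> np.ndarray:
    """Apply a zero-phase 1-150 Hz band-pass filter to each abdominal channel.

    abdominal_signals: array of shape (n_channels, n_samples), e.g. 4 channels at 1 kHz.
    The Butterworth design and 4th order are assumptions; the specification only
    states 'digital filtering' over a 1 Hz to 150 Hz bandwidth.
    """
    b, a = butter(order, [LOW_HZ, HIGH_HZ], btype="bandpass", fs=FS_HZ)
    return filtfilt(b, a, abdominal_signals, axis=-1)

# Example with synthetic data: 4 channels, 10 s at 1 kHz
signals = np.random.randn(4, 10_000)
filtered = bandpass_fecg(signals)
print(filtered.shape)  # (4, 10000)
```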

Parameters of the T5 Model

Parameter | Description | Value
Model Size | Number of parameters in the model | T5-Small
Input Length | Maximum sequence length for input text | 512 tokens
Output Length | Maximum sequence length for output text | 128 tokens
Vocabulary Size | Size of the token vocabulary | 32,000
Number of Layers | Number of encoder and decoder layers in the model | 6 encoder, 6 decoder
Hidden Size | Size of the hidden representation in the encoder/decoder | 512
Feed-Forward Size | Size of the feed-forward network in each transformer block | 2048
Number of Attention Heads | Number of attention heads in the self-attention framework | 8
Dropout Rate | Dropout probability applied to attention weights and feed-forward layers | 0.1
Positional Embeddings | Fixed sinusoidal embeddings used for positional information | Yes
Optimizer | Algorithm used for optimization | Adafactor
Learning Rate | Initial learning rate for training | 0.001
Training Steps | Total number of training steps | ~10,000
Batch Size | Number of samples processed in one forward/backward pass | 32
Weight Initialization | Method for initializing model weights | Xavier initialization
Pre-trained Tasks | Text-to-text tasks the model has been pre-trained on | Summarization, classification
Fine-tuning Tasks | Downstream tasks for which the model can be fine-tuned | FHR classification, FECG signal processing
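
The configuration above corresponds to a standard T5-Small setup. The sketch below shows one way to instantiate it with the Hugging Face transformers library and an Adafactor optimizer at the tabulated learning rate; the choice of library, and the use of a fixed external learning rate with Adafactor, are assumptions not stated in the table:

```python
from transformers import T5Config, T5ForConditionalGeneration
from transformers.optimization import Adafactor

# Hyperparameters taken from the table: vocabulary 32,000; 6 encoder and 6 decoder layers;
# hidden size 512; feed-forward size 2048; 8 attention heads; dropout 0.1.
# Note: the default Hugging Face T5 vocabulary is 32,128 tokens; 32,000 matches the table.
config = T5Config(
    vocab_size=32_000,
    d_model=512,           # hidden size
    d_ff=2048,             # feed-forward size
    num_layers=6,          # encoder layers
    num_decoder_layers=6,  # decoder layers
    num_heads=8,
    dropout_rate=0.1,
)
model = T5ForConditionalGeneration(config)

# Adafactor with the fixed learning rate of 0.001 listed in the table.
optimizer = Adafactor(
    model.parameters(),
    lr=1e-3,
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)

MAX_INPUT_TOKENS = 512    # input length from the table
MAX_OUTPUT_TOKENS = 128   # output length from the table
BATCH_SIZE = 32           # batch size from the table
TRAINING_STEPS = 10_000   # approximate training steps from the table
```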