Multi-scale spatio-temporal data modelling and brain-like intelligent optimisation strategies in power equipment operation and inspection
Published online: 03 Feb 2025
Received: 03 Oct 2024
Accepted: 07 Jan 2025
DOI: https://doi.org/10.2478/amns-2025-0022
© 2025 Guoliang Zhang et al., published by Sciendo
This work is licensed under the Creative Commons Attribution 4.0 International License.
With the continuous development of China’s economy, industrialisation has been accelerating and electricity consumption keeps rising; electricity plays a vital role in people’s daily production and life. Ensuring the safe and stable operation of the power grid is therefore a key issue for the current development of the power industry [1–4], and it requires effective operation and maintenance of power equipment to guarantee reliable electricity delivery. Integrating power equipment operation and maintenance [5–7] can organically combine the operation and maintenance businesses, reduce crossed and overlapping work between maintenance and operation, and thus improve work efficiency while maximising economic benefits and safeguarding the effective operation of power facilities [8–11].
Spatio-temporal data modelling and analysis refer to the modelling and analysis of data that change in both time and space. With the development of science and technology and the progress of data collection techniques, spatio-temporal data have become increasingly important and can be applied in various fields, such as urban planning, environmental protection and traffic management [12–15]. With the advancement of technology and the improvement of human cognitive ability, the research and application of artificial intelligence are becoming more and more extensive. Among these, brain-like intelligence is a field that has attracted much attention: it simulates the operation of the human brain and has achieved breakthroughs in artificial intelligence tasks such as speech recognition, image recognition and natural language processing [16–19]. Brain-like intelligence is a bionic research method that simulates the structure and function of the human brain, approximating human-level intelligence through AI algorithms and thus realising an intelligent system capable of autonomous learning and self-optimisation [20–22].
The article proposes a brain-like intelligent diagnosis model combined with multi-scale time-frequency feature fault sample generation technology, which is applied to the intelligent operation and inspection of power equipment. First, the time-frequency features of UHV converter equipment are extracted by the improved K-SVD dictionary learning algorithm and the empirical wavelet singular entropy, and combined with the KPCA algorithm for feature fusion in order to obtain multi-scale spatio-temporal data. A generative adversarial network is then introduced to learn the spatio-temporal fault feature data, and the GPNN model is constructed by combining it with the nearest neighbour interpolation algorithm to obtain more realistic generated fault samples. Finally, based on the SNNs model of brain-like computing, the fault sample data generated by the GPNN model are used as the input, so as to achieve intelligent diagnosis of faults in UHV converter equipment.
With the optimisation and upgrading of the national power grid structure, the degree of intelligence has gradually improved, and UHV power grids with high efficiency, wide coverage and low line loss have been put into use. To guarantee the transmission capacity of the UHV grid, converter stations must meet the requirements for stable power delivery through equipment operation and overhaul work. The number and variety of equipment contained in a converter station, as part of the UHV grid, increase the difficulty of maintenance work, and insufficient attention to maintenance and inadequate failure-prevention measures at some power enterprises exacerbate the risk of equipment failures, leading to converter station shutdowns and economic losses. In this regard, an in-depth analysis of the multi-scale spatio-temporal fault characteristics of UHV converter equipment can support the development of effective measures to enhance the fault resistance of the converter station.
Clarifying the equipment faults that occur in the UHV converter station during power operation and inspection helps to better identify and diagnose fault categories. Accordingly, vibration signals of the UHV converter station equipment are collected, and the noise and interference are filtered out of the fault signal so that the pre-processed signal is close to the real fault signal. Vibration sensors are used to collect the dynamic signals of the equipment, and because the vibration signal is contaminated by noise, the noise component must be eliminated [23]. The noisy equipment vibration signal can be expressed as:
Where
Where
Where
Solve the system of linear equations, get the intercept
Where
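Since the noisy-signal model and the least-squares equations referred to above are not reproduced here, the following Python sketch only illustrates the general preprocessing idea: it assumes an additive-noise model, removes a linear trend by solving the least-squares normal equations for slope and intercept, and applies simple moving-average smoothing. The function name, sampling rate and window length are illustrative assumptions, not values from the paper.

```python
import numpy as np

def preprocess_vibration(x, fs, smooth_win=11):
    """Hypothetical preprocessing sketch: least-squares detrending plus
    moving-average smoothing of a noisy vibration record."""
    n = len(x)
    t = np.arange(n) / fs

    # Additive-noise assumption: x(t) = s(t) + n(t).
    # Least-squares line fit: solve the 2x2 normal equations for the
    # slope a and intercept b of the trend a*t + b.
    A = np.array([[np.sum(t * t), np.sum(t)],
                  [np.sum(t),     n        ]])
    rhs = np.array([np.sum(t * x), np.sum(x)])
    a, b = np.linalg.solve(A, rhs)
    detrended = x - (a * t + b)

    # Simple moving-average smoothing to suppress broadband noise.
    kernel = np.ones(smooth_win) / smooth_win
    return np.convolve(detrended, kernel, mode="same")

# Example: a synthetic noisy vibration record sampled at 10 kHz.
fs = 10_000
t = np.arange(0, 0.2, 1 / fs)
signal = np.sin(2 * np.pi * 120 * t) + 0.05 * t + 0.3 * np.random.randn(t.size)
clean = preprocess_vibration(signal, fs)
```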
For equipment vibration signals of a certain length, this paper introduces a learned dictionary for fault feature extraction. In the dictionary training stage, the improved K-SVD dictionary learning algorithm is first used to train the sample signal matrix
After matrixing the vibration signal
After finding the coefficient matrix
Where
The obtained
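As a hedged illustration of the dictionary-learning stage (not the paper's improved K-SVD algorithm itself), the sketch below segments a vibration signal into a sample matrix, learns a dictionary of atoms with scikit-learn's DictionaryLearning, and sparse-codes the segments with OMP. Segment length, number of atoms and sparsity level are assumed values.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def dictionary_features(signal, seg_len=200, n_atoms=64, n_nonzero=5):
    """Learn a dictionary from segmented vibration data and return the
    sparse codes, the atoms and the per-segment reconstruction residual."""
    n_seg = len(signal) // seg_len
    X = signal[: n_seg * seg_len].reshape(n_seg, seg_len)   # sample matrix

    learner = DictionaryLearning(
        n_components=n_atoms,
        transform_algorithm="omp",            # sparse coding by OMP
        transform_n_nonzero_coefs=n_nonzero,
        max_iter=20,
        random_state=0,
    )
    codes = learner.fit_transform(X)           # sparse coefficient matrix
    dictionary = learner.components_           # learned atoms (n_atoms x seg_len)

    reconstruction = codes @ dictionary        # sparse reconstruction of each segment
    residual = np.linalg.norm(X - reconstruction, axis=1)
    return codes, dictionary, residual
```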
After the time-domain fault signal feature extraction of the UHV converter equipment is completed, the frequency-domain signals of the equipment faults need to be extracted and decomposed into modes in the frequency domain. In order to obtain a suitable instantaneous amplitude, each modal component must satisfy the following condition: the number of extreme points of the signal must equal the number of zero crossings over the entire data length, calculated as:
Where
The modes are independent of each other, indicating that each vibration mode is actually effective, which provides a quantitative index for the fault type identification of UHV converter equipment through the independent vibration mode signal characteristics.
After obtaining the fault frequency-domain signal from modal decomposition, the coefficient matrix composed of the IMF components in each frequency band is subjected to singular value decomposition to obtain singular values that reflect the basic characteristics of the original coefficient matrix. The complexity of these singular values is evaluated with the information-entropy statistic, providing a definite measure for evaluating the complexity of the original signal [24].
Let the EWT transform of the original signal yield a set of modal components, which are arranged into a matrix. Singular value decomposition of this matrix gives a diagonal matrix containing the singular values, from which the empirical wavelet singular entropy is computed.
The empirical wavelet singular entropy evaluates the complexity of the analysed signal. The simpler the analysed signal is, the smaller the empirical wavelet singular entropy is. The more complex the signal, the larger the empirical wavelet singular entropy is.
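As an illustration, the empirical wavelet singular entropy can be computed from a matrix of decomposed components via SVD and the Shannon entropy of the normalised singular values; the sketch below assumes the decomposition has already been performed and uses this standard entropy form rather than the paper's exact notation.

```python
import numpy as np

def singular_entropy(components):
    """Singular entropy of decomposed components
    (rows = modal components, columns = time samples)."""
    A = np.asarray(components, dtype=float)
    s = np.linalg.svd(A, compute_uv=False)   # singular values
    p = s / np.sum(s)                         # normalised singular spectrum
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# A nearly rank-one (simple) signal yields a smaller entropy than a
# broadband (complex) one, matching the qualitative rule above.
rng = np.random.default_rng(0)
simple = np.outer([1.0, 0.5, 0.25],
                  np.sin(2 * np.pi * 50 * np.linspace(0, 1, 1000)))
complex_ = rng.standard_normal((3, 1000))
print(singular_entropy(simple), singular_entropy(complex_))
```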
Combined with the boundary characteristics of the UHV multi-terminal hybrid DC line, when an in-zone fault occurs the voltage transient signal contains rich high-frequency components, the signal is more complex, and the empirical wavelet singular entropy is larger. When an out-of-zone fault occurs, the high-frequency components contained in the voltage transient signal after boundary attenuation are far fewer than for an in-zone fault, the signal is simpler, and the empirical wavelet singular entropy is smaller. Therefore, the empirical wavelet singular entropy can be used to evaluate the complexity of the voltage transient signal and thus discriminate between in-zone and out-of-zone faults.
Because the fault signal of UHV converter equipment contains two different categories of data, in the time domain and in the frequency domain, the signal data and extracted features are numerous and therefore redundant, so that some useful feature information cannot be recognised and the fault identification of UHV converter equipment is not accurate enough. Therefore, this paper introduces kernel principal component analysis (KPCA) to fuse the multi-scale time-frequency features and then complete the feature dimensionality reduction [25]. The specific steps are as follows:
1. Collect the vibration signal of the UHV converter equipment and record it.
2. Extract the one-dimensional multi-domain (time- and frequency-domain) feature sets from the signal.
3. Standardise each one-dimensional multi-domain feature set to obtain the standardised multi-domain feature set.
4. Use a nonlinear mapping function to map the standardised features into a high-dimensional feature space.
5. Define and centre the kernel matrix.
6. Solve for the eigenvalues and corresponding eigenvectors of the centred kernel matrix.
7. Finally, solve for the principal components to obtain the fused, dimensionality-reduced features.
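A minimal sketch of this fusion step is given below, assuming parallel concatenation of the time- and frequency-domain feature sets followed by standardisation and kernel PCA. The RBF kernel and its width are assumptions; only the choice of six principal components follows the text.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import KernelPCA

def fuse_features(time_features, freq_features, n_components=6):
    """Parallel fusion of time/frequency feature sets followed by KPCA."""
    X = np.hstack([time_features, freq_features])      # parallel fusion
    X = StandardScaler().fit_transform(X)               # standardisation
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=1e-3)
    return kpca.fit_transform(X)                         # fused principal components

# e.g. 2400 samples with 20 time-domain and 20 frequency-domain features
# fused = fuse_features(np.random.randn(2400, 20), np.random.randn(2400, 20))
```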
The UHV converter equipment fault dataset used in this paper comes from the operation and inspection data of the GZ section of the NF grid from 2020 to 2023. The UHV converter equipment fault types mainly include normal (Nor), inner ring fault (IR), outer ring fault (OR), cage fault (Cage), inner and outer ring composite fault (IR-OR), and inner and outer ring + rolling body composite fault (TCF). 400 samples are taken for each mode, 200 data points for each sample, and 80,000 data points for each fault type.
In order to verify the effectiveness of the multiscale time-frequency feature fusion algorithm proposed in this paper, time-frequency feature extraction is performed based on the above dataset and multiscale feature fusion is performed by KPCA. Firstly, the parallel method is used for fusion to get the fused time-frequency feature data, which avoids the high dimensionality of features in the fusion process and reduces the computational difficulty. Then KPCA method is used to map the fused time-frequency features to the high-dimensional space for PCA operation to extract the principal components and to get the final fusion features with obvious fault information and low dimensionality. After many experiments, the classification effect is best when 6 principal components are extracted. Therefore, in this paper, the 6 principal components of the features are extracted and used for the next analysis, and the feature values of some samples after fusion are shown in Table 1.
Partial fusion eigenvectors
| Mode | No. | PCA1 | PCA2 | PCA3 | PCA4 | PCA5 | PCA6 |
|---|---|---|---|---|---|---|---|
| Nor | 1 | 4218.51 | -425.69 | 469.62 | 65.24 | 22.37 | -9.58 |
| | 2 | 4369.25 | -334.14 | 285.26 | 158.37 | -1.67 | -3.67 |
| | … | … | … | … | … | … | … |
| | 400 | 4301.48 | -438.49 | 468.04 | 115.12 | 22.37 | -1.54 |
| IR | 1 | -1024.74 | -1135.24 | -421.56 | 356.37 | -2.85 | -2.68 |
| | 2 | -1158.37 | -1166.34 | -357.08 | 285.47 | -28.74 | 16.76 |
| | … | … | … | … | … | … | … |
| | 400 | -2368.45 | -1108.58 | 148.27 | 206.54 | -29.24 | 17.51 |
| OR | 1 | -246.37 | -583.24 | -390.54 | -108.03 | -78.51 | 32.28 |
| | 2 | 241.15 | -556.13 | -198.47 | -283.51 | -62.35 | -1.19 |
| | … | … | … | … | … | … | … |
| | 400 | -438.24 | -250.45 | -278.42 | -175.42 | -81.49 | 28.94 |
| Cage | 1 | -1232.74 | 4538.92 | 546.84 | 84.95 | -58.51 | 36.51 |
| | 2 | -22.51 | 1847.37 | -661.52 | 105.24 | -38.42 | 4.47 |
| | … | … | … | … | … | … | … |
| | 400 | -505.75 | 2347.78 | -88.18 | -233.51 | -56.43 | 5.74 |
| IR-OR | 1 | 885.42 | -173.27 | -685.52 | -80.35 | 48.53 | -65.28 |
| | 2 | 336.74 | -246.81 | -473.51 | -37.24 | 135.82 | -13.27 |
| | … | … | … | … | … | … | … |
| | 400 | -303.16 | 312.33 | -228.09 | -33.08 | 162.68 | -71.18 |
| TCF | 1 | -2551.34 | -1265.67 | 695.87 | -143.27 | 3.85 | -31.33 |
| | 2 | -1247.12 | -1431.08 | 396.76 | -31.06 | -7.42 | -21.04 |
| | … | … | … | … | … | … | … |
| | 400 | -2351.39 | -1435.87 | 485.27 | -17.24 | 7.38 | -21.28 |
Distribution plots of the fused multi-scale time-frequency eigenvalues of the UHV converter equipment are shown in Fig. 1, in which Figs. 1(a)~(f) show the distributions of PCA1~PCA6, respectively; the vertical axis of each plot is the eigenvalue and the horizontal axis is the ordinal number of the samples across all categories.

The sample distribution of the fusion eigenvalue
Taking 400 samples as an interval, samples 1–1600 are, in order, normal, inner ring faults, outer ring faults and cage faults; samples 1601–2000 are inner and outer ring composite faults; and samples 2001–2400 are inner and outer ring + rolling body composite faults. The figure shows that the eigenvalue differences between failure modes are relatively large, but no single eigenvalue can completely distinguish all fault types. PCA1 discriminates almost all failure modes relatively well: the differences in PCA1 eigenvalues between normal samples and the other fault types are so large that normal and faulty samples can be well separated, and the inner ring, outer ring and other faults are also distinguished reasonably well. The PCA2 eigenvalues of the cage fault can be clearly distinguished from the other fault types, and there is also a relatively obvious difference between the inner and outer ring composite faults and the inner and outer ring + rolling element composite faults, but the eigenvalues of the normal, inner ring and outer ring fault types overlap slightly. The PCA3 and PCA4 eigenvalues of the outer ring faults differ from those of the normal and inner ring faults, but overlap considerably with the other types. The PCA5 and PCA6 eigenvalues of the inner and outer ring composite faults differ significantly from the other fault types. However, the fault types cannot be accurately determined from these plots alone, and a classification algorithm is needed to differentiate between them. The PCA1–PCA6 features obtained in this section are input into the classifier, which visualises the classification results and accurately classifies the fault patterns.
As an important node of the power grid, the safe and stable operation of the UHV converter station is of great significance. To strengthen the effective identification of equipment faults in the converter station, improve the intelligent and standardised management of grid operation and inspection, and accelerate the deep integration of advanced Internet of Things (IoT) technology with equipment management expertise, building a brain-like intelligent diagnostic model, with brain-like computing as the basis combined with the UHV converter fault generation model, has become the trend for achieving intelligent operation, inspection and management of the power grid.
The Generative Adversarial Network (GAN) is a deep learning model that is unique in that it is trained through a game between a generator and a discriminator, enabling the generator to gradually improve its ability to generate realistic samples while the discriminator continuously improves its ability to distinguish true from false samples. This dynamic equilibrium forms a minimax, zero-sum adversarial game, similar to two intelligences improving each other through competition [26].
The generator’s objective is to learn the data’s underlying distribution from random noise and generate samples similar to the actual data. During training, the generator receives a random vector as input and tries to trick the discriminator so that it cannot distinguish the generated samples from the real training data. During backpropagation, the generator adjusts its parameters based on feedback from the discriminator, gradually increasing the fidelity of the generated samples. The discriminator is trained to distinguish effectively between fake samples produced by the generator and real training data. By receiving samples from both sources, its ability to classify real and forged samples is continuously improved, which increases the difficulty of the adversarial training. In this process, the discriminator becomes ever more accurate at telling the generator’s output apart from the actual data. Throughout training, the generator and the discriminator fight against each other and form a dynamic equilibrium. The optimisation function of the GAN can be expressed as:
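In its canonical form (the paper's exact notation is not reproduced here), the GAN minimax objective described above is

$$\min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]$$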
The generator endeavours to improve the realism of the generated samples in order to deceive the discriminator, while the discriminator endeavours to improve its ability to recognise authentic samples in order to better discriminate between the generator’s outputs and the real data. The GAN’s zero-sum adversarial gaming process allows the generator to gradually understand the latent distribution of the data and generate more realistic samples.
Nearest Neighbour Interpolation (NNI) is a KD-tree-based interpolation algorithm. Its main idea is simply to find the “nearest” neighbouring sample point and adopt its value for the point being interpolated, instead of computing a mean value by some weighting criterion or generating an intermediate value according to complex rules [27].
1. Suppose a set of known labelled sample points is given; use the known labels to create the interpolation grid points C.
2. Calculate the distance between each point in C and each point in the known sample set, and assign each grid point the value of its nearest sample.
3. Iterate steps (1) and (2) with different sample points in the interpolation grid.
We construct an interpolation grid in this paper and use NNI to extend the original dataset. It aims to fully capture a small range of information for each sample point. Due to the large difference in values between different descriptors, normalisation would make them lose their original physical meaning. Therefore, we assume that all interpolated points obey a normal distribution when interpolating
Where
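A minimal sketch of KD-tree-based nearest-neighbour interpolation in the spirit of the steps above is given below; the grid construction and the Gaussian perturbation applied to interpolated points are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def nni_augment(X_known, y_known, n_grid=500, noise_scale=0.01, seed=0):
    """Extend a labelled dataset by nearest-neighbour interpolation over a KD-tree."""
    rng = np.random.default_rng(seed)
    tree = cKDTree(X_known)                           # KD-tree over known samples

    # Build an interpolation grid inside the bounding box of the data.
    lo, hi = X_known.min(axis=0), X_known.max(axis=0)
    grid = rng.uniform(lo, hi, size=(n_grid, X_known.shape[1]))

    # Nearest-neighbour lookup: each grid point takes the label of its closest sample.
    _, idx = tree.query(grid, k=1)
    y_grid = y_known[idx]

    # Per the normal-distribution assumption, perturb interpolated points slightly.
    grid = grid + rng.normal(0.0, noise_scale, size=grid.shape)
    return grid, y_grid
```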
To achieve effective identification of UHV faults, a nearest-neighbour generative fragment model (GPNN), which combines a generative adversarial network with the nearest-neighbour interpolation algorithm, is introduced on the basis of the multi-scale time-frequency fault features. The fusion of multi-scale time-frequency features with the similarity of neighbouring samples is used to generate fault samples of UHV converter station equipment, which are applied to fault identification and help to enhance the diagnostic accuracy of the brain-like intelligence.
Generator
The role of the generator in the GAN model is to learn to fit the data distribution and generate data. The generator takes noise Z as input, and through training of the whole model it learns the mapping relationship between the noise data and the real data, thereby being forced to generate increasingly realistic data and achieving the purpose of generating fault samples.
In order to fully exploit the latent information of the fault samples, the nearest-neighbour sample set of each fault sample is selected according to the principle of maximum similarity, the mean of the corresponding fault samples in the nearest-neighbour set is calculated, and this mean is added to the input of the generator so that the data it generates is closer to the real data values. The inputs to the GAN generator are therefore the noise data at the corresponding location of the fault sample, the complete fault sample data, and the nearest-neighbour attribute mean data at the corresponding location of the fault sample.
The inputs to the model generator are
Considering the data produced by the generator, a weighted reconstruction loss function is proposed. The losses on real samples and on generator-produced samples are both introduced into the generator loss function to speed up the training of the generator. In addition, to further improve the realism of the generated samples, a weight-adaptive strategy is applied to the loss function. The weighted reconstruction generator loss function is:
Where,
Where
Discriminator
The discriminator of the model is a binary classification model used to estimate the probability that a sample comes from the real data (rather than from the generated data). The more likely a sample is to come from the real data, the higher the discriminator’s output probability; otherwise, the lower the output probability. The filled complete data
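The exact weighted reconstruction loss is not reproduced above; as a schematic sketch only, the generator and discriminator losses of a GPNN-style model might be organised as follows, with the adversarial term combined with an L1 reconstruction term whose weight lambda_rec is an assumed hyperparameter, and with the discriminator assumed to output sigmoid probabilities.

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()

def generator_loss(d_fake, x_fake, x_real, lambda_rec=10.0):
    """Adversarial term plus a weighted L1 reconstruction term (illustrative)."""
    adv = bce(d_fake, torch.ones_like(d_fake))      # try to fool the discriminator
    rec = torch.mean(torch.abs(x_fake - x_real))     # reconstruction term
    return adv + lambda_rec * rec

def discriminator_loss(d_real, d_fake):
    """Standard binary cross-entropy discriminator loss."""
    real = bce(d_real, torch.ones_like(d_real))      # real samples -> 1
    fake = bce(d_fake, torch.zeros_like(d_fake))     # generated samples -> 0
    return real + fake

# The generator input would concatenate noise, the fault sample and the
# nearest-neighbour attribute mean, e.g.:
# g_in = torch.cat([noise, fault_sample, neighbour_mean], dim=1)
```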
Brain-like computing
Brain-like computing refers to a computing paradigm that deeply draws on the information-processing characteristics of the neural networks of the human brain to process high-dimensional information with multiple granularities and high plasticity. Currently, mainstream brain-like computing platforms fall into two main types: one is based on deep learning, with dedicated deep learning processors that still rely on the von Neumann architecture. The other type breaks the limitations of traditional computer architecture, adopting novel many-core, distributed, storage-and-compute-integrated chips inspired by the spatio-temporal dynamics of the human brain, and is committed to providing general artificial intelligence solutions.
Spiking Neural Network Model
The spiking neural network model (SNNs) uses neurons that transmit information in the form of emitted spikes. SNNs simulate the flow of charged particles in the nerve cells of the human brain, can reflect the characteristics of the temporal dimension of the data, and are an important application of brain-like computation [28]. The main components of the SNN model include the neuron model, the coding method and the learning algorithm. Commonly used spiking neuron models are the integrate-and-fire model (IAF), the resonate-and-fire model (RAF) and the spike response model (SRM).
The IAF model is a very representative “threshold-firing” model, with a simple structure that is easy to implement. It uses the charge/discharge process of an RC circuit to simulate the ion-transfer process of a biological neuron, and simulates the external input with a current. The IAF model uses a current to represent the ion flow inside the cell and replaces the cell body with a capacitor.
The current circuit current is
Where
The IAF model equation is:
Where
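A minimal integrate-and-fire simulation is sketched below; the membrane equation, threshold and reset values are standard textbook choices rather than parameters taken from the paper.

```python
import numpy as np

def iaf_simulate(input_current, dt=1e-3, tau=0.02, R=1.0,
                 v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron driven by an input current sequence."""
    v = v_rest
    spikes = []
    for t, I in enumerate(input_current):
        # Leaky integration of the injected current (RC-circuit analogy).
        v += dt / tau * (-(v - v_rest) + R * I)
        if v >= v_th:              # threshold crossing -> emit a spike
            spikes.append(t * dt)
            v = v_reset            # reset the membrane potential
    return spikes

# A constant supra-threshold current produces a regular spike train.
spike_times = iaf_simulate(np.full(1000, 1.5))
```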
The learning rule of SNNs changes the synaptic weights according to the relative time difference between the spikes emitted by the pre- and post-synaptic neurons. According to the supervision mode, SNN learning rules can be classified into three types: supervised learning, semi-supervised learning and unsupervised learning. This paper mainly introduces the spike-timing-dependent plasticity (STDP) based algorithm, which is applied to the learning of SNNs.
The update of weights under STDP learning relies on the pre and post-synaptic pulse sequences, which can be expressed as:
The expression for the STDP learning rule is:
The response of the postsynaptic neuron occurs when the presynaptic pulse arrives, when
Where, according to the characteristics of the learning window, the STDP learning window
Where
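As an illustration, the standard exponential STDP window can be implemented as follows; the amplitude and time-constant values are illustrative, not the paper's.

```python
import numpy as np

def stdp_update(w, dt_spike, a_plus=0.01, a_minus=0.012,
                tau_plus=0.02, tau_minus=0.02, w_min=0.0, w_max=1.0):
    """Exponential STDP window: dt_spike = t_post - t_pre.
    Positive differences potentiate the synapse, negative ones depress it."""
    if dt_spike > 0:                       # pre fires before post -> potentiation
        dw = a_plus * np.exp(-dt_spike / tau_plus)
    else:                                  # post fires before pre -> depression
        dw = -a_minus * np.exp(dt_spike / tau_minus)
    return float(np.clip(w + dw, w_min, w_max))

# Example: a pre-before-post pairing with a 5 ms gap strengthens the weight.
w_new = stdp_update(w=0.5, dt_spike=0.005)
```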
With the support of the time-frequency characteristics of UHV converter equipment faults, GPNN is used to generate equipment fault time sequence samples, and the brain-like computing SNNs model is introduced to carry out the intelligent diagnosis of UHV converter equipment faults, which provides technical support for improving the efficiency of grid operation and inspection and intelligent management.
For the SNNs-based fault diagnosis model of UHV converter equipment, the mapping relationship between fault characteristics and fault modes is to be established. Namely:
In the mapping relationship
Therefore, let
Where
Where
In general, when performing model tuning, it is necessary to select the appropriate loss function for the current network model. In the SNNs in this paper, the cross-entropy loss function is selected. When the training model produces a predicted output during training, the predicted output is compared with the true output, the resulting loss is calculated, and a penalty value in logarithmic form is set based on this loss. It follows that the cross-entropy loss function usually reflects the degree of similarity between the actual output of the network model and the target output in the training data; let
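In its standard form (which the description above corresponds to, though the paper's exact notation is not reproduced here), with $y_i$ the true label and $\hat{y}_i$ the model's predicted probability, the cross-entropy loss is

$$L = -\sum_{i} y_i \log \hat{y}_i$$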
In the formula of cross-entropy function,
Based on the loss function formula, the sensitivity change function between each connected neuron in the multilayer network is given
At this point, the weight update formula evolves into the following equation, such that
Combining the optimal learning of SNNs with the STDP learning algorithm given in the previous section, and then applying the model to the field of fault diagnosis of UHV converter equipment, the multi-scale spatio-temporal sample dataset is inputted into the SNNs model, so as to realise the intelligent diagnosis of faults of UHV converter equipment.
With the increase in power grid coverage, UHV converter stations are under greater pressure, and the probability of equipment failure due to high-load operation is also rising. A UHV converter station contains converter valves, AC filters, converter transformers, control and protection systems, insulation components and other equipment; it features large insulation intervals, much rotating equipment, and close interconnection among the components. These pieces of equipment differ in function and are connected in various ways to form a complex whole, so maintenance work is both difficult and crucial. This chapter mainly focuses on validating the usability of the GPNN model and the brain-like intelligent fault diagnosis model constructed in the previous sections, which provides a research basis for the intelligent identification of UHV converter equipment faults in power operation and inspection.
Based on the multi-scale time-frequency characteristic data of UHV converter station equipment faults obtained above, the GPNN model constructed in this paper is applied to multi-scale fault sample generation for UHV converter station equipment. The dataset is divided into a training set and a test set in the ratio 8:2, and FID is selected as the evaluation index for the quality of fault sample generation; the FID process data for the different fault categories are shown in Table 2. The last column gives the minimum FID value over the entire training process, at which point the parameters of the GPNN model are saved and used to generate fault time-frequency samples. The smaller the FID, the smaller the distance between the two data distributions, which can be regarded as higher-quality fault sample generation. FID is robust to noise and has low computational complexity, since only the first- and second-order moments of the samples are considered. As can be seen from the table, as the number of iterations increases from 50 to 500, the FID values of the six categories of generated fault samples gradually decrease, with the inner and outer ring + rolling body composite fault (TCF) achieving the smallest value of 21.16. These results show that the GPNN model can obtain generated samples closer to the real fault samples, and the distribution of the generated data becomes increasingly similar to the real data distribution.
The process data of FID
Type | 50 iterations | 100 iterations | 200 iterations | 500 iterations | Minimum FID |
---|---|---|---|---|---|
Nor | 263.08 | 104.23 | 33.43 | 37.24 | 33.43 |
IR | 285.31 | 112.58 | 36.04 | 35.89 | 35.89 |
OR | 272.64 | 80.21 | 51.71 | 31.27 | 31.27 |
Cage | 272.11 | 98.96 | 39.92 | 25.35 | 25.35 |
IR-OR | 301.17 | 116.64 | 30.58 | 32.58 | 30.58 |
TCF | 248.05 | 102.42 | 40.65 | 21.16 | 21.16 |
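For reference, FID values such as those in Table 2 can be computed from the first- and second-order moments of real and generated feature sets using the standard Fréchet distance between Gaussians; the sketch below assumes the feature vectors have already been extracted (rows = samples).

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(feat_real, feat_gen):
    """Standard FID between two feature sets, using only their means and covariances."""
    mu_r, mu_g = feat_real.mean(axis=0), feat_gen.mean(axis=0)
    cov_r = np.cov(feat_real, rowvar=False)
    cov_g = np.cov(feat_gen, rowvar=False)

    covmean = linalg.sqrtm(cov_r @ cov_g)        # matrix square root of the product
    if np.iscomplexobj(covmean):
        covmean = covmean.real                    # drop tiny imaginary parts

    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```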
After training of the GPNN model is completed, the model is run for a further 10,000 iterations with the multi-scale time-frequency fault sample data to generate fault samples at each sampling time step; the generated sequences are compared with real sample segments in Fig. 2. The horizontal axis in the figure represents the time step, while the vertical axis indicates the normalised values of the corresponding multi-scale time-frequency fault types.

The GPNN generates the fault sample sequence
Comparing the real fault samples with the generated fault sample sequences in terms of numerical magnitude and variation, the two types of data do not coincide numerically, but there is a clear temporal correlation within the time steps, indicating that the generated data retain the static and temporal characteristics of the real data to a certain extent. Overall, the fault sample sequences generated by the GPNN model are in good agreement with the real fault samples, with a consistency of more than 90.57%. In addition, the difference in numerical magnitude between the generated and real data shows that the generated samples are not mere numerical copies of the real data, so using them for fault sample generation helps to increase the diversity of fault samples.
Six different fault types generated by the GPNN model are used as data labels to verify the effectiveness of the SNN model on the training and test sets, respectively. Each fault type contains 150 training samples and 50 test samples, and each sample consists of three types of vibration feature maps: time-domain, frequency-domain and time-frequency-domain feature maps. In this paper, the cross-entropy function is used as the loss function for model training, aiming to improve the generalisation ability of the brain-like fault diagnosis model. The trend of model accuracy and loss value during training is shown in Fig. 3.

Change of accuracy rate and loss value during training
To reduce the volatility of the curves, the training process is plotted against the iteration epoch, i.e., each complete traversal of the training set, at which point the loss value and accuracy are recorded. It can be seen that the model converges quickly after the 6th iteration epoch: the loss function value drops to about 0.034, and the accuracy of both the training set and the test set exceeds 94%. In addition, because this paper adopts the mini-batch stochastic gradient descent algorithm, the mini-batch of test samples drawn each time has a certain randomness, so the test-set accuracy fluctuates slightly, but the overall trend is upward. It can be seen that the brain-like intelligent fault diagnosis model fits the multi-scale time-frequency fault vibration dataset of the UHV converter equipment well, with high fault diagnosis speed and accuracy.
To verify the comprehensive fault diagnosis performance of the brain-like intelligent fault diagnosis model designed in this paper, XGBoost, SVM, CNN, LSTM, GRU and CNN-GRU fault diagnosis models were also constructed, and fault diagnosis was carried out on the test set to compare their accuracies. To avoid chance errors and evaluate the diagnostic ability of the models objectively, multiple fault diagnosis experiments are required; in this paper, each model performed 50 fault diagnosis runs, and the average accuracy over the 50 runs was taken as the final diagnostic accuracy. Figure 4 displays the fault diagnosis accuracy of the 50 runs.

Comparison results of fault diagnosis model
For multi-scale time-frequency fault diagnosis of UHV converter station equipment, the model in this paper possesses both multi-scale deep features and time-series feature-extraction capability. Its accuracy is improved by 24.35%, 21.19%, 12.16%, 11.93% and 8.41% compared with the SVM, XGBoost, LSTM, GRU and CNN models, which have only single-feature extraction capability, and by 6.22% compared with the combined single-branch CNN-GRU model. In terms of speed, the structurally simple machine learning models SVM and XGBoost and the single deep learning models LSTM, GRU and CNN perform better, but the model built in this paper still satisfies the rapidity requirement. Although the single deep learning models CNN, GRU and LSTM have shorter diagnostic times, repeated experiments show that their accuracy and stability are insufficient. From the diagnostic process of the fault diagnosis model, the diagnostic duration of this paper’s model is less than 2.42 ms, which meets the requirement for rapid fault diagnosis.
The accuracy of this paper’s model in 50 times fault diagnosis has a significant improvement over other models, reaching 98.06%, and its training time is only 37.14 seconds, which is much lower than other models. Since the model in this paper is based on brain-like computing and extracts the multi-scale features of the faults of the UHV converter equipment through the GPNN model, it is more capable of extracting the features at different scales and has a faster computational speed. Due to the optimization of the key parameters in the model, the diagnostic model with the best parameter ratios is obtained, which further improves the computing performance and diagnostic accuracy of the model. In summary, the model in this paper has higher accuracy while still being fast.
To achieve intelligent management of power operation and inspection, this paper proposes a brain-like intelligent fault diagnosis model combining multi-scale spatio-temporal data. The KPCA algorithm is used to fuse the multi-scale time-frequency features of the UHV converter equipment, which is input into the GPNN model to generate fault samples in order to expand the sample data to improve the brain-like intelligent diagnosis effect.
When used for the fusion of UHV time-frequency fault features, the KPCA algorithm improves the discrimination provided by PCA1, and the distribution of the eigenvalues clarifies the eigenvalue intervals of the various fault types; for example, the normal type has an eigenvalue distribution between 4000 and 5200. As the number of iterations increases from 50 to 500, the GPNN model gradually reduces the FID values of the six categories of generated fault samples, with the inner and outer ring + rolling body composite fault (TCF) achieving the smallest FID value of 21.16. In addition, the consistency between the fault sample sequences generated by the GPNN model and the real fault sample sequences exceeds 90.57%. The nearest-neighbour generative fragment model thus effectively generates fault feature samples of UHV converter equipment, providing reliable data support for improving fault identification accuracy. The loss value of the SNNs intelligent fault diagnosis model designed on the basis of brain-like computing converges stably at about 0.034 after 6 iteration epochs, and the fault diagnosis accuracy of the SNNs model is improved by 24.35% compared with that of the SVM model, which has only single-feature extraction capability. The model’s overall training time is only 37.14 seconds, significantly less than that of the other comparison models.
Starting from the multi-scale spatio-temporal fault characteristics of the UHV converter equipment, we can use the nearest-neighbor generation segment to obtain more diverse fault samples. By combining this with a brain-like intelligent fault model, we can accurately identify power equipment faults and provide reliable data support, thereby improving the efficiency of power equipment operation and inspection management.