
Big Data Algorithm for Resource Potential Awareness Response Optimization on the Power User Side Based on IoT Edge Computing

27 Feb 2025


Introduction

With the increasingly severe global energy crisis and growing environmental protection awareness, optimizing power system operation has become a research hotspot. In this context, combining IoT technology and edge computing has brought new changes to the power system, especially in the perception and response optimization of user-side resource potential [1, 2]. This paper aims to explore a big data algorithm for power user-side resource potential perception and response optimization based on IoT edge computing, in order to improve the operating efficiency of the power system, reduce energy consumption, and promote the efficient use of renewable energy.

The development of IoT technology makes intelligent power systems possible. By deploying many sensors and smart devices on the power user side, power usage data can be collected in real time to achieve refined management of power resources [3]. However, with the proliferation of data volume, the traditional cloud computing model faces many challenges in data transmission, processing, and response. Data transmission delay, bandwidth limitations, and centralized processing bottlenecks may all affect power systems' real-time reliability [4, 5]. To solve these problems, edge computing emerged, pushing data processing and storage to the network's edge, close to the data source, thus significantly reducing data transmission distance and delay and improving response speed [6].

On the power user side, the perception and response optimization of resource potential is the key to achieving efficient energy utilization. User-side resources include various distributed energy resources (such as solar and wind energy), energy storage equipment, electric vehicles, and adjustable loads [7, 8]. The rational scheduling and optimal allocation of these resources are significant for balancing grid load, improving energy utilization, reducing operating costs, and reducing environmental pollution [9]. However, due to the diversity and complexity of user-side resources, traditional optimization methods often struggle to adapt to the rapidly changing electricity market and user needs.

Big data algorithms have unique advantages in processing and analyzing massive data and can provide a scientific basis for the optimal allocation of resources on the power user side [10]. Power demand and renewable energy output can be accurately predicted through big data analysis, and the benefits of different resource combinations can be evaluated to formulate an optimal resource scheduling strategy. However, applying big data algorithms in power systems also faces challenges such as data heterogeneity, high real-time requirements, and high computational complexity [11]. Therefore, developing big data algorithms suitable for optimizing resource potential perception and response on the power user side, combined with IoT edge computing technology, is an essential direction of current research.

This article will introduce the application status and development trends of the IoT, edge computing, and big data technology in power systems. The needs and challenges of resource potential perception and response optimization on the power user side will be explored in depth, and the limitations of existing technologies will be analyzed. On this basis, this paper will propose a big data algorithm framework for power user-side resource potential perception and response optimization based on IoT edge computing and elaborate on the algorithm's design principles, implementation methods, and expected effects. Through simulation experiments and actual case analysis, this paper will verify the effectiveness and superiority of the proposed algorithm and provide theoretical support and technical guidance for the intelligent upgrading of power systems.

Theoretical basis and technical background
IoT Technology Principles and Architecture

The IoT connects any object to the network through information-sensing equipment according to agreed protocols. Objects exchange and communicate information through the information dissemination medium to realize intelligent identification, positioning, tracking, supervision, and other functions [12, 13].

The typical architecture of the IoT is divided into three layers from bottom to top: the perception layer, the network layer, and the application layer [14, 15]. The perception layer provides the IoT's core capability of comprehensive perception and is the part where critical technologies, standardization, and industrialization urgently need breakthroughs; the key lies in achieving more accurate and comprehensive perception while solving low-power-consumption, miniaturization, and low-cost issues. The network layer mainly takes the mobile communication network with extensive coverage as its infrastructure and is the most standardized, industrialized, and mature part of the IoT; the key lies in optimizing and adapting it to the characteristics of IoT applications to form a system-aware network [16, 17]. The application layer provides rich applications, combining IoT technology with industry informatization needs to realize extensive intelligent application solutions; the key lies in industry integration, the development and utilization of information resources, low-cost and high-quality solutions, information security guarantees, and effective business models.

The IoT system is mainly composed of an operation support system, a sensor network system, a business application system, and a wireless communication network system [18]. The required information can be collected through the sensor network. In practice, customers can use RFID readers and related sensors to collect the required data. After the data is aggregated at the gateway terminal, it can be smoothly transmitted to the designated application system through the wireless network.

Edge computing concepts

The power IoT supports a variety of business services for the construction of the smart grid, including sensing and detection, remote equipment monitoring, and equipment inspection [19]. These business requirements are diverse. For example, services such as video surveillance and equipment diagnosis place higher demands on computing resources, while services such as smart meter monitoring and inspection robots are very sensitive to delay and therefore require more timely calculation results. At the same time, with the continuous construction and development of the power IoT, the data volume of power business terminals is growing explosively [20]. If only the traditional cloud computing model were adopted, the pressure on the central cloud server and network transmission would increase sharply, delaying the processing of most power tasks and making it difficult to meet business needs [21, 22]. In this context, edge computing technology is widely used to reduce the processing latency of power tasks.

Edge computing is a concept of computing near the data source: some computing operations are placed on local computing devices closer to the data source so as to minimize the network cost and the waiting time for uploading data to the cloud [23]. A core technology of edge computing is task allocation, which mainly includes two parts: task offloading and resource allocation [24]. Task offloading is the premise of resource allocation; it determines whether computing tasks need to be offloaded and which parts of the tasks should be offloaded. Resource allocation is the reasonable allocation of the resources consumed by tasks in the process of offloading. The main difference between the two is that the former belongs to the planning stage, while the latter belongs to the implementation stage. Edge computing task offloading refers to taking advantage of edge computing to place some tasks that would otherwise have been uploaded to the central cloud server on an edge server closer to the user side for execution, thereby achieving lower task transmission delay [25]. More specifically, it refers to the process of reasonably allocating tasks to edge devices with strong computing power, such as edge servers, while satisfying specified constraints [26]. These constraints usually include factors such as network communication service quality and the load balancing of edge servers.
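To make the offloading decision concrete, the following is a minimal Python sketch (not the paper's algorithm) of a latency-based offloading rule: a task is offloaded only when transmitting its input and computing it on the edge server beats local execution and still meets its deadline. The Task attributes, parameter names, and numbers are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float      # CPU cycles required (hypothetical attribute)
    data_bits: float   # input data size to transmit
    deadline_s: float  # latency constraint

def should_offload(task: Task, f_local: float, f_edge: float, uplink_bps: float) -> bool:
    """Offload only if edge execution (transmit + compute) beats local
    execution and still meets the deadline. Purely illustrative."""
    t_local = task.cycles / f_local
    t_edge = task.data_bits / uplink_bps + task.cycles / f_edge
    return t_edge < t_local and t_edge <= task.deadline_s

# Example: a 0.5-Gcycle task with 2 Mb of input on a 1 GHz device,
# offloaded to a 10 GHz edge server over a 50 Mbps uplink.
task = Task(cycles=5e8, data_bits=2e6, deadline_s=0.5)
print(should_offload(task, f_local=1e9, f_edge=1e10, uplink_bps=5e7))  # True
```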

Power user-side resource potential sensing technology

With the continuous development of computer technology, perception technology has also kept evolving, culminating in a dynamic decision model based on the Endsley situation-awareness model [27]. The Endsley dynamic decision model divides perception into three levels and combines system factors and individual factors to support the perception system's intelligent decision-making and control functions. A schematic diagram of the Endsley dynamic decision model is shown in Figure 1.

Figure 1.

Schematic diagram of dynamic decision model

Element awareness is the first level of perception: various information is collected through perception means to form a preliminary understanding of the outside world, covering the stages of data collection and data preprocessing. The data acquisition stage collects information through sensors, remote sensing, and other technologies, and the data preprocessing stage carries out preliminary processing on the collected information, including noise smoothing, missing-value filling, deletion, integration, reduction, transformation, and other steps.
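As an illustration of that preprocessing stage, the following sketch (hypothetical data and column names, using pandas) deduplicates, fills, and smooths a short run of meter readings:

```python
import numpy as np
import pandas as pd

# Illustrative raw meter readings with a gap, a noise spike, and a duplicated timestamp.
raw = pd.DataFrame(
    {"kw": [3.1, 3.3, np.nan, 12.9, 3.2, 3.2]},
    index=pd.to_datetime([
        "2024-01-01 00:00", "2024-01-01 00:15", "2024-01-01 00:30",
        "2024-01-01 00:45", "2024-01-01 00:45", "2024-01-01 01:00",
    ]),
)

clean = (
    raw[~raw.index.duplicated(keep="first")]     # deletion: drop duplicate records
       .interpolate()                            # filling: linear fill of the missing value
       .rolling(window=3, min_periods=1).mean()  # smoothing: moving average damps noise
)
print(clean)
```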

Understanding is the key part of perception, that is, in-depth analysis of the collected information to form a systematic understanding of the outside world. This step mainly includes feature extraction, classification recognition, pattern recognition, and other steps [28]. In the feature extraction stage, the key information in the data is extracted to facilitate the subsequent classification recognition and pattern recognition. In the classification recognition stage, the data is divided into different categories according to the extracted features to facilitate further pattern recognition. In the pattern recognition stage, the data of each category is deeply analyzed to identify the external patterns so as to facilitate the real-time grasp of the outside world.

Prediction is the ultimate goal of perception: future development is predicted based on what has been grasped about the outside world. This step includes model establishment, simulation, and other steps. The model establishment stage builds a prediction model based on the existing model of the external world, while the simulation stage simulates future development according to the established model so as to facilitate prediction of the external world. It is also necessary to support decision-making and execution comprehensively with the help of system factors, such as automation and system capabilities, and individual factors, such as personal experience and abilities. The Endsley dynamic decision model is a complex system that realizes a comprehensive understanding of the outside world through three aspects: awareness, understanding, and prediction. Progress in perception technology not only helps improve the efficiency of decision-making but is also of great significance for addressing various security threats and environmental problems.

Construction of computational model for electronic resource potential evaluation

Demand-side response has very important social benefits. In power market competition, power users will adjust their power consumption patterns according to their own consumption situation and real-time electricity prices so as to maximize their benefits while meeting their power demand. This reduces the power load during peak consumption periods to a certain extent, thereby reducing the pressure on the power generation side, and it can also lower the real-time electricity price in the market [29, 30]. In addition, transferring part of the electricity consumption from peak periods to off-peak periods can improve the overall generation efficiency of power plants. It can be seen that, through the interaction between users and power plants, the benefits of demand-side response can be maximized, and the efficiency of the power generation side can be improved.

On the power generation side of the power system, whether a unit is started or stopped must comprehensively consider factors such as generation costs and users' electricity consumption. Generation costs change nonlinearly, and users' electricity consumption is also constantly changing, so different decision results directly determine different real-time market electricity prices. Aiming at the unit start-stop problem, this study therefore establishes a corresponding power model that considers the change of power load on the demand side; the model is solved with the Lagrange algorithm. In this model, compound bidding is adopted, and the generation decision considers the impact of unit start-stop on the real-time market electricity price. Under the incentive of real-time electricity prices, power users actively change their consumption and transfer unnecessary electricity use from peak periods to off-peak periods. On this basis, cost and benefit change indicators for the power system, the consumption side, and the generation side before and after demand response are established and quantitatively assessed. The start-stop changes of units after demand-side response and the corresponding electricity price fluctuations are considered from two aspects, comprehensive benefits and marginal benefits, and the impact of unit start-stop on the benefits of the generation side and the consumption side is analyzed, so as to evaluate the social benefits brought by demand-side response.

Demand-side response requires the consumption side to actively change its electricity consumption behavior under the incentive of real-time electricity prices and transfer part of its electricity use from peak periods to off-peak periods, thereby reducing the generation burden on the power generation side and lowering the market electricity price. This dampens instability and sudden price changes, reduces market risks, and enables efficient operation of the power system. Under normal circumstances, demand-side response is carried out in the day-ahead market to ensure that users can adjust their electricity consumption behavior in time.
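The following short Python example (hypothetical loads and prices, not data from the paper) illustrates the basic mechanism: shifting a fraction of consumption from the highest-price period to the lowest-price period lowers the user's bill.

```python
# Hypothetical per-period load (kW) and real-time prices ($/kWh).
load  = [8, 6, 5, 14, 20, 12]
price = [0.08, 0.06, 0.05, 0.12, 0.18, 0.10]

def shift_peak_to_valley(load, price, fraction=0.3):
    """Move a fraction of consumption from the priciest to the cheapest period."""
    peak, valley = price.index(max(price)), price.index(min(price))
    shifted = list(load)
    moved = load[peak] * fraction
    shifted[peak] -= moved
    shifted[valley] += moved
    return shifted

before = sum(l * p for l, p in zip(load, price))
after = sum(l * p for l, p in zip(shift_peak_to_valley(load, price), price))
print(f"bill before: {before:.2f}, after: {after:.2f}")  # 7.73 -> 6.95
```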

In order to quantitatively evaluate the market benefits, the evaluation tool selected in this paper is the power trading model of a competitive power pool. Power pool markets now generally adopt 24 or 48 trading periods a day, that is, one transaction every hour or half hour; this article adopts 24 trading periods per day. Generation bidding adopts the compound bidding mode, which includes the start-up cost of the unit, the incremental cost of generation, and the no-load generation cost. Consumption-side bidding indicates how electricity consumption changes with the real-time electricity price. The market design takes demand transfer on the consumption side into account, and users' electricity demand can shift from peak periods to trough periods. The ultimate goal of the transaction is to minimize the cost of electricity purchase under the bids given by the generation side. The electricity market in this paper is a perfectly competitive market, and the bid given by the generation side is taken to be its marginal generation cost, so the goal of minimizing purchase cost can be converted into minimizing generation cost, as shown in formula (1):

$$f(x)=\min\sum_{t=1}^{T}OC_{t},\qquad OC_{t}=\sum_{i}u_{i,t}\left(FC_{i}+\sum_{k}MC_{i,k}P_{i,k,t}\right)$$

In formula (1), OC_t represents the generation cost of the system in time period t, u_{i,t} represents the start-stop state of unit i in time period t, FC_i represents the no-load generation cost of the unit, and MC_{i,k} represents the marginal cost of segment k of the piecewise linear cost of unit i. The cost curve of a generator set is generally expressed by a quadratic curve, as shown in equation (2), where t is the shutdown time, P_{i,k,t} represents the output of the k-th segment of the piecewise linear cost of unit i in period t, S_{i,t} represents the start-up cost of unit i in time period t, and τ represents the cooling time constant of the boiler.

$$C(P_{i})=\sum_{t=1}^{T}\left(P_{i,k,t}\right)^{2}+S_{i,t}+\tau$$

The three marginal costs of the piecewise linear cost curve of unit i can be expressed as equations (3)-(5), respectively, where a_i and b_i are the turbine start-up cost and boiler start-up cost when the unit is started, P_min and P_max are the minimum and maximum outputs of the unit, and e_1 and e_2 are the two breakpoints of the curve.

$$MC_{i,1}=a_{i}P_{\min,i}+a_{i}e_{1,i}+b_{i}$$
$$MC_{i,2}=a_{i}e_{1,i}+a_{i}e_{2,i}+b_{i}$$
$$MC_{i,3}=a_{i}P_{\max,i}+a_{i}e_{2,i}+b_{i}$$
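Equations (3)-(5) have the form of the average incremental cost a(x + y) + b of a quadratic cost curve C(P) = aP² + bP + c over a segment [x, y]. The following is a small Python sketch under that reading, which treats a_i and b_i as the quadratic cost coefficients; this is an assumption, since the text above also calls them start-up costs, and all numbers are hypothetical:

```python
def segment_marginal_costs(a, b, p_min, e1, e2, p_max):
    """Average incremental cost of each segment of C(P) = a*P^2 + b*P + c,
    following equations (3)-(5): over [x, y] it equals a*(x + y) + b."""
    mc1 = a * p_min + a * e1 + b   # segment [P_min, e1]
    mc2 = a * e1 + a * e2 + b      # segment [e1, e2]
    mc3 = a * p_max + a * e2 + b   # segment [e2, P_max]
    return mc1, mc2, mc3

# Hypothetical 100-400 MW unit with a = 0.002 $/MW^2·h and b = 12 $/MWh.
print(segment_marginal_costs(a=0.002, b=12, p_min=100, e1=200, e2=300, p_max=400))
# -> (12.6, 13.0, 13.4): marginal cost rises with output, as expected.
```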
Response Optimization Strategy and Algorithm Development
Overall architecture of edge IoT data processing platform

This study proposes a software architecture for an edge IoT data processing platform to support real-time IoT data access, data forwarding, big data processing, data storage, and online machine learning applications. The platform deploys distributed application services at each layer based on container technology and opens up the data flow between the layers' applications through the container orchestration capabilities and load-balancing services provided by Kubernetes. As shown in Figure 2, the system architecture consists of an IoT data access layer, a data forwarding cache layer, a big data processing layer, an online machine learning application layer, and a data storage layer, with data distributed among the distributed component instances of each layer through Kubernetes load-balancing Services. Specifically, the IoT data access layer uses distributed IoT access components deployed in containers on lightweight edge computing nodes such as ARM-architecture servers; it provides message parsing services and a data rule engine for protocols such as MQTT, interfaces directly with the data flows reported by IoT terminal devices, and supports data forwarding to various distributed message middleware. The data forwarding cache layer mainly deploys the distributed message middleware Kafka and ZooKeeper clusters to cache the data forwarded by the IoT data access layer, decoupling raw IoT data access from the big data computing engine and improving the stability of the platform. The big data processing layer deploys the stream computing engine Flink to provide big data processing functions for real-time IoT data, including stream computing, sliding-window computing, and batch processing, and stores the real-time computing results in the downstream database.

Figure 2.

Architecture diagram of edge IoT data processing platform
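As a toy stand-in for the sliding-window computation that the Flink layer performs in this architecture, the following pure-Python sketch maintains a per-device rolling mean over records as they would arrive from the Kafka cache layer (device names and values are made up):

```python
from collections import deque
from statistics import mean

class SlidingWindow:
    """Keeps the mean of the last `size` readings per device."""
    def __init__(self, size=4):
        self.size = size
        self.buffers = {}

    def update(self, device_id, value):
        buf = self.buffers.setdefault(device_id, deque(maxlen=self.size))
        buf.append(value)
        return mean(buf)

# Simulated records in arrival order.
stream = [("meter-1", 3.2), ("meter-1", 3.4), ("meter-2", 7.9), ("meter-1", 9.0)]
win = SlidingWindow(size=4)
for device, kw in stream:
    print(device, round(win.update(device, kw), 2))  # rolling mean per device
```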

The online machine learning application layer (online ML layer) uses the TensorFlow.js framework to provide a pipeline of lightweight online machine learning applications comprising three parallel processes: online training (real-time model training), online verification (real-time validation), and online prediction. The online training process periodically pulls the latest big data stream computing results as the training set and obtains the latest model through training. The online verification process loads the newly generated model into memory and verifies it using part of the data set. The online prediction process applies the latest model to the real-time online data stream, obtains the inference results output by the model, and writes them into the time series database. The data storage layer is built on the edge data storage server; it uses the time series database to receive the result data of big data stream processing and online machine learning applications, and also serves as middleware between big data processing and the machine learning applications. This layer additionally provides a visual display interface for application monitoring and stream processing results.
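A schematic skeleton of that three-stage cycle is sketched below in Python (the paper's implementation uses TensorFlow.js; here every callable is a placeholder, so this shows only the control flow, with a candidate model promoted to serving only after it passes validation):

```python
import time

def online_learning_cycle(pull_training_batch, train, validate, predict_stream,
                          write_results, interval_s=60):
    """Placeholder pipeline: pull the latest stream-computing results, retrain,
    validate on a holdout slice, and swap the model in only if it passes."""
    model = None
    while True:
        batch = pull_training_batch()                 # latest Flink results
        candidate = train(batch)                      # online training
        holdout = batch[-max(1, len(batch) // 5):]    # tail slice for checking
        if validate(candidate, holdout):              # online verification
            model = candidate                         # promote validated model
        if model is not None:
            write_results(predict_stream(model))      # online prediction -> TSDB
        time.sleep(interval_s)
```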

Big data-driven resource scheduling algorithm design

The scheduling algorithm needs to consider factors such as the execution time period of each task, so it uses the training start time, training end time, and edge-node resource state of edge machine learning training tasks as the input state of the DQN neural network, represented by the input state space set S = {Start, End, NodeRes}. Start is an M × N matrix representing, for the M inference tasks on the edge computing platform, the set of training start times measured from the timestamp at which each round of the scheduling process begins; each of the N entries is the difference between a training start time and the start time of the scheduling process. For example, if the scheduling process starts at 0 seconds and one machine learning task completes three training runs in the scheduling period, with start timestamps 2 seconds, 15 seconds, and 25 seconds after the start of the scheduling process, then the start time set of this task is {2, 15, 25}. Similarly, End is an M × N matrix representing the set of N training end times of the M inference tasks on the edge computing platform. NodeRes is an M × K matrix representing the distribution of the M inference tasks running on the K edge nodes in a scheduling time period.
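A minimal sketch of building S = {Start, End, NodeRes} as just described (NumPy; the event and placement structures are hypothetical, M = 6 and K = 3 follow the paper's later test setup, and N = 3 is illustrative):

```python
import numpy as np

M, N, K = 6, 3, 3  # tasks, trainings per period, edge nodes

def build_state(task_events, placements, t0):
    """task_events[m]: list of (start_ts, end_ts) per training of task m;
    placements[m]: index of the node task m currently runs on;
    t0: timestamp at which this scheduling round begins."""
    start, end = np.zeros((M, N)), np.zeros((M, N))
    node_res = np.zeros((M, K))
    for m, events in enumerate(task_events):
        for n, (s, e) in enumerate(events[:N]):
            start[m, n] = s - t0   # offset from scheduling-round start
            end[m, n] = e - t0
        node_res[m, placements[m]] = 1
    return start, end, node_res

# The worked example from the text: one task trained at +2 s, +15 s, +25 s.
events = [[(2, 10), (15, 22), (25, 33)]] + [[] for _ in range(M - 1)]
start, _, _ = build_state(events, placements=[0] * M, t0=0)
print(start[0])  # -> [ 2. 15. 25.]
```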

The scheduling action space output by the DQN scheduling algorithm is an M × K matrix representing the probability Q values of scheduling the M machine learning tasks to the K edge computing nodes. The greater the Q value, the stronger the tendency of the DQN neural network to schedule the task to that node in this scheduling round, as shown in equation (6):

$$Q_{\pi}(s,a)=\begin{pmatrix}Q_{11}&\cdots&Q_{1K}\\ \vdots&\ddots&\vdots\\ Q_{M1}&\cdots&Q_{MK}\end{pmatrix}$$

The agent takes the maximum Q value in the matrix each time and obtains its position coordinates in the probability matrix, which identify the machine learning task m and the target node k to be scheduled, as shown in equation (7):

$$(m,k)=\underset{a_{t}\in A}{\arg\max}\,Q_{\pi}(s,a)$$

Here argmax denotes the set of argument values at which a function attains its maximum over its domain. In this paper, the reward calculation is based on the average Job Completion Time (JCT) of online machine learning tasks on the edge computing platform. The scheduling process needs to output multi-step scheduling decisions for multiple Inferences and then interact with the environment with all Inference scheduling decisions together; after scheduling is completed, the reward is calculated from the average JCT observed at the beginning of the next round of the scheduling process. In other words, to ensure that each Inference obtains a scheduling decision, the DQN neural network must produce M consecutive, non-repetitive action outputs to obtain the target edge node of each machine learning task to be scheduled, after which each task is actually scheduled and orchestrated. The reward obtained by the j-th output action of the i-th training episode is defined as equation (8):

$$reward_{(i,j)}=\gamma^{(M-j)}\cdot reward_{i},\quad j\in\{1,\ldots,M\}$$

The reward given by the environment can only be obtained after the agent interacts with the environment and performs scheduling, once the DQN network outputs the last scheduling decision. Therefore, this paper uses γ as the reward attenuation factor: the closer an action is to the M-th (final) action, the larger the share of the final reward of this iteration it receives. The reward obtained by the i-th episode is defined as equation (9):

$$reward_{i}=\begin{cases}0, & i=1;\\ rw+\lambda_{1}, & i\geq 2,\ cost(s_{i},M)\neq 0,\ MinCost>cost(s_{i},M);\\ rw, & i\geq 2,\ cost(s_{i},M)\neq 0,\ \left|MinCost-cost(s_{i},M)\right|<\delta;\\ \lambda_{2}\cdot\left(MinCost-cost(s_{i},M)\right), & \text{otherwise.}\end{cases}$$

Among them, λ1 and λ2 are variable reward control parameters; in this paper, λ1 is set to 100 and λ2 to 10. rw is used to adjust the current reward: when the minimum cost value in history is greater than the cost(s_i, M) obtained in this iteration, rw automatically increases by λ1. The cost value obtained by the scheduling decision output in the i-th episode is calculated by formula (10):

$$cost(s_{i},M)=\lambda_{3}\left(\omega_{1}JCT_{1}+\cdots+\omega_{K}JCT_{K}\right)+\lambda_{4}\,Var(Num_{i})$$

where Num_i is the matrix of the numbers of tasks running on the K edge nodes in the i-th episode, and the variance of this matrix forms part of the cost. ω_k denotes the weight of the average completion time of the online machine learning tasks running on the k-th edge node; λ3 and λ4 are likewise variable weight coefficients, set in this paper to 0.01 and 20, respectively. Offline simulation also needs to estimate the average JCT of each task to obtain the reward value in the current state. As noted above, the completion time of a task is related to the computing resources it actually obtains on the current node: when several machine learning tasks run on one node, the computing time slices obtained by each task shrink accordingly.
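To make equations (8)-(10) concrete, here is a small Python sketch using the parameter values quoted above (λ1 = 100, λ2 = 10, λ3 = 0.01, λ4 = 20); the branch conditions of equation (9) follow the reconstruction given above, and δ and the example inputs are hypothetical:

```python
import numpy as np

LAMBDA1, LAMBDA2 = 100, 10    # reward control parameters from the text
LAMBDA3, LAMBDA4 = 0.01, 20   # cost weight coefficients from the text

def cost(jct_per_node, weights, num_tasks_per_node):
    """Equation (10): weighted average JCT plus a load-balance variance term."""
    return LAMBDA3 * float(np.dot(weights, jct_per_node)) \
         + LAMBDA4 * float(np.var(num_tasks_per_node))

def episode_reward(i, cost_i, min_cost, rw, delta=1.0):
    """Equation (9), per the reconstruction above."""
    if i == 1:
        return 0.0
    if cost_i != 0 and min_cost > cost_i:
        return rw + LAMBDA1
    if cost_i != 0 and abs(min_cost - cost_i) < delta:
        return rw
    return LAMBDA2 * (min_cost - cost_i)

def per_action_rewards(episode_r, M, gamma=0.9):
    """Equation (8): discount the episode reward back across the M actions."""
    return [gamma ** (M - j) * episode_r for j in range(1, M + 1)]

r = episode_reward(i=2, cost_i=8.0, min_cost=9.5, rw=0.0)
print(r, per_action_rewards(r, M=3))  # the M-th action keeps the full reward
```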
According to the experimental results comparing the average JCT and execution overlap time of online machine learning tasks, it is assumed that the JCT of a task running on edge node k is related to the task's average JCT and the overlap time intervals with the other tasks running on node k, and it is estimated as shown in equation (11):

$$JCT_{m}(k)=JCT_{m}^{*}(k)+\lambda_{5}N(k)+\lambda_{6}Num(k)$$

Among them, N(k) represents the sum of task overlap times on the k-th edge node, calculated from the start and deadline times of all online machine learning tasks on the node. Num(k) denotes the number of tasks on the k-th edge node. JCT*_m(k) represents the average JCT when only the m-th machine learning task is running on edge node k, which can be measured directly by deploying the task alone on the node. λ5 and λ6 are likewise variable weight coefficients.
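Equation (11) itself reduces to a one-line estimator; in this sketch λ5 and λ6 are set to made-up values, since the paper leaves them as variable coefficients:

```python
LAMBDA5, LAMBDA6 = 0.5, 1.0  # illustrative values only

def estimate_jct(jct_solo, overlap_sum, num_tasks):
    """Equation (11): JCT_m(k) = JCT_m*(k) + λ5·N(k) + λ6·Num(k).
    jct_solo: measured JCT of task m running alone on node k;
    overlap_sum: total task overlap time N(k) on node k;
    num_tasks: number of tasks Num(k) currently on node k."""
    return jct_solo + LAMBDA5 * overlap_sum + LAMBDA6 * num_tasks

print(estimate_jct(jct_solo=12.0, overlap_sum=6.0, num_tasks=3))  # -> 18.0
```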

Experiment and Results Analysis

In this paper, the DQN reinforcement learning offline scheduling algorithm with the parameter settings above is used for simulated scheduling tests, with a total of 5000 iterations. Six LSTM online power prediction tasks are deployed on three edge computing worker nodes, and the scheduling period is set to one minute. The real task start and end times and the task-to-node distribution data sent by the scheduling controller through the Kafka middleware are used as the input data for DQN offline scheduling training. Figure 3 shows the convergence of the cost value during training. It can be seen from the figure that the cost value gradually converges as the number of training iterations increases, indicating that the DQN scheduling algorithm can be applied to practical edge computing machine learning task scheduling scenarios. However, because offline scheduling requires a large number of training iterations, an online scheduling process still needs to be developed.

Figure 3.

Random Forest performance curve

The test results are shown in Figure 4. It can be seen that when the computing power of edge nodes is relatively high, the benefit of outsourcing is relatively low; in this case, edge IoT agents should tend to outsource only a small number of tasks that do not require high real-time performance, or even forgo outsourcing rather than borrow computing resources from cloud data centers. As the computing power of edge IoT agents decreases and the amount of data increases, the revenue from outsourcing tasks gradually increases, and most tasks should then be outsourced to the cloud for processing. When the computing power of edge nodes is low enough, roughly below 40 GHz, the benefit of outsourcing tasks tends toward a constant.

Figure 4.

Relationship between computing power and task data size

Figure 5 depicts the comparison between the dynamic differential game model task outsourcing algorithm proposed in this paper and the Stackelberg game model algorithm in terms of system overhead and system revenue. The cost of the edge IoT agent outsourcing tasks to the cloud consists of the computing cost E_1 of the tasks outsourced to the cloud, the resource consumption E_2 of sending the results back to the edge IoT agent, and the possible loss of data reliability E_3 caused by uploading data to the cloud computing center, that is, E_1 + E_2 + E_3. The benefit of the edge IoT agent outsourcing tasks to the cloud can be calculated accordingly. It can be seen that the overhead of the algorithm proposed in this paper and the outsourcing benefit of edge IoT agents are both better than those of the Stackelberg game model algorithm, reducing the system overhead of edge IoT agents by 9%-15%.

Figure 5.

Analysis of system energy consumption results

Comparing the task execution times of RRA, GPA, and ACPEC under different numbers of tasks, as shown in Figure 6, it can be found that the task execution time of ACPEC is always shorter than that of GPA and RRA. As the number of tasks increases, this advantage becomes more and more pronounced and is clearly evident when the number of tasks reaches 320.

Figure 6.

Task completion performance comparison

It can be seen from Figure 7 that when the data processing capacity of the MEC server remains unchanged, the system delay shows an upward trend as the number of power devices connected to the system increases; with more access devices, task processing requires more time, so the system delay grows. When the number of power devices is fixed, the system delay shows a downward trend as the data processing capacity of the MEC server increases: the delay required for data processing keeps decreasing, which lowers the overall delay, so the system delay declines correspondingly, in line with theoretical expectation.

Figure 7.

Impact of server computing power on system latency
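The trend in Figure 7 can be reproduced qualitatively with a toy proportional model (purely illustrative, not the paper's simulation model): delay grows linearly with the number of devices and inversely with server capacity.

```python
def system_delay(num_devices, task_bits_per_device, mec_capacity_bps):
    """Toy model: total delay = total workload / server processing capacity."""
    return num_devices * task_bits_per_device / mec_capacity_bps

for devices in (100, 200, 400):          # more devices -> longer delay
    for capacity in (1e9, 2e9):          # more capacity -> shorter delay
        print(devices, f"{capacity:.0e}",
              round(system_delay(devices, 1e6, capacity), 3))
```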

It can be seen from Table 1 that when unit start-stop is not considered, the total revenue of the system accrues entirely to the consumption side, and the generation side contributes to peak shaving without enjoying the benefits it deserves. The situation changes considerably once unit start-stop is taken into account: in this example, the user side not only fails to benefit but pays a substantial price, while the generation side obtains higher returns. This example shows that ignoring unit start-stop conditions affects the accurate calculation of system revenue.

Table 1. Overall prediction results

|  | Consider unit start-stop | Regardless of unit start-stop |
| --- | --- | --- |
| Power generation side income | 75348 | 0 |
| Electricity side income | -73554 | 1484 |
| Total revenue | 1794 | 1484 |

In Figure 8, the electricity price falls rather than rises during the peak consumption period, mainly because minimizing generation cost is the overall goal of the system. Under this goal, the optimization result may start small units with higher marginal costs but flexible start-up and shut down large units with lower marginal costs, which can cause electricity prices in some periods to rise even as the peak price falls.

Figure 8.

Market electricity price in each period before and after demand response

It can be seen from Figure 9 that there is an obvious peak-valley difference in the cleared electricity volume, and the electricity price fluctuates considerably with it. Generator sets with relatively low quotations are started during low-consumption periods, and generator sets with high quotations are started during peak periods to meet users' electricity demand. The comparison of marginal benefit indicators under the four scenarios is shown in Table 2.

Figure 9.

Power clearing situation

Table 2. Marginal benefits under the four scenarios

| Scenario | Social marginal benefit | Weighted average marginal generation cost | Weighted average user marginal benefit |
| --- | --- | --- | --- |
| 1 | 651.61 | 776.44 | 1564.07 |
| 2 | 664.13 | 765.57 | 1565.86 |
| 3 | 676.02 | 760.24 | 1573.05 |
| 4 | 699.39 | 737.59 | 1573.84 |

From the feature selection results in Figure 10, the evaluation value of the optimal feature subset is high, indicating that the optimal feature subset is highly correlated with the distribution network risk prediction data. The external fault influencing factors and operation influencing factors are largely retained, because these two categories capture the time, weather, regional, and operation characteristics of smart distribution network faults, and each characteristic is strongly related to distribution network faults while remaining relatively independent of the others. Only four redundant variables that duplicate other characteristic variables are eliminated: regional characteristic data, weekly data, average temperature data, and distribution transformer capacity.

Figure 10.

Feature selection results

Conclusion

With the rapid development of IoT technology, edge computing, as an important branch of it, is increasingly widely used in power systems. In this paper, a big data algorithm for resource potential awareness and response optimization on the power user side based on IoT edge computing is proposed, aiming to improve the operational efficiency and reliability of the power system. By deploying intelligent sensing devices on the user side to collect and analyze users' power consumption data, combined with the real-time processing capabilities of edge computing, the optimal allocation of power resources is achieved.

By simulating the electricity consumption of a small community, the difference in response time between traditional centralized processing methods and edge computing methods is compared. The results show that the average response time of edge computing methods is shortened by 45%, which significantly improves the real-time performance of the system.

By simulating the access of different user-side resources, it is found that the proposed algorithm can effectively identify and integrate various resources, such as energy storage equipment, renewable energy, etc., so as to achieve optimal allocation of resources. The experimental results show that compared with the traditional algorithm, this algorithm improves the resource utilization rate by 20%, and effectively reduces the electricity cost of users.

The algorithm's data processing speed and accuracy were evaluated by simulating the processing of large-scale user-side data. Experimental results show that this algorithm is 30% faster than traditional big data processing algorithms while maintaining a high accuracy rate of more than 98%.

The proposed big data algorithm for power user-side resource potential perception and response optimization based on IoT edge computing performs well in improving the real-time performance of the power system, optimizing resource allocation, and enhancing big data processing performance. The algorithm can not only respond quickly to changes in user-side resources but also effectively integrate various resources to improve the overall efficiency and reliability of the power system. With the further development of IoT and edge computing technology, this algorithm is expected to play a more important role in future smart grids.