Research on the optimal scheduling strategy of cloud computing resources based on genetic algorithm
Published online: 26 March 2025
Submitted: 09 Nov. 2024
Accepted: 26 Feb. 2025
DOI: https://doi.org/10.2478/amns-2025-0815
Keywords
© 2025 Yanan Cui et al., published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
With the rapid development of information technology, cloud computing, as an emerging computing model, has received widespread attention and application. The core idea of cloud computing is to provide computing resources, such as processors, storage and applications, to users in the form of services through the network, achieving centralized management and on-demand distribution of resources [1]. The rise of cloud computing enables users to obtain the required computing and storage resources via the Internet without investing in large amounts of hardware equipment or bearing maintenance costs, and it is widely used in various fields, including enterprise IT, scientific research, entertainment, healthcare, etc. [2-5]. The emergence of this model not only reduces users’ hardware and software costs, but also greatly improves resource utilization and system flexibility [6-7]. However, as cloud computing applications continue to expand, the management and scheduling of resources become more complex and critical [8]. A typical cloud computing environment contains large numbers of virtual machines, containers, and storage resources, which need to be efficiently allocated and managed to satisfy user requirements and provide high performance and availability [9-11]. How to allocate these resources reasonably and efficiently to meet the needs of different users while maximizing the overall performance of the system is a key problem to be solved in cloud computing [12-13].
The quality of the resource scheduling algorithm directly affects the performance of cloud computing systems. A good scheduling algorithm can dynamically adjust the allocation of resources according to the characteristics of the tasks and the state of the system, so as to balance the system load, improve task execution efficiency, and reduce system energy consumption [14-17]. From the initial priority-based scheduling algorithms, to the later heuristic-based scheduling algorithms, to the current scheduling algorithms based on intelligent optimization algorithms, each advancement has pushed forward the development of cloud computing technology [18-20]. In particular, scheduling strategies based on intelligent optimization algorithms, such as the genetic algorithm and the particle swarm optimization algorithm, have achieved remarkable results in cloud computing resource scheduling due to their powerful global search capability and adaptivity [21-23]. Therefore, it is of great theoretical significance and application value to study optimization algorithms for cloud computing resource scheduling, to improve the efficiency and adaptability of the algorithms, and to reduce the energy consumption of the system.
Priority-based scheduling algorithms are among the earliest cloud computing resource scheduling algorithms. Literature [24] proposed a cloud resource scheduling algorithm that considers the dynamic priority of tasks, which achieves minimum processing time and maximum utilization for tasks in heterogeneous cloud computing environments compared with static priority scheduling methods. Literature [25] designed a priority assignment algorithm structured on a waiting-time matrix that considers the non-preemptability and preemptability of tasks to achieve parallel execution of task assignments, which is highly effective in dynamic cloud computing environments. Literature [26] proposes an efficient prioritized task scheduling algorithm, MCPTS, for prioritizing user requests and provider resources in the cloud computing domain; it dynamically adjusts resource priorities in conjunction with a queuing model to allocate cloud resources efficiently and guarantee the quality of user services. Literature [27] emphasizes that scheduling tasks in cloud environments should focus on process preemption and energy consumption, leading to a priority-based process scheduling algorithm, PRIPSA, which uses burst time and lead time to make block-queue-based scheduling decisions for preemptible tasks and maximize the benefits of cloud computing scheduling. Priority-based scheduling algorithms have the advantage of processing high-priority tasks quickly, but in practical applications low-priority users or tasks may wait for a long time, resource allocation cannot be adjusted dynamically, and the load becomes unbalanced, affecting overall performance.
Scheduling based on the particle swarm optimization algorithm, a swarm-intelligence optimization method, is also widely used in the field of cloud computing resource scheduling. Literature [28] improves the particle swarm optimization algorithm with multiple adaptive learning strategies, which significantly improves the computational efficiency and stability of resource allocation in cloud computing resource scheduling tasks. Literature [29] introduced the application of the particle swarm optimization algorithm in cloud computing task scheduling; by establishing the objective function of the resource scheduling problem of a cloud computing system and solving for the optimal result, it effectively improves the resource utilization and scheduling efficiency of cloud computing. Literature [30] establishes a cloud computing resource scheduling objective function with virtual machine cloud service cost as the constraint and solves it with a stochastic-matrix particle swarm optimization scheduling algorithm, combined with a parallel RMPSO algorithm to reduce the time complexity of the model and make it applicable to cloud service scheduling problems. Literature [31] uses a fog computing model to study the resource scheduling problem in cloud environments and proposes a particle swarm optimization algorithm incorporating an additional gradient method as the scheduling strategy, which achieves better performance in resource utilization efficiency and task execution time. However, the particle swarm optimization algorithm may suffer from weak local search ability and a tendency to fall into local optima.
The scheduling algorithm based on the genetic algorithm, by contrast, is an optimization algorithm built on the principles of biological evolution. Compared with other algorithms, it can dynamically adjust resource allocation in cloud computing resource scheduling according to the characteristics of the tasks and the state of the system so as to maximize the overall scheduling performance. Therefore, studying a cloud computing resource optimization scheduling strategy based on the genetic algorithm has important theoretical significance and application value.
This research first reviews the basic concepts of cloud computing and resource scheduling and analyzes the shortcomings of traditional genetic algorithms. Subsequently, an improved genetic algorithm framework is proposed that optimizes six aspects, namely gene encoding and decoding, population initialization, the fitness function, the selection operator, the crossover operator, and the mutation operator, to solve the cloud computing resource scheduling problem. The optimization performance of the algorithm on cloud computing resource scheduling is then evaluated on the Cloudsim platform by deploying the experimental environment, collecting data, and determining the experimental methodology.
The term “cloud” in cloud computing [32] usually refers to a pool of resources running on an Internet platform that supports certain computing tasks on the premise of providing resources. From the user’s point of view, there is no limit to the resources on the cloud: they can be added at any time, and users can access and use them at any time according to their own requirements. The word “computing” does not refer to digital computation in the general sense, but to computing services that can provide sufficient computing power. Cloud computing is therefore, in effect, a large high-performance computer on the network that provides users with pay-as-you-go services. Cloud computing is characterized by large scale, virtualization, network-based access, resource pooling, high reliability, and pay-as-you-go pricing.
The service models of cloud computing can be categorized into the following three types according to the type of service:
Infrastructure as a Service (IaaS) Hardware devices are the prerequisite for building resource pools, and cloud computing aims to accomplish tasks that require powerful computing capability at low cost. IaaS packages various IT infrastructures as services and enables users to access them over the network. Platform as a Service (PaaS) PaaS is another output of basic cloud computing: it lets users deploy applications to the cloud using the compilation languages or development tools the platform supports. Users need not pay attention to the underlying infrastructure; freed from the cumbersome management of the bottom layer, they can focus on how to deploy their programs reasonably. Software as a Service (SaaS) SaaS refers to using the Internet to provide users with software application services. Users only need a network connection to use the services provided on the cloud directly, like ordinary software, without spending heavily on hardware and software development; they pay only a certain leasing fee to enjoy high-quality services.
The deployment model of cloud computing is not fixed; to cater to the varying needs of different users, different deployment models have emerged.
Public Cloud The public cloud is currently the most widely used model. As the name suggests, this service is for the public: a cloud deployment model serving the public is grand in scale and low in cost and can provide a huge number of IT resources. Third-party service providers supply the hardware equipment, network deployment, and various development and distribution application solutions, and bear the security management and maintenance work; users only need to pay according to their own resource usage, with no preliminary investment in equipment. Private Cloud A private cloud is a cloud computing environment built on a private network specifically for users within an enterprise. This is the core feature of the private cloud and the key feature distinguishing it from other clouds; compared with the public cloud, the private cloud is therefore more secure to a certain extent. However, under the private cloud model, companies must themselves be responsible for hardware equipment, network deployment, various development and distribution application platforms, and the complicated work of securing, managing, deploying, and maintaining IT resources. Hybrid Cloud Understood literally, a hybrid cloud is a deployment mode in which two or more different types of cloud are fused together, and it is currently a common cloud service model. The most common form mixes a private cloud and a public cloud into a hybrid cloud, enhancing applicability on the premise of ensuring security and privacy. Common hybrid clouds on the market today include Amazon VPC and VMware vCloud.
Although the scale of cloud computing has been expanding continuously, the number of resources in cloud computing is still dwarfed by ever-increasing customer demand, so cloud computing resources must be used more efficiently; this is the work of cloud computing resource scheduling.
At present, resource scheduling [33] in cloud computing is generally large-scale scheduling, and cloud computing resources are characterized by autonomy, openness, and complexity, which makes them difficult to optimize. The heuristic algorithm is a concept arising from research on optimization algorithms. Whereas the core of an optimization algorithm is to find the optimal solution to the problem, a heuristic algorithm solves the problem within a certain acceptable overhead, and in general the deviation of the feasible or near-optimal solution found from the true optimum cannot be estimated. Heuristic algorithms can be divided into two categories according to their principles. The first draws on the heredity and evolution of living organisms; the classical algorithms are mainly the genetic algorithm and the differential evolution algorithm. The second is based on swarm intelligence; the classical algorithms are the ant colony optimization algorithm, the bat algorithm, the grey wolf optimization algorithm, and the particle swarm optimization algorithm. In the following sections, the optimal scheduling strategy of cloud computing resources based on the genetic algorithm is studied in detail.
Although the genetic algorithm [34] performs well, it has some defects for the problem studied in this paper, specifically as follows:
For the selection operator, poor design may cause the algorithm to mature early, outputting a local optimum in the absence of the global optimal solution; it may also cause the algorithm to fail to converge in later stages. In the crossover process, if every individual is crossed with the same probability, the relative merits of individuals are ignored, which is clearly inappropriate. The mutation operator is designed to expand the search range of the solution space, but if the mutation probability is likewise held constant for all individuals, there is a risk of non-convergence in the late stage of the algorithm.
Gene encoding and gene decoding; population initialization In this paper, the initialization scheme of the population is adjusted as follows: a chromosome is randomly generated first, and if the chromosome fails to satisfy the constraints it is discarded and re-randomized. The constraints are as follows. The physical machine memory size is not smaller than the memory resources requested by the virtual machines.

VM deployment results
The physical machine hard disk size is not smaller than the hard disk resources requested by the virtual machines. The physical machine CPU processing power is not weaker than the CPU processing power requested by any virtual machine.
The amount of resources remaining on a physical machine is not less than the total amount of resources requested by all VMs deployed on that physical machine.
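The constraints above can be checked before a randomly generated chromosome is accepted into the population. The following Python sketch uses illustrative field names and units, since the paper does not give a concrete data layout:

```python
from dataclasses import dataclass

@dataclass
class Machine:
    memory: int   # memory size (e.g. MB)
    disk: int     # hard disk size (e.g. GB)
    cpu: float    # CPU processing power (e.g. MIPS)

def placement_is_feasible(pm: Machine, vms: list) -> bool:
    """Check that a physical machine can host the given VMs.

    Per-VM constraints: the PM's memory, disk and CPU must each be at
    least as large as every individual VM request.  Aggregate constraint:
    the totals requested by all VMs deployed on the PM must not exceed
    the PM's remaining capacity.
    """
    for vm in vms:
        if vm.memory > pm.memory or vm.disk > pm.disk or vm.cpu > pm.cpu:
            return False
    if sum(vm.memory for vm in vms) > pm.memory:
        return False
    if sum(vm.disk for vm in vms) > pm.disk:
        return False
    return True
```

In the initialization scheme described above, a chromosome failing this check for any physical machine would simply be discarded and re-randomized.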
Fitness function based on load imbalance degree The optimization objective selected in this paper is to minimize the load imbalance of the system [35]. A load imbalance degree is defined for each physical machine, and the overall load imbalance of the system aggregates the per-machine values. A smaller load imbalance indicates more balanced resource usage within the corresponding range.
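The paper's exact formulas for the per-machine and overall load imbalance did not survive extraction, so the sketch below assumes one common choice: the standard deviation of a machine's resource-utilization ratios, averaged over all machines. It illustrates the "smaller is more balanced" property rather than the paper's precise definition:

```python
import statistics

def machine_imbalance(cpu_util: float, mem_util: float, disk_util: float) -> float:
    """Load imbalance of one physical machine, sketched as the population
    standard deviation of its resource-utilization ratios: a machine whose
    CPU, memory and disk are used in similar proportions scores near 0."""
    return statistics.pstdev([cpu_util, mem_util, disk_util])

def system_imbalance(machines: list) -> float:
    """Overall load imbalance: the mean of per-machine imbalances (an
    assumed aggregation; smaller values indicate a more balanced
    deployment).  `machines` is a list of (cpu, mem, disk) ratios."""
    return sum(machine_imbalance(*m) for m in machines) / len(machines)
```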
Selection operator based on the juxtaposition selection method The juxtaposition selection method sorts the population by load-imbalance value and retains the top 5% of individuals; the selection probability of each remaining individual is computed from the ratio of the total population fitness to that individual's fitness value.

Crossover operator based on dynamically changing probability The crossover operator is an operation that alters chromosomes; it helps expand the search range of the solution space and find the optimal solution faster. Here the crossover probability changes dynamically rather than being held constant for all individuals.

Mutation operator based on decay probability The mutation probability is computed as follows: a random number is first generated and compared with the current decayed mutation probability to decide whether an individual mutates.

Crossover operator execution process
The process of the VM deployment algorithm for cloud computing resource scheduling based on the improved genetic algorithm designed in this paper is as follows:
Step 1: Input the virtual machine array VM and the physical machine array PM.
Step 2: Initialize the population and generate the initial solution.
Step 3: After initializing the population, calculate the load imbalance of each chromosome to support the subsequent steps of the algorithm.
Step 4: Execute the selection operator: screen out superior individuals for offspring inheritance using the juxtaposition selection method, and add them to the result set.
Step 5: Execute the crossover operator: select individuals for crossover based on the dynamically changing probability and perform the crossover operation for offspring inheritance.
Step 6: Execute the mutation operator: select individuals to mutate based on the decay probability, re-randomize the corresponding genes in the population inherited from the crossover operator, and recalculate the load imbalance degree.
Step 7: Determine whether the maximum number of iterations has been reached; if not, return to Step 4.
Step 8: The algorithm ends.
The specific algorithm flow chart is shown in Figure 3 below.

Flowchart of improved genetic algorithm
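Steps 1 to 8 can be sketched as a compact loop. The chromosome layout below (c[i] is the physical machine hosting VM i) and the elitist selection follow the description above, but the concrete decay formulas for the crossover and mutation probabilities are illustrative placeholders, since the paper's exact expressions are not reproduced here:

```python
import random

def improved_ga(fitness, n_vms, n_pms, n_pop=40, n_gen=100,
                elite_frac=0.05, pc0=0.9, pm0=0.1, seed=0):
    """Sketch of the improved GA (Steps 1-8).  `fitness` is the load
    imbalance of a chromosome (lower is better); a chromosome c is a
    list where c[i] is the physical machine hosting virtual machine i."""
    rng = random.Random(seed)
    new = lambda: [rng.randrange(n_pms) for _ in range(n_vms)]
    pop = [new() for _ in range(n_pop)]            # Steps 1-2: initialize
    best = min(pop, key=fitness)
    for gen in range(n_gen):
        pop.sort(key=fitness)                      # Step 3: evaluate
        n_elite = max(1, int(elite_frac * n_pop))  # Step 4: keep top 5%
        elites = [c[:] for c in pop[:n_elite]]
        # selection probability favors low load imbalance
        weights = [1.0 / (1e-9 + fitness(c)) for c in pop]
        parents = rng.choices(pop, weights=weights, k=n_pop - n_elite)
        pc = pc0 * (1.0 - gen / n_gen)             # Step 5: dynamic p_c
        children = []
        for i in range(0, len(parents) - 1, 2):
            a, b = parents[i][:], parents[i + 1][:]
            if rng.random() < pc and n_vms > 1:
                cut = rng.randrange(1, n_vms)      # one-point crossover
                a[cut:], b[cut:] = b[cut:], a[cut:]
            children += [a, b]
        if len(parents) % 2:                       # carry odd parent over
            children.append(parents[-1][:])
        pm = pm0 * (1.0 - gen / n_gen)             # Step 6: decaying p_m
        for c in children:
            if rng.random() < pm:
                c[rng.randrange(n_vms)] = rng.randrange(n_pms)
        pop = elites + children
        best = min(pop + [best], key=fitness)      # Step 7: track best
    return best                                    # Step 8: finish
```

The elitist copy at Step 4 guarantees the best-so-far solution is never lost, while the decaying probabilities let the search explore broadly early and stabilize late, which is the motivation given for the improved operators.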
Cloudsim [36], launched in 2009, is currently the most widely used open-source platform for cloud computing simulation. It provides data center virtualization technology, the modeling and simulation of virtualized clouds, and other functions; based on the platform, quantifiable measurement and evaluation of various scheduling strategies can be completed.
The Cloudsim core simulation engine provides an event-management framework to support the software layers above it. The Cloudsim layer implements the modeling and simulation of the data center, providing virtual machine allocation, host scheduling, management of each running application, and monitoring of the system environment. The user code layer is the uppermost layer; it provides basic entities such as hosts, virtual machines, number of users, number and type of tasks, and scheduling policies. Cloud-based scenarios can be built by extending the basic entities in this layer.
There are five core classes in Cloudsim: Cloudlet, DataCenter, DataCenterBroker, Host, and VirtualMachine, which work together to configure the cloud environment, set up and manage virtual machines and tasks, and implement custom task scheduling algorithms.
The Cloudlet class is the task class and contains task attributes. DataCenter models a data center: it provides virtualization technology and completes the querying and allocation of VM resources. DataCenterBroker handles VM creation, destruction, and task submission, managing VMs transparently. The Host class extends the range of host parameter settings, such as CPU, memory, and bandwidth. The VirtualMachine class implements virtual machine emulation; it runs on top of the Host class and rents and manages the relevant cloud resources, including CPU, memory, and the internal scheduling policies of the virtual machines.
Twelve Aurora A620-G30 servers are selected as nodes for the experiment. The experimental environment is as follows:
Processor: AMD EPYC 7702
Memory: 256GB DDR4
Hard disk: 4TB SATA
NIC: Intel 82599ES 10-Gigabit
Operating system: Ubuntu 18.04.5
OpenStack: Ussuri
The server resource scheduling data is collected every 5 minutes by monitoring the servers under different loads. The features are divided into smoothing features and time features: smoothing features apply forward smoothing to the data, while time features are the per-minute average, per-hour average, weekday average, and weekend average. To verify the effectiveness of the algorithm, the comparison algorithms in the experiments are the traditional genetic algorithm, the linear regression algorithm, the ridge regression algorithm, and the LASSO regression algorithm.
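As a minimal illustration of the two feature families, the sketch below computes a forward-smoothing feature as a trailing moving average and a per-hour time feature as a bucketed mean; the window size and data layout are assumptions for illustration, not taken from the paper:

```python
from statistics import mean

def smoothed(series, window=3):
    """Smoothing feature: trailing moving average over the last `window`
    samples (window size is an assumed parameter)."""
    return [mean(series[max(0, i - window + 1):i + 1])
            for i in range(len(series))]

def hourly_average(samples):
    """Time feature: average value per hour, given (hour, value) pairs
    from the 5-minute monitoring samples."""
    buckets = {}
    for hour, value in samples:
        buckets.setdefault(hour, []).append(value)
    return {h: mean(v) for h, v in buckets.items()}
```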
To verify the effect of the resource scheduling method based on the improved genetic algorithm in cloud computing, this paper compares the improved algorithm with several other algorithms for resource scheduling on the CloudSim simulation platform. Under the same experimental conditions, the genetic algorithm, linear regression algorithm, ridge regression algorithm, LASSO algorithm, and improved genetic optimization algorithm are selected for the experiments. The number of resources is 30, the number of tasks ranges over [1000, 5000], the crossover probability is set to 1, and the mutation probability is set to 0.1.
The simulation results comparing the task completion of the algorithms under a large data volume are shown in Figure 4. As can be seen from the figure, when the number of tasks is large, the improved optimal scheduling algorithm has a clear advantage over the traditional genetic scheduling algorithm and the other three comparison algorithms: its execution time is shortened by 17.66% to 53.65% relative to the comparison algorithms.

Execution time comparison of the algorithms under different numbers of tasks
Next, the relationship between the number of iterations and the execution time is studied for a fixed large task count of 5000; the convergence of the five algorithms is compared in the simulation results shown in Figure 5. As can be seen from the figure, the comparison algorithms differ greatly from the optimized algorithm at the beginning of the iteration. In the later stages, as the number of iterations increases, the optimized algorithm approaches maturity after about 60 generations, and the improved resource scheduling algorithm achieves a better fitness value than the other scheduling algorithms.

Convergence results of the five algorithms
In summary, the improved genetic algorithm overcomes the shortcomings of the traditional genetic algorithm and obtains a better scheduling strategy for cloud computing, which both shortens the job completion time and improves the system resource utilization.
To verify the reasonableness of the resource allocation performed by the genetic-algorithm-optimized cloud computing resource scheduling model proposed in this paper when processing user tasks, validation experiments are conducted on the allocation of resource nodes during the computation process. The number of customer tasks is set to 4000 and the number of resource nodes to 6, namely S1, S2, S3, S4, S5, and S6, whose processing capacities are {200, 400, 500, 600, 700, 800}.
The experimental results of the load balancing comparison of the five algorithms are shown in Fig. 6. During the computation, the computational capacity of the six resource nodes varies, resulting in different loads on each node. From the data in the figure, the load balancing degree of the cloud computing resource scheduling model based on the improved genetic algorithm is much higher than that of the other four algorithms on every node during the execution of computing tasks. For node S2, the number of tasks assigned to the node by the traditional genetic algorithm, the linear regression algorithm, and the LASSO regression algorithm exceeds the maximum number of tasks the node can process, so this node, with its poorer processing capacity, receives too large a task volume. The number of tasks assigned to the node by the ridge regression algorithm is much smaller than the node's maximum processing capacity, resulting in wasted resources. The same problems arise for the other nodes, but the improved genetic algorithm in this paper allocates the task volume more reasonably on each node without wasting resources; the task allocation across the nodes is {200, 400, 500, 600, 700, 800}. Therefore, compared with the models optimized by the other four algorithms, the cloud computing resource scheduling model optimized by the genetic algorithm allocates resources in a more balanced and reasonable way during computation, which greatly improves computational efficiency.

Load balancing comparison results of the five algorithms
In this section, the energy consumption of physical servers in IT equipment is studied and analyzed. The energy consumption of physical servers is generally determined by the load of hardware components such as the CPU and memory. In existing research, energy consumption models cannot be generalized across different models of physical hosts. In this paper, real energy consumption data provided by standard performance evaluation organizations is used, with CPU utilization as the load indicator, after which simulation experiments are conducted on Cloudsim.
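A widely used approximation consistent with this setup, though not stated explicitly in the paper, is a linear power model in CPU utilization; the idle and peak wattages below are illustrative values, not the measured data used in the experiments:

```python
def server_power(cpu_util, p_idle=110.0, p_max=220.0):
    """Linear power model (an assumed sketch): a server draws its idle
    power at 0% CPU utilization and scales linearly up to its full-load
    draw at 100%.  Returns power in watts."""
    assert 0.0 <= cpu_util <= 1.0
    return p_idle + (p_max - p_idle) * cpu_util

def datacenter_energy(utilizations, hours=1.0):
    """Total energy in kWh for all hosts, each held at a fixed CPU
    utilization for `hours` hours."""
    return sum(server_power(u) for u in utilizations) * hours / 1000.0
```

Under such a model, a consolidation strategy that raises utilization on fewer hosts and powers the rest down directly lowers the datacenter total, which is why CPU utilization is a reasonable load indicator for the energy comparison that follows.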
The comparison results of energy consumption and CPU and memory idle rates under the resource optimization scheduling strategies implemented by the different algorithms are shown in Fig. 7. The experimental results show that the improved genetic algorithm reduces both the idle rate of hardware resources and the energy consumption of the whole data center: energy consumption is reduced by about 14.52% compared to the traditional genetic algorithm, 22.06% compared to the linear regression algorithm, 28.38% compared to the ridge regression algorithm, and 11.67% compared to the LASSO algorithm. Similarly, the improved genetic algorithm also reduces the CPU idle rate and memory idle rate relative to the comparison algorithms: the CPU idle rate is reduced by about 28.93% to 46.64%, and the memory idle rate by nearly 28.00% to 49.34%.

Comparison results of energy consumption and CPU and memory idle rates
Experiments are conducted on the Cloudsim simulation platform, which supports cloud computing infrastructure and uses the DataCenterBroker to match cloud tasks to virtual machines. The simulation uses 1000 virtual computing resources for scheduling, and experiments are conducted to compare the total utility of resource scheduling under the different algorithms.
Fig. 8 shows the comparison of the total utility values of the different algorithms. As can be seen from the figure, as the number of resources increases, the total scheduling utility of every algorithm also increases, but the total utility of the scheduling strategies based on the traditional genetic algorithm, linear regression algorithm, and ridge regression algorithm is always lower than that of the algorithm in this paper, whose total scheduling utility is always the largest. Taking the number of resources as 1000 as an example, the LASSO algorithm has the lowest total utility value for resource scheduling, and the total resource scheduling utility of the optimized genetic algorithm is increased by 19.18% to 90.46% compared with the comparison algorithms. This is because the algorithm in this paper uses genetic optimization to expand the range of candidate nodes and avoid falling into local optima, which in turn can maximally satisfy users’ resource scheduling needs.

Comparison of the total utility values of the different algorithms
In this paper, a cloud computing resource optimization scheduling algorithm based on improved genetic algorithm is designed in terms of selection operator, crossover operator, and mutation operator, and the algorithm is implemented in CloudSim cloud computing platform. The algorithm is tested for multi-task execution time, task load balancing allocation, energy consumption, and total utility of resource optimization scheduling, and compared with various algorithms.
The algorithm in this paper shortens the multitask execution time by 17.66% to 53.65% compared with the comparison algorithms. The improved genetic algorithm allocates tasks more reasonably, with a task allocation of {200, 400, 500, 600, 700, 800} across the nodes. The energy consumption, CPU idle rate, and memory idle rate of this algorithm for resource scheduling optimization are 1.06 kW·h, 31.99%, and 35.17%, respectively. The total resource scheduling utility of the algorithm in this paper increases by 19.18% to 90.46% compared with the comparison algorithms when the number of resources is 1000.
In summary, the results demonstrate that the cloud computing resource optimization scheduling strategy based on the improved genetic algorithm has significant advantages in improving the performance of the cloud platform. This study not only provides a new perspective for cloud computing resource scheduling, but also provides theoretical support and practical guidance for optimizing genetic algorithms in practical applications.