Research on university laboratory management and maintenance framework based on computer aided technology

Feb 27, 2025


Introduction

With the continuous advancement of technology, computer laboratories, as important hubs for technical research and teaching, are shouldering increasingly heavy experimental and educational tasks. To meet the growing academic demands, the construction and management of laboratories are facing new challenges[28]. In traditional laboratory models, the management of hardware resources, configuration of experimental environments, and control of experimental progress often rely on manual operations, which are not only inefficient but also prone to resource wastage. As a result, many laboratories are actively exploring intelligent management solutions based on information technology, aiming to optimize resource allocation and improve management efficiency, thereby advancing laboratory construction into a new phase of development[13].

However, current laboratory management still faces several issues that need urgent resolution. Firstly, the utilization rate of hardware resources is generally low, and resource scheduling in laboratories often encounters bottlenecks, leading to time and space conflicts between different experiments[10]; [19]. Secondly, the configuration and management of experimental environments are rigid and one-size-fits-all, failing to meet the needs of various disciplines and projects[1]. Furthermore, with the rise of cloud computing and virtualization technology, traditional management methods are struggling to keep up with rapidly evolving technological demands. Enhancing management efficiency together with the scalability and intelligence of laboratories has therefore become a core challenge in laboratory construction[26].

To address these issues, an increasing number of studies are exploring how to use computer-assisted technologies to optimize laboratory management[30]. Some scholars have proposed laboratory management models based on virtualization technology, which realize unified scheduling and flexible allocation of resources through virtualized desktops and cloud computing[16]. Additionally, using intelligent algorithms to optimize laboratory scheduling and management processes has become an effective way to improve management efficiency[4]. The integration of emerging technologies such as virtualization, cloud computing, and artificial intelligence provides a new perspective for helping laboratories overcome resource shortages and management difficulties[15].

Nevertheless, current computer-assisted laboratory management systems still have shortcomings. Most existing research focuses on isolated applications of virtualization technology and lacks multi-layered, multi-dimensional integrated solutions, making it difficult to fully realize its potential in practical settings[31]. Moreover, the complexity and dynamic nature of laboratories require management platforms to be highly flexible and scalable, but existing systems often fall short in this regard, making it challenging to adapt to rapidly changing demands and environments[20].

To solve these problems, this paper proposes a novel computer laboratory management method based on virtualization technology. We design a multi-layer management platform architecture consisting of the platform management layer, the desktop virtualization service layer, and the desktop virtualization infrastructure layer, which together form a complete laboratory management system. On this basis, we design implementation methods for each functional layer: the platform management layer is built on the SSH framework, and a web system is developed to support experiment management operations by different user roles. To further optimize resource scheduling, we design a cloud platform server resource pool model and a column generation-based shared resource-constrained project scheduling algorithm (CGS), which applies column generation to achieve efficient resource allocation. This approach not only enhances the automation level of laboratory management but also significantly improves resource utilization and system flexibility.

The main contributions of this paper are summarized as follows:

This paper designs a computer laboratory management platform that integrates virtualization technology, improving resource utilization efficiency and enhancing the flexibility of laboratory management through a multi-level architecture.

A column generation-based scheduling algorithm is proposed to optimize the allocation of laboratory resources, improving scheduling efficiency under resource constraints.

A Web management system was developed using the SSH framework to support laboratory resource management and task scheduling, thereby enhancing the automation and intelligence of laboratory management.

Related Work
Research and Applications of Virtualization Technology

Virtualization technology has been widely applied in fields such as cloud computing, laboratory management, and data centers in recent years, providing efficient solutions for computer resource management[14]. The Virtual Machine Monitor (VMM), as a fundamental component of virtualization, sits between physical hardware and virtual machines, responsible for managing and scheduling the resources of virtual machines[8]. Its advantage is that it provides resource isolation and allows multiple virtual machines to run concurrently, greatly improving the utilization of hardware resources. However, because the VMM relies on hardware virtualization support and has relatively high virtualization overhead, it may lead to performance degradation, especially in high-load computing scenarios where performance might not match that of physical machines. X86 virtualization, based on hardware virtualization extensions of the x86 architecture (such as Intel VT-x or AMD-V), significantly reduces virtualization overhead and improves multi-core utilization of processors, making it suitable for high-performance computing and multi-tasking[9]. However, it has higher hardware requirements and needs processors and motherboards that support virtualization, which may limit its application scenarios. Additionally, resource scheduling and management complexity in large-scale environments can also present challenges[23].

VMware is a representative enterprise-level virtualization platform, offering comprehensive solutions ranging from virtual machine monitors (ESXi) to virtual desktop infrastructures (VDI). VMware’s biggest advantage is its support for enterprise-level features such as high availability, load balancing, and virtual machine migration, enabling it to handle complex tasks in large-scale virtualized environments. However, VMware’s high licensing costs and management complexity are major drawbacks, particularly for small and medium-sized businesses with limited resources, where cost may become a barrier to adoption. Furthermore, while VMware has strong resource scheduling capabilities, its virtualization performance is still limited by hardware support, and certain resource-intensive applications may still be affected[11].

In the field of desktop virtualization, Citrix XenDesktop provides highly customizable desktop environments and supports the remote delivery of user desktop operating systems and applications to various terminal devices through virtualization technology[17]. XenDesktop’s advantage lies in its ability to offer personalized desktop environments and robust centralized management, making it especially suitable for remote work or distributed teams. However, XenDesktop’s performance heavily depends on network bandwidth; if bandwidth is insufficient or latency is high, the user’s desktop experience can be significantly impaired. Additionally, XenDesktop’s deployment and management are relatively complex, especially in large-scale environments, where the configuration and maintenance workload is large, increasing operational costs. VMware Horizon View offers a virtual desktop solution similar to XenDesktop, also supporting various terminal devices and simplifying the deployment and maintenance of virtual desktops through centralized management[29]. Horizon View’s advantage is its ability to provide better security and centralized management, making it suitable for environments that require strict data protection and unified management. However, like XenDesktop, Horizon View is highly dependent on the network, and insufficient bandwidth during remote access may lead to performance bottlenecks that affect the user experience. At the same time, Horizon View has high deployment costs, especially in scenarios with a large number of users, where licensing and maintenance costs can be quite expensive.

Computer Lab Management Model

With the development of computer technology, the management model of computer labs has gradually shifted from traditional manual management to intelligent management[5]. The traditional computer lab management model primarily relies on manual operations, including equipment management, personnel allocation, and experiment records. Although this model met basic needs in its early stages, as the scale of the lab expanded, problems such as uneven resource scheduling, low equipment utilization, and delayed information feedback began to emerge[3]. To improve management efficiency and resource utilization, the smart lab management model has been developed. This model uses information technology to achieve real-time monitoring and dynamic scheduling of resources, automatically completing tasks such as experiment reservations and equipment maintenance reminders, greatly reducing the burden on manual labor and enhancing the operational efficiency of the lab[12]. However, despite solving many issues present in the traditional model to some extent, the smart management model still faces several challenges in practical application[27]. First, due to the limited equipment and resources in the lab, many administrators are responsible for managing multiple labs or tasks, leading to an excessive workload and an inability to address unforeseen problems in a timely manner. Second, under the traditional management model, administrators cannot obtain real-time data on lab usage and equipment status, making it difficult to identify and resolve problems promptly[6]. Even with a smart system, the real-time updating of information can still be affected by issues such as network delays or hardware failures[7]. Furthermore, the workload for software updates on lab equipment is substantial; the traditional manual update process is cumbersome and prone to errors, while the smart system, although simplifying the update process, still relies on efficient systems and hardware support[21]. Lastly, since lab usage often requires manual reservations and management, students and teachers tend to have lower enthusiasm for using the lab. If the smart management system is not fully functional or has poor user experience, it may lead to insufficient participation from users, thereby affecting the full utilization of lab resources.

Research Status of Resource Scheduling Algorithms

With the rapid development of computer technology, resource scheduling algorithms have been widely applied in fields such as computer lab management, cloud computing, and data centers. Traditional resource scheduling algorithms, such as Shortest Job First (SJF), optimize the system’s average waiting time by prioritizing the execution of the shortest tasks, improving resource utilization[25]. However, its drawback is that it requires knowledge of task execution times in advance, which is often difficult to achieve in practical applications and may lead to starvation of long-running tasks. Another common algorithm is Round Robin (RR), which ensures fairness by allocating a fixed time slice to each task, avoiding task starvation[18]. However, for short tasks, it may result in longer response times. Priority Scheduling allocates resources based on task priority, ensuring that higher-priority tasks are executed first. This is suitable for real-time computing environments but may lead to starvation of low-priority tasks[22]. Additionally, the priority settings need to be adjusted according to specific requirements. As scheduling problems become more complex, intelligent optimization algorithms such as Genetic Algorithms (GA) and Particle Swarm Optimization (PSO) have been applied to resource scheduling problems. Genetic algorithms can find near-optimal solutions in complex solution spaces by simulating natural selection and evolution, but they are computationally expensive and converge slowly[2]. On the other hand, PSO simulates the cooperation and competition between particles to effectively find the global optimum and is easy to implement, though it may fall into local optima, requiring parameter adjustments to improve performance. Furthermore, load balancing scheduling algorithms dynamically or statically adjust resource distribution to balance the load across nodes, improving overall system performance[24]. However, load balancing may incur additional computational overhead. Therefore, choosing the appropriate scheduling algorithm that balances efficiency, resource utilization, and fairness remains a core challenge in current research.

Method

The overall network framework proposed in this paper consists of three parts: the platform management layer, the desktop virtualization service layer, and the desktop virtualization foundation layer, aiming to achieve efficient management of computer laboratories through virtualization technology (as shown in Figure 1). The platform management layer includes a portal management website, which centrally manages laboratory users through the personnel information management module. It is responsible for personnel information entry, permission settings, and user management, providing user data support for desktop virtualization. The desktop virtualization service layer, as the core of the system, consists of a desktop virtualization manager that handles command processing and data access for virtual desktops, connecting users to the underlying resources. This manager configures, distributes, and manages desktop environments, ensuring that each user receives the appropriate desktop resources. The desktop virtualization foundation layer is composed of cloud desktop servers, hardware resource pools, and storage devices, providing computing and storage resources to support the operation of virtual desktops. Cloud desktop servers provide desktop services to terminals, the hardware resource pool includes computing resources such as processors and memory, and storage devices are used to store the data and applications required by virtual desktops.

Figure 1.

Overall Framework Diagram.

Through the collaborative operation of these three layers, the framework enables dynamic allocation and centralized management of laboratory desktop resources, effectively improving resource utilization and management efficiency, and offering users efficient and convenient virtual desktop services.
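To make the division of responsibilities across the three layers concrete, the minimal Python sketch below models them as simple objects; the class names, fields, and placement policy are illustrative assumptions rather than the platform's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class User:
    """Managed by the platform management layer (portal website)."""
    name: str
    role: str                                   # e.g. "student", "teacher", "administrator"
    permissions: List[str] = field(default_factory=list)

@dataclass
class CloudDesktopServer:
    """Part of the desktop virtualization infrastructure layer (hardware resource pool)."""
    cpu_cores: int
    memory_gb: int
    assigned_desktops: List[str] = field(default_factory=list)

@dataclass
class DesktopVirtualizationManager:
    """Desktop virtualization service layer: connects users to underlying resources."""
    servers: List[CloudDesktopServer]

    def provision_desktop(self, user: User, cpu: int, mem: int) -> str:
        # Pick the first server with enough spare capacity (deliberately simplified policy).
        for srv in self.servers:
            if srv.cpu_cores >= cpu and srv.memory_gb >= mem:
                srv.cpu_cores -= cpu
                srv.memory_gb -= mem
                desktop_id = f"vd-{user.name}-{len(srv.assigned_desktops)}"
                srv.assigned_desktops.append(desktop_id)
                return desktop_id
        raise RuntimeError("no server in the pool can host this desktop")

manager = DesktopVirtualizationManager([CloudDesktopServer(64, 256)])
print(manager.provision_desktop(User("student_zhang", "student"), cpu=2, mem=4))
```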

Experiment Environment Management

The Experiment Environment Management module is the core functionality of the computer laboratory management platform (as shown in Figure 2). It is divided into four sub-modules based on actual workflow requirements: experiment environment application, experiment environment review, experiment environment status monitoring, and software library management. First, the experiment environment application module allows users to submit requests for the use of the experiment environment according to different needs, such as course teaching, research and teaching reform, and disciplinary competitions, thereby standardizing the resource application process. Next, the experiment environment review module is used to review the submitted applications to ensure the rational allocation and effective use of experimental resources. The experiment environment status monitoring module provides real-time monitoring capabilities, allowing users and administrators to check the status of laboratory equipment and resource usage at any time. This ensures that the operational state of the experiment environment is promptly understood, thereby ensuring the smooth conduct of experiments. Finally, the software library management module is responsible for managing software resources in the experiment environment, including adding, deleting, and updating software versions. It simplifies the software management process and ensures that the software in the experiment environment always meets the latest requirements for teaching and experimentation.

Figure 2.

Experiment Environment Management Module.
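The application-and-review workflow described above can be viewed as a small state machine. The sketch below is a simplified illustration; its state names and fields are assumptions rather than the platform's actual data model.

```python
from enum import Enum

class ApplicationState(Enum):
    SUBMITTED = "submitted"
    APPROVED = "approved"
    REJECTED = "rejected"

class EnvironmentApplication:
    """One request for an experiment environment (course, research, or competition use)."""
    def __init__(self, applicant: str, purpose: str, software: list[str]):
        self.applicant = applicant
        self.purpose = purpose            # e.g. "course teaching"
        self.software = software          # packages requested from the software library
        self.state = ApplicationState.SUBMITTED

    def review(self, approve: bool) -> None:
        # Review step: only submitted applications can be approved or rejected.
        if self.state is not ApplicationState.SUBMITTED:
            raise ValueError("application has already been reviewed")
        self.state = ApplicationState.APPROVED if approve else ApplicationState.REJECTED

# Example: a course-teaching request that the administrator approves.
app = EnvironmentApplication("teacher_li", "course teaching", ["Python 3.11", "MATLAB"])
app.review(approve=True)
print(app.state)    # ApplicationState.APPROVED
```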

Experiment Reservation Management

The Experiment Reservation Management Module is an important feature of the computer laboratory management platform, designed to standardize and optimize the laboratory reservation process. As shown in Figure 3, this module primarily includes three sub-functions: experiment application, experiment review, and experiment reservation status monitoring. First, the experiment application function allows users to submit laboratory usage requests based on various needs, including course teaching applications, research reform applications, and disciplinary competition applications, thereby accommodating the diverse demands for laboratory resources. Next, the experiment review function is responsible for reviewing the submitted experiment applications, ensuring the rational allocation and effective utilization of resources. Finally, the experiment reservation status monitoring function provides users with real-time information on reservation statuses, allowing both users and administrators to keep track of the laboratory’s reservation situation at any time, thereby facilitating the reasonable scheduling and utilization of laboratory resources. Through these functionalities, the experiment reservation management module effectively enhances the utilization efficiency and management level of the laboratory.

Figure 3.

Experiment Reservation Management Module.
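Reservation status monitoring also has to guard against the time conflicts mentioned in the introduction. The sketch below shows one plausible overlap check for new reservation requests; the field names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reservation:
    lab_id: str
    start: datetime
    end: datetime

def conflicts(new: Reservation, existing: list[Reservation]) -> bool:
    """Return True if the new reservation overlaps an existing one in the same laboratory."""
    return any(
        r.lab_id == new.lab_id and new.start < r.end and r.start < new.end
        for r in existing
    )

booked = [Reservation("lab-3", datetime(2025, 3, 1, 8), datetime(2025, 3, 1, 10))]
request = Reservation("lab-3", datetime(2025, 3, 1, 9), datetime(2025, 3, 1, 11))
print(conflicts(request, booked))   # True: the 9-10 o'clock hour is double-booked
```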

Laboratory Information Management

The Laboratory Information Management Module is an essential component of the experiment management platform, comprising three sub-modules: Laboratory Information Management, Experiment Task Management, and Experiment Report Management. These sub-modules enable comprehensive management of laboratory resources, tasks, and reports. The Laboratory Information Management sub-module is responsible for managing basic laboratory information, supporting the addition, deletion, and updating of laboratories, allowing administrators to easily modify lab resources and ensure information is updated in real time. The Experiment Task Management sub-module provides functionality for adding, deleting, updating, and querying experiment tasks, enabling administrators to view existing tasks, create new tasks, update task details, or delete outdated tasks, thus supporting the dynamic management of tasks and flexible adjustments based on requirements. The Experiment Report Management sub-module covers functions for viewing, creating, updating, and deleting experiment reports and supports report submission and grading, greatly simplifying the management process of experiment reports. This feature allows users to submit experiment results and feedback in a timely manner, facilitating laboratory managers in tracking and evaluating the experiment process and outcomes. Through these functions, the Laboratory Information Management Module achieves efficient and systematic management of laboratory resources, tasks, and reports, enhancing laboratory usage efficiency and management quality, while providing users with a convenient management experience.

Experimental Equipment Management

The Experimental Equipment Management Module is an essential component of the laboratory management platform, primarily consisting of two sub-modules: Equipment Fault Information Management and Equipment Borrowing Management, to enable comprehensive management of experimental equipment. The Equipment Fault Information Management sub-module is responsible for recording and handling equipment fault information, supporting functions such as adding new fault records and processing equipment issues. This allows administrators to monitor equipment status in a timely manner and take appropriate actions to ensure the normal operation of experimental equipment. The Equipment Borrowing Management sub-module provides functions for equipment borrowing requests, scheduled returns, and handling of returned equipment. Through this module, users can apply for equipment borrowing and schedule return times, while administrators can manage the borrowing and return status of equipment, ensuring the rational allocation and effective utilization of resources. With the Experimental Equipment Management Module, laboratories can efficiently manage their equipment, enhance equipment utilization, and ensure the smooth progress of experimental activities.
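A borrow/return lifecycle of the kind described above might be recorded as follows; the fields and status logic in this sketch are illustrative assumptions, not the module's actual schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class BorrowRecord:
    equipment_id: str
    borrower: str
    due_date: date
    returned_on: Optional[date] = None

    @property
    def overdue(self) -> bool:
        # Equipment still out after the scheduled return date counts as overdue.
        return self.returned_on is None and date.today() > self.due_date

    def mark_returned(self, when: date) -> None:
        self.returned_on = when

rec = BorrowRecord("osc-07", "student_wang", due_date=date(2025, 3, 15))
rec.mark_returned(date(2025, 3, 14))
```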

Desktop Virtualization Infrastructure Layer

The Desktop Virtualization Infrastructure Layer is the foundational architecture of the entire virtualization system, responsible for providing underlying computing and network resources to ensure the smooth operation of cloud desktops and efficient resource allocation. This layer consists of a server pool, switches, and variously configured clients, creating a comprehensive virtual desktop support environment. Within the Desktop Virtualization Infrastructure Layer, the server pool includes multiple servers that provide computing and storage support for different types of clients. The servers in the pool can be flexibly expanded based on demand to meet high-concurrency user access and resource requirements. The switches handle data transmission between servers and clients, ensuring stable and efficient network communication. This layer supports clients with low, standard, and high configurations, allowing for flexible resource allocation based on user needs so that various terminal devices can receive suitable virtual desktop services.

The desktop virtualization management software is the core component of this layer, responsible for dynamic resource allocation and load balancing. Through the management software, the system can automatically allocate computing resources within the server pool based on real-time user demand, improving resource utilization and preventing server overload. Additionally, the management software supports diverse client access, ensuring that terminal devices with different configurations can connect and utilize virtual desktop services seamlessly. In summary, the Desktop Virtualization Infrastructure Layer, through the coordinated operation of the server pool, switches, and virtualization management software, provides users with a flexible, efficient, and stable cloud desktop environment. The design of this layer not only enhances resource utilization but also simplifies desktop maintenance and management, laying a solid foundation for efficient laboratory management.
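The dynamic resource allocation and load balancing performed by the virtualization management software can be pictured with a simple least-loaded placement policy. This is only a sketch of one common strategy, not the actual algorithm used by the management software.

```python
from dataclasses import dataclass

@dataclass
class PoolServer:
    name: str
    cpu_capacity: int      # total vCPUs available on this pool server
    cpu_used: int = 0

    @property
    def load(self) -> float:
        return self.cpu_used / self.cpu_capacity

def place_desktop(servers: list[PoolServer], vcpus: int) -> PoolServer:
    """Place a new virtual desktop on the least-loaded server that can still fit it."""
    candidates = [s for s in servers if s.cpu_used + vcpus <= s.cpu_capacity]
    if not candidates:
        raise RuntimeError("server pool exhausted")
    target = min(candidates, key=lambda s: s.load)
    target.cpu_used += vcpus
    return target

pool = [PoolServer("srv-1", 64), PoolServer("srv-2", 32)]
print(place_desktop(pool, 4).name)   # both servers are idle, so "srv-1" is chosen first
```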

Column Generation-based Shared Resource Constrained Project Scheduling Algorithm

This paper proposes a Column Generation-based Shared Resource Constrained Project Scheduling Algorithm (CGS) as a solution method for optimizing multi-task resource scheduling problems. This algorithm relaxes the constraints of the original problem and gradually generates appropriate columns to meet resource constraints, achieving optimal task scheduling. The key formulas and their variable meanings used in this algorithm are introduced below. The relaxed objective is

$Z = \min \sum_{i \in I} \sum_{c \in C} V_{ic} x_{ic} + \sum_{t>0} \lambda_t \left( \sum_{i \in I} \sum_{c \in C} Q_{ic} x_{ic} - R_{\max} \right)$

where $Z$ represents the total cost, $V_{ic}$ denotes the execution cost of task $i$ on resource $c$, $Q_{ic}$ is the amount of resources required by the task, $R_{\max}$ is the resource limit, and $\lambda_t$ is the Lagrange multiplier. Under relaxed conditions, the objective function is updated as

$Z_{ia} = \min V_{ic} + \sum_{t>0} \lambda_t q_{itc}$

where $Z_{ia}$ denotes the objective value under relaxation, and $q_{itc}$ is the demand of task $i$ for resource $c$ at time $t$. By substituting equations (1) and (2) into the objective function, we obtain the updated objective function

$Z_{ia} = \min \sum_{j \in J} \sum_{t>0} c_{jt}\,(z_{jt} - z_{j,t-1}) + \sum_{j \in J} \sum_{t>0} \lambda_t R_j\,(z_{j,t+p_j} - z_{jt})$

where $c_{jt}$ is the cost of task $j$ at time $t$, and $R_j$ is the resource demand of task $j$.

The resource allocation constraint is given as

$\sum_{j \in J} (z_{j,t+p_j} - z_{jt}) \le 1, \quad \forall t$

This constraint ensures that at any time $t$, resource allocation does not exceed the maximum resource capacity. To ensure tasks execute in sequence, we define

$z_{jt} \ge z_{j,t-1}, \quad \forall j \in J,\ \forall t$

This constraint ensures that each task’s start time must not be earlier than the completion time of the preceding task. To meet project schedule requirements, we define the constraint for the final time:

$z_{jT} = 1, \quad \forall j \in J$

The resource usage at any time $t$ must not exceed the limit:

$\sum_{i \in I} Q_{ic} x_{ic} \le R_{\max}, \quad \forall c \in C,\ \forall t$

This constraint ensures that the total resource demand does not exceed the maximum available resource. To further optimize resource scheduling, the iterative update of the column generation algorithm is

$Z_{\text{new}} = Z + \sum_{j \in J} \gamma_j \left( \sum_{t>0} R_j\,(z_{jt} - z_{j,t-1}) \right)$

where $\gamma_j$ is an adjustment parameter that controls the convergence speed of the iteration.
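The CGS described above relaxes the shared-resource constraints with multipliers and iteratively generates columns. Purely as an illustration, the Python sketch below implements a heavily simplified relax-and-price loop of this kind: each task repeatedly picks its cheapest start time under the current prices $\lambda_t$, and the prices are then raised where capacity is exceeded. It is not the authors' implementation; the function names, parameters, and subgradient-style update rule are assumptions.

```python
import random

def cgs_sketch(durations, demands, costs, Rmax, T, iters=50, step=0.1):
    """Simplified illustration of a column-generation / Lagrangian pricing loop:
    relax the shared-resource constraint with multipliers lambda_t, let each task
    choose the start time with minimum reduced cost (the 'new column'), then update
    the multipliers from the resulting capacity violation."""
    n = len(durations)
    lam = [0.0] * T                          # one multiplier (price) per time slot
    starts = [0] * n
    for _ in range(iters):
        # Pricing step: each task independently picks its cheapest start time
        # under the current prices lambda_t.
        for j in range(n):
            best_t, best_cost = 0, float("inf")
            for t in range(T - durations[j] + 1):
                reduced = costs[j] + sum(lam[t:t + durations[j]]) * demands[j]
                if reduced < best_cost:
                    best_t, best_cost = t, reduced
            starts[j] = best_t
        # Subgradient step: raise prices where the schedule overloads the resource,
        # keep them non-negative where capacity is slack.
        for t in range(T):
            usage = sum(demands[j] for j in range(n)
                        if starts[j] <= t < starts[j] + durations[j])
            lam[t] = max(0.0, lam[t] + step * (usage - Rmax))
    return starts, lam

random.seed(0)
durations = [random.randint(2, 5) for _ in range(10)]
demands   = [random.randint(1, 4) for _ in range(10)]
costs     = [random.random()      for _ in range(10)]
starts, lam = cgs_sketch(durations, demands, costs, Rmax=10, T=30)
print(starts)
```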

Experimental
Experimental Setup

In this experiment, a total of 500 random problem instances were generated to evaluate the performance of the column generation-based resource-constrained project scheduling algorithm. Each instance contains 4 to 10 servers, with an average of 10 tasks per server. The experimental data are divided into multiple series, with each series containing 100 instances. Different parameter combinations were set for the instances in each series to simulate various complex scenarios that may be encountered in practical applications, thereby allowing a comprehensive assessment of the algorithm’s robustness and applicability. By optimizing the scheduling of these randomly generated instances, the experiment tests the algorithm’s performance under different resource constraints, observing its effects on resource utilization, task completion time, and load balancing. Details are shown in Table 1.

Table 1. Common attributes of the experiment series.

I       4     5     6     7     8     9     10
Rmax    10    13    15    18    20    23    25
T       100   120   120   150   150   150   180
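The paper does not spell out the exact instance-generation procedure, so the sketch below only illustrates how series with the Table 1 attributes (number of servers I, resource limit Rmax, and T, taken here as the scheduling horizon, with roughly ten tasks per server) might be produced. The distributions and field names are assumptions.

```python
import random

# Per-series attributes taken from Table 1: I -> (Rmax, T).
SERIES = {4: (10, 100), 5: (13, 120), 6: (15, 120), 7: (18, 150),
          8: (20, 150), 9: (23, 150), 10: (25, 180)}

def generate_instance(num_servers: int, seed: int):
    """Generate one random instance with about 10 tasks per server (illustrative only)."""
    rng = random.Random(seed)
    rmax, horizon = SERIES[num_servers]
    tasks = [{"duration": rng.randint(1, 10),
              "demand": rng.randint(1, rmax // 2),
              "cost": rng.uniform(1.0, 5.0)}
             for _ in range(10 * num_servers)]
    return {"servers": num_servers, "Rmax": rmax, "T": horizon, "tasks": tasks}

instances = [generate_instance(6, seed=s) for s in range(100)]   # one 100-instance series
```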
Comparison Methods

To validate the effectiveness of the proposed Column Generation-based Shared Resource Constrained Project Scheduling Algorithm (CGS), we conducted comparative experiments with three algorithms: ILP (Integer Linear Programming, using the commercial solver CPLEX), LR (Lagrangian Relaxation), and GA (Genetic Algorithm). ILP is a mathematical optimization method that transforms the scheduling problem into an optimization problem with integer variables under linear constraints, which can be solved optimally using commercial solvers such as CPLEX. The objective function of the ILP model is to minimize the total cost or total completion time, as follows:

$Z = \min \sum_{i \in I} \sum_{j \in J} c_{ij} x_{ij}$

where $Z$ represents the total cost, $c_{ij}$ is the cost of assigning job $i$ to machine $j$, and $x_{ij}$ is a binary variable indicating whether job $i$ is assigned to machine $j$. The constraints require that each job is assigned to exactly one machine,

$\sum_{j \in J} x_{ij} = 1, \quad \forall i \in I$

and that machine capacities are not exceeded,

$\sum_{i \in I} q_{ij} x_{ij} \le R_j, \quad \forall j \in J$

where $q_{ij}$ is the resource requirement of job $i$ on machine $j$, and $R_j$ is the resource capacity of machine $j$.
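The paper solves this model with the commercial CPLEX solver. Purely as an illustration of the same assignment structure, the sketch below encodes a toy instance with the open-source PuLP library and its default CBC solver; the data are made up, and the formulation is a simplified stand-in for the full scheduling ILP.

```python
import pulp

# Toy data: 3 jobs, 2 machines.
costs = {(0, 0): 4, (0, 1): 6, (1, 0): 5, (1, 1): 3, (2, 0): 7, (2, 1): 2}
req   = {(0, 0): 2, (0, 1): 3, (1, 0): 4, (1, 1): 2, (2, 0): 1, (2, 1): 3}
cap   = {0: 5, 1: 5}
jobs, machines = range(3), range(2)

model = pulp.LpProblem("assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (jobs, machines), cat=pulp.LpBinary)

# Objective: total assignment cost.
model += pulp.lpSum(costs[i, j] * x[i][j] for i in jobs for j in machines)
# Each job goes to exactly one machine.
for i in jobs:
    model += pulp.lpSum(x[i][j] for j in machines) == 1
# Machine capacity constraints.
for j in machines:
    model += pulp.lpSum(req[i, j] * x[i][j] for i in jobs) <= cap[j]

model.solve(pulp.PULP_CBC_CMD(msg=False))
print({(i, j): x[i][j].value() for i in jobs for j in machines if x[i][j].value() == 1})
```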

The Lagrangian Relaxation (LR) method relaxes certain constraints, transforming the complex integer programming problem into subproblems that are easier to solve, and optimizes the solution by iteratively updating the Lagrange multipliers. The relaxed objective function is

$Z_{LR} = \min \sum_{i \in I} \sum_{j \in J} c_{ij} x_{ij} + \sum_{k \in K} \lambda_k \left( \sum_{i \in I} a_{ik} x_i - b_k \right)$

where $\lambda_k$ is the Lagrange multiplier, $a_{ik}$ represents the resource consumption of job $i$, and $b_k$ is the resource limit. The iterative update of the Lagrange multiplier is

$\lambda_k^{\text{new}} = \lambda_k + \theta \left( \sum_{i \in I} a_{ik} x_i - b_k \right)$

where $\theta$ is the step size parameter, controlling the update speed of the Lagrange multiplier. The LR method is suitable for large-scale problems and offers faster computation, though the solution accuracy may not be as high as ILP.

The Genetic Algorithm (GA) is a heuristic algorithm based on the theory of biological evolution, which searches for near-optimal solutions in the solution space through selection, crossover, and mutation operations. The goal of GA is to optimize the fitness function

$f(x) = \sum_{i \in I} \sum_{j \in J} c_{ij} x_{ij}$

where $f(x)$ represents the fitness of the solution, and $x_{ij}$ is a binary variable indicating whether job $i$ is assigned to machine $j$. GA evaluates the quality of individuals through the fitness function and updates the population via selection, crossover, and mutation operations. The selection operation chooses individuals with higher fitness values to enter the next generation; the crossover operation generates new individuals by randomly exchanging genes between two selected individuals; the mutation operation changes individual genes randomly, increasing population diversity and avoiding local optima. GA is suitable for nonlinear and complex problems, but it converges slowly and has a high computational cost.
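The selection, crossover, and mutation operators described above can be illustrated with a very small GA for the job-to-machine assignment problem. This sketch is not the GA used in the comparison; the penalty weight, population size, and operator choices are illustrative assumptions.

```python
import random

def ga_assignment(costs, req, cap, pop_size=30, generations=100, pmut=0.1, seed=0):
    """Tiny GA: a chromosome lists the machine chosen for each job; capacity
    violations are penalized so feasible assignments dominate."""
    rng = random.Random(seed)
    n_jobs, n_machines = len(costs), len(costs[0])

    def fitness(chrom):
        cost = sum(costs[i][chrom[i]] for i in range(n_jobs))
        for j in range(n_machines):
            used = sum(req[i][j] for i in range(n_jobs) if chrom[i] == j)
            cost += 1000 * max(0, used - cap[j])     # penalty for exceeding capacity
        return cost

    pop = [[rng.randrange(n_machines) for _ in range(n_jobs)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]             # selection: keep the better half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_jobs)           # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < pmut:                  # mutation: reassign a random job
                child[rng.randrange(n_jobs)] = rng.randrange(n_machines)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

costs = [[4, 6], [5, 3], [7, 2]]
req   = [[2, 3], [4, 2], [1, 3]]
print(ga_assignment(costs, req, cap=[5, 5]))
```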

Evaluation Metric

In this paper, an evaluation metric is used to measure the relative performance of different scheduling algorithms in resource scheduling, mainly by calculating the Relative Difference (RD) for comparative analysis. The specific calculation formula is as follows:

$RD = \dfrac{TWT(A) - TWT(B)}{\max\{LB_A, LB_B\}} \times 100\%$

Here, $TWT(A)$ and $TWT(B)$ represent the optimal objective values obtained by methods A and B, respectively, and $LB_A$ and $LB_B$ denote the corresponding lower bounds.
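Computationally, the metric is a one-line calculation; the sketch below assumes the lower bounds $LB_A$ and $LB_B$ are already available from the respective methods.

```python
def relative_difference(twt_a: float, twt_b: float, lb_a: float, lb_b: float) -> float:
    """RD = (TWT(A) - TWT(B)) / max(LB_A, LB_B) * 100%."""
    return (twt_a - twt_b) / max(lb_a, lb_b) * 100.0

print(relative_difference(120.0, 110.0, 95.0, 90.0))   # ~10.5
```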

Result

In experiments with large-scale tasks, we compared the performance of the proposed algorithm with the Genetic Algorithm (GA). Figure 4 illustrates the scheduling effectiveness of both algorithms under different task loads. The experimental results show that the proposed algorithm consistently finds better solutions than GA under various load conditions, not only excelling in resource utilization but also demonstrating advantages in task completion time and load balancing. This indicates that the proposed algorithm possesses stronger optimization capabilities and stability in large-scale task scheduling, making it suitable for resource-constrained and complex scheduling environments. In contrast, GA tends to get trapped in local optima as task volume increases, leading to less ideal scheduling results compared to the proposed algorithm. Overall, the proposed algorithm demonstrates significant superiority in addressing complex resource scheduling problems.

Figure 4.

Comparison Experiment of CG and GA.

Additionally, the results in Figure 5 indicate that, compared to the Lagrangian Relaxation (LR) method, the proposed algorithm exhibits greater advantages across different numbers of servers. When the number of servers is four, the proposed algorithm’s Relative Difference (RD) is on average less than 5%, suggesting that under this condition, the scheduling effectiveness of the proposed algorithm is comparable to that of the LR method. However, when the number of servers is six or more, the average RD decreases significantly, further demonstrating that as the number of servers increases, the performance advantage of the proposed algorithm becomes more pronounced, yielding better scheduling results than the LR method. Overall, the proposed algorithm shows better robustness and adaptability in high-load and large-scale resource allocation scenarios.

Figure 5.

Comparison Experiment of CG and LR.

The results in Figure 6 indicate that, compared to the Integer Linear Programming (ILP) method, the Relative Difference (RD) of the proposed algorithm is negative when the number of servers is four. This suggests that at this stage, ILP achieves better scheduling performance than the proposed algorithm. ILP is capable of finding the global optimal solution in small-scale server configurations, thus showing higher scheduling efficiency with fewer servers. However, as the number of servers increases, the computational complexity of ILP rises sharply, leading to significantly longer solving times and making it difficult to obtain a solution within a reasonable time frame. In contrast, the RD of the proposed algorithm gradually approaches positive values with larger server configurations, demonstrating higher computational efficiency and better scheduling performance. This indicates that the proposed algorithm has strong adaptability and advantages in handling large-scale scheduling problems, whereas ILP becomes less suitable as the task scale and number of servers increase.

Figure 6.

Comparison Experiment of CG and ILP.

Limitations and Future Prospects

Despite the strong performance of the proposed algorithm in large-scale task scheduling and resource optimization, there are still some limitations. Firstly, the algorithm’s scheduling effectiveness in small-scale server configurations is not as competitive as traditional exact methods (such as ILP), showing weaker performance in low task volume and small resource environments. Additionally, although the algorithm demonstrates high adaptability under heavy loads, it still requires significant computation time and resource consumption when handling extremely complex task demands, which limits its applicability in real-time scenarios. Finally, the algorithm’s performance depends to some extent on precise parameter tuning, a process that can be complex and may hinder its adoption in practical applications.

Future research will focus on several improvements and extensions. First, we will explore automatic parameter optimization techniques to enhance the algorithm’s adaptability and reduce the effort required for parameter tuning. Secondly, we may integrate other intelligent optimization algorithms (such as deep reinforcement learning) with our approach to further improve computational efficiency and adaptability. Additionally, future work will consider applying this algorithm to more complex and diverse real-world scenarios, such as cloud resource scheduling and intelligent manufacturing systems, to verify its generalizability and stability across different application domains. Through these improvements, we hope to enhance the algorithm’s efficiency and scalability, making it more widely applicable to various resource-constrained complex scheduling problems.

Conclusions

This paper proposes a computer laboratory management method based on virtualization technology, building a comprehensive laboratory management system through a multi-layer platform architecture, which includes the platform management layer, desktop virtualization service layer, and desktop virtualization foundation layer. This approach not only improves the utilization efficiency of laboratory resources but also enhances the system’s flexibility and automation. To address the optimization of resource scheduling, we designed a Column Generation-based Shared Resource Constrained Project Scheduling Algorithm (CGS), which achieves efficient resource allocation through the column generation solution method, further optimizing scheduling efficiency. Additionally, a web-based management system for laboratory resource management and task scheduling was developed based on the SSH framework, providing a user-friendly interface for different user roles and enhancing the intelligence level of laboratory management.

Experimental results show that the proposed management platform outperforms traditional methods in terms of resource utilization, task completion time, and system flexibility, especially under high concurrency and resource-constrained environments. However, the method’s performance under small-scale task loads still has room for improvement, which can be addressed in the future by incorporating adaptive parameter optimization techniques. Overall, this study provides effective technical support for computer laboratory management in universities, laying a foundation for optimized resource utilization and intelligent management of laboratories.
