Research on Vehicle Detection Method Based on Background Modeling

INTRODUCTION

In practical applications, a vehicle detection method with fast response, high accuracy, and good adaptability is a key part of intelligent traffic detection and management. Detecting moving objects in video is affected by many complicated conditions. Three commonly used methods for detecting moving vehicles are the difference method, the optical flow method, and the background difference method.

Currently, the difference method includes the inter-frame difference method and the time difference method. The inter-frame difference method detects quickly and uses a simple algorithm, so it can be applied in scenes with high real-time requirements. The time difference method is suitable for dynamically changing scenes, but it cannot completely segment moving objects. The optical flow method performs poorly in terms of real-time operation and practicality, and it is difficult for it to meet the requirements of real-time detection of moving vehicles. The background difference method gives good results in both speed and detection quality when the camera is relatively stable; it focuses on how to establish the background and update it dynamically in real time. This article uses the background difference method.

COMMONLY USED BACKGROUND MODELING UPDATE MODELS

In monitoring applications, the background difference method needs to establish a background reference frame. Establishing an accurate and robust background model is the key to the system, because the accuracy of the reference frame directly affects the output. The commonly used background models are the statistical average method and the Gaussian distribution background model.

Statistical average method background model

The statistical average method, also called the mean method, is essentially a statistical filtering idea. Over a period of time, the collected images are added together, and the average value is taken as the reference background model. That is, the gray-level average of N frames in the image sequence is taken as the estimate of the background image, which weakens the interference of moving objects on the background. The specific calculation is given by formula (1).

$$\mathrm{Avg}_k = \frac{1}{N}\bigl(f_k + f_{k-1} + f_{k-2} + \cdots + f_{k-N+1}\bigr) \tag{1}$$

Avg_k is the background model established when the system acquires frame k; N is the number of averaged frames; f_k, f_{k-1}, …, f_{k-N+1} are the frames of the continuous sequence.

The statistical average method is simple and fast, but it easily causes noise to accumulate and mix in. It is more suitable for scenes with a small number of continuously moving objects, where the background is visible most of the time. When there are a large number of moving objects, especially slowly moving ones, the estimated background will deviate considerably.
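As an illustration of formula (1), the following is a minimal NumPy sketch of the mean background; frame acquisition and grayscale conversion are assumed to happen elsewhere.

```python
import numpy as np

def mean_background(frames):
    """Statistical average (mean) background model, formula (1).

    frames: list of N grayscale frames (H x W arrays of equal size).
    Returns the pixel-wise average as the estimated background image.
    """
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    return stack.mean(axis=0).astype(np.uint8)
```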

Gaussian Distribution Background Model

The Gaussian distribution background model was first proposed by N. Friedman et al. and is divided into the single Gaussian distribution and the mixed Gaussian distribution background models. The single Gaussian model regards the change in the gray value of each pixel in the background image as a Gaussian random process and establishes a Gaussian model for each pixel, which is maintained by continuously updating the Gaussian model of the background image. The mixed Gaussian model uses K (usually 3 to 5) Gaussian components to characterize the features of each pixel in the image. After a new image is acquired, the mixed Gaussian model is updated, and each pixel of the current image is matched against its Gaussian mixture model to determine whether it belongs to the background or the foreground.

This section focuses on the mixed Gaussian background modeling method. This method represents the background with long-term statistical information, such as the probability density estimated from a large number of sample values, and uses a statistical criterion (such as the 3σ principle) to judge whether a pixel belongs to the target. The method can model complex dynamic backgrounds, at the cost of a large amount of computation. Suppose that any pixel (x, y) in the background obeys a model composed of K Gaussian distributions, as shown in formula (2).

$$P(I_{x,y}) = \sum_{j=1}^{K} \omega_j \, \eta\bigl(I_{x,y}, \mu_{x,y,j}, \sigma_{x,y,j}\bigr) \tag{2}$$

η(I_{x,y}, μ_{x,y,j}, σ_{x,y,j}) denotes the j-th Gaussian probability density, whose mean is μ_{x,y,j} and whose variance is σ_{x,y,j}; ω_j is the weight of the j-th Gaussian distribution.

The pixel value observed at the current moment is compared with the current K Gaussian distributions in descending order of weight to find the best match. If there is no match, the pixel is a foreground point; otherwise it is a background point. The Gaussian distribution background model involves a large amount of calculation, stores many parameters, and takes a long time, which is not conducive to practical application.
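For reference, OpenCV ships a mixture-of-Gaussians background subtractor (MOG2) that implements this idea; the sketch below shows its typical use. The file name and parameter values are illustrative assumptions, and the parameter names are OpenCV's rather than the paper's.

```python
import cv2

# Mixture-of-Gaussians background subtraction with OpenCV's MOG2 model.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)

cap = cv2.VideoCapture("traffic.mp4")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)             # 255 = foreground, 0 = background
    background = subtractor.getBackgroundImage()  # current background estimate
cap.release()
```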

IMPROVED BACKGROUND MODELING METHOD

This paper proposes an adaptive background update model based on the inter-frame difference method. This method performs a weighted average of the background obtained from the current frame and the background of the previous frame in the video sequence to update the background. The specific method is shown in formulas (3) and (4).

$$\mathrm{Diff}(x,y,t) = \lvert I(x,y,t) - B(x,y,t) \rvert \tag{3}$$

$$\mathrm{BOM}(x,y,t) = \begin{cases} 1, & \mathrm{Diff}(x,y,t) > Th \\ 0, & \text{otherwise} \end{cases} \tag{4}$$

I(x, y, t) and B(x, y, t) are the current frame containing the moving object at time t and the updated current background image, respectively. Th is the threshold; it is taken as the gray level to the right of the maximum peak of the difference-image histogram at which the histogram falls to 1/10 of the peak value.
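One possible reading of this threshold rule, as a sketch: locate the maximum peak of the difference-image histogram and take the first gray level to its right where the histogram drops to 1/10 of the peak value. The bin count and the search direction are assumptions.

```python
import numpy as np

def estimate_th(diff, bins=256):
    """Estimate Th from the difference-image histogram (assumed reading)."""
    hist, _ = np.histogram(diff.ravel(), bins=bins, range=(0, bins))
    peak = int(np.argmax(hist))          # gray level of the maximum peak
    target = hist[peak] / 10.0           # 1/10 of the peak value
    for g in range(peak, bins):
        if hist[g] <= target:
            return g
    return bins - 1
```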

Equation (5) combines two adjacent masks to obtain the motion template Stencil(x, y, t) at time t; it serves as the factor that determines which pixels of the current frame are used to update the current background. Formula (6) then gives the instantaneous background, and the background is updated with a weighted average of the instantaneous background and the current background, as shown in equation (7).

$$\mathrm{Stencil}(x,y,t) = \mathrm{BOM}(x,y,t) \,\&\, \mathrm{BOM}(x,y,t-1) \tag{5}$$

$$B_{\mathrm{temp}}(x,y,t) = \begin{cases} I(x,y,t), & \mathrm{Stencil}(x,y,t) = 1 \\ B(x,y,t-1), & \mathrm{Stencil}(x,y,t) = 0 \end{cases} \tag{6}$$

$$B(x,y,t) = \alpha \, B_{\mathrm{temp}}(x,y,t) + (1-\alpha) \, B(x,y,t-1) \tag{7}$$

Here α is the update coefficient; its value is positively correlated with the update speed. The larger α is, the faster the update, so changes in external lighting can be captured in time and the current background stays closer to the actual conditions of the current frame. The smaller α is, the slower the update rate, and the acquired background will show some deviation. After the background image is extracted, the current motion area is segmented using the background difference method. Using the threshold parameter in expression (8), the image is binarized and segmented to obtain the foreground binary image.

$$D(x,y,t) = \begin{cases} 1, & \mathrm{Diff}(x,y,t) > \mathit{threshold} \\ 0, & \text{otherwise} \end{cases} \tag{8}$$

The threshold in the formula should be chosen carefully, based on experimental results, so that the residual background is filtered out.
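To make the update rule concrete, the following is a minimal NumPy sketch of one step of formulas (3) to (8). Grayscale float32 images are assumed; the mask polarity applied to formula (6) is an assumption here (pixels flagged as moving in two consecutive masks keep the previous background, while all other pixels are refreshed from the current frame), and the default parameter values simply mirror the settings reported later in the experiments.

```python
import numpy as np

def update_background(frame, background, prev_bom, alpha=0.2, th=15, threshold=12):
    """One step of the adaptive background update, formulas (3)-(8).

    frame, background: grayscale float32 images of the same size.
    prev_bom: binary motion mask BOM(x, y, t-1) from the previous step.
    """
    diff = np.abs(frame - background)                          # formula (3)
    bom = (diff > th).astype(np.uint8)                         # formula (4)
    stencil = bom & prev_bom                                   # formula (5)
    # formula (6), with the assumed polarity: moving pixels keep the old
    # background, static pixels take the current frame.
    b_temp = np.where(stencil == 1, background, frame)
    new_background = alpha * b_temp + (1 - alpha) * background  # formula (7)
    foreground = (np.abs(frame - new_background) > threshold).astype(np.uint8)  # formula (8)
    return new_background, bom, foreground
```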

EXPERIMENTAL TESTING

The vehicle detection algorithm follows this flow: original video frame → background update and extraction → motion region segmentation → conversion of the frame image to a grayscale image → binarization → morphological processing → detection result, which completes the detection test.
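As a sketch of the post-processing end of this flow, the fragment below assumes a 0/1 foreground mask such as the one produced by the update step above; the kernel size, the minimum blob area, and the OpenCV 4 findContours signature are assumptions rather than settings from the paper.

```python
import cv2
import numpy as np

def detect_vehicles(foreground_mask):
    """Morphological cleanup and region extraction for a 0/1 foreground mask."""
    mask = (foreground_mask * 255).astype(np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove small noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill holes in vehicle blobs
    # OpenCV 4 return signature: (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 200]
    return boxes  # list of (x, y, w, h) rectangles around detected vehicles
```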

Comparison of experimental results

The background models obtained with the statistical average method, the mixed Gaussian model, and the method proposed in this paper were compared on a video from a traffic surveillance camera. The video resolution is 1080 × 960, the frame rate is 25 fps, and the duration is 10 seconds. The experimental results are shown in Fig. 1 below.

Figure 1.

Comparison of experimental results with three methods

It can be seen that, in the background frame extraction process, the background extracted by the statistical average method [Fig. 1 (b-1)] is blurred and affected by camera shake, and its result is the worst.

The parameters of the mixed Gaussian model are set as follows: the number of pixel models is 5, the initial variance is 30, the learning rate of the model weights is α = 0.005, and T = 0.7.

The resulting background [Fig. 1 (c-1)] is clearer, but the background extracted at the lower left corner of the frame is still somewhat blurred. The corresponding parameter settings of the proposed algorithm are: Th = 15, α = 0.2, threshold = 12. The background [Fig. 1 (d-1)] obtained by the method proposed in this paper is of good quality and closer to the real background.

Finally, after differencing, binarization, and morphological processing, the result images are obtained. The result of the statistical average method [Fig. 1 (b-5)] is noisy and the extraction is not clear. The result obtained with the mixed Gaussian method [Fig. 1 (c-5)] works well, but some noise remains. The result obtained with the proposed algorithm [Fig. 1 (d-5)] has the best effect: the noise is very small, the extracted vehicles are clearer, and their connectivity is better.

Comparison of Algorithm Performance

The conclusion of the comparison is drawn not only from the experimental results but also from the performance side. Table 1 compares the performance of the three methods, including the time used for the entire detection process, the memory footprint, and the CPU usage. It can be seen that, compared with the other two methods, the algorithm proposed in this paper consumes less time, uses less memory, and has a lower CPU occupancy rate.

TABLE I.

Comparison of performance of three modeling methods

CONCLUSION

This paper focuses on the vehicle detection method based on background difference in the intelligent traffic field and proposes a background modeling method based on adaptive inter-frame differencing.

Experiments were designed to compare the background images obtained by the three modeling methods: the commonly used statistical average method, the Gaussian distribution model method, and the proposed method.

At the same time, the running performance of the algorithm is analyzed in terms of time consumption, memory occupancy, and CPU occupancy, which verifies that the vehicle detection algorithm proposed in this paper can extract the background more accurately.

The morphological method is used to process the differenced binary image to eliminate noise and fill holes, completing the detection step. Experiments prove the effectiveness and real-time performance of the algorithm for video-based moving vehicle detection. However, the performance of this algorithm under complex weather and road conditions needs further improvement.
