
Deep Learning in Product Manufacturing Record System



Introduction

Since the beginning of the 21st century, a new generation of industrial revolution, characterized mainly by intelligence, information technology and automation, has opened the curtain on the “Industry 4.0” era [1]. As requirements for product quality and safety in modern industrial production rise, increasing attention is being paid to quality control across the whole product production process. To improve product quality and production efficiency, enterprises trace the quality of their products, monitor how products are used, and improve and control existing product quality [2]. In recent years, with the development of manufacturing technology and tightened supervision, the probability of accidents involving industrial products has been decreasing year by year, and the overall safety level is gradually increasing [3-4].

The product production record system records production data from processing to packaging, provides comprehensive analysis of product production data, and establishes a record system centered on the production process. In the intelligent manufacturing mode, operators enter product data with a barcode scanner, which greatly improves the efficiency of production data collection and thus of production itself. Deep learning has taken off in recent years, making major breakthroughs in fields such as medical research, energy consumption analysis, finance and communications. Capabilities such as face recognition, speech processing and video object detection are built on deep learning algorithms and extensive training, and the features extracted by different deep networks perform well in prediction, so learning deep features for prediction is becoming increasingly popular. In this paper, inspection records from a product manufacturing system are used as training samples; CNN, stacked LSTM (STACK LSTM), GRU, Inception, ConvLSTM and Causal LSTM techniques are used to design network models and to study the processing of temporal data, and after comparison a well-rounded, high-performing network model is selected to predict the results for products awaiting inspection. Based on the prediction results, the model can correct, in advance, the practices of workshop technicians who do not debug products according to specification, and can substantially improve the efficiency of product testing in the early stage of production.

Related work

A wide variety of time series data exists across industries: financial time series, electricity consumption time series, coal price time series, and so on. Studying and analyzing such data is of great value and significance, and much work has been done by scholars at home and abroad on time series analysis and forecasting.

The RNN is a traditional recurrent neural network [5], a model for processing sequential data. Practice has shown that the classical RNN performs poorly at capturing associations across long sequences: during backpropagation, overly long sequences lead to abnormal gradient computation, and gradients vanish or explode. The LSTM [6] (long short-term memory) model is a special kind of RNN that effectively handles the long-term dependence problem, improving on the RNN in two respects. The LSTM gating mechanism has three main gates: the forget gate, the input gate and the output gate. The equations of the LSTM network structure at time t are shown below, where f_t, i_t, o_t and C_t are the forget gate, input gate, output gate and cell state respectively, the W and b terms are the weight and bias parameters of each gate, σ is the sigmoid activation function, and tanh is the hyperbolic tangent activation function.

$$
\begin{cases}
f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f) \\
i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i) \\
\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C) \\
o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o) \\
C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t \\
h_t = o_t \odot \tanh(C_t)
\end{cases}
$$
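As a minimal illustration of these equations, the following NumPy sketch computes one LSTM step; the function and variable names are chosen here for clarity and are not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W_f, W_i, W_c, W_o, b_f, b_i, b_c, b_o):
    """One LSTM step following the gate equations above.
    Each W_* has shape (hidden + input, hidden); each b_* has shape (hidden,)."""
    z = np.concatenate([h_prev, x_t])      # [h_{t-1}, x_t]
    f_t = sigmoid(z @ W_f + b_f)           # forget gate
    i_t = sigmoid(z @ W_i + b_i)           # input gate
    c_tilde = np.tanh(z @ W_c + b_c)       # candidate cell state
    o_t = sigmoid(z @ W_o + b_o)           # output gate
    c_t = f_t * c_prev + i_t * c_tilde     # new cell state
    h_t = o_t * np.tanh(c_t)               # new hidden state
    return h_t, c_t
```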

LSTM has a clear advantage for predicting temporal data, but it is less effective on spatio-temporal sequences: it does not consider spatial correlation, carries redundancy, and cannot capture the strong local characteristics of spatial data. The ConvLSTM network structure was first proposed in 2015 for precipitation nowcasting [7]; the experiments showed that ConvLSTM grasps the spatio-temporal structure of the data and works better than LSTM at capturing spatio-temporal relationships. The core of the ConvLSTM structure is the same as LSTM's, with the output of the previous layer serving as the input of the next layer. The difference is that the fully connected operations on the weights W are replaced by convolution operations, so the network not only captures temporal relationships but also extracts spatial features like a convolutional layer, and thus obtains spatio-temporal features.
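To make the idea concrete, a minimal TensorFlow sketch of a ConvLSTM layer follows; the sequence length, frame size and filter count are illustrative assumptions, not values from the paper.

```python
import tensorflow as tf

# ConvLSTM2D computes the LSTM gates with convolutions instead of matrix
# products, so the spatial structure of each frame is preserved.
model = tf.keras.Sequential([
    tf.keras.layers.ConvLSTM2D(
        filters=16, kernel_size=(3, 3), padding="same",
        return_sequences=False,
        input_shape=(10, 32, 32, 1)),  # (time steps, height, width, channels)
])
frames = tf.random.normal((4, 10, 32, 32, 1))  # batch of 4 short sequences
features = model(frames)                       # -> (4, 32, 32, 16)
print(features.shape)
```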

The GRU [8] is a variant of the LSTM network; it inherits the advantages of LSTM while having a simpler structure. LSTM has three gates (forget, input and output), whereas the GRU has only two (update and reset). For passing memory information, LSTM passes it to the next unit through the output gate, while the GRU transfers it directly to the next unit without gating. The equations of the GRU unit at time t are shown below, where r_t, z_t and h̃_t are the reset gate, update gate and candidate (memory transfer) state respectively.

$$
\begin{cases}
r_t = \sigma(W_r \cdot [h_{t-1}, x_t]) \\
z_t = \sigma(W_z \cdot [h_{t-1}, x_t]) \\
\tilde{h}_t = \tanh(W \cdot [r_t \odot h_{t-1}, x_t]) \\
h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t
\end{cases}
$$
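Mirroring the LSTM sketch above, one GRU step can be written in NumPy as follows; the names are again chosen for illustration, and bias terms are omitted as in the equations above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W_r, W_z, W_h):
    """One GRU step following the equations above."""
    z_in = np.concatenate([h_prev, x_t])                          # [h_{t-1}, x_t]
    r_t = sigmoid(z_in @ W_r)                                     # reset gate
    z_t = sigmoid(z_in @ W_z)                                     # update gate
    h_cand = np.tanh(np.concatenate([r_t * h_prev, x_t]) @ W_h)   # candidate state
    return (1.0 - z_t) * h_prev + z_t * h_cand                    # blend old state and candidate
```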

Network model construction
CNN-STACK LSTM detection record prediction model construction
Local feature learning

The convolutional layer (Conv) is the structure unique to convolutional neural networks; it extracts local features of the data, such as the outlines of people or the shapes of cars in images. Because of the nature of this dataset, the convolution kernel performs the convolution operation over one-dimensional data from left to right, as shown in Figure 1.

Figure 1. Convolution operation

A BN (Batch Normalization) layer is added after the convolution layer; it renormalizes the increasingly shifted activation distribution, which makes the loss function smoother and eases gradient descent, and the Leaky ReLU activation is used to avoid the appearance of “dead neurons”. The final local feature learning module (CBRP), shown in Figure 2, is composed of a convolutional layer (Conv), a BN layer (Batch Normalization), an activation function (Leaky ReLU) and a pooling layer (Pooling); a minimal sketch follows the figure.

Figure 2. CBRP module structure
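The sketch below assembles one CBRP module in TensorFlow in the order described above; the filter count and kernel size are illustrative assumptions, as the paper does not give these hyperparameters.

```python
import tensorflow as tf

def cbrp_block(x, filters, kernel_size=3):
    """One CBRP module: Conv1D -> Batch Normalization -> Leaky ReLU -> Pooling."""
    x = tf.keras.layers.Conv1D(filters, kernel_size, padding="same")(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.LeakyReLU()(x)
    x = tf.keras.layers.MaxPooling1D(pool_size=2, padding="same")(x)
    return x
```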

Global feature learning

LSTM has made great breakthroughs in speech recognition, data prediction and financial records, resolving the drawbacks of the RNN. Global feature learning is performed with a stacked long short-term memory model (STACK LSTM), a multilayer LSTM network stacked vertically, which enhances the abstraction capability of the network. This article adopts a stacked LSTM network model, as shown in Figure 3, where the first layer serves as the input LSTM network and its output is used as the input of the second layer.

Figure 3. Stacked LSTM structure

CNN-STACK LSTM Network Model

The CNN-STACK LSTM network model is constructed from three CBRP modules, a fully connected layer, a STACK LSTM layer and a Softmax layer, where the output is normalized by the Softmax layer. The overall architecture of the CNN-STACK LSTM network model is shown in Figure 4.

Figure 4. CNN-STACK LSTM network model
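A hedged end-to-end sketch of this architecture follows, reusing the `cbrp_block` helper defined above; the input shape (seven features per record, per Table 2), unit counts and two output classes are assumptions, not values reported in the paper.

```python
inputs = tf.keras.Input(shape=(7, 1))   # 7 features per record as a 1-D sequence (assumed)
x = cbrp_block(inputs, 32)              # sequence length 7 -> 4
x = cbrp_block(x, 64)                   # 4 -> 2
x = cbrp_block(x, 128)                  # 2 -> 1
x = tf.keras.layers.Dense(64)(x)                         # fully connected layer
x = tf.keras.layers.LSTM(64, return_sequences=True)(x)   # stacked LSTM, first layer
x = tf.keras.layers.LSTM(64)(x)                          # stacked LSTM, second layer
outputs = tf.keras.layers.Dense(2, activation="softmax")(x)  # e.g. normal / fault
cnn_stack_lstm = tf.keras.Model(inputs, outputs)
cnn_stack_lstm.summary()
```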

INCEPTION-GRU detection record prediction model construction
Local feature learning

In 2012 AlexNet [10] first used techniques such as ReLU and Dropout in a CNN and made a historic breakthrough. Mainstream advances in network structure since then fall into two categories: increasing the depth of the network model (number of layers) and expanding its width (number of neurons). As depth and width increase, however, negative factors such as overfitting, vanishing gradients and exploding gradients appear. This lasted until GoogLeNet emerged in 2014: GoogLeNet [11] is composed of inception modules, a design proposed to improve training results from another angle. It uses CPU/GPU computational resources more efficiently and extracts more features with the same computation and weight parameters, thereby improving the training result.

Based on the characteristics of the training data, the basic inception unit is modified: the convolution of one branch is removed, and instead of a convolution followed directly by an activation function, a BN (Batch Normalization) layer is inserted between the convolution and the activation function, with Leaky ReLU as the activation. The basic structure of this inception module is shown in Figure 5.

Figure 5. Inception module structure
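The following sketch gives one plausible reading of such a module: parallel Conv1D branches with different kernel sizes, each followed by BN and Leaky ReLU, concatenated along the channel axis. The branch layout and kernel sizes are assumptions; the paper's exact modified module is only shown in Figure 5.

```python
import tensorflow as tf

def inception_block(x, filters):
    """A simplified inception-style block: parallel convolutions of different
    kernel sizes, each Conv1D -> BN -> Leaky ReLU, then channel concatenation."""
    branches = []
    for k in (1, 3, 5):
        b = tf.keras.layers.Conv1D(filters, k, padding="same")(x)
        b = tf.keras.layers.BatchNormalization()(b)
        b = tf.keras.layers.LeakyReLU()(b)
        branches.append(b)
    return tf.keras.layers.Concatenate()(branches)
```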

Using the inception structure for local feature learning both widens and deepens the local feature network, increasing its nonlinearity and fusing local features of different sizes, both of which improve accuracy.

INCEPTION-GRU network model

The INCEPTION-GRU network model is constructed from a convolutional layer, two inception modules, a fully connected layer, a GRU layer and a softmax layer; the overall architecture of the INCEPTION-GRU network model is shown in Figure 6.

Figure 6. INCEPTION-GRU network model
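A sketch of this assembly, reusing the `inception_block` helper above; as before, the input shape, filter and unit counts and output classes are assumptions.

```python
inputs = tf.keras.Input(shape=(7, 1))                      # assumed input shape
x = tf.keras.layers.Conv1D(32, 3, padding="same")(inputs)  # leading convolutional layer
x = inception_block(x, 16)                                 # first inception module
x = inception_block(x, 32)                                 # second inception module
x = tf.keras.layers.Dense(64)(x)                           # fully connected layer
x = tf.keras.layers.GRU(64)(x)                             # GRU over the feature sequence
outputs = tf.keras.layers.Dense(2, activation="softmax")(x)
inception_gru = tf.keras.Model(inputs, outputs)
```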

INCEPTION-Causal LSTM detection record prediction model construction
Global feature learning

Deeper network models improve nonlinear representation and learn more complex transformations, allowing them to fit more complex feature inputs [12]. Each layer in a network plays its own role, and the features learned by each layer can be observed via deconvolution of the classic ZFNet: the first layer extracts edges, the second simple shapes, and the third the shapes of targets, while deeper layers learn more complex features [13]. With only one layer, that single layer would have to learn a very complex transformation.

The model uses Causal LSTM for global feature learning. The Causal LSTM is a cascade of ConvLSTM units; it adds more nonlinear operations so that features are amplified, which benefits the capture of short-term dynamics and sudden situations. The Causal LSTM designed here is composed of four layers of ConvLSTM units, similar to a four-layer stacked LSTM structure with the LSTM base unit replaced by a ConvLSTM.
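Since the paper describes this component as four cascaded ConvLSTM units, a four-layer ConvLSTM2D stack serves as a stand-in sketch below; all shapes and filter counts are illustrative assumptions.

```python
import tensorflow as tf

frames = tf.keras.Input(shape=(10, 8, 8, 1))   # (time, height, width, channels), assumed
h = frames
for _ in range(3):
    # intermediate layers return full sequences so the next layer sees every step
    h = tf.keras.layers.ConvLSTM2D(16, (3, 3), padding="same", return_sequences=True)(h)
h = tf.keras.layers.ConvLSTM2D(16, (3, 3), padding="same")(h)  # final layer emits the last state
causal_lstm = tf.keras.Model(frames, h)
causal_lstm.summary()
```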

INCEPTION-Causal LSTM Network Model

The INCEPTION-Causal LSTM network model is constructed from a convolutional layer, two inception modules, a Causal LSTM layer and a softmax layer. The overall architecture of the network model is shown in Figure 7.

Figure 7. INCEPTION-Causal LSTM network model

Experiments
Data set structure and processing

The production records come from the relay workshop's system data from November 2018 to April 2019, amounting to roughly 260,000 records, of which 149,800 are inspection records. The data structure of the inspection records is detailed in Table 1.

Table 1. Data structure of inspection records

Field           | Annotation           | Data Type    | Primary Key
id              | Auto-increment id    | int(32)      | yes
txm             | Barcode              | varchar(40)  | no
probbh_id       | Product version id   | int(32)      | no
cpjcjd_id       | Product process id   | int(32)      | no
lx              | Type                 | int(2)       | no
xh              | Work serial number   | int(2)       | no
czry            | Debugger             | varchar(20)  | no
jcry            | Tester               | varchar(20)  | no
wlh             | Material number      | varchar(20)  | no
wlms            | Material description | varchar(200) | no
dd              | Order number         | varchar(30)  | no
bbh             | Version number       | varchar(50)  | no
mc              | Process name         | varchar(30)  | no
ms              | Description          | varchar(60)  | no
create_date     | Creation time        | date         | no
create_username | Creating user        | varchar(60)  | no

Since the deep learning models are trained on the inspection records, the fields “primary key id”, “product process id”, “product version id”, “barcode”, “order number”, “creation time” and “creation user” are not considered, and the corresponding redundant fields in the inspection records are dropped. The material description merely explains the material number, so it is removed as well; the work serial number, debugger, tester, material number, version number, process name and description fields are used as training features.

Since the feature values of the training data are limited, a data mapping approach is adopted to process the string-type features in the inspection records. First, all distinct string values in each feature are enumerated and assigned ascending Arabic numerals; for the feature “description”, for example, normal is 0 and fault is 1. The data in Table 2 then undergo feature normalization, yielding data with a mean of 0 and a standard deviation of 1. This processing lets the model learn quickly and optimize iteratively, improving training efficiency (a minimal sketch of the mapping and normalization follows Table 2).

Table 2. Results of data processing of inspection records

Xh | Czry | Jcry | Wlh | Bbh | Mc | Ms
5 | 0 | 0 | 0 | 0 | 0 | 1
3 | 1 | 1 | 1 | 0 | 1 | 0
3 | 2 | 1 | 0 | 0 | 1 | 0
5 | 3 | 2 | 2 | 0 | 0 | 0
5 | 4 | 0 | 0 | 0 | 0 | 1
3 | 3 | 1 | 1 | 0 | 1 | 0
3 | 5 | 1 | 3 | 0 | 1 | 0
3 | 5 | 1 | 3 | 0 | 1 | 0
5 | 6 | 3 | 3 | 0 | 0 | 0
5 | 5 | 4 | 2 | 0 | 0 | 0
3 | 5 | 1 | 3 | 0 | 1 | 0
3 | 5 | 1 | 3 | 0 | 1 | 0
5 | 7 | 0 | 1 | 0 | 0 | 0
3 | 2 | 1 | 0 | 0 | 1 | 0
5 | 8 | 5 | 2 | 0 | 0 | 1
3 | 6 | 1 | 1 | 0 | 1 | 0
3 | 5 | 1 | 3 | 0 | 1 | 0
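The sketch below illustrates the mapping and normalization steps with a few hypothetical records (column names follow Table 1; the sample values and the category-code assignment are one convenient way to realize the smallest-to-largest numbering described above, not the paper's exact procedure).

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

records = pd.DataFrame({
    "xh":   [5, 3, 3],
    "czry": ["Zhang", "Li", "Wang"],    # hypothetical debugger names
    "jcry": ["Chen", "Chen", "Zhao"],   # hypothetical tester names
    "wlh":  ["M001", "M002", "M001"],
    "bbh":  ["V1", "V1", "V1"],
    "mc":   ["debug", "inspect", "inspect"],
    "ms":   ["fault", "normal", "normal"],
})
# Map each distinct string value in a column to an integer code.
for col in records.select_dtypes(include="object"):
    records[col] = records[col].astype("category").cat.codes
# Standardize all features to zero mean and unit standard deviation.
features = StandardScaler().fit_transform(records.to_numpy(dtype="float64"))
print(features)
```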
Model training

This article uses the TensorFlow framework to build the network models, together with the Adam [14-15] optimization algorithm, which introduces a second-moment (squared-gradient) correction. In addition to updating the weight and bias parameters via the backpropagation algorithm, the learning rate is continuously adjusted during training. 140,000 records from the product inspection data were used as training data and 10,000 as test data, and each of the three network models was trained for 500 iterations.
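A hedged training sketch under that split follows; `cnn_stack_lstm` is the model assembled earlier, the arrays are random placeholders standing in for the preprocessed records, and the learning rate and batch size are assumptions the paper does not specify.

```python
import numpy as np
import tensorflow as tf

# Placeholder data with the paper's train/test sizes and the assumed input shape.
x_train = np.random.rand(140000, 7, 1).astype("float32")
y_train = np.random.randint(0, 2, size=(140000,))
x_test = np.random.rand(10000, 7, 1).astype("float32")
y_test = np.random.randint(0, 2, size=(10000,))

model = cnn_stack_lstm
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # rate assumed
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=500, batch_size=128,  # 500 iterations per the text
          validation_data=(x_test, y_test))
```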

Experimental results

The test data were run through each of the three network models to verify their performance and prediction effectiveness. Table 3 shows the results of three prediction experiments with each model. Compared with the other two models in terms of running time, stability and prediction effect, the INCEPTION-GRU network model had a short running time, high stability and accurate predictions.

Table 3. Comparison of network model accuracy (%)

Network Model         | Run 1 | Run 2 | Run 3
CNN-STACK LSTM        | 92.33 | 92.50 | 92.78
INCEPTION-GRU         | 92.60 | 93.50 | 93.28
INCEPTION-Causal LSTM | 83.83 | 79.67 | 94.25
Summary and Prospect

In conclusion, this article proposes a deep learning based method for product inspection record prediction and designs three network models to predict on the dataset. Each model has different strengths in both results and process; considered comprehensively from all aspects, the INCEPTION-GRU network model is chosen as the product inspection record prediction model to provide technical support for new product detection. The study of the three network models also found that ConvLSTM is not as stable and efficient as LSTM and GRU in processing this temporal data. Future work will focus on improving the GRU model to further improve the accuracy of predicting the product inspection pass rate.
