Journal details
Format
Journal
eISSN
2444-8656
First published
01 Jan 2016
Publication frequency
2 times per year
Languages
English
Open Access

# Construction and application of automobile user portrait based on k-mean clustering model

###### Accepted: 27 Apr 2022
K-means clustering algorithm

As an effective clustering method, K-means clustering is combined with the minimum-distance criterion to classify research samples accurately. It is typically applied to cluster analysis of data streams and has the advantages of convenience, speed and simplicity. Its core idea is to select K objects at random, each representing one cluster center. Every remaining object is then assigned to the most similar cluster according to its distance from each cluster center, which maximises the similarity between objects within the same cluster and minimises the similarity between objects in different clusters.[1]
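The core idea above can be sketched as a short routine. This is an illustrative Python/NumPy sketch, not code from the paper; the function name `kmeans` and its parameters are assumptions. Random centers are drawn, each object is assigned to its nearest center, and the centers are recomputed as cluster means until they stop moving.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal K-means sketch: random initial centers, nearest-center
    assignment, mean update, repeated until the centers stop moving."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign every object to its most similar (nearest) cluster center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute each center as the mean of the objects assigned to it
        new_centers = np.array([X[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```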

Definition 1. The center of a cluster represents the mean value of the objects in that cluster, and can also be regarded as the cluster's center of gravity or center of mass. The new centers of all clusters are recalculated repeatedly until the criterion function converges. Suppose that in a given clustering result, class i cluster Ci contains Ni research samples with mean mi; then the following formula holds[2]: $mi=1Ni∑x∈Cix$ {m_i} = {1 \over {{N_i}}}\sum\limits_{x \in {C_i}} x

Theorem 1. Combining this with the formula above, the sum of squared errors between the samples and their corresponding cluster centers is: $Je=∑i=1K∑x∈Ci‖x−mi‖2$ {J_e} = \sum\limits_{i = 1}^K {\sum\limits_{x \in {C_i}} {{{\left\| {x - {m_i}} \right\|}^2}}}

Proposition 2. In the above formula, Je denotes the sum of squared errors of the clustering result: the total squared error produced when the K cluster centers m1, m2, …, mK represent the K sample subsets C1, C2, …, CK. Different clusterings yield different values of Je, so the clustering that guarantees the minimum value of Je can be regarded as the optimal result under the sum-of-squared-errors criterion.
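The criterion J_e can be evaluated directly from the formula above. A minimal Python/NumPy sketch follows (the name `sse` is an assumption; `labels[i]` is taken to give the cluster index of sample i):

```python
import numpy as np

def sse(X, labels, centers):
    """J_e = sum over clusters i of sum over x in C_i of ||x - m_i||^2."""
    return sum(np.sum((X[labels == i] - centers[i]) ** 2)
               for i in range(len(centers)))
```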

Lemma 3. Because the traditional k-means clustering algorithm suffers from several defects in application, an improved algorithm is presented based on accumulated experience, mainly to prevent the criterion function from settling into a local optimum: the first initial cluster center is chosen as the data point farthest from the mean of all data points, and the data point farthest from the first initial cluster center is taken as the second initial cluster center.[3]

Corollary 4. The value range of K is therefore determined according to the farthest distance. The algorithm is described as follows. First, given n objects Cn = {x1, x2, …, xn}, the mean of the n objects is calculated as: $μ=1n∑i=1nxi$ \mu = {1 \over n}\sum\limits_{i = 1}^n {{x_i}}

Conjecture 5. Second, let xp and xq denote the two initial cluster centers, where xp is chosen so that its distance from the mean of all targets is maximal; the distance dpq between the two centers then also reaches the maximum: $dpq=max{dij, i, j=1,2,…,n}$ {d_{pq}} = \max \left\{{{d_{ij}},\,i,\,j = 1,2, \ldots,n} \right\}

Third, calculate the distance between each of the remaining n − 2 objects and xp and xq respectively. If xi is closer to xp, it is assigned to the class of xp, and otherwise to the class of xq. The two classes are denoted Z1 and Z2.

Fourth, calculate the farthest distance between xp and the targets contained in Z1: $d1=max{|xs−xp|}, xs∈Z1$ {d_1} = \max \left\{{\left| {{x_s} - {x_p}} \right|} \right\},\,{x_s} \in {Z_1}

Example 6. Likewise, calculate the farthest distance between xq and the targets contained in Z2: $d2=max{|xt−xq|}, xt∈Z2$ {d_2} = \max \left\{{\left| {{x_t} - {x_q}} \right|} \right\},\,{x_t} \in {Z_2}

This yields dmax = max{d1, d2}.

Fifth, if the condition dmax > δ · dpq (δ ∈ [0.5, 1]) is met, the point xi (i = s or t) becomes a new cluster center and the number of categories becomes K = K + 1; otherwise, the selection ends.

Sixth, repeat steps three to five.
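Steps one to six can be summarised in a short sketch. This is an illustrative Python/NumPy reading of the procedure, not the authors' code; following Lemma 3, it simplifies the pair selection by taking x_p as the point farthest from the overall mean and x_q as the point farthest from x_p.

```python
import numpy as np

def initial_centers(X, delta=0.75):
    """Improved center selection: the first center is the point farthest
    from the overall mean, the second the point farthest from the first;
    further centers are split off while the largest within-class distance
    exceeds delta * d_pq (delta in [0.5, 1])."""
    mu = X.mean(axis=0)                          # step 1: overall mean
    p = np.argmax(np.linalg.norm(X - mu, axis=1))
    q = np.argmax(np.linalg.norm(X - X[p], axis=1))
    centers = [X[p], X[q]]                       # step 2: x_p and x_q
    d_pq = np.linalg.norm(X[p] - X[q])
    while True:                                  # steps 3-6
        C = np.array(centers)
        dists = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
        nearest = dists.min(axis=1)              # distance to own center
        far = np.argmax(nearest)                 # farthest assigned point
        if nearest[far] > delta * d_pq:
            centers.append(X[far])               # new center, K = K + 1
        else:
            return np.array(centers)
```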

The modeling flow chart of the overall algorithm is shown in Figure 1[4].

Cluster analysis based on a self-organizing feature map network is a newer unsupervised clustering method in neural network research; the specific flow chart is shown in Figure 2.

Note 7. On the one hand, the measure of similarity must be made explicit. The specific formula is as follows: $‖X−Y‖=(X−Y)T(X−Y)$ \left\| {X - Y} \right\| = \sqrt {{{\left({X - Y} \right)}^T}\left({X - Y} \right)}

In the formula above, X and Y represent two pattern vectors.

On the other hand, the weight vectors must be normalized so that the condition $‖Wj‖2=∑iwij2=1$ {\left\| {{W_j}} \right\|^2} = \sum\limits_i {w_{ij}^2 = 1} is satisfied: $wij'=wij∑iwij2$ w_{ij}^{'} = {{{w_{ij}}} \over {\sqrt {\sum\limits_i {w_{ij}^2}}}}

During the learning of the self-organizing feature map network, the weight coefficients between the output neurons and the input vectors are adjusted as follows: ${dwijdt=η(t)[Xj(t)−wij(t)]j∈NCdwijdt=0j∉NC$ \left\{{\matrix{{{{d{w_{ij}}} \over {dt}} = \eta \left(t \right)\left[{{X_j}\left(t \right) - {w_{ij}}\left(t \right)} \right]} \hfill & {j \in {N_C}} \hfill \cr {{{d{w_{ij}}} \over {dt}} = 0} \hfill & {j \notin {N_C}} \hfill \cr}} \right.

Open Problem 8. In the above equation, Xj(t) represents the input signal, wij(t) represents the weight between input neuron i and output neuron j at time t, and η(t) represents the learning rate. In a continuous self-organizing learning system, either of the following schedules can be set, where t stands for the number of learning iterations, usually controlled between 500 and 10000, and Nc represents the neighborhood of neuron c[5]. $η(t)=1tη(t)=0.2(1−t10000)$ \matrix{{\eta \left(t \right) = {1 \over t}} \hfill \cr {\eta \left(t \right) = 0.2\left({1 - {t \over {10000}}} \right)} \hfill \cr}
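Both learning-rate schedules and the neighborhood-restricted weight update can be written down directly. Below is a minimal Python/NumPy sketch under the assumption of a discrete-time update step; all function names are hypothetical.

```python
import numpy as np

def eta_inverse(t):
    """Learning-rate schedule eta(t) = 1/t."""
    return 1.0 / t

def eta_linear(t, T=10000):
    """Learning-rate schedule eta(t) = 0.2 * (1 - t/T)."""
    return 0.2 * (1.0 - t / T)

def update_weights(W, x, neighborhood, eta):
    """One discrete learning step: weights of neurons inside the winner's
    neighborhood N_c move toward the input x; all others stay fixed."""
    W = W.copy()
    for j in neighborhood:
        W[j] += eta * (x - W[j])
    return W
```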

Assume the input vector is X = (x1, x2, …, xp)T and there are k output neurons; the weight vector of output neuron j is wj = (w1j, w2j, …, wpj)T, j = 1, 2, …, k. The best matching unit for X is the competition-layer node with the highest similarity to X, namely w*, which satisfies S(X, w*) ≤ S(X, wj), j = 1, 2, …, k, where S(A, B) is the Euclidean distance used to measure the similarity between vectors and nodes (a smaller distance means a higher similarity). The corresponding definition is: $S(X,wj)=(x1−w1j)2+…+(xp−wpj)2$ S\left({X,{w_j}} \right) = \sqrt {{{\left({{x_1} - {w_{1j}}} \right)}^2} + \cdots + {{\left({{x_p} - {w_{pj}}} \right)}^2}}

In the above formula, p represents the dimension of the input vector X, and the node minimising S(X, wj) is the best matching unit. The weight coefficients are then adjusted with a second-place penalty rule; combining this with the concept of density gives: $μj={1if j=c−1if j=r0otherwise$ {\mu _j} = \left\{{\matrix{1 \hfill & {if\,j = c} \hfill \cr {- 1} \hfill & {if\,j = r} \hfill \cr 0 \hfill & {otherwise} \hfill \cr}} \right.

In the above equation, c represents the winning unit, which can also be regarded as the best matching unit, and r represents the second-best unit; they satisfy the following conditions: $rc*S(X,wc)≥rj*S(X,wj)rr*S(X,wr)≥rj*Sj≠c(X,wj)$ \matrix{{{r_c}*S\left({X,{w_c}} \right) \ge {r_j}*S\left({X,{w_j}} \right)} \hfill \cr {{r_r}*S\left({X,{w_r}} \right) \ge {r_j}*{S_{j \ne c}}\left({X,{w_j}} \right)} \hfill \cr}

This gives $rj=mj∑i=1kmi$ {r_j} = {\raise3ex{{m_j}} \!\mathord{\left/ {\vphantom {{{m_j}} {\sum\limits_{i = 1}^k {{m_i}}}}}\right.}\!\lower0.7ex{\sum\limits_{i = 1}^k {{m_i}}}} , where mj represents the number of times μj = 1, i.e., the number of times neuron j has won.

Finally, the weight value should be adjusted according to the following formula: ${dwijdt=α*d(xi)*[Xi(t)−wij(t)]if j=cdwijdt=−β*d(xi)*[Xi(t)−wij(t)]if j=rdwijdt=0 otherwise$ \left\{{\matrix{{{{d{w_{ij}}} \over {dt}} = \alpha *d\left({{x_i}} \right)*\left[{{X_i}\left(t \right) - {w_{ij}}\left(t \right)} \right]} \hfill & {if\,j = c} \hfill \cr {{{d{w_{ij}}} \over {dt}} = - \beta *d\left({{x_i}} \right)*\left[{{X_i}\left(t \right) - {w_{ij}}\left(t \right)} \right]} \hfill & {if\,j = r} \hfill \cr {{{d{w_{ij}}} \over {dt}} = 0 \,\,\,\,\,\,\,\,otherwise} \hfill & {} \hfill \cr}} \right.

Here d(xi) represents the density of xi, defined as: $d(xi)=∑j=1nd(xi,xj)∑l=1nd(xl,xj)$ d\left({{x_i}} \right) = \sum\limits_{j = 1}^n {{{d\left({{x_i},{x_j}} \right)} \over {\sum\limits_{l = 1}^n {d\left({{x_l},{x_j}} \right)}}}}

In the above formula, α represents the rate of learning and β represents the rate of forgetting, which meets the condition α >> β.
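The density d(x_i) defined above can be computed from the pairwise distance matrix. An illustrative Python/NumPy sketch follows (the function name and the guard for an all-zero column are added assumptions):

```python
import numpy as np

def density(X):
    """d(x_i) = sum_j [ d(x_i, x_j) / sum_l d(x_l, x_j) ]: each pairwise
    distance is normalized by the total distance of all samples to x_j."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    totals = D.sum(axis=0)
    totals[totals == 0] = 1.0  # guard: avoid dividing by an all-zero column
    return (D / totals).sum(axis=1)
```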

Driving attitude characteristics of car users

In order to construct the portrait of the car user, the driving posture is described first; it involves the torso azimuth, the upper-arm azimuth, the elbow angle, the thigh angle, the knee angle and so on. During the design of the automobile driving layout, priority should be given to defining the main feature points of the car user: point S, the center of the line connecting the left and right joints of the human body; point P, the center of the line connecting the left and right hand marker points when the upper-limb posture is symmetric; point E, the center of the line connecting the left and right ankle-joint marker points when the lower-limb posture is symmetric; and point H, the intersection of the trunk line and the thigh center line in the two-dimensional human template. In the three-dimensional model, H is the center of the line connecting the left and right hip-joint marker points, and it corresponds to the H point used in car seat design[6].

Cluster analysis based on the k-means clustering algorithm proposed above shall be performed according to the following steps:

First, suppose the domain U = {x1, x2, …, xm} represents the targets awaiting classification, and each target has n indicators as attribute features, so that Xi = {xi1, xi2, …, xin}, where xil is the data value of the l-th indicator of the i-th target. In practical studies, data with different attributes usually have different dimensions; to compare data combinations with different dimensions, the data must be transformed scientifically, a process known as standardization. The most common operation is the translation and range transformation:[7] $xil'=xil−mini∈m{xil}maxi∈m{xil}−mini∈m{xil}$ x_{il}^{'} = {{{x_{il}} - \mathop {min}\limits_{i \in m} \left\{{{x_{il}}} \right\}} \over {\mathop {max}\limits_{i \in m} \left\{{{x_{il}}} \right\} - \mathop {min}\limits_{i \in m} \left\{{{x_{il}}} \right\}}}

In the above formula, $maxi∈m{xil}$ \mathop {max}\limits_{i \in m} \left\{{{x_{il}}} \right\} represents the maximum value of the l-th index among the m targets, and $mini∈m{xil}$ \mathop {min}\limits_{i \in m} \left\{{{x_{il}}} \right\} represents the corresponding minimum value.
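The translation and range transformation can be applied column by column. A minimal Python/NumPy sketch follows (the function name and the guard for a constant indicator column are added assumptions, since the formula divides by max − min):

```python
import numpy as np

def min_max_standardize(X):
    """Translation and range transformation: rescale every indicator
    (column) to [0, 1] via (x - min) / (max - min)."""
    lo = X.min(axis=0)
    span = X.max(axis=0) - lo
    span[span == 0] = 1.0  # guard: a constant indicator maps to 0
    return (X - lo) / span
```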

Secondly, given the set number of clusters k, k targets are selected as the initial cluster centers after standardization, namely {xj(1)}, j = 1, 2, …, k. The selection principle is that the distances between the initial cluster centers should be as large as possible.

Thirdly, for every standardized target, the distance to each cluster center $d[xi', xj(1)]$ d\left[{x_i^{'},\,x_j^{\left(1 \right)}} \right] is calculated. The most common choice is the Euclidean distance: $d[xi', xj(1)]=∑l=1n(xil'−xjl(1))2$ d\left[{x_i^{'},\,x_j^{\left(1 \right)}} \right] = \sqrt {\sum\limits_{l = 1}^n {{{\left({x_{il}^{'} - x_{jl}^{\left(1 \right)}} \right)}^2}}}

Fourthly, each of the m standardized targets is assigned to the cluster center at minimum distance $mini∈m, j∈k{d[xi', xj(1)]}$ \mathop {min}\limits_{i \in m,\,j \in k} \left\{{d\left[{x_i^{'},\,x_j^{\left(1 \right)}} \right]} \right\} ; xi′ is placed into the j-th class, so the m targets are divided into k categories.

Fifth, calculate the centroid of each of the k categories accurately. Assuming q targets are divided into class j, the centroid of class j is calculated as follows: $xj(2)=1q∑u=1qxu'={1q∑u=1qxul'}$ x_j^{\left(2 \right)} = {1 \over q}\sum\limits_{u = 1}^q {x_u^{'} = \left\{{{1 \over q}\sum\limits_{u = 1}^q {x_{ul}^{'}}} \right\}}

In the above formula, u = 1, 2, …, q, and $xul'$ x_{ul}^{'} represents the l-th index value of the u-th standardized target divided into class j.

Sixth, regard $xj(2)$ x_j^{\left(2 \right)} as the new cluster center, return to the third step to reclassify the targets, and complete the iterative clustering step by step in this way.

After the r-th clustering iteration, the sum of the squared distances between the standardized targets contained in class j and their cluster center is as follows: $D[xu', xj(r)]=∑u=1q(xul'−xjl(r))2$ D\left[{x_u^{'},\,x_j^{\left(r \right)}} \right] = \sum\limits_{u = 1}^q {{{\left({x_{ul}^{'} - x_{jl}^{\left(r \right)}} \right)}^2}}

The sum of the squared distances between the standardized targets in all k classes and their corresponding cluster centers is as follows: $Dr=∑j=1kD[xu', xj(r)]=∑j=1k∑u=1q(xul'−xjl(r))2$ {D_r} = \sum\limits_{j = 1}^k {D\left[{x_u^{'},\,x_j^{\left(r \right)}} \right] = \sum\limits_{j = 1}^k {\sum\limits_{u = 1}^q {{{\left({x_{ul}^{'} - x_{jl}^{\left(r \right)}} \right)}^2}}}}

If the classification is unreasonable, the sum of squared distances Dr will be large; as the number of iterations increases, the corresponding value keeps declining and tends to stabilise. Given a sufficiently small quantity ε, once the condition $|Dr+1−Dr|Dr+1≤ε$ {{\left| {{D_{r + 1}} - {D_r}} \right|} \over {{D_{r + 1}}}} \le \varepsilon is satisfied, the iteration terminates and the clustering analysis is final.

In addition, the choice of the number of clusters k directly affects the final result. If k is too small, the sum of squared distances between the targets and their cluster centers becomes large and the final result is poor; if k is too large, the practical significance of the cluster analysis is lost. Normally the minimum number of clusters is 2, and as the number of clusters increases, the corresponding Dr keeps declining. The value of k at which the decline of Dr changes from fast to slow can therefore be regarded as the optimal number of clusters.
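The stopping rule and the choice of k described above can be sketched as follows (illustrative Python: `converged` implements the ε-criterion, and `elbow_k` picks the k at which the decline of D_r slows most sharply, the "fast to slow" heuristic; both names and the second-difference rule are assumptions):

```python
def converged(D_prev, D_curr, eps=1e-4):
    """Stop iterating once |D_{r+1} - D_r| / D_{r+1} <= eps."""
    return abs(D_curr - D_prev) / D_curr <= eps

def elbow_k(D_by_k):
    """Given a mapping k -> D_r, return the k at which the drop in D_r
    slows most sharply (largest decrease between consecutive drops)."""
    ks = sorted(D_by_k)
    drops = [D_by_k[ks[i]] - D_by_k[ks[i + 1]] for i in range(len(ks) - 1)]
    slowdowns = [drops[i] - drops[i + 1] for i in range(len(drops) - 1)]
    return ks[1 + slowdowns.index(max(slowdowns))]
```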

Result analysis

Combined with the k-means clustering algorithm for driving posture proposed above, 50 volunteers were selected for experimental verification, comprising 30 males and 20 females, all with more than two years of actual driving experience. Before the formal experiment, height and body mass were measured according to the relevant standards in order to obtain the positions of the human marker points. The results are shown in Table 1:

Characteristic information of measurement

| Statistic | Male (30) mean (SD) | Female (20) mean (SD) | Male (30) min–max | Female (20) min–max |
|---|---|---|---|---|
| Age | 28.5 (5.0) | 28 (4.1) | 23–41 | 23–40 |
| Height/cm | 173.6 (5.0) | 164.5 (3.8) | 166.0–183.0 | 158.0–172.0 |
| Mass/kg | 71.6 (7.8) | 54.3 (5.4) | 55.0–91.0 | 45.0–65.0 |
| Trunk length hs/cm | 58.6 (2.5) | 52.4 (3.1) | 54.0–63.0 | 46.0–57.0 |
| Arm length lu/cm | 62.2 (1.7) | 57.1 (2.8) | 59.0–65.0 | 53.0–62.0 |

The driving sitting-position test was a simulated operation based on the layout design of an ordinary car, in which the seat distance, seat inclination angle and steering-wheel height of the platform could be adjusted independently; the participants sat in an adjustable ordinary car seat. During the test, all volunteers adjusted the seat height, inclination and distance according to their own preferences, so that they could hold the wheel in the grip posture they use in daily driving. Based on the calculation results shown in the table above, the data-standardization step can be omitted and cluster analysis performed directly. In this paper, SPSS is used for the cluster analysis, and several candidate cluster numbers are compared to obtain the optimal one. At the same time, 3D images are used to present the clustering results visually and finally obtain the effective cluster centers. Comparing the different cluster numbers shows that the optimal number of clusters is 5; the variance results of the clustering are shown in the following table:

Analysis of variance of clustering results

| Indicator | Mean square (clustering) | Mean square (error) | F | Sig. |
|---|---|---|---|---|
| d1/hs | 1.17×10⁻² | 1.05×10⁻³ | 11.093 | 2.40×10⁻⁶ |
| d2/lu | 2.18×10⁻² | 1.47×10⁻³ | 14.799 | 8.61×10⁻⁸ |
| d3/d1 | 5.64×10⁻² | 1.01×10⁻³ | 55.761 | 7.21×10⁻¹⁷ |

Based on the 3D scatter plot, the distribution characteristics of the final clusters can be studied and more feature information obtained through visual presentation. Each of the five posture clusters is represented by a hollow marker of a different form, and the final cluster center of each posture type has a corresponding solid marker. Finally, the categories and sample numbers of the three index parameters corresponding to the five cluster centers proposed in this paper are shown in Table 3 below[5,6]:

Cluster centers and sample numbers

| Indicators | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| d1/hs | 0.784 | 0.801 | 0.744 | 0.790 | 0.836 |
| d2/lu | 0.814 | 0.732 | 0.842 | 0.804 | 0.863 |
| d3/d1 | 0.787 | 0.894 | 0.914 | 0.994 | 0.874 |
| Number of samples | 11 | 8 | 12 | 9 | 10 |

In the distribution results, d1 represents the vertical distance between points S and H, d2 the horizontal distance between points P and S, d3 the vertical distance between points P and H, d4 the horizontal distance between points E and H, and d5 the vertical distance between points E and H. hs stands for the trunk height when the human body sits up straight, and lu stands for the arm length. d1/hs represents the relative sitting height, d2/lu the relative horizontal distance to the steering wheel, and d3/d1 the relative vertical distance to the steering wheel[9,10].

Combining the distribution characteristics and value ranges above, the driving sitting characteristics of the five types of car users are as follows. First, when the driver holds the steering wheel the hands are lower than the shoulders, and the other two characteristics are moderate. Second, the driver sits relatively close to the steering wheel, the arms are bent sharply, and the other two characteristics are moderate. Third, the upper body leans forward strongly, the driver sits relatively far from the steering wheel, the arms are only slightly bent, and the other characteristics are moderate. Fourth, the hands holding the steering wheel are higher than the shoulders, and the other two characteristics are moderate. Fifth, the upper body leans only slightly, the driver sits far from the steering wheel, the arms are only slightly bent, and the other characteristics are moderate. Analysing driver anthropometric data through the cluster-center values not only provides an effective basis for the layout design of the car seat and steering wheel, but also supports constructing a complete portrait of car users. The method studied in this paper analyses both the driving-posture preferences of the target group and the differences in individual physique; this analysis can provide an effective basis for current vehicle designers and satisfy the posture preferences of different drivers[11].

Conclusion

To sum up, by presenting the driving portraits and corresponding postures of car users with dimensional parameters, and applying the k-means clustering model to 50 groups of sample data, the upper-body posture characteristics of the different classes can be obtained through visualization once the optimal number of clusters is defined. The empirical analysis shows that the clustering model can effectively extract the upper-body posture characteristics of car users while driving, providing an effective basis for research on differentiated driving posture and comfort. It should be noted that this paper did not analyse drivers' sitting preferences in the static period, nor did it consider the dynamic characteristics of the limbs and their scientific range of motion. In future technological development, researchers should therefore build on existing algorithms and previous research experience to strengthen the study of the static sitting preferences of car users, and conduct in-depth research under actual operating conditions and processes, so as to obtain more specific design information.


Zhengxian Zheng, Fan Zhang, Liang Nie. Application Mode and Function Design of Electric Vehicle User Card [J]. Zhejiang Electric Power, 2013(06):20–23.

Yingjie Huang, Jianghong Zhao, Danhua Zhao. Research and Application of Cognitive Model of Automotive Interior Design [J]. Packaging Engineering, 2019, 40(08):302–307.

An Li, Lihui Yang, Tao Wan. Commercial Automobile, 2015(11):53–54.

Bingzhen Zhao (compiled). As the Forerunner of Automotive Tool Development: Sandvik Colman Built an “Automotive Industry Application Center” at the User's Location [J]. Tool Outlook, 2007(1):15–16.

Min Duan, Hongyu Jiao, Jing Shi, et al. User Defined Feature Based on Pro/E and Its Application in Automotive Parts Design [J]. Machine Tool & Hydraulics, 2008(04):152–153.

Linhai Hu, Si Chen. Changan Automobile Big Data Application Innovation and Practice [J]. Digital Users, 2019, 25(10):156–157.

Haiying Lin, Zhengxiong Guo, Baosong Song, et al. Research and Application of Electric Vehicle Charging Service Integration Platform [J]. Power Supply and Electricity, 2019, 36(03):37–41.

Hao Teng, Jianmin Wu, Ziyi Zhao. Conception and Discussion on the Application of Customer Relationship Management in Automobile Marketing Management [J]. China Foreign Investment, 2009(12):136.

Ayech M W, Ziou D. Segmentation of Terahertz Imaging Using k-means Clustering Based on Ranked Set Sampling [J]. Expert Systems with Applications, 2015, 42(6):2959–2974. doi:10.1016/j.eswa.2014.11.050

Çitil, Hülya Gültekin. “Important Notes for a Fuzzy Boundary Value Problem.” Applied Mathematics and Nonlinear Sciences, vol. 4, no. 2, 2019, pp. 305–314. doi:10.2478/AMNS.2019.2.00027

Al-Ghafri, K. S., Rezazadeh, Hadi. “Solitons and Other Solutions of (3 + 1)-dimensional Space–time Fractional Modified KdV–Zakharov–Kuznetsov Equation.” Applied Mathematics and Nonlinear Sciences, vol. 4, no. 2, 2019, pp. 289–304. doi:10.2478/AMNS.2019.2.00026
