Journal details
Format
Journal
eISSN
2444-8656
First published
01 Jan 2016
Publication frequency
2 times per year
Languages
English
Open Access

# Multi-attribute decision-making methods based on normal random variables in supply chain risk management

###### Accepted: 24 Sep 2021

Random multi-attribute decision-making is a finite-option selection problem involving multiple attributes whose values are random variables. Applied to supply chain risk management, it can transform interval decision numbers and fuzzy decision numbers into standardised decisions. Against this background, the article first provides, through theoretical analysis, a method to determine the randomness of normal random variables based on their expectation and variance. Second, the article determines the range of the total utility value of each supply chain selection plan based on the 3σ principle. Experiments show that this method can resolve the difficulty of unifying opinions that arises from the different knowledge, experience and preferences of evaluation experts, providing a new method for supplier selection.

#### MSC 2010

Introduction

The complexity of decision-making problems means that decision-making indicators often include both quantitative and qualitative indicators. A hybrid multi-attribute decision-making model can handle both, which is more in line with actual decision-making situations. However, due to the complexity of the attributes and the bounded rationality of the decision-maker, it is difficult for weights given directly by the decision-maker's subjective judgement to be consistent with the actual situation [1]. The article presents a mathematical programming model that integrates the decision-maker's personal weight preference information with objective decision matrix information. At the same time, we propose a combined weight algorithm that can integrate various kinds of subjective and objective weights.

Mixed multi-attribute decision-making problems

Suppose S = {s1, s2, ⋯, sm} is the set of alternatives of a multi-attribute decision-making problem and U = {u1, u2, ⋯, un} is the indicator set. The weight vector of the indicators is W = (w1, w2, ⋯, wn), which is unknown [2].

Definition 1

We define a = [aL, aU] as a closed interval number, where aL, aU ∈ R and 0 ≤ aL ≤ aU; the set of all such interval numbers on R is denoted R̄.

Definition 2

We assume that [aL, aU] is an interval number and that ρ: [0, 1] → [0, 1] is a function with the following properties: ρ(0) = 0; ρ(1) = 1; and if x ≥ γ then ρ(x) ≥ ρ(γ). Then

$$f_\rho([a^L, a^U]) = \int_0^1 \frac{d\rho(\gamma)}{d\gamma}\left(a^U - \gamma\,(a^U - a^L)\right)d\gamma$$

In particular, for ρ(γ) = γ,

$$f_\gamma([a^L, a^U]) = (a^L + a^U)/2$$

Definition 3

We assume that R is the set of real numbers and P(R) is the set of all fuzzy subsets of R. A fuzzy set Ã ∈ P(R) is called a fuzzy number if there is at least one x0 ∈ R such that μ_Ã(x0) = 1, i.e. Ã is normal [3].

Definition 4

The fuzzy maximum set is the fuzzy subset S_max = {(x, μ_max(x)) | x ∈ R} with membership function

$$\mu_{\max}(x) = \begin{cases} x, & 0 \le x \le 1 \\ 0, & \text{otherwise} \end{cases}$$

and the fuzzy minimum set S_min is defined analogously, with membership function

$$\mu_{\min}(x) = \begin{cases} 1 - x, & 0 \le x \le 1 \\ 0, & \text{otherwise} \end{cases}$$

In this way, the fuzzy number Ã can be converted into the exact number b:

$$b = \left[\mu_R(\tilde A) + 1 - \mu_L(\tilde A)\right]/2$$

$$\mu_R(\tilde A) = \sup_x\left[\mu_{\tilde A}(x) \wedge \mu_{\max}(x)\right], \qquad \mu_L(\tilde A) = \sup_x\left[\mu_{\tilde A}(x) \wedge \mu_{\min}(x)\right]$$
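As an illustrative sketch (not from the paper), the conversion above can be evaluated numerically on a dense grid; the helper names below are hypothetical. For the symmetric triangular term "Normal" (0.4, 0.5, 0.6) used later in the case study, symmetry forces the crisp value to 0.5:

```python
import numpy as np

def triangular_mu(x, a, b, c):
    """Membership function of a triangular fuzzy number (a, b, c)."""
    return np.where(x <= b,
                    np.clip((x - a) / (b - a), 0.0, 1.0),
                    np.clip((c - x) / (c - b), 0.0, 1.0))

def defuzzify(mu_A):
    """b = [mu_R(A) + 1 - mu_L(A)] / 2 with mu_max(x) = x and
    mu_min(x) = 1 - x; the sup is taken over a dense grid on [0, 1]."""
    x = np.linspace(0.0, 1.0, 10001)
    m = mu_A(x)
    mu_R = np.max(np.minimum(m, x))        # sup[mu_A(x) ∧ mu_max(x)]
    mu_L = np.max(np.minimum(m, 1.0 - x))  # sup[mu_A(x) ∧ mu_min(x)]
    return (mu_R + 1.0 - mu_L) / 2.0

b = defuzzify(lambda x: triangular_mu(x, 0.4, 0.5, 0.6))  # symmetric about 0.5
```
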

In this way, the mixed decision matrix A = (aij)m×n is transformed into an exact-number matrix B = (bij)m×n through Eqs. (2) and (4), which is then normalised:

$$c_{ij} = b_{ij}\Big/\sqrt{\sum_{i=1}^m b_{ij}^2}$$

The positive ideal solution is A* = (c1*, ⋯, cj*, ⋯, cn*), where

$$c_j^* = \begin{cases} \max_i c_{ij}, & j \in J_1 \\ \min_i c_{ij}, & j \in J_2 \end{cases}$$

The negative ideal solution is $\overline A = (\overline c_1, \cdots, \overline c_j, \cdots, \overline c_n)$, defined with max and min interchanged. J1 is the set of benefit (profit) attribute indices and J2 is the set of cost attribute indices.
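A minimal sketch of this normalisation and ideal-solution extraction, on hypothetical data and with hypothetical function names, assuming NumPy:

```python
import numpy as np

def normalise_and_ideals(B, benefit):
    """Vector normalisation c_ij = b_ij / sqrt(sum_i b_ij^2), followed by
    the positive ideal (column max over benefit attributes, min over cost
    attributes) and the negative ideal with max/min interchanged."""
    B = np.asarray(B, dtype=float)
    benefit = np.asarray(benefit, dtype=bool)
    C = B / np.sqrt((B ** 2).sum(axis=0))
    A_pos = np.where(benefit, C.max(axis=0), C.min(axis=0))
    A_neg = np.where(benefit, C.min(axis=0), C.max(axis=0))
    return C, A_pos, A_neg

# hypothetical 3-scheme, 2-attribute matrix; first column benefit, second cost
C, A_pos, A_neg = normalise_and_ideals([[3.0, 2.0], [4.0, 1.0], [5.0, 4.0]],
                                       benefit=[True, False])
```
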

Algorithms for comprehensive weights
Insufficiency of the existing objective weight calculation model

After studying various methods of determining objective weights, some scholars have proposed mathematical optimisation models [4]. These models typically solve for the objective weights as follows. We transform the exact-number decision matrix A = (aij)m×n into a standardised decision matrix B = (bij)m×n:

$$b_{ij} = \frac{a_{ij} - a_j^{\min}}{a_j^{\max} - a_j^{\min}}, \quad j \in J_1; \qquad b_{ij} = \frac{a_j^{\max} - a_{ij}}{a_j^{\max} - a_j^{\min}}, \quad j \in J_2$$

J1 is a benefit indicator set and J2 a cost indicator set. In this way, a model for solving the objective weights is obtained:

$$\min Z_1 = w^T H w, \quad \text{s.t. } e^T w = 1,\; w_j \ge 0 \tag{8}$$

H is an n × n diagonal matrix whose diagonal elements are

$$h_{jj} = \sum_{i=1}^m \left(b_{ij} - b_j^*\right)^2, \qquad b_j^* = \max\{b_{1j}, \cdots, b_{mj}\}$$

Solving model (8) gives

$$w = H^{-1}e\big/\left(e^T H^{-1} e\right) \tag{9}$$
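Because H is diagonal, the closed form reduces to w_j ∝ 1/h_jj. A small sketch on hypothetical standardised data (assuming NumPy):

```python
import numpy as np

def model8_weights(B):
    """Model (8) weights for a standardised matrix B:
    h_jj = sum_i (b_ij - b_j*)^2 with b_j* = max_i b_ij,
    and w = H^{-1} e / (e^T H^{-1} e), i.e. w_j proportional to 1/h_jj."""
    B = np.asarray(B, dtype=float)
    h = ((B - B.max(axis=0)) ** 2).sum(axis=0)
    inv = 1.0 / h   # blows up as h_jj -> 0, the defect discussed in Example 1
    return inv / inv.sum()

w = model8_weights([[0.2, 0.9, 0.5],
                    [0.7, 0.1, 0.6],
                    [1.0, 0.5, 0.4]])
```
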

Some scholars have pointed out that the weight distribution mechanism and the meaning of model (8) are not precise and do not conform to the entropy-based principle of weight distribution. Case analysis shows that small changes in the decision matrix lead to significant changes in the weights, so the weight distribution mechanism of model (8) is unreasonable. An entropy model was therefore proposed to solve for the objective weights [5]:

$$w_j = d_j\Big/\sum_{j=1}^n d_j, \qquad d_j = 1 - E_j, \qquad E_j = -\left(\sum_{i=1}^m p_{ij}\ln p_{ij}\right)\Big/\ln m, \qquad p_{ij} = a_{ij}\Big/\sum_{i=1}^m a_{ij} \tag{10}$$
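A sketch of the entropy model (10), assuming NumPy. Run on the 4×4 matrix from the comparison example later in the text, it reproduces the weights reported there, with the nearly constant fourth attribute receiving essentially zero weight:

```python
import numpy as np

def entropy_weights(A):
    """Entropy model (10): p_ij = a_ij / sum_i a_ij,
    E_j = -(sum_i p_ij ln p_ij) / ln m, d_j = 1 - E_j,
    w_j = d_j / sum_j d_j."""
    A = np.asarray(A, dtype=float)
    m = A.shape[0]
    P = A / A.sum(axis=0)
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)
    d = 1.0 - E
    return d / d.sum()

A = [[30, 30, 38, 29.0],
     [19, 54, 86, 29.0],
     [19, 15, 85, 28.9],
     [68, 70, 60, 29.0]]
w = entropy_weights(A)  # approximately (0.4630, 0.3992, 0.1378, 0)
```
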

The entropy model assigns weights according to the following principle: the more consistent the evaluation values of the schemes under the j-th attribute, the smaller the weight of the j-th attribute. However, the entropy model also has some unreasonable aspects in assigning weights:

The weight distribution is not flexible. The entropy model fixes dj = 1 − Ej; but could dj = 2 − Ej, or some other function of Ej, be set instead?

It easily causes excessive weight differences. In actual decision-making, once an indicator is introduced into the evaluation system, its weight generally should not vanish or be negligible [6]; that is, the maximum weight should not exceed 10 times the minimum weight.

Constructing a suitable mathematical model requires a deep understanding of the specific decision-making problem and rich mathematical experience, which is not easy. To judge the rationality of an objective weight model, we give Judgement Theorem 1.

Judgement Theorem 1

The objective weights obtained by a model should reflect the information in the decision matrix: when the decision matrix changes, the degree of weight change should be consistent with the degree of change of the decision matrix.

Entropy coefficient model

Based on model (8) and the entropy model (10), we transform the exact-number decision matrix A = (aij)m×n into a standardised decision matrix C = (cij)m×n, where:

$$c_{ij} = \frac{a_{ij}}{a_j^{\max}}, \quad j \in J_1; \qquad c_{ij} = \frac{a_j^{\min}}{a_{ij}}, \quad j \in J_2$$

J1 is a benefit indicator set and J2 a cost indicator set, and

$$a_j^{\max} = \max\{a_{1j}, a_{2j}, \cdots, a_{mj}\}, \qquad a_j^{\min} = \min\{a_{1j}, a_{2j}, \cdots, a_{mj}\}, \qquad j = 1, \cdots, n$$

Definition 5

For the normalised matrix C = (cij)m×n, the entropy of the j-th attribute is defined as:

$$h_j = \rho - E_j$$

$$E_j = -\left(\sum_{i=1}^m c_{ij}\ln c_{ij}\right)\Big/\ln m$$

and ρ is a system parameter with ρ ≥ max{E1, ⋯, Ej, ⋯, En}. The entropy coefficient model for solving the objective weights is then:

$$\min Z_2 = w^T K w, \quad \text{s.t. } e^T w = 1,\; w \ge 0 \tag{14}$$

where K is an n × n diagonal matrix whose diagonal elements are k_jj = ρ − Ej > 0, j = 1, ⋯, n, and whose remaining elements are zero. Setting L = w^T K w − λ(e^T w − 1), we have

$$\frac{\partial L}{\partial w} = 2Kw - \lambda e = 0, \qquad \frac{\partial L}{\partial \lambda} = e^T w - 1 = 0,$$

which gives w = K^{-1}e/(e^T K^{-1}e).
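The closed-form solution can be checked numerically. The sketch below (assuming NumPy and benefit indicators only) reproduces the ρ = 1 weights reported later for the matrix in the comparison example:

```python
import numpy as np

def entropy_coefficient_weights(A, rho):
    """Entropy coefficient model (14) for benefit indicators:
    c_ij = a_ij / a_j^max, E_j = -(sum_i c_ij ln c_ij) / ln m,
    k_jj = rho - E_j, and w = K^{-1} e / (e^T K^{-1} e)."""
    A = np.asarray(A, dtype=float)
    m = A.shape[0]
    C = A / A.max(axis=0)
    E = -(C * np.log(C)).sum(axis=0) / np.log(m)
    k = rho - E          # requires rho >= max_j E_j so that k_jj > 0
    inv = 1.0 / k
    return inv / inv.sum()

A = [[30, 30, 38, 29.0],
     [19, 54, 86, 29.0],
     [19, 15, 85, 28.9],
     [68, 70, 60, 29.0]]
w = entropy_coefficient_weights(A, rho=1.0)  # approx. (0.4404, 0.2795, 0.1806, 0.0996)
```
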

Property 1. The weight distribution principle of the entropy coefficient model is the same as that of the entropy model. If the evaluation value of each scheme under the jth attribute tends to be more consistent, then the weight of the jth attribute will be smaller.

Property 2. The entropy coefficient model has a certain flexibility [7]. The decision-maker can set the system parameter ρ according to specific actual needs to adjust the degree of weight difference between attributes: the larger ρ is, the smaller the weight difference between attributes.

Comparison between models (8), (10), and (14)

Here are two examples to illustrate the difference between the entropy coefficient model (14), the model (8), and the entropy model (10):

Example 1

Suppose there is a decision matrix

$$A_{m \times n} = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix}$$

We use model (8) to normalise A into matrix B; then, by formula (9),

$$w_j = \frac{(h_{jj})^{-1}}{\sum_{j=1}^n (h_{jj})^{-1}}$$

If the evaluation values of the schemes under the j-th attribute tend to be the same [8], that is, bij → bj* (i = 1, 2, ⋯, m), then h_jj → 0, so

$$w_j = \lim_{h_{jj} \to 0} \frac{(h_{jj})^{-1}}{\sum_{j=1}^n (h_{jj})^{-1}} = 1$$

Model (8) may therefore make the weight of the j-th attribute too large.

We use the entropy model (10) to calculate. If the evaluation values under the j-th attribute tend to be the same, that is, pij → 1/m (i = 1, 2, ⋯, m), then dj → 0, so

$$w_j = \frac{d_j}{\sum_{j=1}^n d_j} \to 0$$

When assigning weights, the weight differences may therefore become too large.

We use the entropy coefficient model (14) to solve for the objective weights. From its solution,

$$w_j = \frac{(k_{jj})^{-1}}{\sum_{j=1}^n (k_{jj})^{-1}}$$

If the evaluation values under the j-th attribute tend to be the same, that is, cij → 1 (i = 1, 2, ⋯, m), then Ej → 0, so k_jj = ρ − Ej → ρ, and

$$w_j = \lim_{k_{jj} \to \rho} \frac{(k_{jj})^{-1}}{\sum_{j=1}^n (k_{jj})^{-1}} = \frac{\rho^{-1}}{\sum_{i \ne j} (k_{ii})^{-1} + \rho^{-1}}$$

In this way, we can set the system parameter ρ according to the specific decision-making situation, so the entropy coefficient model (14) has a certain degree of flexibility [9].

Example 2

There is a decision matrix A4×4. To simplify, we assume that its attribute indicators are all benefit indicators:

$$A = \begin{bmatrix} & P_1 & P_2 & P_3 & P_4 \\ S_1 & 30 & 30 & 38 & 29.0 \\ S_2 & 19 & 54 & 86 & 29.0 \\ S_3 & 19 & 15 & 85 & 28.9 \\ S_4 & 68 & 70 & 60 & 29.0 \end{bmatrix}$$

Using model (8), we can get: w = (0.1384, 0.2232, 0.2783, 0.3601).

Using the entropy model (10), we can get: w = (0.4630, 0.3992, 0.1378, 0).

Using the entropy coefficient model (14) with system parameter ρ = 0.8, we get w = (0.7875, 0.1296, 0.0576, 0.0253); with ρ = 1, we get w = (0.4404, 0.2795, 0.1806, 0.0996). When the element a34 of matrix A changes from 28.9 to 29.1, we get the matrix A1:

$$A_1 = \begin{bmatrix} & P_1 & P_2 & P_3 & P_4 \\ S_1 & 30 & 30 & 38 & 29.0 \\ S_2 & 19 & 54 & 86 & 29.0 \\ S_3 & 19 & 15 & 85 & 29.1 \\ S_4 & 68 & 70 & 60 & 29.0 \end{bmatrix}$$

Using model (8), we can get w=(0.1821, 0.2937, 0.3662, 0.1579).

Using the entropy model (10), we can get w=(0.4630, 0.3992, 0.1378, 0).

Using the entropy coefficient model (14) with ρ = 0.8, we obtain w = (0.7874, 0.1296, 0.0576, 0.0254); with ρ = 1, we obtain w = (0.4402, 0.2793, 0.1805, 0.1000).

When a34 undergoes a small change, the weight change obtained using model (8) is too large [10]: the weight of attribute P4 changes from 0.3601 to 0.1579. The weights obtained using the entropy model (10) do not change at all, so they cannot reflect even a slight change in the decision matrix. Using the entropy coefficient model (14), the weight change is small, consistent with the slight change in the matrix. From these two examples, the entropy coefficient model can adapt to different decision-making situations by adjusting the system parameter ρ, and the weights it produces are more reasonable than those of model (8) and the entropy model (10).

Comprehensive weight calculation method

Suppose that the decision-maker directly gives the subjective weights of the attributes as W^(0) = (w1^(0), ⋯, wj^(0), ⋯, wn^(0)), with 0 ≤ wj^(0) ≤ 1 and Σj wj^(0) = 1. The comprehensive weight of the attributes is then

$$W^* = (w_1^*, \cdots, w_j^*, \cdots, w_n^*), \qquad w_j^* = \beta w_j^{(0)} + (1 - \beta) w_j, \qquad \sum_{j=1}^n w_j^* = \beta \sum_{j=1}^n w_j^{(0)} + (1 - \beta) \sum_{j=1}^n w_j = 1$$
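Since both weight vectors sum to one, the convex combination preserves normalisation for any β in [0, 1]. A sketch using the case-study values from later in the text (assuming NumPy):

```python
import numpy as np

def combined_weights(w_subj, w_obj, beta):
    """Comprehensive weight: w_j* = beta * w_j^(0) + (1 - beta) * w_j."""
    return beta * np.asarray(w_subj, float) + (1.0 - beta) * np.asarray(w_obj, float)

# subjective and objective weights from the case study, beta = 0.4
w0 = [0.2, 0.2, 0.1, 0.1, 0.2, 0.2]
w  = [0.1490, 0.1131, 0.1559, 0.1203, 0.2291, 0.2326]
w_star = combined_weights(w0, w, beta=0.4)
# -> approximately (0.1694, 0.1479, 0.1335, 0.1122, 0.2175, 0.2196)
```
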

β (0 ≤ β ≤ 1) is the weighted trade-off coefficient. If the ranking of the schemes is highly sensitive to weight changes, the reliability of the evaluation results is difficult to guarantee, and it is also difficult for decision-makers to make choices [11]. To judge the rationality of the comprehensive weights, Judgement Theorem 2 is proposed.

Judgement Theorem 2

If the scheme ranking is less sensitive to changes in the comprehensive weights, then the comprehensive weights are relatively reasonable.

Scheme ordering steps

The distance from the i-th plan to the positive ideal plan is:

$$d_i^* = \sqrt{\sum_{j=1}^n (w_j^*)^2 (c_{ij} - c_j^*)^2}$$

The distance from the i-th plan to the negative ideal plan is:

$$d_i^- = \sqrt{\sum_{j=1}^n (w_j^*)^2 (c_{ij} - c_j^-)^2}$$

The relative closeness of the i-th scheme to the positive ideal scheme is:

$$D_i = \frac{d_i^-}{d_i^- + d_i^*}, \qquad i = 1, \cdots, m$$

The larger Di, the better the i-th plan.

Rank the schemes from best to worst in descending order of Di.

Use the weighted trade-off coefficient β to perform sensitivity analysis on the ranking of the schemes.
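The ordering steps above can be sketched as follows, on hypothetical toy data and assuming NumPy:

```python
import numpy as np

def rank_schemes(C, w, c_pos, c_neg):
    """Weighted Euclidean distances to the ideal solutions and the
    relative closeness D_i = d_i^- / (d_i^- + d_i^*); a larger D_i
    means a better scheme."""
    C, w = np.asarray(C, float), np.asarray(w, float)
    d_pos = np.sqrt(((w ** 2) * (C - c_pos) ** 2).sum(axis=1))
    d_neg = np.sqrt(((w ** 2) * (C - c_neg) ** 2).sum(axis=1))
    D = d_neg / (d_neg + d_pos)
    return D, np.argsort(-D)          # scheme indices, best first

# toy 2-attribute example: scheme 0 coincides with the positive ideal
D, order = rank_schemes(C=[[1.0, 1.0], [0.0, 0.0], [0.5, 0.5]],
                        w=[0.5, 0.5],
                        c_pos=[1.0, 1.0], c_neg=[0.0, 0.0])
```
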

Case study

A company's production line needs to choose a robot from four submitted models. Four suppliers provide four solutions: s1, s2, s3, s4; each has six attributes [12]. The specific data are shown in Table 1. u5 and u6 are qualitative indicators. According to the relationship between fuzzy numbers and linguistic variables, we use triangular and trapezoidal fuzzy numbers to represent them:

The six attribute values of the four robots

|    | u1  | u2  | u3      | u4        | u5                     | u6                             |
|----|-----|-----|---------|-----------|------------------------|--------------------------------|
| s1 | 2   | 2.5 | [55,56] | [94,114]  | Normal (0.4, 0.5, 0.6) | Very high (0.85, 0.9, 0.95, 1) |
| s2 | 2.5 | 2.7 | [30,40] | [84,104]  | Low (0.2, 0.3, 0.4)    | Normal (0.3, 0.4, 0.6, 0.7)    |
| s3 | 1.8 | 2.4 | [50,60] | [100,120] | High (0.6, 0.7, 0.8)   | High (0.5, 0.6, 0.8, 0.9)      |
| s4 | 2.2 | 2.6 | [35,45] | [90,110]  | Normal (0.4, 0.5, 0.6) | Normal (0.3, 0.4, 0.6, 0.7)    |

Subjective weight W^(0) = (0.2, 0.2, 0.1, 0.1, 0.2, 0.2); the weight trade-off coefficient β is 0.4. We use formulas (1)–(3) to standardise the evaluation matrix A and obtain the standardised matrix C:

$$C^T = \begin{bmatrix} 0.4671 & 0.5839 & 0.4204 & 0.5139 \\ 0.4897 & 0.5289 & 0.4701 & 0.5093 \\ 0.5873 & 0.3704 & 0.5820 & 0.4233 \\ 0.5090 & 0.4600 & 0.5383 & 0.4894 \\ 0.4845 & 0.3101 & 0.6590 & 0.4845 \\ 0.6592 & 0.3833 & 0.5212 & 0.3833 \end{bmatrix}$$

The positive ideal solution is A* = (0.4204, 0.5289, 0.5873, 0.4600, 0.6590, 0.6592).

Using model (14) with system parameter ρ = 1, we get the objective weights W = (0.1490, 0.1131, 0.1559, 0.1203, 0.2291, 0.2326). The comprehensive weight is then W* = β × W^(0) + (1 − β) × W = (0.1694, 0.1479, 0.1335, 0.1122, 0.2175, 0.2196).

The distances from each plan to the positive ideal plan are:

$$d_1^* = 0.0396, \quad d_2^* = 0.1050, \quad d_3^* = 0.0327, \quad d_4^* = 0.0765$$

The distances from each plan to the negative ideal plan are:

$$d_1^- = 0.0797, \quad d_2^- = 0.0124, \quad d_3^- = 0.0908, \quad d_4^- = 0.0411$$

The relative closeness of each scheme to the positive ideal scheme is:

$$D_1 = 0.6683, \quad D_2 = 0.1053, \quad D_3 = 0.7350, \quad D_4 = 0.3496$$

So the ranking result is s3 ≻ s1 ≻ s4 ≻ s2.
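A quick arithmetic check of the closeness values and the ranking, computed directly from the distances reported above:

```python
# distances reported above (d_i^* to the positive ideal, d_i^- to the negative)
d_star = [0.0396, 0.1050, 0.0327, 0.0765]
d_neg  = [0.0797, 0.0124, 0.0908, 0.0411]

# D_i = d_i^- / (d_i^- + d_i^*)
D = [dn / (dn + ds) for ds, dn in zip(d_star, d_neg)]
order = sorted(range(4), key=lambda i: -D[i])   # 0-based: s3 > s1 > s4 > s2
```
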

Sensitivity analysis observes the influence of the trade-off coefficient β on the ranking of the plans [13]. The results are shown in Table 2 and Figure 1.

β influence on the closeness of each plan

| β   | s1     | s2     | s3     | s4     |
|-----|--------|--------|--------|--------|
| 0   | 0.6726 | 0.0944 | 0.7366 | 0.3451 |
| 0.2 | 0.6705 | 0.0994 | 0.7358 | 0.3472 |
| 0.4 | 0.6683 | 0.1053 | 0.7350 | 0.3496 |
| 0.6 | 0.6660 | 0.1118 | 0.7340 | 0.3522 |
| 0.8 | 0.6637 | 0.1191 | 0.7329 | 0.3550 |
| 1   | 0.6613 | 0.1296 | 0.7316 | 0.3580 |

When the system parameter is ρ = 0.6, the sensitivity analysis is shown in Figure 2.

Using model (8), we get the objective weights W = (0.0777, 0.5411, 0.0392, 0.3091, 0.0157, 0.0171). The sensitivity analysis is shown in Figure 3.

From Figure 3, the scheme ranking is more sensitive to weight changes, making it difficult for decision-makers to choose. The main reason is that the objective weights are not reasonable: the evaluation values of the schemes under the second attribute are the most consistent, yet model (8) gives the second attribute too large a weight, exceeding 50% and 35 times the minimum weight. In Figures 1 and 2, the pros and cons of the schemes are more clearly separated, so it is easier for decision-makers to judge.

Conclusion

This article studied the mixed multi-attribute decision-making problem with quantitative and qualitative indicators, converting interval numbers and fuzzy numbers into exact numbers to obtain a standardised judgement matrix. This method resolves some of the issues of the mixed decision-making problem and simplifies the calculation when the qualitative indicators are non-linear fuzzy numbers. We established an entropy coefficient model for solving the objective weights of the attributes; the model has a certain degree of flexibility, and the weights it produces are relatively reasonable.


References

Zolghadr-Asli, B., Bozorg-Haddad, O., Enayati, M., & Goharian, E. Developing a robust multi-attribute decision-making framework to evaluate performance of water system design and planning under climate change. Water Resources Management, 2021; 35(1): 279–298. doi: 10.1007/s11269-020-02725-y

Zhang, H., Jiang, W., & Deng, X. Data-driven multi-attribute decision-making by combining probability distributions based on compatibility and entropy. Applied Intelligence, 2020; 50(11): 4081–4093. doi: 10.1007/s10489-020-01738-9

Ma, Z., Zhu, J., & Zhang, S. Probabilistic-based expressions in behavioral multi-attribute decision making considering pre-evaluation. Fuzzy Optimization and Decision Making, 2021; 20(1): 145–173. doi: 10.1007/s10700-020-09335-8

bin Liu, H., Liu, Y., Xu, L., & Abdullah, S. Multi-attribute group decision-making for online education live platform selection based on linguistic intuitionistic cubic fuzzy aggregation operators. Computational and Applied Mathematics, 2021; 40(1): 1–34.

Kexin, J., Quan, Z., & Manting, Y. Multi-attribute group decision making method under 2-dimension uncertain linguistic variables. Journal of Systems Engineering and Electronics, 2020; 31(6): 1254–1261. doi: 10.23919/JSEE.2020.000096

Wei, M., Sun, B., Wang, H., & Xu, Z. A multi-attribute decision-making model for the evaluation of uncertainties in traffic pollution control planning. Environmental Science and Pollution Research, 2019; 26(18): 17911–17917. doi: 10.1007/s11356-017-0631-9

Tao, Z., Liu, X., Zhou, L., & Chen, H. Rank aggregation based multi-attribute decision making with hybrid Z-information and its application. Journal of Intelligent & Fuzzy Systems, 2019; 37(3): 4231–4239. doi: 10.3233/JIFS-190344

Liu, P., & Zhang, P. A normal wiggly hesitant fuzzy MABAC method based on CCSD and prospect theory for multiple attribute decision making. International Journal of Intelligent Systems, 2021; 36(1): 447–477. doi: 10.1002/int.22306

Liu, P., Wang, P., & Liu, J. Normal neutrosophic frank aggregation operators and their application in multi-attribute group decision making. International Journal of Machine Learning and Cybernetics, 2019; 10(5): 833–852. doi: 10.1007/s13042-017-0763-8

Okfalisa, O., Rusnedy, H., Iswavigra, D. U., Pranggono, B., Haerani, E., & Saktioto, T. Decision support system for smartphone recommendation: The comparison of fuzzy AHP and fuzzy ANP in multi-attribute decision making. SINERGI, 2021; 25(1): 101–110. doi: 10.22441/sinergi.2021.1.013

Iglesias Martínez, M., Antonino-Daviu, J., de Córdoba, P., & Conejero, J. Higher-Order Spectral Analysis of Stray Flux Signals for Faults Detection in Induction Motors. Applied Mathematics and Nonlinear Sciences, 2020; 5(2): 1–14. doi: 10.2478/amns.2020.1.00032

Aghili, A. Complete Solution For The Time Fractional Diffusion Problem With Mixed Boundary Conditions by Operational Method. Applied Mathematics and Nonlinear Sciences, 2020; 6(1): 9–20. doi: 10.2478/amns.2020.2.00002

Lalotra, S., & Singh, S. Knowledge measure of hesitant fuzzy set and its application in multi-attribute decision-making. Computational and Applied Mathematics, 2020; 39(2): 1–31. doi: 10.1007/s40314-020-1095-y
