This work is licensed under the Creative Commons Attribution 4.0 International License.
Introduction
In finite common set theory, sets are static, so research on dynamic systems often ran into problems because change always exists, and it became necessary to construct a new kind of set model with dynamic characteristics. Hence, Refs. [1,2,3,4] proposed two types of dynamic set models, packet sets and inverse packet sets (IPSs), by replacing “static” with “dynamic” to improve the finite common set. These dynamic set models provide a better theoretical foundation for dealing with dynamic applied systems. Later, the mathematical characteristics of the new sets, such as quantitative, algebraic, geometrical, genetic, and random characteristics, together with theoretical applications, were discussed by more and more scholars [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22]. In particular, Refs. [5,6,7,8,9,10,11,12,13,14,15,16,17] developed the latter model by taking information instead of sets to obtain the inverse packet information (IPI) model and provided applications to information fusion–separation, hidden information discovery, intelligent data mining, and big decomposition–fusion acquisition. However, current research on random inverse packet information (RIPI) is rare. Hence, we consider the probabilities of information element migration in IPI and present some concepts of RIPI and its structures. Furthermore, the random features, dynamic characteristics, and identification relations of RIPI are discussed and applied to the intelligent acquisition–separation of investment information.
Convention: (x) = {x1, x2, ···, xs} ⊂ U is nonempty finite ordinary information and α ⊂ V is its nonempty attribute set; F, F̅ are families of information transition functions, in which f ∈ F, f̅ ∈ F̅ are transition functions whose detailed characteristics and occurrence probabilities can be found in Hao et al. [23]. The occurrence probabilities of the two events
\{{x_i}|{w_i}\bar \in (x),f({w_i}) = {x_i} \in (x)\}
,
\{ x|x \in (x),\bar f(x) = w\bar \in (x)\}
are written simply as pF(f) and pF̅(f̅), in order.
RIPI and its construction
The theory model of IPSs [3, 4], with the inner IPS X̅F and the exterior IPS X̅F̅ combined, has the following dynamic characteristics. Given a finite common element set X = {x1, x2, ..., xr} with attribute set α = {α1,α2,...,αr′}: I. If some attributes are transferred by f into α to yield αF such that α ⊆ αF, then some extra elements are accordingly added to X, generating a new element set called the inner IPS X̅F, X ⊆ X̅F. II. If some attributes are deleted from α by f̅ to yield αF̅ such that αF̅ ⊆ α, then some elements of X are accordingly deleted, generating a new element set called the exterior IPS X̅F̅, X̅F̅ ⊆ X. III. If, at the same time, some extra attributes move into α and some attributes of α migrate out, that is, α becomes αF and meanwhile α becomes αF̅ with αF̅ ⊆ α ⊆ αF, then X becomes an IPS (X̅F, X̅F̅), which fulfills X̅F̅ ⊆ X ⊆ X̅F and has dynamic characteristics. All of the IPSs generated by the set X constitute a set family called the IPS family
\{ (\bar X_i^F,\bar X_j^{\bar F})|i \in I,j \in J\}
[3]. In particular, if the above process occurs continuously, X dynamically generates a linked IPS
(\bar X_1^F,\bar X_1^{\bar F}),(\bar X_2^F,\bar X_2^{\bar F}),...,(\bar X_s^F,\bar X_s^{\bar F})
, which has the relation
\bar X_i^F \subseteq \bar X_{i + 1}^F
and
\bar X_{i + 1}^{\bar F} \subseteq \bar X_i^{\bar F}
, i = 1, 2, ..., s. Let us treat the sets X̅F̅, X, X̅F as information denoted, in order, by (x̅)F̅, (x), and (x̅)F. Then we obtain the IPI ((x̅)F, (x̅)F̅) with all the characteristics of IPS [18,19,20,21,22].
For the inner IPI (x̅)F, the dynamic process consists of adding information elements under the condition that some attributes are migrated into α: ∃wi ∉ (x), f(wi) = xi ∈ (x), where (x̅)F denotes (x) ∪ {xi | wi ∉ (x), f(wi) = xi ∈ (x)}. For the exterior IPI (x̅)F̅, the dynamic process consists of migrating some elements out of (x) under the condition that some attributes in α are removed: ∃xi ∈ (x), f̅(xi) = wi ∉ (x), where (x̅)F̅ denotes (x) − {xi ∈ (x) | f̅(xi) = wi ∉ (x)}. Obviously, all of the inner IPI and exterior IPI generated by (x) form, respectively, an inner IPI family and an exterior IPI family expressed as
\{ (\bar x)_i^F|i \in {\rm{I}}\}
,
\{ (\bar x)_j^{\bar F}|j \in {\rm{J}}\}
. Certainly, all of the IPI generated by (x) also form an IPI family expressed as
\Phi (x) = \{ ((\bar x)_i^F,(\bar x)_j^{\bar F})|i \in {\rm{I}},j \in {\rm{J}}\}
Considering

\{ {x_i}|{w_i}\bar \in (x),f({w_i}) = {x_i} \in (x)\}

as an event, the inner IPI

(\bar x)_i^F

is obtained in the case where the event occurrence probability equals 1; namely, (x̅)F is obtained only if the event is bound to happen. As we know, however, whether wi ∉ (x) is transferred into (x) by f is stochastic. The same goes for the exterior IPI and the IPI [23].
Definition 1
(x̅)Fσ is called the random inner IPI generated by (x) depending on the information element migration probability σ, briefly written as random inner IPI, such that

{(\bar x)^{F\sigma }} = {(\bar x)^F} \cup (\bar x)_\sigma ^ + \quad (1)

where (x̅)σ+ is called the added random information, with

(\bar x)_\sigma ^ + = \{ x|w\bar \in (x),p(\{ f(w) = x \in (x)\} ) \in [\sigma ,1]\} \quad (2)

on condition that αF fulfills

{\alpha ^F} = \alpha \cup \{ \alpha '|\delta \in V,\delta \bar \in \alpha ,f(\delta ) = \alpha ' \in {\alpha ^F}\} \quad (3)

where σ ∈ (0, 1); αF is also taken as the attribute set of (x̅)Fσ.
Definition 2
(x̅)F̅σ is called the random exterior IPI generated by (x) depending on the information element migration probability σ, briefly written as random exterior IPI, such that

{(\bar x)^{\bar F\sigma }} = {(\bar x)^{\bar F}} - (\bar x)_\sigma ^ - \quad (4)

where (x̅)σ− is called the deleted random information, with

(\bar x)_\sigma ^ - = \{ x|x \in (x),p(\{ \bar f(x) = w\bar \in (x)\} ) \in [\sigma ,1]\} \quad (5)

on condition that αF̅ fulfills

{\alpha ^{\bar F}} = \alpha - \{ {\alpha _i}|{\alpha _i} \in \alpha ,\bar f({\alpha _i}) = {\delta _i}\bar \in \alpha \} \quad (6)

where σ ∈ (0, 1), and the nonempty attribute set αF̅ of (x̅)F̅ is also that of (x̅)F̅σ ≠ ∅.
Definition 3
The information pair formed by the random inner IPI and the random exterior IPI generated by (x) is called the random IPI generated by (x) depending on the information element migration probability σ, briefly written as RIPI:

({(\bar x)^{F\sigma }},{(\bar x)^{\bar F\sigma }}) \quad (7)

where (αF, αF̅) is also the attribute set of ((x̅)Fσ, (x̅)F̅σ).
Considering Definitions 1–3 and the convention above, Formulas (1) and (4) can also be written in the following equivalent forms:

{(\bar x)^{F\sigma }} = (x) \cup (\bar x)_\sigma ^ + = (x) \cup \{ x|w\bar \in (x),{p_F}(f) \in [\sigma ,1]\} \quad (8)

{(\bar x)^{\bar F\sigma }} = (x) - (\bar x)_\sigma ^ - = (x) - \{ x|x \in (x),{p_{\bar F}}(\bar f) \in [\sigma ,1]\} \quad (9)

Formulas (8) and (9) show that an RIPI is an information pair generated not only from the corresponding IPI but also from the ordinary information (x), as shown in Fig. 1.
Fig. 1
The relationship among (x), ((x̅)F, (x̅)F̅), and ((x̅)Fσ, (x̅)F̅σ)
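As a minimal sketch of the constructions above, the random inner IPI keeps every outside element whose migration probability reaches the threshold σ, and the random exterior IPI drops every element of (x) whose deletion probability reaches σ. The element names, probabilities, and σ below are hypothetical, not the paper's case data.

```python
# Minimal sketch of constructing a random inner and exterior IPI from (x).
# Element names, migration probabilities, and sigma are hypothetical.

def random_inner_ipi(x, candidates, sigma):
    """Add every outside element whose migration probability p_F(f)
    lies in [sigma, 1]."""
    return x | {e for e, p in candidates.items() if p >= sigma}

def random_exterior_ipi(x, deletions, sigma):
    """Remove every element of (x) whose deletion probability
    p_Fbar(fbar) lies in [sigma, 1]."""
    return x - {e for e, p in deletions.items() if p >= sigma}

x = {"x1", "x2", "x3", "x4"}
candidates = {"x5": 1.0, "x6": 0.8, "x7": 0.75}  # w not in (x), p_F(f)
deletions = {"x4": 0.9, "x1": 0.3}               # x in (x), p_Fbar(fbar)

inner = random_inner_ipi(x, candidates, 0.8)
exterior = random_exterior_ipi(x, deletions, 0.8)
print(sorted(inner))     # ['x1', 'x2', 'x3', 'x4', 'x5', 'x6']
print(sorted(exterior))  # ['x1', 'x2', 'x3']
```

Note that exterior ⊆ (x) ⊆ inner holds, which is exactly the nesting pictured in Fig. 1.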
All of the RIPI generated by information (x) constitute an RIPI family as
R\Phi (x) = \{ ((\bar x)_i^{F\sigma },(\bar x)_j^{\bar F\sigma })|\sigma \in (0,1),i \in {\rm{I}},j \in {\rm{J}}\} . \quad (10)
According to Definitions 1–3, Propositions 1–4 are simply derived as follows.
Proposition 1
Let pF(f) ≡ 0 for ∀f ∈ F; then (x̅)F and (x̅)Fσ cannot be distinguished (are not identified), expressed as UNI((x̅)F, (x̅)Fσ).
Proposition 2
Let pF̅(f̅) ≡ 0 for ∀f̅ ∈ F̅, then UNI((x̅)F̅, (x̅)F̅σ).
Proposition 3
Let pF̅(f̅) = pF(f) ≡ 0 for ∀f̅ ∈ F̅ and ∀f ∈ F, then UNI(((x̅)F, (x̅)F̅), ((x̅)Fσ, (x̅)F̅σ)).
Proposition 4
For ∀σ ∈ (0, 1), there is UNI(Φ(x), RΦ(x)).
Propositions 1–4 state that the RIPI ((x̅)Fσ, (x̅)F̅σ) is an extension of ((x̅)F, (x̅)F̅), and ((x̅)F, (x̅)F̅) is a particular case of ((x̅)Fσ, (x̅)F̅σ). Under certain conditions, an RIPI can degenerate to the corresponding IPI, and further to the information (x).
Theorem 1
(Relation theorem between RIPI and IPI) Assume ((x̅)Fσ, (x̅)F̅σ) ∈ RΦ(x) and ((x̅)F, (x̅)F̅) ∈ Φ(x). Then for ∀σ ∈ (0, 1) we have

{(\bar x)^{\bar F\sigma }} \subseteq {(\bar x)^{\bar F}} \subseteq (x) \subseteq {(\bar x)^F} \subseteq {(\bar x)^{F\sigma }} \quad (11)

where Formula (11) represents the relation shown in Fig. 1.
Proof
The assumption and Formulas (1) and (4) guarantee that (x̅)F ⊆ (x̅)Fσ and (x̅)F̅σ ⊆ (x̅)F̅ are fulfilled. According to the definition of IPI derived from the common information (x) in Refs. [3, 4, 18], we obtain (x̅)F̅ ⊆ (x) ⊆ (x̅)F. Hence Formula (11) follows by the hereditary property of sets.
Theorem 2
(Generation theorem of RIPI) Assume that (αF, αF̅) is the attribute set pair of an RIPI ((x̅)Fσ, (x̅)F̅σ) different from (x). Then there exists a nonempty pair (Δα, ∇α) ≠ ∅ such that αF − (α ∪ Δα) = ∅ and αF̅ − (α − ∇α) = ∅, where (Δα, ∇α) ≠ ∅ means Δα ≠ ∅ and ∇α ≠ ∅.
Proof
Since ((x̅)Fσ, (x̅)F̅σ) is different from (x), the case (x̅)Fσ = (x̅)F̅σ = (x) is excluded, so at least one of (x̅)Fσ ≠ (x) and (x̅)F̅σ ≠ (x) holds. Assume (x̅)Fσ ≠ (x). The generating process of the random inner IPI (x̅)Fσ depending on its attribute set αF shows that αF meets α ⊂ αF. Setting Δα = αF − α, we obtain Δα ≠ ∅ and αF − (α ∪ Δα) = ∅ according to Definition 1. In the same way, it is proved that there exists ∇α ≠ ∅ such that αF̅ − (α − ∇α) = ∅.
RIPI characteristics
Supplementing additional attributes into α one after another, information elements are migrated into the information (x) with certain probabilities in succession, forming a chain of random inner IPI that shows the following dynamic process:
(\bar x)_1^{F\sigma } \subseteq (\bar x)_2^{F\sigma } \subseteq ... \subseteq (\bar x)_s^{F\sigma } \quad (12)
Deleting attributes from α continuously, information elements in (x) are migrated out successively with certain probabilities, forming a chain of random exterior IPI that shows the following dynamic process:
(\bar x)_s^{\bar F\sigma } \subseteq (\bar x)_{s - 1}^{\bar F\sigma } \subseteq ... \subseteq (\bar x)_1^{\bar F\sigma } \quad (13)
If the above two change processes take place at the same time, we obtain a chain of RIPI implying the dynamic process

((\bar x)_1^{F\sigma },(\bar x)_1^{\bar F\sigma }),((\bar x)_2^{F\sigma },(\bar x)_2^{\bar F\sigma }),...,((\bar x)_s^{F\sigma },(\bar x)_s^{\bar F\sigma }) \quad (14)
According to Formulas (12)–(14), we get the dynamic characteristics depending on the attribute sets indicated by Theorems 3–5.
Theorem 3
(Depending attribute theorem of RIPI) Let

(\bar x)_i^{F\sigma },\;(\bar x)_j^{F\sigma }

be random inner IPI and

\alpha _i^F,\;\alpha _j^F

their attribute sets, in order. Then

(\bar x)_i^{F\sigma } \subseteq (\bar x)_j^{F\sigma }\;{\rm{iff}}\;\alpha _i^F \subseteq \alpha _j^F.
Inference 1 Let card(V − α) = t. Then information (x) can generate t! dynamic chains of random inner IPI.
Inference 2 Let card(α) = m. Then information (x) can generate m! dynamic chains of random exterior IPI.
Inference 3 Let card(α) = m and card(V − α) = t. Then information (x) can generate t! × m! dynamic chains of RIPI.
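The counting in the inferences above can be checked directly: each order in which the t outside attributes are supplemented yields one dynamic chain, giving t! chains in all. The attribute names below are hypothetical.

```python
# Each permutation of the t attributes in V - alpha corresponds to one
# order of supplementing them, hence one dynamic chain of random inner IPI.
from itertools import permutations
from math import factorial

added_attrs = ["a7", "a8", "a9"]          # V - alpha, so t = 3
chains = list(permutations(added_attrs))  # one chain per supplement order
print(len(chains))                        # 3! = 6
```

The exterior case is symmetric with card(α) = m, and combining both orderings gives the t! × m! chains of Inference 3.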
According to the dynamic characteristics of RIPI, the measurement of dynamic change degree is proposed in Definitions 4–6.
Definition 4
Let (x̅)Fσ be a random inner IPI derived from (x). Then the real number γ(x̅)Fσ is called the F-measure degree of (x̅)Fσ relative to (x):

\gamma {(\bar x)^{F\sigma }} = \parallel {x^{(F\sigma )}} - {x^{(0)}}\parallel /\parallel {x^{(0)}}\parallel \quad (15)

where (x) = {x1, x2, ..., xs}, (x̅)Fσ = {x1, x2, ..., xs, xs+1, ..., xs+t}; the sequences of information values are expressed as

\begin{array}{l}
{x_i} = ({x_{i1}},{x_{i2}},...,{x_{im}}),\;i = 1,2,...,s + t,\;{x_{ik}} \in [0,1],\\
{x^{(0)}} = (\sum\limits_{i = 1}^s {{x_{i1}}} ,\sum\limits_{i = 1}^s {{x_{i2}}} ,...,\sum\limits_{i = 1}^s {{x_{im}}} ),\\
{x^{(F\sigma )}} = (\sum\limits_{i = 1}^{s + t} {{x_{i1}}} ,\sum\limits_{i = 1}^{s + t} {{x_{i2}}} ,...,\sum\limits_{i = 1}^{s + t} {{x_{im}}} ),\\
\parallel {x^{(F\sigma )}} - {x^{(0)}}\parallel = {[\sum\limits_{k = 1}^m {{{(\sum\limits_{i = 1}^{s + t} {{x_{ik}}} - \sum\limits_{i = 1}^s {{x_{ik}}} )}^\rho }} ]^{1/\rho }},\\
\parallel {x^{(0)}}\parallel = {[\sum\limits_{k = 1}^m {{{(\sum\limits_{i = 1}^s {{x_{ik}}} )}^\rho }} ]^{1/\rho }},\;k = 1,2,...,m,\;\rho \in {Z^ + }.
\end{array}
Definition 5
Let (x̅)F̅σ be a random exterior IPI derived from (x). Then γ(x̅)F̅σ is called the F̅-measure degree of (x̅)F̅σ relative to (x):

\gamma {(\bar x)^{\bar F\sigma }} = \parallel {x^{(0)}} - {x^{(\bar F\sigma )}}\parallel /\parallel {x^{(0)}}\parallel \quad (16)

where (x) = {x1, x2, ..., xs}, (x̅)F̅σ = {x1, x2, ..., xs−p}, 0 ≤ p < s, p ∈ Z+; x(0) and ‖ x(0) ‖ are as in Definition 4, and the sequences of information values are written as

\begin{array}{l}
{x_i} = ({x_{i1}},{x_{i2}},...,{x_{im}}),\;{x_{ik}} \in [0,1],\;i = 1,2,...,s,\\
{x^{(\bar F\sigma )}} = (\sum\limits_{i = 1}^{s - p} {{x_{i1}}} ,\sum\limits_{i = 1}^{s - p} {{x_{i2}}} ,...,\sum\limits_{i = 1}^{s - p} {{x_{im}}} ),\\
\parallel {x^{(0)}} - {x^{(\bar F\sigma )}}\parallel = {[\sum\limits_{k = 1}^m {{{(\sum\limits_{i = 1}^s {{x_{ik}}} - \sum\limits_{i = 1}^{s - p} {{x_{ik}}} )}^\rho }} ]^{1/\rho }}.
\end{array}
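The two measure degrees of Definitions 4 and 5 can be sketched on toy data as follows; the value rows and the norm exponent ρ here are illustrative assumptions, not the paper's case data.

```python
# F- and Fbar-measure degrees (Definitions 4 and 5) on toy data.
# Rows are the value vectors x_i; columns are the m attribute components.

def col_sums(rows):
    return [sum(col) for col in zip(*rows)]

def rho_norm(vec, rho):
    return sum(v ** rho for v in vec) ** (1.0 / rho)

def f_measure(x_rows, x_f_rows, rho=1):
    """gamma = ||x^(F sigma) - x^(0)|| / ||x^(0)|| (inner case)."""
    x0, xf = col_sums(x_rows), col_sums(x_f_rows)
    return rho_norm([a - b for a, b in zip(xf, x0)], rho) / rho_norm(x0, rho)

def fbar_measure(x_rows, x_fbar_rows, rho=1):
    """gamma = ||x^(0) - x^(Fbar sigma)|| / ||x^(0)|| (exterior case)."""
    x0, xfb = col_sums(x_rows), col_sums(x_fbar_rows)
    return rho_norm([a - b for a, b in zip(x0, xfb)], rho) / rho_norm(x0, rho)

x_rows = [[0.2, 0.4], [0.3, 0.1]]   # (x): s = 2 elements, m = 2
inner_rows = x_rows + [[0.5, 0.5]]  # one element added, s + t = 3
outer_rows = x_rows[:1]             # one element deleted, s - p = 1

print(f_measure(x_rows, inner_rows))     # 1.0
print(fbar_measure(x_rows, outer_rows))  # approximately 0.4
```

Both functions only differ in the direction of the difference, matching Formulas (15) and (16); by construction each difference vector is componentwise nonnegative, so the ρ-th powers are well defined.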
Definition 6
Let ((x̅)Fσ, (x̅)F̅σ) be an RIPI generated by (x). Then the real number pair composed of Formulas (15) and (16) is called the (F, F̅)-measure degree of ((x̅)Fσ, (x̅)F̅σ) relative to (x):

(\gamma {(\bar x)^{F\sigma }},\gamma {(\bar x)^{\bar F\sigma }}) \quad (17)
Because ((x̅)F, (x̅)F̅) is a special case of ((x̅)Fσ, (x̅)F̅σ), the pair (γ(x̅)F, γ(x̅)F̅) is used to express the (F, F̅)-measure degree of ((x̅)F, (x̅)F̅) when UNI((x̅)F, (x̅)Fσ) and UNI((x̅)F̅, (x̅)F̅σ) hold in Definitions 4 and 5.
Formula (15) measures the change between (x̅)Fσ and (x) caused by the attribute supplementing set Δα; the same goes for Formulas (16) and (17). Thus Propositions 5–7 can be obtained.
Proposition 5
Given F − measure degree γ(x̅)Fσ, γ(x̅)Fσ ≠ 0 iff IDE((x̅)Fσ, (x)) or IDE(αF,α).
Proposition 6
Given F̅ − measure degree γ(x̅)F̅σ, γ(x̅)F̅σ ≠ 0 iff IDE((x̅)F̅σ, (x)) or IDE(αF̅, α).
Applications of RIPI model in intelligent acquisition–separation of investment information
For convenience, call x(0) the information value and x(Fσ) the inner IPI value in Definition 4, and call x(F̅σ) the exterior IPI value in Definition 5, based on which Definition 7 is given.
Definition 7
Call

if\;\; \alpha \Rightarrow \alpha^F, \;\; then\;\; x^{(0)} \Rightarrow x^{(F\sigma)} \quad (18)

an inner RIPI reasoning, in which α ⇒ αF and x(0) ⇒ x(Fσ) are equivalent to α ⊆ αF and x(0) ⊆ x(Fσ), respectively; call

if\;\; \alpha^{\bar{F}} \Rightarrow \alpha, \;\; then\;\; x^{({\bar{F}}\sigma)} \Rightarrow x^{(0)} \quad (19)

an exterior RIPI reasoning, in which αF̅ ⇒ α and x(F̅σ) ⇒ x(0) mean that αF̅ and x(F̅σ) are subsets of α and x(0), respectively.
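As a toy illustration of the inner RIPI reasoning, with ⇒ read as inclusion: when the attribute set grows, the information value vector grows componentwise (since the value components are sums over the enlarged element set). The attribute names are hypothetical; the two value vectors are those of the case study below.

```python
# Inner RIPI reasoning sketch: premise alpha => alpha^F (attribute set
# grows) leads to conclusion x^(0) => x^(F sigma) (value vector grows
# componentwise). Attribute names are hypothetical.

alpha = {"a1", "a2", "a3", "a4", "a5", "a6"}
alpha_f = alpha | {"a7"}   # premise: alpha is a subset of alpha^F

x0 = (1.34, 1.22, 2.05, 1.64, 2.11, 1.47)   # x^(0)
x_f = (2.66, 2.51, 3.59, 3.05, 3.42, 2.86)  # x^(F sigma)

if alpha <= alpha_f:  # premise of the reasoning holds
    conclusion = all(a <= b for a, b in zip(x0, x_f))
    print(conclusion)  # True
```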
For simplicity, this section only presents the application of random inner IPI to the intelligent acquisition–separation of investment information. Suppose that W = {W1,W2,W3,W4} is a group company producing petroleum and chemical products, where Wi ∈ W, i = 1,2,3,4 are subsidiary corporations of W; α = {α1,α2,α3,α4,α5,α6} is the attribute set of W (the product market characteristics of W); and the information form of W is (x) = {x1,x2,x3,x4}. For reasons of trade secrecy, the group company, its subsidiary corporations, and the attributes (market characteristics) are written simply as W, Wi, and α1,α2,α3,α4,α5,α6, respectively. x(0),

x_i^{(0)}

are the profit discrete value distributions of W, Wi from January to June in 2019:

\begin{array}{l}{x^{(0)}} = (x_1^{(0)},x_2^{(0)},x_3^{(0)},x_4^{(0)}),\\ x_i^{(0)} = (x_{i1}^{(0)},x_{i2}^{(0)},x_{i3}^{(0)},...,x_{i6}^{(0)}),\;i = 1,2,3,4.\end{array}
The values in x(0), x_i^{(0)} are obtained by processing the real profit values; this processing does not influence the analysis of the case. The profit discrete distributions x(0), x_i^{(0)} of W, Wi are listed in Table 1.
Table 1. The profit discrete distributions x(0), x_i^{(0)} of W, Wi, i = 1,2,3,4 from January to June in 2019

k             1     2     3     4     5     6
x_{1k}^{(0)}  0.51  0.30  0.52  0.22  0.50  0.44
x_{2k}^{(0)}  0.21  0.28  0.45  0.36  0.66  0.53
x_{3k}^{(0)}  0.43  0.29  0.60  0.40  0.28  0.24
x_{4k}^{(0)}  0.19  0.35  0.48  0.66  0.67  0.26
By the profit discrete distributions in Table 1, the profit information value of W is

{x^{(0)}} = (\sum\nolimits_{i = 1}^4 {x_{i1}^{(0)}} ,\sum\nolimits_{i = 1}^4 {x_{i2}^{(0)}} ,...,\sum\nolimits_{i = 1}^4 {x_{i6}^{(0)}} ) = (1.34,1.22,2.05,1.64,2.11,1.47).
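The componentwise sums above can be checked with a short script over the Table 1 rows:

```python
# Profit information value x^(0) of W: componentwise sums of the
# Table 1 distributions x_ik^(0), i = 1..4, k = 1..6.
table1 = [
    [0.51, 0.30, 0.52, 0.22, 0.50, 0.44],  # x_1k
    [0.21, 0.28, 0.45, 0.36, 0.66, 0.53],  # x_2k
    [0.43, 0.29, 0.60, 0.40, 0.28, 0.24],  # x_3k
    [0.19, 0.35, 0.48, 0.66, 0.67, 0.26],  # x_4k
]
x0 = [round(sum(col), 2) for col in zip(*table1)]
print(x0)  # [1.34, 1.22, 2.05, 1.64, 2.11, 1.47]
```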
The global disease COVID-19 broke out in the preliminary stage of 2020 and caused a series of economic changes: some manufacturing profits fell with different probabilities, whereas products related to protective equipment, therapeutic apparatus, their accessories, and so on gained great market potential and earned better profits with high probability. This random dynamic change fits the RIPI model of this paper. For simplicity, this section considers only the latter.
Suppose that α7 = outbreak of COVID-19 and the attribute set of (x) is α = {α1,α2,α3,α4,α5,α6}; then αF is derived by transferring α7 into α as

\begin{array}{l}{\alpha ^F} = \{ {\alpha _1},{\alpha _2},{\alpha _3},{\alpha _4},{\alpha _5},{\alpha _6}\} \cup \{ {\alpha _7}\} \\ = \{ {\alpha _1},{\alpha _2},{\alpha _3},{\alpha _4},{\alpha _5},{\alpha _6},{\alpha _7}\}.\end{array}
Under this condition, the candidate sub-companies W5, W6, W7,

{W_8}\bar \in W

(their specific products are omitted) would bring extra profit with probabilities 1, 0.8, 0.75, 0.67, respectively, after overall consideration. If the probability σ = 0.8 is chosen, then W5 and W6 enter W to form WF0.8, whose information is (x)F0.8 = {x1,x2,x3,...,x6}, with

\begin{array}{l}{x^{(F0.8)}} = (x_1^{(F0.8)},x_2^{(F0.8)},x_3^{(F0.8)},x_4^{(F0.8)},x_5^{(F0.8)},x_6^{(F0.8)}),\\ x_i^{(F0.8)} = (x_{i1}^{(F0.8)},x_{i2}^{(F0.8)},x_{i3}^{(F0.8)},...,x_{i6}^{(F0.8)}),\;i = 1,2,...,6.\end{array}
The detailed profit discrete distribution of WF0.8 is shown in Table 2.
Table 2. The profit discrete distributions x(F0.8), x_i^{(F0.8)} of the group company WF0.8 and its sub-companies Wi, i = 1,2,...,6 from January to June in 2020

k                1     2     3     4     5     6
x_{1k}^{(F0.8)}  0.51  0.30  0.52  0.22  0.50  0.44
x_{2k}^{(F0.8)}  0.21  0.28  0.45  0.36  0.66  0.53
x_{3k}^{(F0.8)}  0.43  0.29  0.60  0.40  0.28  0.24
x_{4k}^{(F0.8)}  0.19  0.35  0.48  0.66  0.67  0.26
x_{5k}^{(F0.8)}  0.61  0.60  0.72  0.69  0.65  0.70
x_{6k}^{(F0.8)}  0.71  0.69  0.82  0.72  0.66  0.69
By the profit discrete distributions in Table 2, the profit information value of WF0.8 is

{x^{(F0.8)}} = (\sum\nolimits_{i = 1}^6 {x_{i1}^{(F0.8)}} ,\sum\nolimits_{i = 1}^6 {x_{i2}^{(F0.8)}} ,...,\sum\nolimits_{i = 1}^6 {x_{i6}^{(F0.8)}} ) = (2.66,2.51,3.59,3.05,3.42,2.86), \quad (22)

and the F-measure degree of (x)F0.8 relative to (x) is

\gamma {(x)^{F0.8}} = \parallel {x^{(F0.8)}} - {x^{(0)}}\parallel /\parallel {x^{(0)}}\parallel = 0.857. \quad (23)
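The value of x(F0.8) can be reproduced from the Table 2 rows (taking x_32 = 0.29, consistent with Table 1 and with the column sums above). The measure degree additionally depends on the norm exponent ρ of Definition 4, which the case does not state; the sketch below uses ρ = 1 as an assumption.

```python
# Profit information value x^(F0.8) of W^{F0.8} and its F-measure degree
# relative to x^(0); the norm exponent rho = 1 is an assumption here.
table2 = [
    [0.51, 0.30, 0.52, 0.22, 0.50, 0.44],  # x_1k
    [0.21, 0.28, 0.45, 0.36, 0.66, 0.53],  # x_2k
    [0.43, 0.29, 0.60, 0.40, 0.28, 0.24],  # x_3k (x_32 = 0.29 as in Table 1)
    [0.19, 0.35, 0.48, 0.66, 0.67, 0.26],  # x_4k
    [0.61, 0.60, 0.72, 0.69, 0.65, 0.70],  # x_5k
    [0.71, 0.69, 0.82, 0.72, 0.66, 0.69],  # x_6k
]
x0 = [1.34, 1.22, 2.05, 1.64, 2.11, 1.47]
xf = [round(sum(col), 2) for col in zip(*table2)]
print(xf)  # [2.66, 2.51, 3.59, 3.05, 3.42, 2.86]

gamma = sum(a - b for a, b in zip(xf, x0)) / sum(x0)  # rho = 1 norm ratio
print(round(gamma, 3))
```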
Analysis on intelligent acquisition of the random inner IPI (x)F0.8
On the condition that αF is generated by supplementing the attribute α7 into α, one obtains the random inner inverse packet information (x)F0.8 = {x1,x2,x3,...,x6} from the information (x) = {x1,x2,x3,x4} by using Definition 1, fulfilling Formula (18).
x(F0.8) is thus intelligently separated out and acquired. If α7 had not occurred, x(F0.8) would never have been gained, and (x)F0.8 would never have been known with probability 0.8. The example tells us the following:
When α and αF satisfy α ⊆ αF, the information (x)Fσ is intelligently and randomly discovered out of the information (x) by the random inner inverse packet information generation model; meanwhile, W5 and W6 are found outside W through (x)Fσ.
When α7 is regarded as a chance attribute that invades the attribute set α, the random inner IPI (x)Fσ is generated by (x) as in Definition 1.
When the chance attribute α7 invades α, the profit discrete distribution data x(0) of the group company turn into x(Fσ), which increases the profit of W. This conclusion has been confirmed by the financial statement published by W.
Formula (23) means that W5 and W6 will bring 85.7% extra profit with a probability of 0.8 or greater.
Discussion
In Refs. [3, 4], the dynamic feature was introduced into the common set X and the structure of IPS was proposed. Based on IPS, IPI and its applications to practical problems with dynamic characteristics and heredity are discussed in [8, 13, 17, 20, 21]. This paper considers the randomness of element migration according to the dynamic characteristics of IPI [23]. By integrating probability knowledge into IPI, it proposes the concepts and structures of RIPI and their applications. RIPI theory enriches IPI and enlarges its application scope; it also provides a new theoretical tool for studying information systems.