This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.
Introduction
Ontology originates in philosophy, where it describes the nature of things and the inherent, hidden connections among their components. In information and computer science, ontology serves as a model for knowledge storage and representation, and it has been extensively applied in fields such as knowledge management, machine learning, information systems, image retrieval, information retrieval, search extension, collaboration and intelligent information integration. As an effective conceptual semantic model and analysis tool, ontology has for several years been favored by researchers in pharmacology, biology, medical science, geographic information systems and the social sciences (for instance, see Przydzial et al. [1], Koehler et al. [2], Ivanovic and Budimac [3], Hristoskova et al. [4], and Kabir [5]).
Researchers usually represent the structure of an ontology by a simple graph: every concept, object or element in the ontology corresponds to a vertex, and each (directed or undirected) edge represents a relationship (or potential link) between two concepts (objects or elements). Let O be an ontology and G a simple graph corresponding to O. The essence of ontology engineering applications is to compute a similarity function that measures the intrinsic association between vertices of the ontology graph. Ontology mapping, in turn, measures the similarity between vertices from different ontologies; the mapping serves as a bridge connecting different ontologies, and only through such a mapping do we obtain potential associations between objects or elements from different ontologies. Formally, the semi-positive score function Sim : V × V → ℝ+ ∪ {0} maps each pair of vertices to a non-negative real number.
Several effective methods exist for obtaining efficient ontology similarity measures or ontology mapping algorithms via an ontology function. Wang et al. [12] considered ontology similarity calculation in terms of ranking learning technology. Huang et al. [13] raised a fast ontology algorithm to cut the time complexity of ontology applications. Gao and Liang [14] presented an ontology optimizing model in which the ontology function is determined by virtue of the NDCG measure, and applied it successfully in physics education. More ontology applications in various engineering settings can be found in Gao et al. [11].
In this article, we determine a new ontology learning method by means of distance calculation. Moreover, we give a theoretical analysis of the proposed ontology algorithm.
Algorithm Description
Let $\mathscr{S}=\{(v_{i},v_{j},y_{ij})\}_{i,j=1}^{N}$ be the ontology training set, where $v_i, v_j \in \mathbb{R}^p$ are ontology vectors and $y_{ij}=\pm 1$ (if $v_i$ and $v_j$ are similar, then $y_{ij}=1$; otherwise, $y_{ij}=-1$). If the number $N$ of target ontology training samples is not large, we also fix $m$ relevant source ontology training sets $\mathscr{S}_{q}=\{(v_{qi}, v_{qj}, y_{qij})\}_{i,j=1}^{N_{q}}$ ($q = 1,\dots,m$); in this setting $v_{qi}, v_{qj} \in \mathbb{R}^p$ belong to the same ontology feature space as $v_i, v_j$.
We aim to learn a distance function $d(v_i, v_j|\mathbf{W}) = (v_i - v_j)^T \mathbf{W} (v_i - v_j)$, which amounts to learning a distance matrix $\mathbf{W}$; the similarity or dissimilarity of an ontology vertex pair $v_i$ and $v_j$ is then decided by comparing $d(v_i, v_j|\mathbf{W})$ with a constant threshold parameter $c$. Specifically, our ontology optimization problem can be stated as
where $\|v_i - v_j\|_{\mathbf{W}}^2 = (v_i - v_j)^T \mathbf{W} (v_i - v_j)$, $g(z) = \max(0, b - z)$ is an ontology hinge loss function, $\|\mathbf{W}\|_F$ is the Frobenius norm of the metric $\mathbf{W}$ and is applied to control the model complexity, $\eta$ is a balance parameter, and the constraint condition requires that $\mathbf{W}$ be positive semi-definite.
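To make the roles of $\mathbf{W}$, the threshold $c$, and the hinge loss concrete, here is a minimal numpy sketch; the function names and toy data are ours, not from the paper, and $b = 0$ is assumed as in the later sections.

```python
import numpy as np

def ontology_distance(vi, vj, W):
    """Learned squared distance d(vi, vj | W) = (vi - vj)^T W (vi - vj)."""
    diff = vi - vj
    return diff @ W @ diff

def hinge_loss(y, dist, b=0.0):
    """Ontology hinge loss g(z) = max(0, b - z) with z = y * (1 - dist)."""
    return max(0.0, b - y * (1.0 - dist))

# Toy usage: a pair labeled similar (y = +1) under the identity metric.
p = 4
W = np.eye(p)                      # any positive semi-definite metric
vi, vj = np.ones(p), np.zeros(p)
d = ontology_distance(vi, vj, W)   # = 4 here
c = 1.0                            # threshold: d < c => predict "similar"
print(d, d < c, hinge_loss(+1.0, d))
```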
The general version of the ontology distance learning approach is formulated as
where $\mathbf{W} = \sum_{i=1}^{n} \theta_i \mathbf{u}_i \mathbf{u}_i^T$ and $\mathbf{W}_D = \sum_{q=1}^{m} \alpha_q \mathbf{W}_q$. Both $\|\alpha\|_2^2$ and $\|\theta\|_1$ are employed to control the complexity of the model. In what follows, $\gamma_1$, $\gamma_2$ and $\gamma_3$ are all positive balance parameters.
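As a sanity check on this parameterization, the following short sketch (our notation) assembles $\mathbf{W}$ from the basis vectors $\mathbf{u}_i$ and $\mathbf{W}_D$ from the source metrics $\mathbf{W}_q$; with non-negative $\theta_i$ the assembled $\mathbf{W}$ is automatically positive semi-definite.

```python
import numpy as np

def assemble_W(theta, U):
    """W = sum_i theta_i u_i u_i^T, where U = [u_1, ..., u_n] stores the basis column-wise."""
    return (U * theta) @ U.T       # scales column i by theta[i], then sums the outer products

def assemble_WD(alpha, W_list):
    """W_D = sum_q alpha_q W_q, a weighted combination of the m source metrics."""
    return sum(a * Wq for a, Wq in zip(alpha, W_list))
```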
Select $L(v_{i},v_{j},y_{ij})=g(y_{ij}[1-\|v_{i}-v_{j}\|_{\mathbf{W}}^{2}])$ and use the ontology hinge loss for $g$, that is, $g(z) = \max(0, b - z)$ with $b$ set to 0. Thus, we deduce the following ontology optimization problem:
For brevity, we use $v_k^1$, $v_k^2$ and $y_k$ to denote $v_i$, $v_j$ and $y_{ij}$, where $k = 1, \dots, \binom{N}{2} = N'$. Let $\delta_{k}=v_{k}^{1}-v_{k}^{2}$, so that $\|v_k^1 - v_k^2\|_{\mathbf{W}}^2 = \sum_{i=1}^{n} \theta_i \delta_k^T \mathbf{u}_i \mathbf{u}_i^T \delta_k = \theta^T f_k$, where $f_{k}=[f_{k}^{1},\cdots,f_{k}^{n}]^{T}$ and $f_{k}^{i}=\delta_{k}^{T}\mathbf{u}_{i}\mathbf{u}_{i}^{T}\delta_{k}$. Therefore, the ontology problem (3) can be re-expressed as
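This reduction makes the metric distance linear in $\theta$, which is what the alternating scheme below exploits. A sketch of the precomputation (names are ours); note that `theta @ f_k` then reproduces $\|v_k^1 - v_k^2\|_{\mathbf{W}}^2$:

```python
import numpy as np

def pair_features(delta_k, U):
    """f_k with entries f_k^i = delta_k^T u_i u_i^T delta_k = (u_i^T delta_k)^2."""
    return (U.T @ delta_k) ** 2    # length-n feature vector for the pair
```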
The solution can be obtained by alternating between the two ontology subproblems (minimization over $\alpha = [\alpha_1,\cdots,\alpha_m]^T$ and over $\theta = [\theta_1,\cdots,\theta_n]^T$, respectively) until convergence.
Given $\alpha$, the ontology optimization problem with respect to $\theta$ can be stated as
where $\Lambda(\theta) = \frac{1}{N'}\sum_{k=1}^{N'} g(y_k[1 - \theta^T f_k]) + \gamma_3\|\theta\|_1$ and $\Omega(\theta) = \frac{\gamma_1}{2}\|\mathbf{W} - \mathbf{W}_D\|_F^2$. Since the ontology loss part $\Lambda(\theta)$ is non-differentiable, we smooth the ontology loss and then solve (5) by a gradient technique. Let $\Theta = \{x : 0 \le x_k \le 1, x \in \mathbb{R}^{N'}\}$ and let $\sigma$ be the smoothing parameter. Then the smoothed expression of the ontology hinge loss $g(f_k, y_k, \theta) = \max\{0, -y_k(1 - \theta^T f_k)\}$ can be formulated as
where the $\|f_k\|_\infty$ term is used as a normalization. By setting the gradient of the objective ontology function of (6) to 0 and projecting $x_k$ onto $\Theta$, we infer the following solution: $x_{k}=\mathrm{median}\{\frac{-y_{k}(1-\theta^{T}f_{k})}{\sigma\|f_{k}\|_{\infty}},0,1\}$. Furthermore, the piece-wise approximation of $g$ can be expressed as
Let $H^{\Lambda} = [f_1,\cdots,f_{N'}]$ and $Y = \mathrm{diag}(y)$. We get $\frac{\partial g_\sigma(\theta)}{\partial \theta} = \sum_{k} y_k f_k x_k = H^{\Lambda} Y x$, and $L^g(\theta) = \frac{N'}{\sigma}\max_k \frac{\|f_k f_k^T\|_2}{\|f_k\|_\infty}$ is the Lipschitz constant of $g_\sigma(\theta)$.
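Under our reading of (6), the smoothed-hinge gradient can be computed in closed form: each $x_k$ is the median solution above, and the gradient stacks up as $H^{\Lambda} Y x$. A sketch with our variable names, storing the $f_k$ as the columns of a matrix:

```python
import numpy as np

def smoothed_hinge_grad(theta, F, y, sigma):
    """Gradient of g_sigma(theta): sum_k y_k f_k x_k = H^Lambda Y x.

    F : (n, N') matrix whose k-th column is f_k; y : pair labels; sigma : smoothing."""
    fk_inf = np.abs(F).max(axis=0)               # ||f_k||_inf for each pair k
    z = -y * (1.0 - theta @ F)                   # -y_k (1 - theta^T f_k)
    x = np.clip(z / (sigma * fk_inf), 0.0, 1.0)  # median{., 0, 1} = projection onto Theta
    return F @ (y * x)
```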
By setting $l(\theta) = \|\theta\|_1$, we infer the approximation of $l$ with smoothing parameter $\sigma'$ as
Furthermore, for each $x_{r}'=\mathrm{median}\{\frac{\theta_{r}}{\sigma'},-1, 1\}$, the gradient can be computed by $\frac{\partial\sum_{r=1}^{n}l_{\sigma'}(\theta_{r})}{\partial\theta}=x'$, and the Lipschitz constant is $L^{l}(\theta)=\frac{1}{\sigma'}$.
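The smoothed $\ell_1$ piece is even simpler; a two-line sketch in the same notation:

```python
import numpy as np

def smoothed_l1_grad(theta, sigma_p):
    """Gradient of the smoothed l1 penalty: x'_r = median{theta_r / sigma', -1, 1}."""
    return np.clip(theta / sigma_p, -1.0, 1.0)   # Lipschitz constant: 1 / sigma'
```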
Moreover, setting $H_{st}^{\Omega}=\gamma_{1}\mathrm{Tr}((\mathbf{u}_{s}\mathbf{u}_{s}^{T})(\mathbf{u}_{t}\mathbf{u}_{t}^{T}))$ and $f_{r}^{\Omega}=\gamma_{1}\mathrm{Tr}(\mathbf{W}_{D}^{T}(\mathbf{u}_{r}\mathbf{u}_{r}^{T}))$, we have $\frac{\partial\Omega(\theta)}{\partial\theta}=H^{\Omega}\theta-f^{\Omega}$ and $\frac{\partial F_{\sigma}(\theta)}{\partial\theta}=\frac{1}{N'}H^{\Lambda}Yx+\gamma_{3}x'+H^{\Omega}\theta-f^{\Omega}$, and $L_{\sigma}=\frac{1}{\sigma}\max_{k}\frac{\|f_{k}f_{k}^{T}\|_{2}}{\|f_{k}\|_{\infty}}+\frac{\gamma_{3}}{\sigma'}+\|H^{\Omega}\|_{2}$ is the Lipschitz constant of $F_{\sigma}(\theta)$.
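Assembling the three pieces gives $\nabla F_\sigma(\theta)$ and $L_\sigma$; a sketch assuming the helper functions above and our $\gamma_3$ reading of the coefficient on the $\ell_1$ term:

```python
import numpy as np

def grad_F_sigma(theta, F, y, H_Omega, f_Omega, gamma3, sigma, sigma_p):
    """(1/N') H^Lambda Y x + gamma3 x' + H^Omega theta - f^Omega."""
    Nprime = F.shape[1]
    grad = smoothed_hinge_grad(theta, F, y, sigma) / Nprime
    grad += gamma3 * smoothed_l1_grad(theta, sigma_p)
    return grad + H_Omega @ theta - f_Omega

def lipschitz_F_sigma(F, H_Omega, gamma3, sigma, sigma_p):
    """(1/sigma) max_k ||f_k f_k^T||_2 / ||f_k||_inf + gamma3/sigma' + ||H^Omega||_2."""
    ratio = (F ** 2).sum(axis=0) / np.abs(F).max(axis=0)  # ||f_k||_2^2 / ||f_k||_inf
    return ratio.max() / sigma + gamma3 / sigma_p + np.linalg.norm(H_Omega, 2)
```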
Denote by $\theta^t$, $y^t$ and $z^t$ the solutions in the $t$-th iteration round, and use $\widehat{\theta}$ as a guessed solution of $\theta$. Since $L_\sigma$ is the Lipschitz constant of $F_\sigma(\theta)$, the two auxiliary ontology optimizations are stated as
respectively. Setting the gradients of the objective ontology functions in the above two auxiliary ontology problems to zero yields $y^{t}=\theta^{t}-\frac{1}{L_{\sigma}}\nabla F_{\sigma}(\theta^{t})$ and $z^{t}=\widehat{\theta}-\frac{1}{L_{\sigma}}\sum_{i=0}^{t}\frac{i+1}{2}\nabla F_{\sigma}(\theta^{i})$. Hence, we deduce $\theta^{t+1}=\frac{2}{t+3}z^{t}+\frac{t+1}{t+3}y^{t}$, and the stopping criterion is $|F_{\sigma}(\theta^{t+1}) - F_{\sigma}(\theta^{t})| < \varepsilon$.
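The resulting accelerated iteration is only a few lines. The sketch below follows the $y^t$/$z^t$/$\theta^{t+1}$ updates verbatim; the initial guess $\widehat{\theta} = \theta^0$ and the tolerance are our choices:

```python
import numpy as np

def minimize_theta(theta0, grad, F_sigma, L, eps=1e-6, max_iter=1000):
    """Accelerated minimization of F_sigma with Lipschitz constant L."""
    theta = theta0.copy()
    theta_hat = theta0.copy()                     # guessed solution, fixed across iterations
    grad_acc = np.zeros_like(theta0)              # accumulates sum_i (i+1)/2 * grad F(theta^i)
    f_prev = F_sigma(theta)
    for t in range(max_iter):
        g = grad(theta)
        y_t = theta - g / L                       # y^t = theta^t - grad / L
        grad_acc += (t + 1) / 2.0 * g
        z_t = theta_hat - grad_acc / L            # z^t = theta_hat - weighted gradient sum / L
        theta = 2.0 / (t + 3) * z_t + (t + 1.0) / (t + 3) * y_t
        f_cur = F_sigma(theta)
        if abs(f_cur - f_prev) < eps:             # stop: |F(theta^{t+1}) - F(theta^t)| < eps
            break
        f_prev = f_cur
    return theta
```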
Given $\theta$, the ontology optimization problem for the parameter $\alpha$ can be stated as
where $f = [f_1,\cdots,f_m]$ with $f_q = \gamma_1\mathrm{Tr}(\mathbf{W}^T\mathbf{W}_q)$, and $\mathbf{H}$ is a symmetric positive semi-definite matrix with $\mathbf{H}_{st}=\gamma_{1}\mathrm{Tr}(\mathbf{W}_{s}^{T}\mathbf{W}_{t})$. We choose only two elements $\alpha_i$ and $\alpha_j$ to update in each iteration. In order to meet the constraint $\sum_{q=1}^{m}\alpha_{q} = 1$, we require $\alpha_{i}^{*}+\alpha_{j}^{*}=\alpha_{i}+\alpha_{j}$, where $\alpha_{i}^{*}$ and $\alpha_{j}^{*}$ are the solutions of the current iteration. Then, according to (10) and setting $\varepsilon_{ij} = (H_{ii} - H_{ij} - H_{ji} + H_{jj})\alpha_i - \sum_{k}(H_{ik} - H_{jk})\alpha_k$, we design the updating rule as follows: $\alpha_{i}^{*}=\frac{\gamma_{2}(\alpha_{i}+\alpha_{j})+(f_{i}-f_{j})+\varepsilon_{ij}}{(H_{ii}-H_{ij}-H_{ji}+H_{jj})+2\gamma_{2}}$ and $\alpha_{j}^{*}=\alpha_{i}+\alpha_{j}-\alpha_{i}^{*}$. In case the obtained $\alpha_{i}^{*}$ and $\alpha_{j}^{*}$ do not meet the constraint $\alpha_q \ge 0$, we further set
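A sketch of this pairwise update (names are ours). Since the display with the non-negativity correction is not reproduced above, the sketch clips $\alpha_i^*$ to the feasible interval $[0, \alpha_i + \alpha_j]$, which is one standard choice and is labeled as an assumption in the comment:

```python
import numpy as np

def update_alpha_pair(alpha, i, j, H, f, gamma2):
    """SMO-style update of (alpha_i, alpha_j) preserving sum_q alpha_q = 1."""
    kappa = H[i, i] - H[i, j] - H[j, i] + H[j, j]
    eps_ij = kappa * alpha[i] - (H[i] - H[j]) @ alpha   # epsilon_ij from the text
    s = alpha[i] + alpha[j]                             # alpha_i^* + alpha_j^* must equal s
    a_i = (gamma2 * s + (f[i] - f[j]) + eps_ij) / (kappa + 2.0 * gamma2)
    a_i = float(np.clip(a_i, 0.0, s))                   # assumed correction: enforce alpha_q >= 0
    alpha[i], alpha[j] = a_i, s - a_i
    return alpha
```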
Definition 1
(Leave-One-Out) An ontology algorithm has uniform stability β1 with respect to the ontology loss function l if the following holds
where Z is the ontology sample space, fs is the ontology function determined by the ontology algorithm learning with the set of samples s, and si = {z1,··· ,zi−1, zi+1,··· ,zm} denotes the ontology sample set obtained from s by deleting the i-th element zi.
Definition 2
(Leave-Two-Out) An ontology algorithm has uniform stability β2 with respect to the ontology loss function l if the following holds
where Z is the ontology sample space, fs is the ontology function determined by the ontology algorithm learning with the set of samples s, and si,j is the ontology sample set obtained from s by deleting the two elements zi and zj.
For any convex and differentiable ontology function $F : \mathscr{F} \to \mathbb{R}$ (here $\mathscr{F}$ denotes the Hilbert space), the Bregman divergence is defined by $B_F(f\|g) = F(f) - F(g) - \mathrm{Tr}(\langle f - g, \nabla F(g)\rangle)$ for all $f, g \in \mathscr{F}$, and the sub-differential is $\partial F(f) = \{g \in \mathscr{F} \mid \forall f' \in \mathscr{F}, F(f') - F(f) \ge \mathrm{Tr}(\langle f' - f, g\rangle)\}$. Let $\delta F(f)$ be any element of $\partial F(f)$. We infer that for all $f, f' \in \mathscr{F}$, $B_F(f'\|f) = F(f') - F(f) - \mathrm{Tr}(\langle f' - f, \delta F(f)\rangle)$, $B_F(f'\|f) \ge 0$, and $B_{P+Q} = B_P + B_Q$ for any convex ontology functions $P$ and $Q$.
Lemma 1
For any two distance metrics $\mathbf{W}$ and $\mathbf{W}'$, the following inequality holds for any ontology samples $z_i$ and $z_j$:
where L is the Lipschitz constant of the function g.
Proof
We only present the detailed proof of the first inequality; the second one can be obtained in a similar way. Let $F_{\mathscr{N}}(\theta) = P_{\mathscr{N}}(\theta) + Q(\theta)$, where $P_{\mathscr{N}}(\theta) = \frac{1}{\binom{N}{2}}\sum_{i<j} V(\mathbf{W}, z_i, z_j)$ and $Q(\theta)=\frac{\gamma_{1}}{2}\|\mathbf{W}-\mathbf{W}_{S}\|_{F}^{2}+\gamma_{3}\|\theta\|_{1}$. Clearly, both $P_{\mathscr{N}}(\theta)$ and $Q(\theta)$ are convex. Let $\theta_{\mathscr{N}}$ and $\theta_{\mathscr{N}'}$ be the minimizers of $F_{\mathscr{N}}(\theta)$ and $F_{\mathscr{N}'}(\theta)$, respectively, where $\mathscr{N}'$ is the set of ontology examples obtained by deleting $z_i \in \mathscr{N}$ from $\mathscr{N}$.
Let $\Delta = \|\theta_{\mathscr{N}}\|_1 - \langle\theta_{\mathscr{N}},\mathrm{sgn}(\theta_{\mathscr{N}'})\rangle + \|\theta_{\mathscr{N}'}\|_1 - \langle\theta_{\mathscr{N}'},\mathrm{sgn}(\theta_{\mathscr{N}})\rangle \ge 0$, where $\mathrm{sgn}(\theta) = [\mathrm{sgn}(\theta_1),\cdots,\mathrm{sgn}(\theta_n)]^T$. Hence, we have $\frac{\partial Q(\theta_{\mathscr{N}})}{\partial \theta}$, where $\delta f(\theta)$ is the sub-gradient of $\|\theta\|_1$, and
Let $\mathscr{N}$ be the ontology sample set and $V(\mathbf{W}, z_{i}, z_{j})=g(y_{ij}[1-\|v_{i}-v_{j}\|_{\mathbf{W}}^{2}])$. In this subsection, the empirical ontology risk and the expected ontology risk are denoted by $R_{\mathscr{N}}(\mathbf{W}) = \frac{1}{\binom{N}{2}}\sum_{i<j} V(\mathbf{W}, z_i, z_j)$ and $R(\mathbf{W}) = \mathbb{E}_{(z_i, z_j)}[V(\mathbf{W}, z_i, z_j)]$, respectively. We will determine the generalization bound $R(\mathbf{W}) - R_{\mathscr{N}}(\mathbf{W})$ in the next theorem. For this purpose, we need the following McDiarmid inequality.
Theorem 3
[15] Let $X_1,\cdots,X_N$ be independent random variables, each taking values in a set $A$. Let $\phi : A^N \to \mathbb{R}$ be such that for each $i \in \{1,\cdots,N\}$, there exists a constant $c_i > 0$ such that
The generalization error bound via uniform stability is presented as follows.
Theorem 4
Let $\mathscr{N}$ be a set of $N$ randomly selected ontology samples and $\mathbf{W}_{\mathscr{N}}$ be the ontology distance matrix determined by (2). With probability at least $1 - \delta$, we have
The proof of Theorem 4 mainly follows [16–19]; we skip the detailed proof here.
Strong and weak stabilities
Naturally, the uniform version of stability is too restrictive for most learning algorithms; only a small number of works have shown that standard ontology learning algorithms satisfy uniform stability directly, and for most such algorithms the question remains open. Thus, we are inspired to consider other "almost-everywhere" notions of stability beyond uniform stability in our ontology setting. We define strong and weak stabilities for our ontology framework, which are also good measures of how robust an ontology algorithm is. We assume 0 < δ3, δ4 < 1 in this subsection.
Definition 3
(Strong Stability) Let A be our ontology algorithm whose output on an ontology training sample Z is denoted by fs, and let l be an ontology loss function. Let $\beta_3 : \mathbb{N} \to \mathbb{R}$ and let $s^i$ be the ontology sample set in which $v_i$ is replaced by $v_{i}'$. We say that ontology algorithm A is $\beta_3$ loss stable with respect to ontology loss l if for all $n \in \mathbb{N}$, $v_{i}' \in V$, $i \in \{1,\cdots,n\}$, we have
Definition 4
(Weak Stability) Let A be our ontology algorithm whose output on an ontology training sample Z is denoted by fs, and let l be an ontology loss function. Let $\beta_4 : \mathbb{N} \to \mathbb{R}$. We say that our ontology algorithm A has weak loss stability $\beta_4$ if for all $n \in \mathbb{N}$, $i \in \{1,\cdots,n\}$, we have
We present the following lemma, which is fundamental for proving the results on strong and weak stability.
Lemma 5
(Kutin [22]) Let $X_1,\cdots,X_N$ be independent random variables, each taking values in a set $C$. There is a "bad" subset $B \subseteq C^N$ with $\mathbb{P}((x_1,\cdots,x_N) \in B) = \delta$. Let $\phi : C^N \to \mathbb{R}$ be such that for each $k \in \{1,\cdots,N\}$, there exists $b \ge c_k > 0$ such that
Lemma 6
(Kutin [22]) Let $X_1,\cdots,X_N$ be independent random variables, each taking values in a set $C$. Let $\phi : C^N \to \mathbb{R}$ be such that for each $k \in \{1,\cdots,N\}$, the two condition inequalities in Lemma 5 are satisfied with $\frac{\lambda_k}{N}$ substituted for $c_k$ and $e^{-KN}$ substituted for $\delta$. If $0 < \varepsilon \le \min_k T(b,\lambda_k,K)$ and $N \ge \max_k \Delta(b,\lambda_k,K,\varepsilon)$, then
The main result in this subsection is stated as follows.
Theorem 7
Let A be our ontology algorithm whose output on an ontology training sample Z is denoted by fs. Let l be an ontology loss function such that $0 \le l(f, \cdot) \le \Xi$ for all $f$, and
The proof of Theorem 7 mainly follows [20, 21]; we skip the detailed proof here.
Experiments
In this section, we design five simulation experiments concerning ontology similarity measure and ontology mapping, respectively. In our experiments, we select the square loss as the ontology loss function. To ensure the accuracy of the comparison, we ran our algorithm in C++ using the LAPACK and BLAS libraries for linear algebra computations. All five experiments were implemented on a dual-core CPU with 8 GB of memory.
Ontology similarity measure experiment on plant data
In the first experiment we use O1, a plant ontology "PO" constructed at www.plantontology.org; its structure is presented in Fig. 1. We use P@N (precision ratio; see Craswell and Hawking [5]) to measure the quality of the experiment data. First, experts give the closest N concepts for every vertex on the ontology graph in the plant field. Then we obtain the first N concepts for every vertex on the ontology graph by our algorithm and compute the precision ratio.
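For reference, P@N is computed per vertex against the expert list and then averaged over the vertices; a minimal sketch (function names are ours):

```python
def precision_at_n(expert_topn, algorithm_topn):
    """P@N for one vertex: fraction of the algorithm's top-N also named by experts."""
    return len(set(expert_topn) & set(algorithm_topn)) / len(algorithm_topn)

# Averaging precision_at_n over all ontology vertices yields the entries in Table 1.
```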
Meanwhile, we apply the ontology methods of [12], [13] and [14] to the "PO" ontology. After obtaining the average precision ratios of these three algorithms, we compare them with the results of our algorithm. Part of the data is reported in Table 1.
Tab. 1. The Experiment Results of Ontology Similarity Measure
When N = 3, 5 or 10, the precision ratios obtained by our algorithm are slightly higher than those determined by the algorithms proposed in [12], [13] and [14]. Furthermore, the precision ratio tends to increase apparently as N increases. As a result, our algorithm proves better and more effective than those raised in [12], [13] and [14].
Ontology mapping experiment on humanoid robotics data
The "humanoid robotics" ontologies O2 and O3 are used in the second experiment; their structures are presented in Fig. 2 and Fig. 3, respectively. Ontology O2 presents the leg joint structure of a bionic walking device for a six-legged robot, and ontology O3 presents the exoskeleton frame of a robot with wearable, power-assisted lower extremities.
This experiment aims to obtain the ontology mapping between O2 and O3, and the P@N precision ratio is taken as the quality measure. After applying the ontology algorithms of [24], [13] and [14] to the "humanoid robotics" ontologies and obtaining their average precision ratios, we compare them with those of our method. Some results are reported in Table 2.
Tab. 2. The Experiment Results of Ontology Mapping
When N = 1, 3 or 5, the precision ratios gained by our new ontology algorithm are higher than those determined by the algorithms proposed in [24], [13] and [14]. Furthermore, the precision ratios tend to increase apparently as N increases. As a result, our algorithm shows much more efficiency than those raised in [24], [13] and [14].
Ontology similarity measure experiment on biology data
The gene ontology "GO" O4, constructed at http://www.geneontology.org, is used in the third experiment. We present the structure of O4 in Fig. 4. Again, P@N is chosen as the measure for the quality of the experiment data. We then apply the ontology methods of [13], [14] and [25] to the "GO" ontology. After obtaining the average precision ratios of these three algorithms, we compare them with the results of our algorithm. Part of the data is reported in Table 3.
Tab. 3. The Experiment Results of Ontology Similarity Measure
When N = 3, 5 or 10, the precision ratios gained by our ontology algorithm are higher than those determined by the algorithms proposed in [13], [14] and [25]. Furthermore, the precision ratios tend to increase apparently as N increases. As a result, our algorithm turns out to be more effective than those raised in [13], [14] and [25].
Ontology mapping experiment on physics education data
The "physics education" ontologies O5 and O6 are used in the fourth experiment. We present the structures of O5 and O6 in Fig. 5 and Fig. 6, respectively.
This experiment aims to give the ontology mapping between O5 and O6, with the P@N precision ratio taken as the quality measure. The ontology algorithms of [13], [14] and [26] are applied to the "physics education" ontologies, and the precision ratios obtained by these three methods are compared with ours. Some results are reported in Table 4.
Tab. 4. The Experiment Results of Ontology Mapping
When N = 1, 3 or 5, the precision ratios of our new ontology mapping algorithm are much higher than those determined by the algorithms proposed in [13], [14] and [26]. Furthermore, the precision ratios tend to increase apparently as N increases. As a result, our algorithm shows more effectiveness than those raised in [13], [14] and [26].
Ontology mapping experiment on university data
“University” ontologies O7 and O8 are applied in the last experiment. We present the structures of O7 and O8 in Fig. 7 and Fig. 8.
This experiment aims to give the ontology mapping between O7 and O8, with the P@N precision ratio taken as the criterion to measure the quality of the experiment. The ontology algorithms of [12], [13] and [14] are applied to the "University" ontologies, and the precision ratios obtained by these three methods are compared with ours. Some results are reported in Table 5.
Tab. 5. The Experiment Results of Ontology Mapping
When N = 1, 3 or 5, the precision ratios of our new ontology mapping algorithm are much higher than those determined by the algorithms proposed in [12], [13] and [14]. Furthermore, the precision ratios tend to increase apparently as N increases. As a result, our algorithm turns out to be more effective than those raised in [12], [13] and [14].
Conclusions
In this paper, a new ontology learning framework and its optimization approaches are presented for ontology similarity calculation and ontology mapping. The new ontology algorithm is based on distance function learning techniques. The stability analysis and the generalization bound computation of the ontology learning algorithm are also presented. Finally, the simulation data in five experiments show that our new ontology learning algorithm has high efficiency in these engineering applications. The distance-learning-based ontology algorithm proposed in this paper illustrates promising application prospects for multiple disciplines.