The present paper proposes a new computational method of potential learning to improve generalization and interpretation. Potential learning was introduced to simplify the computational procedures of information maximization and to specify which neurons should fire. However, potential learning often absorbs too much information content on input patterns in the early stages of learning, which tends to degrade generalization performance. This can be remedied by making potential learning as slow as possible. Accordingly, we propose a procedure called “self-assimilation,” in which connection weights are accentuated according to their characteristics observed at a specific learning step. This makes it possible to predict future connection weights in the early stages of learning. Thus, generalization can be improved by slow learning while, at the same time, the interpretation of connection weights is improved via their enhanced characteristics. The method was applied to an artificial data set, as well as a real data set of counter services at a local government office in the Tokyo metropolitan area. The results show that generalization improved as learning was made slower. In addition, self-assimilation reduced the number of strong connection weights, aiding interpretation.
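To make the idea of accentuation concrete, the minimal Python sketch below amplifies each connection weight by a power of its magnitude and then rescales so the overall weight norm is preserved, so that strong weights stand out and weak ones fade. The exponent and the norm-preserving rescaling are our own illustrative assumptions, not the authors’ exact self-assimilation update.

    import numpy as np

    def self_assimilate(W, r=2.0):
        """Hypothetical sketch of self-assimilation: accentuate each
        weight by a power of its magnitude, then rescale to keep the
        overall weight norm. The exponent r and the rescaling are
        assumptions, not the paper's exact update rule."""
        A = np.sign(W) * np.abs(W) ** r                        # accentuate strong weights
        A *= np.linalg.norm(W) / (np.linalg.norm(A) + 1e-12)   # preserve overall scale
        return A

    W = np.random.default_rng(0).normal(scale=0.1, size=(10, 5))
    W_acc = self_assimilate(W)   # fewer weights remain "strong"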
In this paper we extend our previous work on radar signal identification and classification, based on a data set comprising continuous, discrete and categorical data that represent radar pulse train characteristics such as signal frequencies, pulse repetition, type of modulation, intervals, scan period, scanning type, etc. As with most real-world datasets, it also contains a high percentage of missing values; to deal with this problem we investigate three imputation techniques: Multiple Imputation (MI); K-Nearest Neighbour Imputation (KNNI); and Bagged Tree Imputation (BTI). We apply these methods to data samples with up to 60% missingness, thereby doubling the number of instances with complete values in the resulting dataset. The imputation models’ performance is assessed with Wilcoxon’s test for statistical significance and Cohen’s effect size metrics. To solve the classification task, we employ three intelligent approaches: Neural Networks (NN); Support Vector Machines (SVM); and Random Forests (RF). We then critically analyse which imputation method most influences the classifiers’ performance, using a multiclass classification accuracy metric based on the area under the ROC curves. We consider two superclasses (‘military’ and ‘civil’), each containing several subclasses, and introduce two new metrics, inner class accuracy (IA) and outer class accuracy (OA), in addition to the overall classification accuracy (OCA) metric. We conclude that they can be used as complements to the OCA when choosing the best classifier for the problem at hand.
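As a rough illustration of the imputation-then-classification pipeline, the sketch below uses scikit-learn on toy data: KNNI maps directly onto KNNImputer, while BTI is approximated here with an IterativeImputer driven by a bagged tree regressor, and the multiclass AUC is computed one-vs-rest. The synthetic data, the parameter choices and the BTI approximation are assumptions, not the paper’s exact setup.

    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import KNNImputer, IterativeImputer
    from sklearn.ensemble import RandomForestClassifier, BaggingRegressor
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # Toy stand-in for the radar data: features with missing values, class labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))
    X[rng.random(X.shape) < 0.3] = np.nan        # inject ~30% missingness
    y = rng.integers(0, 4, size=500)

    imputers = {
        "KNNI": KNNImputer(n_neighbors=5),
        # Bagged-tree imputation approximated with scikit-learn components.
        "BTI": IterativeImputer(
            estimator=BaggingRegressor(DecisionTreeRegressor(), n_estimators=10),
            max_iter=5),
    }
    for name, imp in imputers.items():
        Xi = imp.fit_transform(X)
        Xtr, Xte, ytr, yte = train_test_split(Xi, y, random_state=0)
        clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
        auc = roc_auc_score(yte, clf.predict_proba(Xte), multi_class="ovr")
        print(f"{name}: one-vs-rest multiclass AUC = {auc:.3f}")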
This paper proposes a nonlinear extreme doubly stochastic quadratic operator (EDSQO) for a convergence algorithm aimed at solving the discrete-time consensus problem (CP) for multi-agent systems (MAS) on the n-dimensional simplex. The first part undertakes a systematic review of consensus problems; in the second part, convergence is generated via extreme doubly stochastic quadratic operators (EDSQOs). The convergence algorithms are formulated from doubly stochastic matrices, majorization theory, graph theory and stochastic analysis. We develop two algorithms: 1) the nonlinear algorithm of extreme doubly stochastic quadratic operators (NLAEDSQO), to generate all the convergent EDSQOs, and 2) the nonlinear convergence algorithm (NLCA) of EDSQOs, to investigate the optimal consensus for MAS. Experimental evaluation of the convergent EDSQOs yielded an optimal consensus for MAS. A comparative analysis between the convergence of EDSQOs and the DeGroot model was carried out, based on the complexity of the operators, the number of iterations to converge and the time required for convergence. The proposed convergence algorithm is faster than the DeGroot linear model.
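The following sketch illustrates the two models being compared: one step of a quadratic stochastic operator on the simplex, x'_k = Σ_{i,j} P[i,j,k] x_i x_j, versus one DeGroot step x' = Ax, with iteration counts to a fixed point. The cubic array P below is a simple symmetric stochastic stand-in, not one of the paper’s enumerated EDSQOs, so the counts are only indicative.

    import numpy as np

    def qso_step(x, P):
        """One step of a quadratic stochastic operator on the simplex:
        x'_k = sum_{i,j} P[i, j, k] * x_i * x_j."""
        return np.einsum("i,j,ijk->k", x, x, P)

    def degroot_step(x, A):
        """One step of the DeGroot linear model: x' = A @ x."""
        return A @ x

    def iterate(step, x0, tol=1e-10, max_iter=1000):
        x = x0
        for t in range(max_iter):
            x_new = step(x)
            if np.linalg.norm(x_new - x) < tol:
                return x_new, t + 1
            x = x_new
        return x, max_iter

    n = 3
    rng = np.random.default_rng(1)
    P = rng.random((n, n, n))
    P = (P + P.transpose(1, 0, 2)) / 2        # symmetric in i, j
    P /= P.sum(axis=2, keepdims=True)         # stochastic along k
    A = 0.5 * np.eye(n) + 0.5 / n             # lazy doubly stochastic DeGroot matrix
    x0 = np.array([0.7, 0.2, 0.1])

    x_q, it_q = iterate(lambda x: qso_step(x, P), x0)
    x_d, it_d = iterate(lambda x: degroot_step(x, A), x0)
    print("QSO iterations:", it_q, " DeGroot iterations:", it_d)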
The present article reviews the application of Particle Swarm Optimization (PSO) algorithms to optimize a phrasing model, which splits any text into linguistically motivated phrases. In terms of its functionality, this phrasing model is equivalent to a shallow parser. The phrasing model combines attractive and repulsive forces between neighbouring words in a sentence to determine where segmentation points are required. In this specific application, the extrapolation of phrases is aimed at the automatic translation of unconstrained text from a source language to a target language via a phrase-based system; the phrasing therefore needs to be accurate and consistent with the training data.
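A hypothetical sketch of the force-based segmentation idea follows: each word boundary receives an attractive and a repulsive score, each a weighted sum of boundary features, and a phrase break is inserted wherever repulsion wins. The feature and weight definitions are our own placeholders, not the paper’s model.

    import numpy as np

    def segment(words, features, w_attr, w_rep):
        """Toy force-based phrasing: features[i] describes the boundary
        between words[i] and words[i+1]; a break is placed where the
        repulsive score exceeds the attractive score."""
        phrases, current = [], [words[0]]
        for i in range(len(words) - 1):
            attraction = float(w_attr @ features[i])
            repulsion = float(w_rep @ features[i])
            if repulsion > attraction:          # segmentation point
                phrases.append(current)
                current = []
            current.append(words[i + 1])
        phrases.append(current)
        return phrases

    rng = np.random.default_rng(0)
    words = ["the", "cat", "sat", "on", "the", "mat"]
    feats = rng.random((len(words) - 1, 4))
    print(segment(words, feats, w_attr=rng.random(4), w_rep=rng.random(4)))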
Experimental results indicate that PSO is effective in optimising the weights of the proposed parser system, using two different variants, namely sPSO and AdPSO. These variants yield statistically significant improvements over earlier phrasing results. An analysis of the experimental results leads to a proposed modification of the PSO algorithm that prevents the swarm from stagnating by improving the handling of the velocity component of the particles. This modification results in more effective training sequences, in which the search for new solutions is extended in comparison to the basic PSO algorithm. As a consequence, further improvements are achieved in the accuracy of the phrasing module.
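A minimal PSO sketch is given below, with a simple anti-stagnation heuristic: if the global best does not improve for a fixed window of iterations, particle velocities are re-randomized. This restart rule is our own stand-in for the paper’s velocity-handling modification, and the sPSO/AdPSO variants are not reproduced.

    import numpy as np

    def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5,
            stagnation_window=15, seed=0):
        """Basic PSO minimizing f over R^dim, with velocity re-randomization
        after `stagnation_window` iterations without global-best improvement."""
        rng = np.random.default_rng(seed)
        X = rng.uniform(-1, 1, (n_particles, dim))
        V = rng.uniform(-0.1, 0.1, (n_particles, dim))
        pbest, pbest_val = X.copy(), np.array([f(x) for x in X])
        g = pbest[pbest_val.argmin()].copy()
        g_val, stale = pbest_val.min(), 0
        for _ in range(iters):
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
            X = X + V
            vals = np.array([f(x) for x in X])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = X[improved], vals[improved]
            if pbest_val.min() < g_val:
                g, g_val, stale = pbest[pbest_val.argmin()].copy(), pbest_val.min(), 0
            else:
                stale += 1
                if stale >= stagnation_window:   # re-energize the swarm
                    V = rng.uniform(-0.5, 0.5, V.shape)
                    stale = 0
        return g, g_val

    # Example: tune a weight vector against a toy quadratic objective.
    best_w, best_val = pso(lambda x: np.sum((x - 0.3) ** 2), dim=5)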
In this work we use the concept of n-valued refined neutrosophic soft sets and their properties to solve decision-making problems. A similarity measure between two n-valued refined neutrosophic soft sets is also proposed. A medical diagnosis (MD) method is established in the n-valued refined neutrosophic soft set setting using similarity measures. Lastly, a numerical example is given to demonstrate the possible application of similarity measures in medical diagnosis.
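One common distance-based similarity measure, 1 minus the mean absolute difference of the refined truth, indeterminacy and falsity memberships, can serve as a stand-in for the measure proposed in the paper; the sketch below applies it to toy “patient” and “disease” profiles in the spirit of the medical-diagnosis example, with all data and shapes assumed for illustration.

    import numpy as np

    def similarity(A, B):
        """Hedged sketch of a similarity measure between two n-valued
        refined neutrosophic soft sets. A and B are arrays of shape
        (objects, parameters, n, 3), the last axis holding the refined
        truth, indeterminacy and falsity memberships in [0, 1]. The
        formula 1 - mean(|A - B|) is one standard choice, not
        necessarily the paper's exact definition."""
        A, B = np.asarray(A, float), np.asarray(B, float)
        return 1.0 - np.mean(np.abs(A - B))

    # Toy diagnosis: compare a patient's symptom profile with two
    # hypothetical disease profiles; the higher similarity suggests
    # the more plausible diagnosis.
    patient = np.random.default_rng(2).random((1, 4, 3, 3))
    disease1 = np.random.default_rng(3).random((1, 4, 3, 3))
    disease2 = np.random.default_rng(4).random((1, 4, 3, 3))
    print(similarity(patient, disease1), similarity(patient, disease2))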