Volume 33 (2023): Issue 3 (September 2023) Mathematical Modeling in Medical Problems (special section, pp. 349-428), Urszula Foryś, Katarzyna Rejniak, Barbara Pękala, Agnieszka Bartłomiejczyk (Eds.)
Volume 33 (2023): Issue 2 (June 2023) Automation and Communication Systems for Autonomous Platforms (special section, pp. 171-218), Zygmunt Kitowski, Paweł Piskur and Stanisław Hożyń (Eds.)
Volume 33 (2023): Issue 1 (March 2023) Image Analysis, Classification and Protection (special section, pp. 7-70), Marcin Niemiec, Andrzej Dziech and Jakob Wassermann (Eds.)
Volume 32 (2022): Issue 4 (December 2022) Big Data and Artificial Intelligence for Cooperative Vehicle-Infrastructure Systems (special section, pp. 523-599), Baozhen Yao, Shuaian (Hans) Wang and Sobhan (Sean) Asian (Eds.)
Volume 32 (2022): Issue 3 (September 2022) Recent Advances in Modelling, Analysis and Implementation of Cyber-Physical Systems (special section, pp. 345-413), Remigiusz Wiśniewski, Luis Gomes and Shaohua Wan (Eds.)
Volume 32 (2022): Issue 2 (June 2022) Towards Self-Healing Systems through Diagnostics, Fault-Tolerance and Design (special section, pp. 171-269), Marcin Witczak and Ralf Stetter (Eds.)
Volume 32 (2022): Issue 1 (March 2022)
Volume 31 (2021): Issue 4 (December 2021) Advanced Machine Learning Techniques in Data Analysis (special section, pp. 549-611), Maciej Kusy, Rafał Scherer and Adam Krzyżak (Eds.)
Volume 31 (2021): Issue 3 (September 2021)
Volume 31 (2021): Issue 2 (June 2021)
Volume 31 (2021): Issue 1 (March 2021)
Volume 30 (2020): Issue 4 (December 2020)
Volume 30 (2020): Issue 3 (September 2020) Big Data and Signal Processing (special section, pp. 399-473), Joanna Kołodziej, Sabri Pllana, Salvatore Vitabile (Eds.)
Volume 30 (2020): Issue 2 (June 2020)
Volume 30 (2020): Issue 1 (March 2020)
Volume 29 (2019): Issue 4 (December 2019) New Perspectives in Nonlinear and Intelligent Control (In Honor of Alexander P. Kurdyukov) (special section, pp. 629-712), Julio B. Clempner, Enso Ikonen, Alexander P. Kurdyukov (Eds.)
Volume 29 (2019): Issue 3 (September 2019) Information Technology for Systems Research (special section, pp. 427-515), Piotr Kulczycki, Janusz Kacprzyk, László T. Kóczy, Radko Mesiar (Eds.)
Volume 29 (2019): Issue 2 (June 2019) Advances in Complex Cloud and Service Oriented Computing (special section, pp. 213-274), Anna Kobusińska, Ching-Hsien Hsu, Kwei-Jay Lin (Eds.)
Volume 29 (2019): Issue 1 (March 2019) Exploring Complex and Big Data (special section, pp. 7-91), Johann Gamper, Robert Wrembel (Eds.)
Volume 28 (2018): Issue 4 (December 2018)
Volume 28 (2018): Issue 3 (September 2018)
Volume 28 (2018): Issue 2 (June 2018) Advanced Diagnosis and Fault-Tolerant Control Methods (special section, pp. 233-333), Vicenç Puig, Dominique Sauter, Christophe Aubrun, Horst Schulte (Eds.)
Volume 28 (2018): Issue 1 (March 2018) Issues in Parameter Identification and Control (special section, pp. 9-122), Abdel Aitouche (Ed.)
Volume 27 (2017): Issue 4 (December 2017)
Volume 27 (2017): Issue 3 (September 2017) Systems Analysis: Modeling and Control (special section, pp. 457-499), Vyacheslav Maksimov and Boris Mordukhovich (Eds.)
Volume 27 (2017): Issue 2 (June 2017)
Volume 27 (2017): Issue 1 (March 2017)
Volume 26 (2016): Issue 4 (December 2016)
Volume 26 (2016): Issue 3 (September 2016)
Volume 26 (2016): Issue 2 (June 2016)
Volume 26 (2016): Issue 1 (March 2016)
Volume 25 (2015): Issue 4 (December 2015) Complex Problems in High-Performance Computing Systems (special issue), Mauro Iacono and Joanna Kołodziej (Eds.)
Volume 25 (2015): Issue 3 (September 2015)
Volume 25 (2015): Issue 2 (June 2015)
Volume 25 (2015): Issue 1 (March 2015) Safety, Fault Diagnosis and Fault Tolerant Control in Aerospace Systems, Silvio Simani and Paolo Castaldi (Eds.)
Volume 24 (2014): Issue 4 (December 2014)
Volume 24 (2014): Issue 3 (September 2014) Modelling and Simulation of High Performance Information Systems (special section, pp. 453-566), Pavel Abaev, Rostislav Razumchik, Joanna Kołodziej (Eds.)
Volume 24 (2014): Issue 2 (June 2014) Signals and Systems (special section, pp. 233-312), Ryszard Makowski and Jan Zarzycki (Eds.)
Volume 24 (2014): Issue 1 (March 2014) Selected Problems of Biomedical Engineering (special section, pp. 7-63), Marek Kowal and Józef Korbicz (Eds.)
Volume 23 (2013): Issue 4 (December 2013)
Volume 23 (2013): Issue 3 (September 2013)
Volume 23 (2013): Issue 2 (June 2013)
Volume 23 (2013): Issue 1 (March 2013)
Volume 22 (2012): Issue 4 (December 2012) Hybrid and Ensemble Methods in Machine Learning (special section, pp. 787-881), Oscar Cordón and Przemysław Kazienko (Eds.)
Volume 22 (2012): Issue 3 (September 2012)
Volume 22 (2012): Issue 2 (June 2012) Analysis and Control of Spatiotemporal Dynamic Systems (special section, pp. 245-326), Dariusz Uciński and Józef Korbicz (Eds.)
Volume 22 (2012): Issue 1 (March 2012) Advances in Control and Fault-Tolerant Systems (special issue), Józef Korbicz, Didier Maquin and Didier Theilliol (Eds.)
Volume 21 (2011): Issue 4 (December 2011)
Volume 21 (2011): Issue 3 (September 2011) Issues in Advanced Control and Diagnosis (special section, pp. 423-486), Vicenç Puig and Marcin Witczak (Eds.)
Volume 21 (2011): Issue 2 (June 2011) Efficient Resource Management for Grid-Enabled Applications (special section, pp. 219-306), Joanna Kołodziej and Fatos Xhafa (Eds.)
Volume 21 (2011): Issue 1 (March 2011) Semantic Knowledge Engineering (special section, pp. 9-95), Grzegorz J. Nalepa and Antoni Ligęza (Eds.)
Volume 20 (2010): Issue 4 (December 2010)
Volume 20 (2010): Issue 3 (September 2010)
Volume 20 (2010): Issue 2 (June 2010)
Volume 20 (2010): Issue 1 (March 2010) Computational Intelligence in Modern Control Systems (special section, pp. 7-84), Józef Korbicz and Dariusz Uciński (Eds.)
Volume 19 (2009): Issue 4 (December 2009) Robot Control Theory (special section, pp. 519-588), Cezary Zieliński (Ed.)
Volume 19 (2009): Issue 3 (September 2009) Verified Methods: Applications in Medicine and Engineering (special issue), Andreas Rauh, Ekaterina Auer, Eberhard P. Hofer and Wolfram Luther (Eds.)
Volume 19 (2009): Issue 2 (June 2009)
Volume 19 (2009): Issue 1 (March 2009)
Volume 18 (2008): Issue 4 (December 2008) Issues in Fault Diagnosis and Fault Tolerant Control (special issue), Józef Korbicz and Dominique Sauter (Eds.)
Volume 18 (2008): Issue 3 (September 2008) Selected Problems of Computer Science and Control (special issue), Krzysztof Gałkowski, Eric Rogers and Jan Willems (Eds.)
Volume 18 (2008): Issue 2 (June 2008) Selected Topics in Biological Cybernetics (special section, pp. 117-170), Andrzej Kasiński and Filip Ponulak (Eds.)
Volume 18 (2008): Issue 1 (March 2008) Applied Image Processing (special issue), Anton Kummert and Ewaryst Rafajłowicz (Eds.)
Volume 17 (2007): Issue 4 (December 2007)
Volume 17 (2007): Issue 3 (September 2007) Scientific Computation for Fluid Mechanics and Hyperbolic Systems (special issue), Jan Sokołowski and Eric Sonnendrücker (Eds.)
This paper shows how big data analysis opens a range of research and technological problems and calls for new approaches. We start with defining the essential properties of big data and discussing the main types of data involved. We then survey the dedicated solutions for storing and processing big data, including a data lake, virtual integration, and a polystore architecture. Difficulties in managing data quality and provenance are also highlighted. The characteristics of big data also imply specific requirements and challenges for data mining algorithms, which we address as well. The links with related areas, including data streams and deep learning, are discussed. The common theme that naturally emerges from this characterization is complexity. All in all, we consider it to be the truly defining feature of big data (posing particular research and technological challenges), which ultimately seems to be of greater importance than the sheer data volume.
The paper discusses various approaches to mining co-location patterns with extended spatial objects. We focus on the properties of the transaction-free approaches EXCOM and DEOSP, and discuss the differences between the method using a buffer and that employing clustering and triangulation. These theoretical differences between the two methods are verified experimentally. In the performed tests, three different implementations of EXCOM are compared with DEOSP, highlighting the advantages and downsides of both approaches.
The demand for performing data analysis is steadily rising. As a consequence, people of different profiles (i.e., non-experienced users) have started to analyze their data. However, this is challenging for them. A key step that poses difficulties and determines the success of the analysis is data mining (the model/algorithm selection problem). Meta-learning is a technique used for assisting non-expert users in this step. The effectiveness of meta-learning is, however, largely dependent on the description/characterization of datasets (i.e., the meta-features used for meta-learning). There is a need for improving the effectiveness of meta-learning by identifying and designing more predictive meta-features. In this work, we use a method from exploratory factor analysis to study the predictive power of different meta-features collected in OpenML, which is a collaborative machine learning platform that is designed to store and organize meta-data about datasets, data mining algorithms, models and their evaluations. We first use the method to extract latent features, which are abstract concepts that group together meta-features with common characteristics. Then, we study and visualize the relationship of the latent features with three different performance measures of four classification algorithms on hundreds of datasets available in OpenML, and we select the latent features with the highest predictive power. Finally, we use the selected latent features to perform meta-learning and we show that our method improves the meta-learning process. Furthermore, we design an easy-to-use application for retrieving different meta-data from OpenML as the biggest source of data in this domain.
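The latent-feature extraction step can be sketched numerically. The snippet below uses a plain SVD projection as a simple stand-in for the exploratory factor analysis used in the paper; the helper name and the toy meta-feature matrix are illustrative, not the authors' code.

```python
import numpy as np

def extract_latent_features(meta_features, n_latent):
    """Project a dataset-by-meta-feature matrix onto its leading
    principal directions (a simplified stand-in for factor analysis:
    each latent feature groups meta-features that co-vary)."""
    X = np.asarray(meta_features, dtype=float)
    X = X - X.mean(axis=0)              # centre each meta-feature
    # SVD directions capture the shared variance across meta-features
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_latent].T          # latent scores per dataset

# toy matrix: 5 datasets described by 4 meta-features
rng = np.random.default_rng(0)
meta = rng.normal(size=(5, 4))
latent = extract_latent_features(meta, 2)
print(latent.shape)  # (5, 2)
```

The resulting low-dimensional scores would then serve as the dataset description fed to the meta-learner.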
In online gambling, poker hands are one of the most popular and fundamental units of the game state and can be considered objects comprising all the events that pertain to the single hand played. In a situation where tens of millions of poker hands are produced daily and need to be stored and analysed quickly, the use of relational databases no longer provides high scalability and performance stability. The purpose of this paper is to present an efficient way of storing and retrieving poker hands in a big data environment. We propose a new, read-optimised storage model that offers significant data access improvements over traditional database systems as well as the existing Hadoop file formats such as ORC, RCFile or SequenceFile. Through index-oriented partition elimination, our file format allows reducing the number of file splits that need to be accessed, and improves query response time by up to three orders of magnitude in comparison with other approaches. In addition, our file format supports a range of new indexing structures to facilitate fast row retrieval at a split level. Both index types operate independently of the Hive execution context and allow other big data computational frameworks such as MapReduce or Spark to benefit from the optimised data access path to the hand information. Moreover, we present a detailed analysis of our storage model and its supporting index structures, and how they are organised in the overall data framework. We also describe in detail how predicate-based expression trees are used to build effective file-level execution plans. Our experimental tests conducted on a production cluster, holding nearly 40 billion hands which span over 4000 partitions, show that multi-way partition pruning outperforms other existing file formats, resulting in faster query execution times and better cluster utilisation.
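The partition-elimination idea can be illustrated with per-partition min/max statistics for a key column: a range predicate skips every partition whose interval cannot match. Partition names and figures below are hypothetical, not the actual file-format metadata.

```python
# hypothetical per-partition statistics for one key column
partitions = {
    "part-0": {"min": 0,    "max": 999},
    "part-1": {"min": 1000, "max": 1999},
    "part-2": {"min": 2000, "max": 2999},
}

def prune(partitions, lo, hi):
    """Keep only partitions whose [min, max] interval overlaps the
    queried key range [lo, hi]; the rest are never read."""
    return [name for name, s in partitions.items()
            if s["max"] >= lo and s["min"] <= hi]

print(prune(partitions, 1500, 2100))  # ['part-1', 'part-2']
```

The same overlap test, applied per predicate and combined through the expression tree, yields the file-level execution plan described in the paper.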
Imbalanced data classification is one of the most widespread challenges in contemporary pattern recognition. Varying levels of imbalance may be observed in most real datasets, affecting the performance of classification algorithms. In particular, high levels of imbalance cause serious difficulties, often requiring the use of specially designed methods. In such cases the most important issue is often to properly detect minority examples, but at the same time the performance on the majority class cannot be neglected. In this paper we describe a novel resampling technique focused on proper detection of minority examples in a two-class imbalanced data task. The proposed method combines cleaning the decision border around minority objects with guided synthetic oversampling. Results of the conducted experimental study indicate that the proposed algorithm usually outperforms the conventional oversampling approaches, especially when the detection of minority examples is considered.
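The synthetic-oversampling half of such methods can be sketched along SMOTE-like lines: each synthetic point is interpolated between a minority sample and its nearest minority neighbour. This is a simplified stand-in for the paper's guided variant; all names and points are illustrative.

```python
import random

def synthetic_oversample(minority, n_new, seed=0):
    """Generate n_new synthetic minority points, each lying on the
    segment between a random minority sample and its nearest
    minority neighbour (SMOTE-style interpolation)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # nearest other minority point (brute-force squared distance)
        b = min((p for p in minority if p is not a),
                key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)))
        t = rng.random()
        synthetic.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new_points = synthetic_oversample(minority, 4)
print(len(new_points))  # 4
```

The paper's contribution additionally cleans the decision border around minority objects before guiding where such interpolation takes place.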
When running data-mining algorithms on big data platforms, a parallel, distributed framework, such as MapReduce, may be used. However, in a parallel framework, each individual model fits the data allocated to its own computing node without necessarily fitting the entire dataset. In order to induce a single consistent model, ensemble algorithms, such as majority voting, aggregate the local models rather than analyzing the entire dataset directly. Our goal is to develop an efficient algorithm for choosing one representative model from multiple, locally induced decision-tree models. The proposed SySM (syntactic similarity method) algorithm computes the similarity between the models produced by parallel nodes and chooses the model which is most similar to the others as the best representative of the entire dataset. In 18.75% of 48 experiments on four big datasets, SySM accuracy is significantly higher than that of the ensemble; in about 43.75% of the experiments, SySM accuracy is significantly lower; in one case, the results are identical; and in the remaining 35.41% of cases the difference is not statistically significant. Compared with ensemble methods, the representative tree models selected by the proposed methodology are more compact and interpretable, their induction consumes less memory, and, as confirmed by the empirical results, they allow faster classification of new records.
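The selection rule can be sketched as a medoid choice over pairwise tree similarities. Here each tree is reduced to a set of split conditions and Jaccard similarity replaces full syntactic matching; both simplifications are ours, not part of SySM itself.

```python
def jaccard(a, b):
    """Similarity of two trees, each represented as a set of splits."""
    return len(a & b) / len(a | b)

def most_representative(trees):
    """Return the index of the tree with the highest total similarity
    to all the others (the medoid) -- the selection idea behind
    choosing one local model as the dataset's representative."""
    return max(range(len(trees)),
               key=lambda i: sum(jaccard(trees[i], trees[j])
                                 for j in range(len(trees)) if j != i))

# toy local models from three nodes, as sets of split conditions
trees = [
    {"x<1", "y<2"},
    {"x<1", "y<2", "z<3"},
    {"x<1", "w<0"},
]
print(most_representative(trees))  # 0
```

The chosen tree then classifies new records on its own, which is why it remains compact and interpretable compared with a voting ensemble.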
In this paper, a control framework including active fault-tolerant control (FTC) and reference redesign is developed subject to actuator stuck failures under input saturations. FTC synthesis and reference redesign approaches are proposed to guarantee post-fault system safety and reference reachability. Then, these features are analyzed under both actuator stuck failures and constraints before fault-tolerant controller switches. As the main contribution, actuator stuck failures and constraints are unified so that they can be easily considered simultaneously. By means of transforming stuck failures into actuator constraints, the post-fault system can be regarded as an equivalent system with only asymmetrical actuator constraints. Thus, methods against actuator saturations can be used to guarantee regional stability and produce the stability region. Based on this region, stuck compensation is analyzed. Specifically, an unstable open-loop system is considered, which is more challenging. Furthermore, the method is extended to a set-point tracking problem where the reachability of the original reference can be evaluated. Then, a new optimal reference will be computed for the post-fault system if the original one is unreachable. Finally, simulation examples are shown to illustrate the theoretical results.
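The unification of stuck failures and saturations can be illustrated directly: an actuator stuck at a value becomes a degenerate (asymmetric) constraint, after which only input-constraint handling remains. The numbers below are illustrative, not from the paper.

```python
def stuck_as_constraints(limits, stuck):
    """Recast stuck actuators as constraints: a channel stuck at
    value v gets the degenerate bounds [v, v], so the post-fault
    system is an equivalent one with only (asymmetric) input
    constraints."""
    return [(stuck[i], stuck[i]) if i in stuck else lim
            for i, lim in enumerate(limits)]

# three actuators with symmetric saturation; channel 1 sticks at 0.7
limits = [(-1.0, 1.0), (-2.0, 2.0), (-1.5, 1.5)]
print(stuck_as_constraints(limits, {1: 0.7}))
# [(-1.0, 1.0), (0.7, 0.7), (-1.5, 1.5)]
```

This is exactly the transformation that lets anti-saturation methods produce the stability region used for stuck compensation and reference reachability analysis.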
This paper develops a new actuator failure compensation scheme for two linked two-wheel drive (2WD) mobile robots based on multiple-model control. First, a configuration of two linked 2WD robots is described, and their kinematics and dynamics are modeled. Then, a multiple-model based failure compensation scheme is developed to compensate for actuator failures, consisting of a kinematic controller, multiple dynamic controllers and a control switching mechanism, which ensures system stability and asymptotic tracking properties. Finally, simulation results verify the effectiveness of the proposed failure compensation control system.
The frequent incremental release of software in agile development impacts the overall reliability of the product. In this paper, we propose a generic software reliability model for the agile process, taking permanent and transient faults into consideration. The proposed model is implemented using the NHPP (non-homogeneous Poisson process) and the Musa model. The comparison of the two implementations yields an effective, empirical and reliable model for agile software development.
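The NHPP component can be illustrated with the classic Goel-Okumoto mean value function m(t) = a(1 - e^{-bt}); the parameter values below are illustrative, not fitted to any agile release data.

```python
import math

def nhpp_expected_failures(a, b, t):
    """Goel-Okumoto NHPP mean value function m(t) = a(1 - e^{-bt}):
    expected cumulative failures by time t, where a is the total
    expected fault count and b the per-fault detection rate."""
    return a * (1.0 - math.exp(-b * t))

# with a = 100 total faults and rate b = 0.1, about 63% of faults
# are expected to surface by t = 10
m = nhpp_expected_failures(100, 0.1, 10)
print(round(m, 1))  # 63.2
```

A per-increment model along these lines, distinguishing permanent from transient faults, is the kind of construction the paper builds on.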
In this work, the problem of position regulation control is addressed for a 2DOF underactuated mechanical system with friction and backlash. For this purpose, a method combining sliding mode and H∞ control is developed. We prove that the application of the method to the nonlinear model considered results in an asymptotically stable set of equilibria. Moreover, it is possible to achieve a sufficiently small and bounded steady-state position error even in the presence of disturbances by employing the proposed technique. That is, the developed controller is able to account not only for unmatched external perturbations and model discrepancies of the test rig considered, but also for matched bounded perturbations. The control methodology is presented from both the theoretical and experimental angles to demonstrate the good performance of the proposed controller.
This article deals with a method for acquiring approximate displacement vibration functions. Input values are discrete, experimentally obtained mode shapes. A new improved approximation method based on the modal vibrations of the deck is derived using the least-squares method. An alternative approach employed in this paper is to approximate the displacement vibration function by a sum of sine functions whose periodicity is determined by spectral analysis adapted for non-uniformly sampled data and whose scale and phase parameters are estimated, as usual, by the least-squares method. Moreover, this periodic component is supplemented by a cubic regression spline (fitted on its residuals) that captures individual displacements between piers. The statistical evaluation of the stiffness parameter is performed using more vertical modes obtained from experimental results. The previous method (Sokol and Flesch, 2005), which was derived for the areas near the piers, has been extended to the whole length of the bridge. The experimental data describing the mode shapes are not appropriate for direct use; in particular, the higher derivatives calculated from these data are very sensitive to data precision.
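Once the frequencies are known (e.g., from spectral analysis), fitting the sine-sum reduces to linear least squares in the sine and cosine coefficients, which works equally well for non-uniformly sampled data. A minimal sketch on synthetic samples (the data are fabricated for illustration):

```python
import numpy as np

def fit_sines(x, y, freqs):
    """Least-squares fit of y ~ sum_k a_k sin(2*pi*f_k*x)
    + b_k cos(2*pi*f_k*x), with the frequencies f_k assumed known.
    Non-uniform sample points x are handled naturally."""
    cols = []
    for f in freqs:
        cols.append(np.sin(2 * np.pi * f * x))
        cols.append(np.cos(2 * np.pi * f * x))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, A @ coef

# non-uniform samples of a single sine, recovered exactly
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 50))
y = 2.0 * np.sin(2 * np.pi * 3.0 * x)
coef, fitted = fit_sines(x, y, [3.0])
print(round(float(coef[0]), 3))  # 2.0
```

In the paper's setting, the residuals of such a fit are what the cubic regression spline is then fitted to.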
In this paper we extend a stochastic discrete optimization algorithm so as to tackle the signal setting problem. Signalized junctions represent critical points of an urban transportation network, and the efficiency of their traffic signal setting influences the overall network performance. Since road congestion usually takes place at or close to junction areas, an improvement in signal settings contributes to improving travel times, drivers’ comfort, fuel consumption efficiency, pollution and safety. In a traffic network, the signal control strategy affects the travel time on the roads and influences drivers’ route choice behavior. The paper presents an algorithm for signal setting optimization of signalized junctions in a congested road network. The objective function used in this work is a weighted sum of delays caused by the signalized intersections. We propose an iterative procedure to solve the problem by alternately updating signal settings based on fixed flows and traffic assignment based on fixed signal settings. To show the robustness of our method, we consider two different assignment methods: one based on user equilibrium assignment, well established in the literature as well as in practice, and the other based on a platoon simulation model with vehicular flow propagation and spill-back. Our optimization algorithm is also compared with others well known in the literature for this problem. The surrogate method (SM), particle swarm optimization (PSO) and the genetic algorithm (GA) are compared for a combined problem of global optimization of signal settings and traffic assignment (GOSSTA). Numerical experiments on a real test network are reported.
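The alternating structure of the procedure can be sketched with toy models for both steps: proportional green splits for the fixed-flow signal-setting step, and a logit route choice for the fixed-signal assignment step. Neither toy model is the paper's actual delay function or propagation model; all numbers are hypothetical.

```python
import math

def green_splits(flows):
    """Signal-setting step (flows fixed): green shares proportional
    to approach flows -- a toy stand-in for delay minimisation."""
    total = sum(flows)
    return [f / total for f in flows]

def assign(demand, splits, theta=2.0):
    """Assignment step (signals fixed): logit route choice that
    shifts demand toward approaches with more green time."""
    w = [math.exp(theta * s) for s in splits]
    z = sum(w)
    return [demand * wi / z for wi in w]

# alternate the two steps, as in the iterative procedure
demand, flows = 1200.0, [800.0, 400.0]
for _ in range(20):
    splits = green_splits(flows)
    flows = assign(demand, splits)
print(round(sum(flows), 1))  # 1200.0
```

Demand is conserved at every assignment step, and the splits always form a valid green-time allocation; the paper's contribution lies in solving each step with realistic models (user equilibrium or platoon simulation) and comparing against SM, PSO and GA on GOSSTA.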
Aesthetic patterns are widely used nowadays, e.g., in jewellery design, carpet design, as textures and patterns on wallpapers, etc. Most of the work during the design stage is carried out by a designer manually. Therefore, it is highly useful to develop methods for aesthetic pattern generation. In this paper, we present methods for generating aesthetic patterns using the dynamics of a discrete dynamical system. The presented methods are based on the use of various iteration processes from fixed point theory (Mann, S, Noor, etc.) and the application of an affine combination of these iterations. Moreover, we propose new convergence tests that enrich the obtained patterns. The proposed methods generate patterns in a procedural way and can be easily implemented on the GPU. The presented examples show that using the proposed methods we are able to obtain a variety of interesting patterns. Moreover, the numerical examples show that the use of the GPU implementation with shaders allows the generation of patterns in real time and the speed-up (compared with a CPU implementation) ranges from about 1000 to 2500 times.
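The Mann iteration underlying such patterns replaces the Picard orbit z_{n+1} = f(z_n) with a convex combination of the point and its image. A minimal escape-time sketch (the polynomial, parameter values and escape radius are illustrative; a full pattern would evaluate this per pixel, e.g. in a GPU shader):

```python
def mann_orbit(f, z0, alpha, max_iter, escape=2.0):
    """Mann iteration z_{n+1} = (1 - alpha) z_n + alpha f(z_n);
    alpha = 1 recovers the standard Picard iteration of classic
    escape-time colouring. Returns the escape time (colour index)."""
    z = z0
    for n in range(max_iter):
        if abs(z) > escape:
            return n
        z = (1 - alpha) * z + alpha * f(z)
    return max_iter

# escape time of one point under z -> z^2 + c with c = 0.3 + 0.5j
c = 0.3 + 0.5j
t = mann_orbit(lambda z: z * z + c, 0j, 0.9, 30)
print(t)
```

Varying alpha (or combining several iteration schemes affinely, as the paper does with Mann, S and Noor iterations) deforms the escape-time regions and hence the generated pattern.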
Indoor scene classification forms a basis for scene interaction for service robots. The task is challenging because the layout and decoration of a scene vary considerably. Previous studies on knowledge-based methods commonly ignore the importance of visual attributes when constructing the knowledge base, and this shortcoming restricts classification performance. We propose a semantic hierarchy structure to describe similarities between different parts of scenes in a fine-grained way. Besides the commonly used semantic features, visual attributes are also introduced to construct the knowledge base. Inspired by the processes of human cognition and the characteristics of indoor scenes, we propose an inferential framework based on the Markov logic network. The framework is evaluated on a popular indoor scene dataset, and the experimental results demonstrate its effectiveness.
This paper analyzes and proposes a solution to the transfer pricing problem from the point of view of the Nash bargaining game theory approach. We consider a firm consisting of several divisions with sequential transfers, in which central management provides a transfer price decision that enables maximization of operating profits. Price transferring between divisions is negotiable throughout the bargaining approach. Initially, we consider a disagreement point (status quo) between the divisions of the firm, which plays the role of a deterrent. We propose a framework and a method based on the Nash equilibrium approach for computing the disagreement point. Then, we introduce a bargaining solution, which is a single-valued function that selects an outcome from the feasible pay-offs for each bargaining problem that is a result of cooperation of the divisions of the firm involved in the transfer pricing problem. The agreement reached by the divisions in the game is the most preferred alternative within the set of feasible outcomes, which produces a profit-maximizing allocation of the transfer price between divisions. For computing the bargaining solution, we propose an optimization method. An example illustrating the usefulness of the method is presented.
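On a discretised pay-off frontier, computing the bargaining solution reduces to maximising the Nash product over outcomes that dominate the disagreement point. The sketch below uses hypothetical numbers for a two-division transfer-pricing split; the paper's actual method handles sequential transfers and computes the disagreement point via a Nash equilibrium approach.

```python
def nash_bargaining(payoff_pairs, disagreement):
    """Select the feasible pay-off pair maximising the Nash product
    (u1 - d1) * (u2 - d2), restricted to pairs that give each
    division at least its disagreement (status quo) pay-off."""
    d1, d2 = disagreement
    candidates = [(u1, u2) for u1, u2 in payoff_pairs
                  if u1 >= d1 and u2 >= d2]
    return max(candidates, key=lambda p: (p[0] - d1) * (p[1] - d2))

# two divisions split a transfer surplus of 10 (hypothetical);
# with a symmetric disagreement point, the even split wins
frontier = [(k, 10 - k) for k in range(11)]
print(nash_bargaining(frontier, (2, 2)))  # (5, 5)
```

Shifting the disagreement point shifts the agreed allocation in favour of the division with the stronger outside option, which is exactly the deterrent role the status quo plays in the paper.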
This paper shows how big data analysis opens a range of research and technological problems and calls for new approaches. We start with defining the essential properties of big data and discussing the main types of data involved. We then survey the dedicated solutions for storing and processing big data, including a data lake, virtual integration, and a polystore architecture. Difficulties in managing data quality and provenance are also highlighted. The characteristics of big data imply also specific requirements and challenges for data mining algorithms, which we address as well. The links with related areas, including data streams and deep learning, are discussed. The common theme that naturally emerges from this characterization is complexity. All in all, we consider it to be the truly defining feature of big data (posing particular research and technological challenges), which ultimately seems to be of greater importance than the sheer data volume.
The paper discusses various approaches to mining co-location patterns with extended spatial objects. We focus on the properties of transaction-free approaches EXCOM and DEOSP, and discuss the differences between the method using a buffer and that employing clustering and triangulation. These theoretical differences between the two methods are verified experimentally. In the performed tests three different implementations of EXCOMare compared with DEOSP, highlighting the advantages and downsides of both approaches.
The demand for performing data analysis is steadily rising. As a consequence, people of different profiles (i.e., nonexperienced users) have started to analyze their data. However, this is challenging for them. A key step that poses difficulties and determines the success of the analysis is data mining (model/algorithm selection problem). Meta-learning is a technique used for assisting non-expert users in this step. The effectiveness of meta-learning is, however, largely dependent on the description/characterization of datasets (i.e., meta-features used for meta-learning). There is a need for improving the effectiveness of meta-learning by identifying and designing more predictive meta-features. In this work, we use a method from exploratory factor analysis to study the predictive power of different meta-features collected in OpenML, which is a collaborative machine learning platform that is designed to store and organize meta-data about datasets, data mining algorithms, models and their evaluations. We first use the method to extract latent features, which are abstract concepts that group together meta-features with common characteristics. Then, we study and visualize the relationship of the latent features with three different performance measures of four classification algorithms on hundreds of datasets available in OpenML, and we select the latent features with the highest predictive power. Finally, we use the selected latent features to perform meta-learning and we show that our method improves the meta-learning process. Furthermore, we design an easy to use application for retrieving different meta-data from OpenML as the biggest source of data in this domain.
In online gambling, poker hands are one of the most popular and fundamental units of the game state and can be considered objects comprising all the events that pertain to the single hand played. In a situation where tens of millions of poker hands are produced daily and need to be stored and analysed quickly, the use of relational databases no longer provides high scalability and performance stability. The purpose of this paper is to present an efficient way of storing and retrieving poker hands in a big data environment. We propose a new, read-optimised storage model that offers significant data access improvements over traditional database systems as well as the existing Hadoop file formats such as ORC, RCFile or SequenceFile. Through index-oriented partition elimination, our file format allows reducing the number of file splits that needs to be accessed, and improves query response time up to three orders of magnitude in comparison with other approaches. In addition, our file format supports a range of new indexing structures to facilitate fast row retrieval at a split level. Both index types operate independently of the Hive execution context and allow other big data computational frameworks such as MapReduce or Spark to benefit from the optimized data access path to the hand information. Moreover, we present a detailed analysis of our storage model and its supporting index structures, and how they are organised in the overall data framework. We also describe in detail how predicate based expression trees are used to build effective file-level execution plans. Our experimental tests conducted on a production cluster, holding nearly 40 billion hands which span over 4000 partitions, show that multi-way partition pruning outperforms other existing file formats, resulting in faster query execution times and better cluster utilisation.
Imbalanced data classification is one of the most widespread challenges in contemporary pattern recognition. Varying levels of imbalance may be observed in most real datasets, affecting the performance of classification algorithms. Particularly, high levels of imbalance make serious difficulties, often requiring the use of specially designed methods. In such cases the most important issue is often to properly detect minority examples, but at the same time the performance on the majority class cannot be neglected. In this paper we describe a novel resampling technique focused on proper detection of minority examples in a two-class imbalanced data task. The proposed method combines cleaning the decision border around minority objects with guided synthetic oversampling. Results of the conducted experimental study indicate that the proposed algorithm usually outperforms the conventional oversampling approaches, especially when the detection of minority examples is considered.
When running data-mining algorithms on big data platforms, a parallel, distributed framework, such asMAPREDUCE, may be used. However, in a parallel framework, each individual model fits the data allocated to its own computing node without necessarily fitting the entire dataset. In order to induce a single consistent model, ensemble algorithms such as majority voting, aggregate the local models, rather than analyzing the entire dataset directly. Our goal is to develop an efficient algorithm for choosing one representative model from multiple, locally induced decision-tree models. The proposed SySM (syntactic similarity method) algorithm computes the similarity between the models produced by parallel nodes and chooses the model which is most similar to others as the best representative of the entire dataset. In 18.75% of 48 experiments on four big datasets, SySM accuracy is significantly higher than that of the ensemble; in about 43.75% of the experiments, SySM accuracy is significantly lower; in one case, the results are identical; and in the remaining 35.41% of cases the difference is not statistically significant. Compared with ensemble methods, the representative tree models selected by the proposed methodology are more compact and interpretable, their induction consumes less memory, and, as confirmed by the empirical results, they allow faster classification of new records.
In this paper, a control framework including active fault-tolerant control (FTC) and reference redesign is developed subject to actuator stuck failures under input saturations. FTC synthesis and reference redesign approaches are proposed to guarantee post-fault system safety and reference reachability. Then, these features are analyzed under both actuator stuck failures and constraints before fault-tolerant controller switches. As the main contribution, actuator stuck failures and constraints are unified so that they can be easily considered simultaneously. By means of transforming stuck failures into actuator constraints, the post-fault system can be regarded as an equivalent system with only asymmetrical actuator constraints. Thus, methods against actuator saturations can be used to guarantee regional stability and produce the stability region. Based on this region, stuck compensation is analyzed. Specifically, an unstable open-loop system is considered, which is more challenging. Furthermore, the method is extended to a set-point tracking problem where the reachability of the original reference can be evaluated. Then, a new optimal reference will be computed for the post-fault system if the original one is unreachable. Finally, simulation examples are shown to illustrate the theoretical results.
This paper develops a new actuator failure compensation scheme for two linked two-wheel drive (2WD) mobile robots based on multiple-model control. First, a configuration of two linked 2WD robots is described, and their kinematics and dynamics are modeled. Then, a multiple-model-based failure compensation scheme, consisting of a kinematic controller, multiple dynamic controllers and a control switching mechanism, is developed to compensate for actuator failures while ensuring system stability and asymptotic tracking. Finally, simulation results verify the effectiveness of the proposed failure compensation control system.
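The switching mechanism can be sketched as picking, at each step, the controller whose failure model best explains the measured output. This is only a schematic view of multiple-model switching; the error measure and data here are hypothetical.

```python
# Hypothetical sketch of multiple-model control switching: each candidate
# failure model predicts the output, and the controller paired with the
# best-matching model is selected.

def select_controller(y_measured, model_predictions):
    """Return the index of the failure model with the smallest
    one-step prediction error (ties resolved to the first model)."""
    errors = [abs(y_measured - y_hat) for y_hat in model_predictions]
    return errors.index(min(errors))

# Three candidate models (e.g., no failure, left-motor failure,
# right-motor failure) predict the next output; pick the closest.
active = select_controller(1.0, [0.2, 0.9, 3.0])
```

In practice the switching logic also includes dwell-time or hysteresis safeguards so that stability is preserved across switches.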
The frequent incremental release of software in agile development impacts the overall reliability of the product. In this paper, we propose a generic software reliability model for the agile process, taking both permanent and transient faults into consideration. The proposed model is implemented using the NHPP (non-homogeneous Poisson process) and the Musa model. The comparison of the two implementations yields an effective, empirical and reliable model for agile software development.
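As background, a minimal NHPP sketch of the Goel-Okumoto type is shown below: the mean value function gives the expected cumulative fault count, and software reliability follows from the increment of that function. The parameter values are illustrative, not the paper's fitted ones.

```python
import math

# Minimal NHPP (Goel-Okumoto type) sketch. Parameters a (total expected
# faults) and b (fault detection rate) are illustrative assumptions.

def mean_faults(t, a=100.0, b=0.05):
    """Expected cumulative number of faults detected by time t:
    m(t) = a * (1 - exp(-b * t))."""
    return a * (1.0 - math.exp(-b * t))

def reliability(x, t, a=100.0, b=0.05):
    """Probability of no failure in (t, t+x], given testing up to t:
    R(x | t) = exp(-(m(t+x) - m(t)))."""
    return math.exp(-(mean_faults(t + x, a, b) - mean_faults(t, a, b)))
```

Because m(t) flattens as testing progresses, the reliability over a fixed horizon x grows with accumulated testing time t, which matches the intuition behind incremental agile releases.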
In this work, the problem of position regulation control is addressed for a 2DOF underactuated mechanical system with friction and backlash. For this purpose, a method combining sliding mode and H∞ control is developed. We prove that the application of the method to the nonlinear model considered results in an asymptotically stable equilibrium set. Moreover, it is possible to achieve a sufficiently small and bounded steady-state position error even in the presence of disturbances by employing the proposed technique. That is, the developed controller is able to account not only for unmatched external perturbations and model discrepancies of the test rig considered, but also for matched bounded perturbations. The control methodology is presented from both theoretical and experimental perspectives to demonstrate the good performance of the proposed controller.
This article presents a method for acquiring approximate displacement vibration functions, with discrete, experimentally obtained mode shapes as input values. A new, improved approximation method based on the modal vibrations of the deck is derived using the least-squares method. The approach adopted here approximates the displacement vibration function by a sum of sine functions whose periodicity is determined by spectral analysis adapted for non-uniformly sampled data, and whose scale and phase parameters are estimated, as usual, by the least-squares method. This periodic component is supplemented by a cubic regression spline, fitted on its residuals, which captures individual displacements between piers. The statistical evaluation of the stiffness parameter is performed using several vertical modes obtained from experimental results. The previous method (Sokol and Flesch, 2005), which was derived for the areas near the piers, has been extended to the whole length of the bridge. The experimental data describing the mode shapes are not suitable for direct use; in particular, the higher derivatives calculated from these data are very sensitive to data precision.
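The least-squares step is linear once a candidate frequency is known from spectral analysis: writing a*sin(w*t + phi) as A*sin(w*t) + B*cos(w*t) reduces the fit to two normal equations. The sketch below shows this for a single frequency on non-uniformly sampled data; it is a generic illustration, not the paper's full multi-term procedure.

```python
import math

# Sketch of the linear least-squares step for one sine term on
# non-uniformly sampled data. The frequency w is assumed to come from a
# prior spectral analysis (e.g., a Lomb-Scargle-type periodogram).

def fit_sine(t, y, w):
    """Fit y ~ A*sin(w*t) + B*cos(w*t) by normal equations;
    return (amplitude, phase) of a*sin(w*t + phase)."""
    s = [math.sin(w * ti) for ti in t]
    c = [math.cos(w * ti) for ti in t]
    ss = sum(si * si for si in s)
    cc = sum(ci * ci for ci in c)
    sc = sum(si * ci for si, ci in zip(s, c))
    sy = sum(si * yi for si, yi in zip(s, y))
    cy = sum(ci * yi for ci, yi in zip(c, y))
    det = ss * cc - sc * sc
    A = (cc * sy - sc * cy) / det
    B = (ss * cy - sc * sy) / det
    return math.hypot(A, B), math.atan2(B, A)
```

Repeating this for each detected frequency and then fitting a cubic regression spline to the residuals reproduces the structure of the approximation described above.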
In this paper we extend a stochastic discrete optimization algorithm so as to tackle the signal setting problem. Signalized junctions represent critical points of an urban transportation network, and the efficiency of their traffic signal setting influences the overall network performance. Since road congestion usually takes place at or close to junction areas, an improvement in signal settings contributes to improving travel times, drivers’ comfort, fuel consumption efficiency, pollution and safety. In a traffic network, the signal control strategy affects the travel time on the roads and influences drivers’ route choice behavior. The paper presents an algorithm for signal setting optimization of signalized junctions in a congested road network. The objective function used in this work is a weighted sum of delays caused by the signalized intersections. We propose an iterative procedure to solve the problem by alternately updating signal settings based on fixed flows and traffic assignment based on fixed signal settings. To show the robustness of our method, we consider two different assignment methods: one based on user equilibrium assignment, well established in the literature as well as in practice, and the other based on a platoon simulation model with vehicular flow propagation and spill-back. Our optimization algorithm is also compared with others well known in the literature for this problem. The surrogate method (SM), particle swarm optimization (PSO) and the genetic algorithm (GA) are compared for a combined problem of global optimization of signal settings and traffic assignment (GOSSTA). Numerical experiments on a real test network are reported.
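The alternating structure of the iterative procedure can be sketched with a toy two-approach junction: optimize the green split for fixed flows, reassign flows for the fixed split, and repeat until the changes are negligible. The delay and route-choice models below are deliberately trivial stand-ins for the paper's signal optimizer and assignment models.

```python
# Toy illustration of alternating signal-setting optimization and traffic
# assignment. Both update rules are hypothetical one-variable stand-ins:
# "split" and "flow" are the green-time and flow shares of approach 1.

def optimize_split(flow):
    """Best green split for fixed flows (toy rule: green time
    proportional to the approach's flow share)."""
    return flow

def assign_flow(split, sensitivity=0.5):
    """Reassign flows for fixed settings (toy rule: drivers shift
    toward the approach that received more green time)."""
    return 0.5 + sensitivity * (split - 0.5)

def alternate(tol=1e-9, max_iter=100):
    """Iterate the two updates until the combined change is below tol."""
    split, flow = 0.5, 0.9  # arbitrary initial settings and flows
    for _ in range(max_iter):
        new_split = optimize_split(flow)
        new_flow = assign_flow(new_split)
        if abs(new_split - split) + abs(new_flow - flow) < tol:
            break
        split, flow = new_split, new_flow
    return split, flow
```

In the toy model the iteration contracts geometrically to a mutually consistent split/flow pair; the paper's GOSSTA setting replaces both updates with the full signal optimizer and equilibrium (or platoon-simulation) assignment.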
Aesthetic patterns are widely used nowadays, e.g., in jewellery design, in carpet design, and as textures and wallpaper patterns. Most of the work during the design stage is carried out by a designer manually. Therefore, it is highly useful to develop methods for aesthetic pattern generation. In this paper, we present methods for generating aesthetic patterns using the dynamics of a discrete dynamical system. The presented methods are based on the use of various iteration processes from fixed point theory (Mann, S, Noor, etc.) and the application of an affine combination of these iterations. Moreover, we propose new convergence tests that enrich the obtained patterns. The proposed methods generate patterns in a procedural way and can be easily implemented on the GPU. The presented examples show that using the proposed methods we are able to obtain a variety of interesting patterns. Moreover, the numerical examples show that the use of the GPU implementation with shaders allows the generation of patterns in real time and the speed-up (compared with a CPU implementation) ranges from about 1000 to 2500 times.
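To make the idea concrete, here is a minimal escape-time colouring driven by the Mann iteration for the familiar map f(z) = z² + c. The map, the escape-radius test and the parameter values are simplifying assumptions; the paper uses a family of iterations (Mann, S, Noor, their affine combinations) and richer convergence tests.

```python
# Minimal sketch of Mann-iteration escape-time colouring for
# f(z) = z**2 + c. With alpha = 1 this reduces to the standard Picard
# iteration; other alpha values deform the resulting pattern.

def mann_escape_time(c, alpha=0.7, max_iter=64, radius=2.0):
    """Number of Mann iterations z <- (1-a)z + a*f(z) before |z|
    exceeds the escape radius; max_iter means 'did not escape'."""
    z = 0j
    for n in range(max_iter):
        z = (1 - alpha) * z + alpha * (z * z + c)  # Mann iteration step
        if abs(z) > radius:
            return n  # escaped: colour the pixel by iteration count
    return max_iter  # treated as convergent (inside the pattern)
```

Evaluating this per pixel over a grid of c values yields the pattern; since every pixel is independent, the computation maps directly onto GPU fragment shaders, which is where the reported speed-ups come from.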
Indoor scene classification forms a basis for scene interaction for service robots. The task is challenging because the layout and decoration of a scene vary considerably. Previous studies on knowledge-based methods commonly ignore the importance of visual attributes when constructing the knowledge base, which restricts classification performance. We propose a semantic hierarchy structure to describe similarities between different parts of scenes in a fine-grained way. Besides the commonly used semantic features, visual attributes are also introduced to construct the knowledge base. Inspired by the processes of human cognition and the characteristics of indoor scenes, we propose an inferential framework based on the Markov logic network. The framework is evaluated on a popular indoor scene dataset, and the experimental results demonstrate its effectiveness.
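A crude way to picture weighted-rule inference of this kind is to score each scene class by the total weight of its satisfied rules over the detected objects and attributes, then normalize with a softmax. This is a drastic simplification of Markov logic network inference; the rules, weights and classes below are hypothetical.

```python
import math

# Naive weighted-rule scoring in the spirit of MLN inference (a crude
# simplification, not actual MLN grounding/inference). Each rule is a
# (required-evidence set, weight) pair; all entries are hypothetical.

RULES = {
    "kitchen": [({"stove"}, 2.0), ({"sink"}, 1.0)],
    "office": [({"desk"}, 2.0), ({"monitor"}, 1.5)],
}

def classify(observed):
    """Return a softmax distribution over scene classes given a set of
    observed objects/attributes."""
    scores = {
        cls: sum(w for req, w in rules if req <= observed)
        for cls, rules in RULES.items()
    }
    z = sum(math.exp(s) for s in scores.values())
    return {cls: math.exp(s) / z for cls, s in scores.items()}
```

Adding visual attributes simply means admitting attribute evidence (e.g., "tiled floor") into the rule antecedents alongside object labels, which is the gap in earlier knowledge bases that the work addresses.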
This paper analyzes and proposes a solution to the transfer pricing problem from the point of view of the Nash bargaining game theory approach. We consider a firm consisting of several divisions with sequential transfers, in which central management provides a transfer price decision that enables maximization of operating profits. Price transferring between divisions is negotiated through the bargaining approach. Initially, we consider a disagreement point (status quo) between the divisions of the firm, which plays the role of a deterrent. We propose a framework and a method based on the Nash equilibrium approach for computing the disagreement point. Then, we introduce a bargaining solution, which is a single-valued function that selects an outcome from the feasible pay-offs for each bargaining problem arising from the cooperation of the divisions involved in the transfer pricing problem. The agreement reached by the divisions in the game is the most preferred alternative within the set of feasible outcomes, which produces a profit-maximizing allocation of the transfer price between divisions. For computing the bargaining solution, we propose an optimization method. An example illustrating the usefulness of the method is presented.
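The Nash bargaining solution maximizes the product of the divisions' gains over their disagreement pay-offs. A toy two-division sketch on a linear pay-off frontier is shown below; the frontier, the disagreement point and the grid-search optimizer are illustrative assumptions, not the paper's model or method.

```python
# Toy sketch of the two-player Nash bargaining solution on a linear
# pay-off frontier u1 + u2 = 1 with disagreement point d = (d1, d2).
# A simple grid search stands in for a proper optimization method.

def nash_bargaining(d1=0.2, d2=0.1, steps=10000):
    """Maximize the Nash product (u1 - d1) * (u2 - d2) subject to
    u1 + u2 = 1 and individual rationality u_i >= d_i."""
    best, best_u1 = -1.0, None
    for i in range(steps + 1):
        u1 = i / steps
        u2 = 1.0 - u1
        if u1 >= d1 and u2 >= d2:  # both divisions must gain vs. d
            p = (u1 - d1) * (u2 - d2)
            if p > best:
                best, best_u1 = p, u1
    return best_u1, 1.0 - best_u1
```

On this frontier the solution splits the surplus above the disagreement point equally, so a stronger status quo (larger d1) shifts the agreed transfer-price allocation toward division 1.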