Published Online: 23 Feb 2022 Page range: 79 - 100
Abstract
Context: Machine Learning (ML) is a disruptive concept that has given rise to, and generated interest in, applications across many fields of study. The purpose of ML is to solve real-life problems by automatically learning and improving from experience without being explicitly programmed for a specific problem, but rather for a generic type of problem. This article surveys the applications of ML across a series of econometric methods.
Objective: The objective of this research is to identify the latest applications of ML and to carry out a comparative study of the performance of econometric and ML models. The study aimed to find empirical evidence on whether the performance of ML algorithms is superior to that of traditional econometric models. The research follows the systematic mapping of literature methodology, according to the guidelines established by [39] and [58], which facilitate the identification of published studies on this subject.
Results: The results show that in most cases ML outperforms econometric models, while in other cases the best performance is achieved by combining traditional methods with ML techniques.
Conclusion: Inclusion and exclusion criteria were applied and 52 closely related articles were reviewed. The conclusion drawn from this research is that this is a growing field, as is widely recognized, and that there is no certainty that the performance of ML is always superior to that of econometric models.
Published Online: 23 Feb 2022 Page range: 101 - 120
Abstract
There has been an increasing focus on, and benefit from, the adoption of artificial intelligence (AI) in medical imaging applications. However, deep learning approaches require training with massive amounts of annotated data in order to guarantee generalization and achieve high accuracy. Gathering and annotating large sets of training images requires expertise that is both expensive and time-consuming, especially in the medical field. Furthermore, in healthcare systems where mistakes can have catastrophic consequences, there is a general mistrust of the black-box aspect of AI models. In this work, we aim to improve the performance of medical imaging applications when limited data is available, while also addressing the interpretability of the proposed AI model. This is achieved by employing a novel transfer learning framework, progressive transfer learning, together with an automated annotation technique and a correlation analysis experiment on the learned representations.
Progressive transfer learning helps jump-start the training of deep neural networks and improves performance by gradually transferring knowledge from two source tasks into the target task. It is empirically tested on wrist fracture detection by first training a general radiology network, RadiNet, and using its weights to initialize RadiNet_wrist, which is then trained on wrist images to detect fractures. Experiments show that RadiNet_wrist achieves an accuracy of 87% and an ROC AUC of 94%, as opposed to 83% and 92% when it is pre-trained on the ImageNet dataset.
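To make the two-stage transfer concrete, the sketch below shows one plausible way to implement the weight hand-off in PyTorch. The abstract does not describe RadiNet's architecture, so an ImageNet-pretrained ResNet-50 backbone stands in for it, and the checkpoint name and label counts are hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import models

# Minimal sketch of the two-stage (progressive) transfer described above.
# The real RadiNet architecture is not given in the abstract; a ResNet-50
# backbone is used purely as a stand-in, and the file name is hypothetical.

def build_radinet(num_classes: int) -> nn.Module:
    """Generic radiology classifier: ImageNet-pretrained backbone + new head."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Stage 1: train a general radiology network on many radiology labels.
radinet = build_radinet(num_classes=14)                   # assumed label count
# ... train radinet on the general radiology dataset ...
torch.save(radinet.state_dict(), "radinet_general.pt")    # hypothetical checkpoint

# Stage 2: initialize the wrist model from the general radiology weights,
# replace the head for the binary fracture task, then fine-tune on wrist images.
radinet_wrist = build_radinet(num_classes=14)
radinet_wrist.load_state_dict(torch.load("radinet_general.pt"))
radinet_wrist.fc = nn.Linear(radinet_wrist.fc.in_features, 2)  # fracture / no fracture
# ... fine-tune radinet_wrist on the wrist fracture dataset ...
```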
This improvement in performance is investigated within an explainable AI framework. More concretely, the deep representations learned by RadiNet_wrist are compared to those learned by the baseline model by conducting a correlation analysis experiment. The results show that, when transfer learning is applied gradually, some features are learned earlier in the network. Moreover, the deep layers in the progressive transfer learning framework are shown to encode features that are not encountered when traditional transfer learning techniques are applied.
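The correlation analysis itself is not specified in detail; as a rough illustration of how the learned representations of two models might be compared, the snippet below captures activations from a matching layer in each network and computes a simple per-image Pearson correlation. The layer choice, pooling, and correlation measure are assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

# Rough illustration only: correlate the activations two models produce for the
# same batch at a matching layer. Pooling and the correlation measure are assumptions.

def pooled_activations(model: nn.Module, layer: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Return globally average-pooled activations of `layer` for input batch `x`."""
    captured = []
    handle = layer.register_forward_hook(lambda mod, inp, out: captured.append(out.detach()))
    with torch.no_grad():
        model(x)
    handle.remove()
    return captured[0].mean(dim=(2, 3))      # (N, C) per-image feature vectors

def mean_pearson(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Mean per-image Pearson correlation between two (N, C) activation matrices."""
    a = (a - a.mean(dim=1, keepdim=True)) / (a.std(dim=1, keepdim=True) + 1e-8)
    b = (b - b.mean(dim=1, keepdim=True)) / (b.std(dim=1, keepdim=True) + 1e-8)
    return (a * b).sum(dim=1).div(a.shape[1] - 1).mean()
```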
In addition to the empirical results, a clinical study is conducted and the performance of RadiNet_wrist is compared to that of expert radiologists. We found that RadiNet_wrist exhibits performance similar to that of radiologists with more than 20 years of experience.
This motivates follow-up research to train on more data, with the aim of surpassing radiologists’ performance, and to further investigate the interpretability of AI models in the healthcare domain, where the decision-making process needs to be credible and transparent.
Published Online: 23 Feb 2022 Page range: 121 - 133
Abstract
Text-based CAPTCHA is a convenient and effective security mechanism that has been widely deployed across websites. Efficient end-to-end scene text recognition models consisting of a CNN and an attention-based RNN show limited performance in solving text-based CAPTCHAs. In contrast to street-view images and documents, the character sequence in a CAPTCHA is non-semantic: the RNN loses its ability to learn semantic context and only implicitly encodes the relative position of the extracted features. Meanwhile, the security features that prevent characters from being segmented and recognized greatly increase the complexity of CAPTCHAs, and the performance of such models is sensitive to the particular CAPTCHA scheme. In this paper, we analyze the properties of text-based CAPTCHAs and accordingly treat solving them as a highly position-relative character sequence recognition task. We propose a network named PosConv to leverage the position information in the character sequence without an RNN. PosConv uses a novel padding strategy and a modified convolution to explicitly encode the relative position into the local features of characters. This mechanism makes the features extracted from CAPTCHAs more informative and robust. We validate PosConv on six text-based CAPTCHA schemes, and it achieves state-of-the-art or competitive recognition accuracy with significantly fewer parameters and faster convergence.
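The abstract does not spell out the exact layer design of PosConv; as an illustration of the general idea of explicitly encoding relative position into local convolutional features, the sketch below appends normalized coordinate channels before the convolution (a CoordConv-style variant). The class name, sizes, and padding choice are assumptions and not the authors' implementation.

```python
import torch
import torch.nn as nn

class PositionAwareConv(nn.Module):
    """Illustrative stand-in for a position-aware convolution: append normalized
    x/y coordinate channels before convolving, so each local feature explicitly
    carries its position in the CAPTCHA image. Not the paper's exact PosConv."""

    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels + 2, out_channels,
                              kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, _, h, w = x.shape
        ys = torch.linspace(-1.0, 1.0, h, device=x.device).view(1, 1, h, 1).expand(n, 1, h, w)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device).view(1, 1, 1, w).expand(n, 1, h, w)
        return self.conv(torch.cat([x, ys, xs], dim=1))

# Example: a downsampled feature map from a CAPTCHA image.
feats = torch.randn(8, 64, 15, 40)        # (batch, channels, H, W)
layer = PositionAwareConv(64, 128)
out = layer(feats)                         # -> (8, 128, 15, 40)
```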
Published Online: 23 Feb 2022 Page range: 135 - 148
Abstract
Most reinforcement learning benchmarks – especially in multi-agent tasks – do not go beyond observations with simple noise; real scenarios, however, induce more elaborate vision pipeline failures: false sightings, misclassifications or occlusion. In this work, we propose a lightweight 2D environment for robot soccer and autonomous driving that can emulate the above discrepancies. Besides establishing a benchmark for accessible multi-agent reinforcement learning research, our work addresses the challenges the simulator imposes. To handle realistic noise, we use self-supervised learning to enhance scene reconstruction and extend curiosity-driven learning to model longer horizons. Our extensive experiments show that the proposed methods achieve state-of-the-art performance compared against actor-critic methods, ICM, and PPO.
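Curiosity-driven learning is described only at a high level here; the snippet below sketches the usual ICM-style idea of using the prediction error of a learned forward model as an intrinsic reward bonus. The network sizes, the feature space (raw observations instead of a learned embedding), and the reward scale are simplifying assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Illustrative ICM-style curiosity bonus: reward the agent in proportion to how
# poorly a learned forward model predicts the next observation. Sizes and the
# use of raw observations (rather than learned features) are simplifications.

class ForwardModel(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, obs_dim),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act], dim=-1))

def curiosity_reward(model: ForwardModel, obs, act, next_obs, scale: float = 0.1):
    """Intrinsic reward = scaled prediction error of the forward model."""
    with torch.no_grad():
        pred = model(obs, act)
    return scale * (pred - next_obs).pow(2).mean(dim=-1)

# The reward fed to the policy is then extrinsic + intrinsic, e.g.:
# r_total = r_env + curiosity_reward(forward_model, obs, act, next_obs)
```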
Published Online: 23 Feb 2022 Page range: 149 - 163
Abstract
Security threats, among other intrusions affecting the availability, confidentiality and integrity of IT resources and services, are spreading fast and can cause serious harm to organizations. Intrusion detection plays a key role in capturing intrusions. In particular, the application of machine learning methods in this area can improve intrusion detection efficiency. Various methods, such as pattern recognition from event logs, can be applied to intrusion detection. The main goal of our research is to present a possible intrusion detection approach using recent machine learning techniques. In this paper, we propose and evaluate the use of stacked ensembles consisting of neural network (SNN) and autoencoder (AE) models, augmented with a tree-structured Parzen estimator hyperparameter optimization approach, for intrusion detection. The main contribution of our work is the combined application of advanced hyperparameter optimization and stacked ensembles.
We conducted several experiments to check the effectiveness of our approach. We used the NSL-KDD dataset, a common benchmark dataset in intrusion detection, to train our models. The comparative results demonstrate that our proposed models can compete with and, in some cases, outperform existing models.
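For readers unfamiliar with the tree-structured Parzen estimator (TPE), the snippet below sketches how such a search could drive the tuning of one base learner using the hyperopt library. The search space, the number of evaluations, and the `train_and_validate` helper are hypothetical placeholders, not the configuration used in the paper.

```python
from hyperopt import fmin, tpe, hp, STATUS_OK, Trials

# Placeholder search space for a neural-network base learner; the ranges and the
# objective below are illustrative, not the paper's configuration.
space = {
    "hidden_units": hp.choice("hidden_units", [64, 128, 256]),
    "dropout": hp.uniform("dropout", 0.0, 0.5),
    "learning_rate": hp.loguniform("learning_rate", -9, -3),  # ~1e-4 .. 5e-2
}

def objective(params):
    # train_and_validate is a hypothetical helper that trains one base model
    # (e.g., an NN or AE) on NSL-KDD and returns its validation accuracy.
    val_accuracy = train_and_validate(params)
    return {"loss": -val_accuracy, "status": STATUS_OK}

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=100, trials=trials)
print("Best hyperparameters:", best)
```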