Journal & Issues

Volume 28 (2023): Issue 1 (June 2023)

Volume 27 (2022): Issue 2 (December 2022)

Volume 27 (2022): Issue 1 (June 2022)

Volume 26 (2021): Issue 2 (December 2021)

Volume 26 (2021): Issue 1 (May 2021)

Volume 25 (2020): Issue 2 (December 2020)

Volume 25 (2020): Issue 1 (May 2020)

Volume 24 (2019): Issue 2 (December 2019)

Volume 24 (2019): Issue 1 (May 2019)

Volume 23 (2018): Issue 2 (December 2018)

Volume 23 (2018): Issue 1 (May 2018)

Volume 22 (2017): Issue 1 (December 2017)

Volume 21 (2017): Issue 1 (May 2017)

Volume 20 (2016): Issue 1 (December 2016)

Volume 19 (2016): Issue 1 (May 2016)

Volume 18 (2015): Issue 1 (December 2015)

Volume 17 (2015): Issue 1 (May 2015)

Volume 16 (2014): Issue 1 (December 2014)

Volume 15 (2014): Issue 1 (July 2014)

Volume 14 (2013): Issue 1 (June 2013)

Volume 13 (2012): Issue 1 (November 2012)

Journal Details
Format
Journal
eISSN
2255-8691
First Published
08 Nov 2012
Publication timeframe
2 times per year
Languages
English

Volume 24 (2019): Issue 2 (December 2019)

Open Access

A Dataset-Independent Model for Estimating Software Development Effort Using Soft Computing Techniques

Published Online: 20 Feb 2020
Page range: 82 - 93

Abstract

In recent years, numerous efforts have been made in the area of software development effort estimation to calculate software costs at the preliminary development stages, and these studies have produced a great many models. Despite this effort, the substantial problems of the proposed methods are their dependency on the dataset used and, sometimes, their lack of adequate efficiency. The present article presents a model for software development effort estimation that makes use of evolutionary algorithms and neural networks. The distinctive characteristic of this model is its lack of dependency on the dataset used, as well as its high efficiency. To evaluate the proposed model, six different datasets from the area of software effort estimation have been used. Several datasets are applied in order to investigate whether the model performance is independent of the dataset used. The evaluation measures are MMRE, MdMRE and PRED(0.25). The results indicate that the proposed model, besides delivering high efficiency in comparison with its counterparts, produces the best responses for all of the datasets used.
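
MMRE, MdMRE and PRED(0.25) are standard relative-error measures in effort estimation studies. A minimal sketch of how they are commonly computed is shown below; the effort values in the example are invented and the code is not taken from the paper.

```python
# Illustrative sketch (not the authors' code): the three evaluation measures
# named in the abstract, computed for a set of effort predictions.
import numpy as np

def mre(actual, predicted):
    """Magnitude of relative error for each project."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.abs(actual - predicted) / actual

def mmre(actual, predicted):
    """Mean magnitude of relative error."""
    return mre(actual, predicted).mean()

def mdmre(actual, predicted):
    """Median magnitude of relative error."""
    return np.median(mre(actual, predicted))

def pred(actual, predicted, level=0.25):
    """Share of projects whose relative error is within `level`, i.e., PRED(0.25)."""
    return (mre(actual, predicted) <= level).mean()

# Example with made-up effort values (person-hours).
actual = [277, 1181, 287, 3345, 624]
predicted = [300, 1000, 250, 3100, 900]
print(mmre(actual, predicted), mdmre(actual, predicted), pred(actual, predicted))
```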

Keywords

  • Clustering
  • estimation
  • feature selection
  • genetic algorithm
  • imperialist competitive algorithm
  • neural network
  • regression
  • software development effort
Open Access

Extracting TFM Core Elements From Use Case Scenarios by Processing Structure and Text in Natural Language

Published Online: 20 Feb 2020
Page range: 94 - 103

Abstract

Extracting the core elements of a Topological Functioning Model (TFM) from use case scenarios requires processing both the structure and the natural language constructs in use case step descriptions. The processing steps are discussed in the present paper. Analysis of natural language constructs is based on the outcomes provided by Stanford CoreNLP, a natural language processing pipeline that allows analysing text at the paragraph, sentence and word levels. The proposed technique allows extracting actions, objects, results, preconditions, post-conditions and executors of the functional features, as well as cause-effect relations between them. However, its accuracy depends on the language constructs used and on the accuracy of the specification of event flows. The analysis of the results leads to the conclusion that even use case specifications require a rigorous, or even uniform, structure of paths and sentences, as well as awareness of possible parsing errors.
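
As a rough illustration of the kind of extraction described above, the sketch below pulls candidate executors (subjects), actions (verbs) and objects out of a use case step by dependency parsing. spaCy is used here purely as a convenient stand-in for the Stanford CoreNLP pipeline on which the paper actually relies; the example sentence is invented.

```python
# Illustrative sketch only: extracting candidate executors (subjects), actions
# (verbs) and objects from use case step sentences via dependency parsing.
# spaCy stands in for the Stanford CoreNLP pipeline used in the paper.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_triples(sentence):
    """Return (executor, action, object) candidates for one use case step."""
    doc = nlp(sentence)
    triples = []
    for token in doc:
        if token.pos_ == "VERB":
            subjects = [c.text for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c.text for c in token.children if c.dep_ in ("dobj", "obj")]
            for s in subjects:
                for o in objects or [None]:
                    triples.append((s, token.lemma_, o))
    return triples

print(extract_triples("The librarian registers the new reader."))
# e.g. [('librarian', 'register', 'reader')]
```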

Keywords

  • Computation independent model
  • functional feature
  • natural language processing
  • Stanford CoreNLP
  • topological functioning model
  • use case
Open Access

A New Big Data Model Using Distributed Cluster-Based Resampling for Class-Imbalance Problem

Published Online: 20 Feb 2020
Page range: 104 - 110

Abstract

The class imbalance problem, one of the common data irregularities, leads to the development of under-represented models. To resolve this issue, the present study proposes a new cluster-based MapReduce design, entitled Distributed Cluster-based Resampling for Imbalanced Big Data (DIBID). The design aims at modifying the existing dataset to increase classification success. Within the study, DIBID has been implemented on public datasets under two strategies. The first strategy demonstrates the success of the model on datasets with different imbalance ratios. The second strategy compares the model with other imbalanced big data solutions in the literature. According to the results, DIBID outperformed the other imbalanced big data solutions in the literature and increased area under the curve (AUC) values by between 10 % and 24 % in the case study.
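
The core idea of cluster-based resampling — partitioning the majority class into clusters and drawing a balanced sample from each cluster — can be sketched on a single machine as follows. This is only an illustration of the concept, not the distributed MapReduce design (DIBID) proposed in the paper; the function name and parameters are invented.

```python
# Single-machine sketch of cluster-based undersampling of the majority class;
# the paper's DIBID design distributes a related idea over MapReduce.
import numpy as np
from sklearn.cluster import KMeans

def cluster_undersample(X_major, n_minority, n_clusters=10, seed=0):
    """Keep roughly n_minority majority examples, sampled evenly per cluster."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(X_major)
    per_cluster = max(1, n_minority // n_clusters)
    keep = []
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        take = min(per_cluster, idx.size)
        keep.extend(rng.choice(idx, size=take, replace=False))
    return X_major[np.array(keep)]

# Usage: X_major_balanced = cluster_undersample(X_major, n_minority=len(X_minor))
```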

Keywords

  • Big data
  • cluster-based resampling
  • imbalanced big data classification
  • imbalanced data
Open Access

Development of Ontology Based Competence Management Model for Non-Formal Education Services

Published Online: 20 Feb 2020
Page range: 111 - 118

Abstract

Competence management is a discipline that has recently regained popularity due to the growing demand for ever higher competences of employees and graduates. One of the main implementation challenges of competence management is that, as a rule, it is based on experts’ implicit knowledge. This is why the transformation of implicit knowledge into explicit knowledge is practically unmanageable and, as a consequence, limits the ability to transfer existing knowledge from one organisation to another.

The paper proposes an ontology-based competence model that allows the reuse of existing competence frameworks in the field of non-formal education, where different competence frameworks need to be used together to identify, assess and develop customers’ competences without forcing organisations to change their routine competence management processes. The proposed competence model serves as a basis for a competence management model on which IT tools supporting competence management processes may be built. Several existing frameworks have been analysed, and the terminology used in them has been combined in a single model. The usage of the proposed model is discussed, and possible IT tools to support the competence management process are identified in the paper.

Keywords

  • Competence management model
  • competence model
  • non-formal education
  • ontology
Open Access

Genetic Algorithm Based Feature Selection Technique for Electroencephalography Data

Published Online: 20 Feb 2020
Page range: 119 - 127

Abstract

High dimensionality is a well-known problem: a dataset may contain a huge number of features, yet not all of them are helpful for a particular data mining task, for example, classification or clustering. Therefore, feature selection is frequently used to reduce dataset dimensionality. Feature selection is a multi-objective task, which reduces dataset dimensionality, decreases running time, and also improves the expected accuracy. In this study, our goal is to reduce the number of features of electroencephalography data for eye state classification and achieve the same or even better classification accuracy with the smallest number of features. We propose a genetic algorithm-based feature selection technique with the KNN classifier. The accuracy obtained with the feature subset selected by the proposed technique is improved compared to the full feature set. The results show that the classification accuracy of the proposed strategy is improved by 3 % on average compared to the accuracy without feature selection.
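
A hedged sketch of genetic-algorithm feature selection wrapped around a KNN fitness function is given below. It illustrates the general technique only; the operators, parameter values and KNN configuration are assumptions, not the authors’ exact setup.

```python
# Sketch of GA-based feature selection with a cross-validated KNN fitness
# function (illustrative operators and parameters, not the paper's).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y):
    """Cross-validated KNN accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def ga_select(X, y, pop_size=20, generations=30, p_mut=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n))        # binary feature masks
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        # Binary tournament selection of parents.
        winners = [max(rng.choice(pop_size, 2), key=lambda i: scores[i])
                   for _ in range(pop_size)]
        parents = pop[winners]
        # One-point crossover followed by bit-flip mutation.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, n)
            children[i, cut:] = parents[i + 1, cut:]
            children[i + 1, cut:] = parents[i, cut:]
        children ^= (rng.random(children.shape) < p_mut).astype(children.dtype)
        pop = children
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()].astype(bool)

# Usage: best_mask = ga_select(X, y); X_reduced = X[:, best_mask]
```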

Keywords

  • Classification algorithms
  • evolutionary computation
  • feature extraction
  • genetic algorithms
Open Access

Fuzzy Expert System Generalised Model for Medical Applications

Published Online: 20 Feb 2020
Page range: 128 - 133

Abstract

Over the past two decades, an exponential growth of medical fuzzy expert systems has been observed. These systems address specific forms of medical and health problems, resulting in differentiated models that are application dependent and may lack adaptability. This research proposes a generalised model encompassing the major features of existing specialised fuzzy systems. The model was generalised by design: the major components of the differentiated systems were identified and used as the components of the general model. The prototype shows that the proposed model allows medical experts to define fuzzy variables (rule base) for any medical application and users to enter symptoms (fact base) and query their medical condition through the designed generalised core inference engine. Further research may include adding more composition conditions, more combining techniques and more tests in several environments in order to check its precision, sensitivity and specificity.
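
The following minimal sketch illustrates the general idea: experts define fuzzy variables and rules, users supply crisp symptom values, and a small inference engine returns a degree of risk. The variable names, membership functions and rules are invented for illustration and are not taken from the paper.

```python
# Minimal illustrative fuzzy inference sketch: expert-defined variables and
# rules (rule base), user-supplied symptom values (fact base), and a simple
# weighted-average (Sugeno-style) inference step. All values are invented.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Expert-defined fuzzy variables.
temperature = {"normal": (35.0, 36.8, 37.5), "high": (37.0, 39.0, 42.0)}
headache = {"mild": (0.0, 2.0, 5.0), "severe": (4.0, 8.0, 10.0)}

# Rules: antecedent terms are combined with min; each rule outputs a crisp risk level.
rules = [
    ((("temperature", "high"), ("headache", "severe")), 0.9),
    ((("temperature", "normal"), ("headache", "mild")), 0.1),
]

def infer(facts):
    """Weighted-average defuzzification over the fired rules."""
    variables = {"temperature": temperature, "headache": headache}
    num = den = 0.0
    for antecedent, risk in rules:
        strength = min(tri(facts[var], *variables[var][term]) for var, term in antecedent)
        num += strength * risk
        den += strength
    return num / den if den else 0.0

print(infer({"temperature": 38.5, "headache": 7.0}))  # first rule fires strongly -> 0.9
```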

Keywords

  • Expert system
  • fuzzy logic
  • generalized model
  • medical applications
Open Access

Affective State Based Anomaly Detection in Crowd

Published Online: 20 Feb 2020
Page range: 134 - 140

Abstract

To distinguish individuals with dangerous abnormal behaviour from the crowd, human characteristics (e.g., speed and direction of motion, interaction with other people), crowd characteristics (such as flow and density), the space available to individuals, etc. must be considered. The paper proposes an approach that considers individual and crowd metrics to detect anomalies. An individual’s abnormal behaviour alone does not indicate a threat toward other individuals, as such behaviour can also be triggered by positive emotions or events. To filter out individuals whose abnormal behaviour is potentially unrelated to aggression and is not dangerous to their environment, it is suggested to use the emotional state of individuals. The aim of the proposed approach is to automate video surveillance systems by enabling them to automatically detect potentially dangerous situations.

Keywords

  • Anomaly detection in crowd
  • dangerous anomaly detection
  • emotional state
  • person extraction from crowd
  • surveillance system automation
Open Access

Results From Expert Survey on System Analysis Process Activities

Published Online: 20 Feb 2020
Page range: 141 - 149

Abstract

System analysis is a crucial and complex step in the software engineering process, which affects the overall success of the project and the quality of the project outcome. Even though Agile methods have become widely popular, they provide little structure for requirements elicitation and specification, which can affect whether a project has a favourable outcome. Nevertheless, regardless of the approach chosen by industry practitioners, it is important to identify which activities are currently performed and to analyse the causes and possible issues that are encountered. The paper presents the results of an expert survey on the importance of activities related to the requirements elicitation, analysis and specification process and on the use of tools to support this process. The Delphi method, which is used to evaluate the responses, is described. Lists of activities are ranked according to importance, and additional information on the expert responses is given in the paper. The information gives an insight into the activities and tools that are used in the industry.
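
Delphi-style surveys often summarise how strongly the experts agree on a ranking with Kendall’s coefficient of concordance W. The abstract does not state which agreement statistic was used, so the sketch below is purely illustrative, with made-up rankings.

```python
# Illustrative sketch: Kendall's coefficient of concordance W, a common way to
# summarise agreement between experts who each rank the same list of activities.
# (Not stated in the abstract; shown only as an example of such an evaluation.)
import numpy as np

def kendalls_w(ranks):
    """ranks: m x n array, one row of ranks (1..n, no ties) per expert."""
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Three experts ranking five system analysis activities (made-up data).
print(kendalls_w([[1, 2, 3, 4, 5],
                  [2, 1, 3, 5, 4],
                  [1, 3, 2, 4, 5]]))  # values close to 1 mean strong agreement
```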

Keywords

  • Delphi method
  • expert survey
  • framework
  • system analysis
Open Access

Minimal Total Weighted Tardiness in Tight-Tardy Single Machine Preemptive Idling-Free Scheduling

Published Online: 20 Feb 2020
Page range: 150 - 160

Abstract

Two possibilities of obtaining the minimal total weighted tardiness in tight-tardy single machine preemptive idling-free scheduling are studied. The Boolean linear programming model, which allows obtaining the exactly minimal tardiness, becomes too time-consuming as either the number of jobs or the number of job parts increases. Therefore, a heuristic based on remaining available and processing periods is used instead. The heuristic always schedules 2 jobs with the minimal tardiness. In scheduling 3 to 7 jobs, the risk of missing the minimal tardiness is just 1.5 % to 3.2 %. It is expected that scheduling 12 or more jobs carries at most the same risk or even lower. In scheduling 10 jobs without a timeout, the heuristic is almost 1 million times faster than the exact model. The exact model is still applicable for scheduling 3 to 5 jobs, where the average computation time varies from 0.1 s to 1.02 s. However, the maximal computation time for 6 jobs is close to 1 minute. Further increasing the number of jobs may delay obtaining the minimal tardiness by at least a few minutes, but 7 jobs can still be scheduled within 7 minutes at worst. When scheduling 8 or more jobs, the exact model should be substituted with the heuristic.
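
A per-time-slot dispatching heuristic for this kind of preemptive, idling-free schedule can be sketched as follows. The priority rule used here (weight over remaining processing, boosted by due-date urgency) is a simplified stand-in for, not a reproduction of, the paper’s rule based on remaining available and processing periods; the job data are invented.

```python
# Simplified illustration of a per-slot dispatching heuristic for preemptive,
# idling-free single machine scheduling with total weighted tardiness. The
# priority rule is an illustrative stand-in, not the paper's exact heuristic.

def heuristic_schedule(jobs, horizon):
    """jobs: list of dicts with 'p' (processing), 'd' (due date), 'w' (weight)."""
    remaining = [j["p"] for j in jobs]
    completion = [None] * len(jobs)
    for t in range(1, horizon + 1):                       # unit time slots 1..horizon
        active = [i for i in range(len(jobs)) if remaining[i] > 0]
        if not active:
            break
        # Higher priority: heavier jobs, less remaining work, closer due dates.
        best = max(active, key=lambda i: jobs[i]["w"] / remaining[i]
                                          / max(jobs[i]["d"] - t + 1, 1))
        remaining[best] -= 1
        if remaining[best] == 0:
            completion[best] = t
    assert all(r == 0 for r in remaining), "horizon must cover total processing time"
    tardiness = sum(j["w"] * max(completion[i] - j["d"], 0)
                    for i, j in enumerate(jobs))
    return completion, tardiness

jobs = [{"p": 3, "d": 3, "w": 2}, {"p": 2, "d": 4, "w": 1}, {"p": 3, "d": 8, "w": 3}]
print(heuristic_schedule(jobs, horizon=8))   # completion times and total weighted tardiness
```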

Keywords

  • Boolean linear programming model
  • heuristic
  • job scheduling
  • job preemptions
  • relative gap
  • remaining available period
  • remaining processing period
  • single machine scheduling
  • total weighted tardiness
Open Access

Some Aspects of Good Practice for Safe Use of Wi-Fi, Based on Experiments and Standards

Published Online: 20 Feb 2020
Page range: 161 - 165

Abstract

The aim of the research is to study the effect of microwave Wi-Fi radiation on humans and plants. The paper reviews national standards for permissible exposure levels to microwave radiation, measures electric field intensity and substantiates a point of view on the safe use of microwave technologies based on multiple plant cultivation experiments at different distances from a Wi-Fi router. The results demonstrate that the radiation of Wi-Fi routers significantly impairs the growth, development, yield and, unexpectedly, the drought resistance of plants at short distances from the microwave source (up to 1 m to 2 m; –33 dBm to –43 dBm; >10 V/m). Slight effects are found up to about 4.5 m from a full-power home Wi-Fi router. As a result, suggestions are made for the safe and balanced use of modern wireless technologies, which can complement occupational safety and health regulations.
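
To relate the units quoted above, the sketch below converts received power in dBm to watts and applies the standard far-field free-space estimate E ≈ √(30·EIRP)/d for the field strength around a transmitter. Real indoor values depend on antennas, walls and reflections, so this is only an order-of-magnitude aid, not a reproduction of the paper’s measurements.

```python
# Back-of-the-envelope unit conversions for the dBm and V/m figures above.
# Free-space far-field estimate only; indoor measurements will differ.
import math

def dbm_to_watts(dbm):
    return 10 ** (dbm / 10.0) / 1000.0

def free_space_field(eirp_watts, distance_m):
    """Electric field strength (V/m) at distance d for a given EIRP."""
    return math.sqrt(30.0 * eirp_watts) / distance_m

print(dbm_to_watts(-33))             # received power of roughly 0.5 microwatt
print(free_space_field(0.1, 1.0))    # ~1.7 V/m at 1 m from a 100 mW (20 dBm) source
```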

Keywords

  • Electric field intensity
  • good practice
  • microwave
  • occupational safety
  • plants
  • router
  • standards
  • Wi-Fi