Journal and Issue

Volume 32 (2022): Issue 2 (June 2022)
Towards Self-Healing Systems through Diagnostics, Fault-Tolerance and Design (special section, pp. 171-269), Marcin Witczak and Ralf Stetter (Eds.)

Volume 32 (2022): Issue 1 (March 2022)

Volume 31 (2021): Issue 4 (December 2021)
Advanced Machine Learning Techniques in Data Analysis (special section, pp. 549-611), Maciej Kusy, Rafał Scherer and Adam Krzyżak (Eds.)

Volume 31 (2021): Issue 3 (September 2021)

Volume 31 (2021): Issue 2 (June 2021)

Volume 31 (2021): Issue 1 (March 2021)

Volume 30 (2020): Issue 4 (December 2020)

Volume 30 (2020): Issue 3 (September 2020)
Big Data and Signal Processing (special section, pp. 399-473), Joanna Kołodziej, Sabri Pllana and Salvatore Vitabile (Eds.)

Volume 30 (2020): Issue 2 (June 2020)

Volume 30 (2020): Issue 1 (March 2020)

Volume 29 (2019): Issue 4 (December 2019)
New Perspectives in Nonlinear and Intelligent Control (In Honor of Alexander P. Kurdyukov) (special section, pp. 629-712), Julio B. Clempner, Enso Ikonen and Alexander P. Kurdyukov (Eds.)

Volume 29 (2019): Issue 3 (September 2019)
Information Technology for Systems Research (special section, pp. 427-515), Piotr Kulczycki, Janusz Kacprzyk, László T. Kóczy and Radko Mesiar (Eds.)

Volume 29 (2019): Issue 2 (June 2019)
Advances in Complex Cloud and Service Oriented Computing (special section, pp. 213-274), Anna Kobusińska, Ching-Hsien Hsu and Kwei-Jay Lin (Eds.)

Volume 29 (2019): Issue 1 (March 2019)
Exploring Complex and Big Data (special section, pp. 7-91), Johann Gamper and Robert Wrembel (Eds.)

Volume 28 (2018): Issue 4 (December 2018)

Volume 28 (2018): Issue 3 (September 2018)

Volume 28 (2018): Issue 2 (June 2018)
Advanced Diagnosis and Fault-Tolerant Control Methods (special section, pp. 233-333), Vicenç Puig, Dominique Sauter, Christophe Aubrun and Horst Schulte (Eds.)

Volume 28 (2018): Issue 1 (March 2018)
Issues in Parameter Identification and Control (special section, pp. 9-122), Abdel Aitouche (Ed.)

Volume 27 (2017): Issue 4 (December 2017)

Volume 27 (2017): Issue 3 (September 2017)
Systems Analysis: Modeling and Control (special section, pp. 457-499), Vyacheslav Maksimov and Boris Mordukhovich (Eds.)

Volume 27 (2017): Issue 2 (June 2017)

Volume 27 (2017): Issue 1 (March 2017)

Volume 26 (2016): Issue 4 (December 2016)

Volume 26 (2016): Issue 3 (September 2016)

Volume 26 (2016): Issue 2 (June 2016)

Volume 26 (2016): Issue 1 (March 2016)

Volume 25 (2015): Issue 4 (December 2015)
Complex Problems in High-Performance Computing Systems (special issue), Mauro Iacono and Joanna Kołodziej (Eds.)

Volume 25 (2015): Issue 3 (September 2015)

Volume 25 (2015): Issue 2 (June 2015)

Volume 25 (2015): Issue 1 (March 2015)
Safety, Fault Diagnosis and Fault Tolerant Control in Aerospace Systems, Silvio Simani and Paolo Castaldi (Eds.)

Volume 24 (2014): Issue 4 (December 2014)

Volume 24 (2014): Issue 3 (September 2014)
Modelling and Simulation of High Performance Information Systems (special section, pp. 453-566), Pavel Abaev, Rostislav Razumchik and Joanna Kołodziej (Eds.)

Volume 24 (2014): Issue 2 (June 2014)
Signals and Systems (special section, pp. 233-312), Ryszard Makowski and Jan Zarzycki (Eds.)

Volume 24 (2014): Issue 1 (March 2014)
Selected Problems of Biomedical Engineering (special section, pp. 7-63), Marek Kowal and Józef Korbicz (Eds.)

Volume 23 (2013): Issue 4 (December 2013)

Volume 23 (2013): Issue 3 (September 2013)

Volume 23 (2013): Issue 2 (June 2013)

Volume 23 (2013): Issue 1 (March 2013)

Volume 22 (2012): Issue 4 (December 2012)
Hybrid and Ensemble Methods in Machine Learning (special section, pp. 787-881), Oscar Cordón and Przemysław Kazienko (Eds.)

Volume 22 (2012): Issue 3 (September 2012)

Volume 22 (2012): Issue 2 (June 2012)
Analysis and Control of Spatiotemporal Dynamic Systems (special section, pp. 245-326), Dariusz Uciński and Józef Korbicz (Eds.)

Volume 22 (2012): Issue 1 (March 2012)
Advances in Control and Fault-Tolerant Systems (special issue), Józef Korbicz, Didier Maquin and Didier Theilliol (Eds.)

Volume 21 (2011): Issue 4 (December 2011)

Volume 21 (2011): Issue 3 (September 2011)
Issues in Advanced Control and Diagnosis (special section, pp. 423-486), Vicenç Puig and Marcin Witczak (Eds.)

Volume 21 (2011): Issue 2 (June 2011)
Efficient Resource Management for Grid-Enabled Applications (special section, pp. 219-306), Joanna Kołodziej and Fatos Xhafa (Eds.)

Volume 21 (2011): Issue 1 (March 2011)
Semantic Knowledge Engineering (special section, pp. 9-95), Grzegorz J. Nalepa and Antoni Ligęza (Eds.)

Volume 20 (2010): Issue 4 (December 2010)

Volume 20 (2010): Issue 3 (September 2010)

Volume 20 (2010): Issue 2 (June 2010)

Volume 20 (2010): Issue 1 (March 2010)
Computational Intelligence in Modern Control Systems (special section, pp. 7-84), Józef Korbicz and Dariusz Uciński (Eds.)

Volume 19 (2009): Issue 4 (December 2009)
Robot Control Theory (special section, pp. 519-588), Cezary Zieliński (Ed.)

Volume 19 (2009): Issue 3 (September 2009)
Verified Methods: Applications in Medicine and Engineering (special issue), Andreas Rauh, Ekaterina Auer, Eberhard P. Hofer and Wolfram Luther (Eds.)

Volume 19 (2009): Issue 2 (June 2009)

Volume 19 (2009): Issue 1 (March 2009)

Volume 18 (2008): Issue 4 (December 2008)
Issues in Fault Diagnosis and Fault Tolerant Control (special issue), Józef Korbicz and Dominique Sauter (Eds.)

Volume 18 (2008): Issue 3 (September 2008)
Selected Problems of Computer Science and Control (special issue), Krzysztof Gałkowski, Eric Rogers and Jan Willems (Eds.)

Volume 18 (2008): Issue 2 (June 2008)
Selected Topics in Biological Cybernetics (special section, pp. 117-170), Andrzej Kasiński and Filip Ponulak (Eds.)

Volume 18 (2008): Issue 1 (March 2008)
Applied Image Processing (special issue), Anton Kummert and Ewaryst Rafajłowicz (Eds.)

Volume 17 (2007): Issue 4 (December 2007)

Volume 17 (2007): Issue 3 (September 2007)
Scientific Computation for Fluid Mechanics and Hyperbolic Systems (special issue), Jan Sokołowski and Eric Sonnendrücker (Eds.)

Volume 17 (2007): Issue 2 (June 2007)

Volume 17 (2007): Issue 1 (March 2007)
Journal Details
Format
Journal
eISSN
2083-8492
ISSN
1641-876X
First Published
05 Apr 2007
Publication timeframe
4 times per year
Languages
English


Volume 24 (2014): Issue 1 (March 2014)
Selected Problems of Biomedical Engineering (special section, pp. 7-63), Marek Kowal and Józef Korbicz (Eds.)


16 Articles
Open Access

An analytical iterative statistical algorithm for image reconstruction from projections

Published online: 25 Mar 2014
Pages: 7-17

Abstract

The main purpose of the paper is to present a statistical model-based iterative approach to the problem of image reconstruction from projections. The newly formulated reconstruction algorithm is based on a maximum likelihood method, with an objective adjusted to the probability distribution of the measured signals obtained from an X-ray computed tomograph with parallel-beam geometry. Various forms of the objective are tested. Experimental results show that an objective that is exactly tailored statistically yields the best results, and that the proposed algorithm reconstructs an image with better quality than a conventional algorithm based on convolution and back-projection.

Keywords

  • computed tomography
  • image reconstruction from projections
  • statistical reconstruction algorithm.
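The flavour of such model-based iteration can be sketched with the classic multiplicative maximum-likelihood (MLEM) update for a linear projection model. This is a generic illustration, not the authors' specific objective or geometry; the system matrix `A` and all sizes below are invented toy stand-ins for a parallel-beam system matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear projection model b = A @ x_true; A is an invented stand-in
# for a parallel-beam system matrix, not a real CT geometry.
A = rng.uniform(0.0, 1.0, size=(40, 16))
x_true = rng.uniform(0.5, 1.5, size=16)
b = A @ x_true

# Multiplicative MLEM iteration: x <- x * (A^T (b / (A x))) / (A^T 1).
# It keeps the image nonnegative while driving the data misfit down.
x = np.ones(16)
col_sums = A.sum(axis=0)
for _ in range(2000):
    ratio = b / np.maximum(A @ x, 1e-12)
    x *= (A.T @ ratio) / col_sums

print(float(np.linalg.norm(A @ x - b)))  # data misfit shrinks toward zero
```

The multiplicative form is what makes the scheme attractive in practice: nonnegativity of the image is preserved for free, with no projection step needed.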
Open Access

Nuclei segmentation for computer-aided diagnosis of breast cancer

Published online: 25 Mar 2014
Pages: 19-31

Abstract

Breast cancer is the most common cancer among women. The effectiveness of treatment depends on early detection of the disease. Computer-aided diagnosis plays an increasingly important role in this field. Particularly, digital pathology has recently become of interest to a growing number of scientists. This work reports on advances in computer-aided breast cancer diagnosis based on the analysis of cytological images of fine needle biopsies. The task at hand is to classify those as either benign or malignant. We propose a robust segmentation procedure giving satisfactory nuclei separation even when they are densely clustered in the image. Firstly, we determine centers of the nuclei using conditional erosion. The erosion is performed on a binary mask obtained with the use of adaptive thresholding in grayscale and clustering in a color space. Then, we use the multi-label fast marching algorithm initialized with the centers to obtain the final segmentation. A set of 84 features extracted from the nuclei is used in the classification by three different classifiers. The approach was tested on 450 microscopic images of fine needle biopsies obtained from patients of the Regional Hospital in Zielona Góra, Poland. The classification accuracy presented in this paper reaches 100%, which shows that a medical decision support system based on our method would provide accurate diagnostic information.

Keywords

  • computer-aided diagnosis
  • breast cancer
  • pattern analysis
  • fast marching.
Open Access

Recognition of atherosclerotic plaques and their extended dimensioning with computerized tomography angiography imaging

Published online: 25 Mar 2014
Pages: 33-47

Abstract

In this paper the authors raise the issue of automatic discrimination of atherosclerotic plaques within an artery lumen, based on numerical and statistical thresholding of Computerized Tomography Angiographic (CTA) images and their advanced dimensioning, as a support for preoperative vessel assessment. For the study, a set of tomograms of the aorta, as well as the ilio-femoral and femoral arteries, was examined. In each case a sequence of about 130-480 images of the artery cutoff planes was analyzed prior to segmentation based on morphological image transformation. A crucial step in the staging of atherosclerotic alteration is recognition of the plaque in the CTA image. To solve this problem, statistical and linear fitting methods were used, including least-squares approximation by polynomial and spline polynomial functions as well as the error fitting function. Also, new descriptors of atherosclerotic changes, such as the lumen decrease factor, the circumference occupancy factor and the convex plaque area factor, are proposed as a means of facilitating preoperative vessel examination. Finally, ways to reduce the computational time are discussed. The proposed methods can be very useful for automatic quantification of atherosclerotic changes visualized by CTA imaging.

Keywords

  • computed tomography
  • atherosclerotic plaque
  • image processing
  • approximation.
Open Access

From the slit-island method to the Ising model: Analysis of irregular grayscale objects

Published online: 25 Mar 2014
Pages: 49-63

Abstract

The Slit Island Method (SIM) is a technique for estimating the fractal dimension of an object by determining the area-perimeter relations for successive slits. The SIM can be applied to image analysis of irregular grayscale objects and their classification using the fractal dimension. It is known that this technique fails in some cases; it is emphasized in this paper that for specific objects a negative or an infinite fractal dimension can be obtained. Transforming the input image data from unipolar to bipolar makes it possible to reformulate the image analysis in the context of the Ising model. The polynomial approximation of the obtained area-perimeter curve allows object classification. The proposed technique is applied to images of cervical cell nuclei (Papanicolaou smears) for the preclassification of correct and atypical cells.

Keywords

  • slit island method
  • area-perimeter method
  • Ising model
  • image analysis
  • cervical cancer.
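The area-perimeter estimate that the SIM is built on can be sketched as a log-log regression. The island measurements below are synthetic and noise-free, generated to obey the scaling P ∝ A^(D/2) exactly; the paper's Ising-model reformulation and polynomial classification are not reproduced here:

```python
import numpy as np

# Synthetic island measurements obeying log10 P = (D/2) * log10 A + c,
# the perimeter-area scaling underlying the slit-island method.
D_true = 1.3
areas = np.logspace(1, 4, 20)
perimeters = 10.0 ** (0.2 + (D_true / 2.0) * np.log10(areas))

# The fractal dimension is twice the slope of the log-log regression line.
slope, intercept = np.polyfit(np.log10(areas), np.log10(perimeters), 1)
D_est = 2.0 * slope
print(round(D_est, 3))  # → 1.3
```

The degenerate cases the abstract warns about (negative or infinite dimension) show up here as nonpositive or unbounded regression slopes, which this naive fit cannot guard against on its own.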
Open Access

Discretization of singular systems and error estimation

Published online: 25 Mar 2014
Pages: 65-73

Abstract

This paper proposes discretization techniques for a descriptor differential system: a triangular first-order hold and a zero-order hold for the input function. Upper bounds on the error between the continuous-time and the discrete-time solution are derived for both discretization methods and are shown to be tighter than those of any other existing method in the literature.

Keywords

  • descriptor systems
  • discretization
  • truncation error
  • first order hold
  • zero order hold.
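For a regular (non-descriptor) state-space system, the zero-order-hold discretization that the paper builds on can be computed with the standard augmented-matrix exponential; a genuinely singular descriptor system needs the extra machinery the paper develops. `A`, `B`, and the sampling period below are arbitrary illustrative values:

```python
import numpy as np
from scipy.linalg import expm

# ZOH discretization of x' = Ax + Bu via the augmented matrix exponential:
# expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]].
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
T = 0.1  # sampling period

n, m = A.shape[0], B.shape[1]
M = np.zeros((n + m, n + m))
M[:n, :n] = A
M[:n, n:] = B
E = expm(M * T)
Ad, Bd = E[:n, :n], E[:n, n:]  # discrete-time pair: x[k+1] = Ad x[k] + Bd u[k]
print(Ad)
print(Bd)
```

The same result can be obtained with `scipy.signal.cont2discrete(..., method='zoh')`; the augmented-matrix form is shown because it makes the hold assumption on the input explicit.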
Open Access

On attaining the prescribed quality of a controlled fourth order system

Published online: 25 Mar 2014
Pages: 75-85

Abstract

In this paper, we discuss a method of auxiliary controlled models and its application to solving some robust control problems for a system described by differential equations. As an illustration, a system of nonlinear differential equations of the fourth order is used. A solution algorithm, which is stable with respect to informational noise and computational errors, is presented. The algorithm is based on a combination of online state/input reconstruction and feedback control methods.

Keywords

  • auxiliary models
  • feedback control
  • online reconstruction.
Open Access

An optimal sliding mode congestion controller for connection-oriented communication networks with lossy links

Published online: 25 Mar 2014
Pages: 87-97

Abstract

A new discrete-time sliding-mode congestion controller for connection-oriented networks is proposed. Packet losses which may occur during the transmission process are explicitly taken into account. Two control laws are presented, each obtained by minimizing a different cost functional. The first one concentrates on the output variable, whereas in the second one the whole state vector is considered. Weighting factors for adjusting the influence of the control signal and appropriate (state or output) errors are incorporated in both the functionals. The asymptotic stability of the closed-loop system is proved, and the conditions for 100% bottleneck node bandwidth utilization are derived. The performance of the proposed algorithm is verified by computer simulations.

Keywords

  • optimal control
  • sliding-mode control
  • flow control
  • discrete-time systems.
Open Access

Approximation of a linear dynamic process model using the frequency approach and a non-quadratic measure of the model error

Published online: 25 Mar 2014
Pages: 99-109

Abstract

The paper presents a novel approach to approximation of a linear transfer function model, based on dynamic properties represented by a frequency response, e.g., determined as a result of discrete-time identification. The approximation is derived for minimization of a non-quadratic performance index. This index can be determined as an exponent or absolute norm of an error. Two algorithms for determination of the approximation coefficients are considered, a batch processing one and a recursive scheme, based on the well-known on-line identification algorithm. The proposed approach is not sensitive to local outliers present in the original frequency response. Application of the approach and its features are presented on examples of two simple dynamic systems.

Keywords

  • approximation method
  • frequency domain
  • non-quadratic criterion
  • recursive algorithm.
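Why an absolute-error (non-quadratic) criterion resists local outliers can be sketched with iteratively reweighted least squares on a toy straight-line fit. The data, model and iteration count are invented for illustration; the paper's batch and recursive algorithms for transfer-function coefficients in the frequency domain are more involved:

```python
import numpy as np

# 50 exact samples of y = 2x + 0.5 plus one gross outlier; a quadratic
# criterion is dragged by the outlier, the absolute criterion is not.
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 0.5
y[5] += 10.0  # one gross outlier

Phi = np.column_stack([x, np.ones_like(x)])
theta = np.linalg.lstsq(Phi, y, rcond=None)[0]  # quadratic (LS) start
for _ in range(50):
    # IRLS for the L1 criterion: reweight residuals by 1/|e| (floored).
    w = 1.0 / np.maximum(np.abs(y - Phi @ theta), 1e-8)
    theta = np.linalg.solve(Phi.T @ (w[:, None] * Phi), Phi.T @ (w * y))

print(np.round(theta, 3))  # close to the true [2.0, 0.5] despite the outlier
```

The reweighting step is the whole trick: points with large residuals get tiny weights, so a single corrupted frequency-response sample cannot dominate the fit.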
Open Access

A differential evolution approach to dimensionality reduction for classification needs

Published online: 25 Mar 2014
Pages: 111-122

Abstract

The feature selection problem often occurs in pattern recognition and, more specifically, classification. Although these patterns could contain a large number of features, some of them could prove to be irrelevant, redundant or even detrimental to classification accuracy. Thus, it is important to remove such features, which leads to problem dimensionality reduction and can eventually improve the classification accuracy. In this paper an approach to dimensionality reduction based on differential evolution is presented, which acts as a wrapper and explores the solution space. The solutions, subsets of the whole feature set, are evaluated using the k-nearest neighbour algorithm. High quality solutions found during execution of the differential evolution fill an archive. A final solution is obtained by conducting k-fold cross-validation on the archive solutions and selecting the best one. Experimental analysis is conducted on several standard test sets. The classification accuracy of the k-nearest neighbour algorithm using the full feature set is compared with the accuracy of the same algorithm using only the subset provided by the proposed approach and by some other optimization algorithms used as wrappers. The analysis shows that the proposed approach successfully determines good feature subsets which may increase the classification accuracy.

Keywords

  • classification
  • differential evolution
  • feature subset selection
  • k-nearest neighbour algorithm
  • wrapper method.
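The wrapper idea can be sketched with a basic DE/rand/1/bin loop scoring candidate subsets by a 1-nearest-neighbour classifier. The data set, the thresholded continuous encoding, and the parameter values (F, CR, population size, loop counts) are illustrative choices, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: features 0 and 1 determine the class, features 2-5 are noise.
X = rng.normal(size=(120, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def loo_1nn_accuracy(Xs, y):
    # Leave-one-out accuracy of a 1-nearest-neighbour classifier.
    d = np.linalg.norm(Xs[:, None, :] - Xs[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return float(np.mean(y[np.argmin(d, axis=1)] == y))

def fitness(v):
    mask = v > 0.5            # feature j is selected when v[j] > 0.5
    return loo_1nn_accuracy(X[:, mask], y) if mask.any() else 0.0

# Basic DE/rand/1/bin over the continuous encoding; the classifier itself
# scores every candidate subset (that is what makes it a wrapper).
NP, F, CR, dim = 20, 0.6, 0.9, 6
pop = rng.uniform(0.0, 1.0, size=(NP, dim))
pop[0] = 1.0                  # seed the full feature set for comparison
fit = np.array([fitness(v) for v in pop])
for _ in range(30):
    for i in range(NP):
        idx = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        trial = np.clip(np.where(rng.uniform(size=dim) < CR,
                                 a + F * (b - c), pop[i]), 0.0, 1.0)
        f = fitness(trial)
        if f >= fit[i]:       # greedy selection never loses quality
            pop[i], fit[i] = trial, f

best = pop[np.argmax(fit)]
print(best > 0.5, round(fit.max(), 3))
```

Because the full feature set is seeded into the population and selection is greedy, the best found subset can never score below the full-set baseline — a cheap sanity property of wrapper searches.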
Open Access

An efficient eigenspace updating scheme for high-dimensional systems

Published online: 25 Mar 2014
Pages: 123-131

Abstract

Systems based on principal component analysis have developed from exploratory data analysis in the past to current data processing applications which encode and decode vectors of data using a changing projection space (eigenspace). Linear systems, which need to be solved to obtain a constantly updated eigenspace, have increased significantly in their dimensions during this evolution. The basic scheme used for updating the eigenspace, however, has remained basically the same: (re)computing the eigenspace whenever the error exceeds a predefined threshold. In this paper we propose a computationally efficient eigenspace updating scheme, which specifically supports high-dimensional systems from any domain. The key principle is a prior selection of the vectors used to update the eigenspace in combination with an optimized eigenspace computation. The presented theoretical analysis proves the superior reconstruction capability of the introduced scheme, and further provides an estimate of the achievable compression ratios.

Keywords

  • eigenspace updating
  • projection space
  • data compression
  • principal component analysis.
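The baseline scheme the abstract describes — recompute the eigenspace whenever the reconstruction error exceeds a threshold — can be sketched as follows; the paper's contribution replaces this with a prior selection of update vectors and an optimized computation. Dimensions, thresholds and the zero-mean data model below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

dim, k = 40, 5
# Data stream confined to a k-dimensional subspace (zero-mean for simplicity).
subspace = np.linalg.qr(rng.normal(size=(dim, k)))[0]

def leading_eigenspace(vectors, k):
    # Rank-k projection space: leading left singular vectors of the data matrix.
    U, _, _ = np.linalg.svd(np.array(vectors).T, full_matrices=False)
    return U[:, :k]

stored = [subspace @ rng.normal(size=k) for _ in range(k)]
U = leading_eigenspace(stored, k)

recomputes = 0
for _ in range(200):
    x = subspace @ rng.normal(size=k)
    error = np.linalg.norm(x - U @ (U.T @ x))   # reconstruction error
    if error > 1e-8:                            # recompute only on demand
        stored.append(x)
        U = leading_eigenspace(stored, k)
        recomputes += 1
print(recomputes)
```

Here the stream truly lies in the tracked subspace, so the threshold rarely fires; the expensive part in practice is that each triggered recompute runs a full SVD, which is exactly the cost the paper's scheme targets.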
Open Access

An algorithm for reducing the dimension and size of a sample for data exploration procedures

Published online: 25 Mar 2014
Pages: 133-149

Abstract

The paper deals with the issue of reducing the dimension and size of a data set (random sample) for exploratory data analysis procedures. The concept of the algorithm investigated here is based on linear transformation to a space of a smaller dimension, while retaining as much as possible the same distances between particular elements. Elements of the transformation matrix are computed using the metaheuristic of parallel fast simulated annealing. Moreover, data set elements which have undergone a significant change in location relative to the others are eliminated or have their importance decreased. The presented method can have universal application in a wide range of data exploration problems, offering flexible customization, the possibility of use in a dynamic data environment, and performance comparable to or better than that of principal component analysis. Its positive features were verified in detail for the fundamental tasks of clustering, classification and detection of atypical elements (outliers).

Keywords

  • dimension reduction
  • sample size reduction
  • linear transformation
  • simulated annealing
  • data mining.
Open Access

Center-based l1-clustering method

Published online: 25 Mar 2014
Pages: 151-163

Abstract

In this paper, we consider the l1-clustering problem for a finite data-point set which should be partitioned into k disjoint nonempty subsets. In that case, the objective function does not have to be either convex or differentiable, and generally it may have many local or global minima. Therefore, it becomes a complex global optimization problem. A method of searching for a locally optimal solution is proposed in the paper, the convergence of the corresponding iterative process is proved and the corresponding algorithm is given. The method is illustrated on a few typical situations, such as the presence of outliers among the data and the clustering of incomplete data, and compared with some other clustering methods, especially the l2-clustering method, also known in the literature as the smooth k-means method. Numerical experiments show that in these cases the proposed l1-clustering algorithm is faster and gives significantly better results than the l2-clustering algorithm.

Keywords

  • l1 clustering
  • data mining
  • optimization
  • weighted median problem.
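A minimal sketch of center-based l1 clustering is a Lloyd-style iteration with l1 assignments and coordinate-wise medians as centers (the k-medians idea); it illustrates the outlier robustness the abstract reports, but not the paper's convergence analysis. The data, seeds and iteration count are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two compact clusters plus one gross outlier.
X = np.vstack([rng.normal(0.0, 0.3, size=(50, 2)),
               rng.normal(5.0, 0.3, size=(50, 2)),
               [[100.0, 100.0]]])

def l1_cluster(X, centers, iters=30):
    # Lloyd-style l1 iteration: assign each point to the nearest center in
    # l1 distance, then move each center to the coordinate-wise median of
    # its cluster (the median minimizes the summed l1 deviations).
    centers = centers.copy()
    for _ in range(iters):
        d = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = np.median(X[labels == j], axis=0)
    return centers, labels

centers, labels = l1_cluster(X, X[[0, 50]])  # one seed per blob
print(np.round(centers, 1))  # medians sit near (0, 0) and (5, 5)
```

The single outlier barely moves the medians, whereas a squared-error (k-means) update of the same clusters would be pulled visibly toward it.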
Open Access

Approximation of phenol concentration using novel hybrid computational intelligence methods

Published online: 25 Mar 2014
Pages: 165-181

Abstract

This paper presents two innovative evolutionary-neural systems based on feed-forward and recurrent neural networks used for quantitative analysis. These systems have been applied to the approximation of phenol concentration. Their performance was compared against conventional methods of artificial intelligence (artificial neural networks, fuzzy logic and genetic algorithms). The proposed systems combine data preprocessing methods, genetic algorithms and the Levenberg-Marquardt (LM) algorithm used for training feed-forward and recurrent neural networks. The initial weights and biases of the neural networks, chosen by a genetic algorithm, are then tuned with the LM algorithm. The evaluation is made on the basis of accuracy and complexity criteria. The main advantage of the proposed systems is the elimination of random selection of the network weights and biases, resulting in increased efficiency of the systems.

Keywords

  • soft computing
  • neural networks
  • genetic algorithms
  • fuzzy systems
  • evolutionary-neural systems
  • pattern recognition
  • chemometrics.
Open Access

Cross-task code reuse in genetic programming applied to visual learning

Published online: 25 Mar 2014
Pages: 183-197

Abstract

We propose a method that enables effective code reuse between evolutionary runs that solve a set of related visual learning tasks. We start by introducing a visual learning approach that uses genetic programming individuals to recognize objects. The process of recognition is generative, i.e., it requires the learner to restore the shape of the processed object. This method is extended with a code reuse mechanism by introducing a crossbreeding operator that allows importing genetic material from other evolutionary runs. In the experimental part, we compare the performance of the extended approach to the basic method on a real-world task of handwritten character recognition, and conclude that code reuse leads to better results in terms of fitness and recognition accuracy. Detailed analysis of the crossbred genetic material also shows that code reuse is most profitable when the recognized objects exhibit visual similarity.

Keywords

  • genetic programming
  • code reuse
  • knowledge sharing
  • visual learning
  • multi-task learning
  • optical character recognition.
Open Access

Survival analysis on data streams: Analyzing temporal events in dynamically changing environments

Published online: 25 Mar 2014
Pages: 199-212

Abstract

In this paper, we introduce a method for survival analysis on data streams. Survival analysis (also known as event history analysis) is an established statistical method for the study of temporal “events” or, more specifically, questions regarding the temporal distribution of the occurrence of events and their dependence on covariates of the data sources. To make this method applicable in the setting of data streams, we propose an adaptive variant of a model that is closely related to the well-known Cox proportional hazard model. Adopting a sliding window approach, our method continuously updates its parameters based on the event data in the current time window. As a proof of concept, we present two case studies in which our method is used for different types of spatio-temporal data analysis, namely, the analysis of earthquake data and Twitter data. In an attempt to explain the frequency of events by the spatial location of the data source, both studies use the location as covariates of the sources.

Keywords

  • data streams
  • survival analysis
  • event history analysis
  • earthquake data
  • Twitter data.
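The sliding-window ingredient can be sketched in its simplest form: under a constant-rate (exponential) event model, the maximum-likelihood rate within a window of length w is just the event count in the window divided by w. This is a drastic simplification of the paper's adaptive Cox-type model with covariates; the stream and window length below are invented:

```python
import numpy as np

# Simulated event-time stream with a constant true rate of 2 events per
# unit time; inter-event gaps are exponential.
rng = np.random.default_rng(5)
true_rate = 2.0
event_times = np.cumsum(rng.exponential(1.0 / true_rate, size=2000))

# Sliding-window MLE of the rate: (#events in window) / window length.
window = 50.0
now = event_times[-1]
in_window = event_times[(event_times > now - window) & (event_times <= now)]
rate_hat = len(in_window) / window
print(rate_hat)  # fluctuates around the true rate of 2.0
```

Re-evaluating this estimate as the window slides is what lets the model track a rate that drifts over time, at the price of higher variance for short windows.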
Open Access

A new lightweight method for security risk assessment based on fuzzy cognitive maps

Published online: 25 Mar 2014
Pages: 213-225

Abstract

For contemporary software systems, security is considered to be a key quality factor and the analysis of IT security risk becomes an indispensable stage during software deployment. However, performing risk assessment according to methodologies and standards issued for the public sector or large institutions can be too costly and time consuming. Current business practice tends to circumvent risk assessment by defining sets of standard safeguards and applying them to all developed systems. This leads to a substantial gap: threats are not re-evaluated for particular systems and the selection of security functions is not based on risk models. This paper discusses a new lightweight risk assessment method aimed at filling this gap. In this proposal, Fuzzy Cognitive Maps (FCMs) are used to capture dependencies between assets, and FCM-based reasoning is performed to calculate risks. An application of the method is studied using an example of an e-health system providing remote telemonitoring, data storage and teleconsultation services. Lessons learned indicate that the proposed method is an efficient and low-cost approach, giving instantaneous feedback and enabling reasoning on the effectiveness of the security system.

Keywords

  • security
  • risk assessment
  • telemedicine
  • fuzzy cognitive maps.
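FCM-based reasoning of the kind described can be sketched as iterating a sigmoid-squashed state update over a weighted concept graph. The three concepts, the weights and the update rule with a self-memory term are invented for illustration — they are not the paper's map or asset model:

```python
import numpy as np

# A tiny fuzzy cognitive map: W[i, j] is the causal influence of concept j
# on concept i; activations live in (0, 1).
W = np.array([
    [0.0, 0.7, 0.0],   # risk to "data confidentiality" driven by "network breach"
    [0.0, 0.0, 0.6],   # "network breach" driven by "weak authentication"
    [0.0, 0.0, 0.0],   # "weak authentication" is an input concept
])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Common FCM update rule with a self-memory term: a <- f(W a + a).
state = np.array([0.0, 0.0, 0.9])   # scenario: weak authentication observed
for _ in range(100):
    new_state = sigmoid(W @ state + state)
    if np.max(np.abs(new_state - state)) < 1e-9:
        break                        # activations have stabilized
    state = new_state
print(np.round(state, 2))  # stabilized activations; higher means higher risk
```

The appeal for lightweight risk assessment is exactly this loop: changing one input activation (a safeguard added or removed) and re-running it gives instantaneous feedback on all dependent risk concepts.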
16 Articoli
Accesso libero

An analytical iterative statistical algorithm for image reconstruction from projections

Pubblicato online: 25 Mar 2014
Pagine: 7 - 17

Astratto

Abstract

The main purpose of the paper is to present a statistical model-based iterative approach to the problem of image reconstruction from projections. This originally formulated reconstruction algorithm is based on a maximum likelihood method with an objective adjusted to the probability distribution of measured signals obtained from an x-ray computed tomograph with parallel beam geometry. Various forms of objectives are tested. Experimental results show that an objective that is exactly tailored statistically yields the best results, and that the proposed reconstruction algorithm reconstructs an image with better quality than a conventional algorithm with convolution and back-projection.

Parole chiave

  • computed tomography
  • image reconstruction from projections
  • statistical reconstruction algorithm.
Accesso libero

Nuclei segmentation for computer-aided diagnosis of breast cancer

Pubblicato online: 25 Mar 2014
Pagine: 19 - 31

Astratto

Abstract

Breast cancer is the most common cancer among women. The effectiveness of treatment depends on early detection of the disease. Computer-aided diagnosis plays an increasingly important role in this field. Particularly, digital pathology has recently become of interest to a growing number of scientists. This work reports on advances in computer-aided breast cancer diagnosis based on the analysis of cytological images of fine needle biopsies. The task at hand is to classify those as either benign or malignant. We propose a robust segmentation procedure giving satisfactory nuclei separation even when they are densely clustered in the image. Firstly, we determine centers of the nuclei using conditional erosion. The erosion is performed on a binary mask obtained with the use of adaptive thresholding in grayscale and clustering in a color space. Then, we use the multi-label fast marching algorithm initialized with the centers to obtain the final segmentation. A set of 84 features extracted from the nuclei is used in the classification by three different classifiers. The approach was tested on 450 microscopic images of fine needle biopsies obtained from patients of the Regional Hospital in Zielona Góra, Poland. The classification accuracy presented in this paper reaches 100%, which shows that a medical decision support system based on our method would provide accurate diagnostic information.

Parole chiave

  • computer-aided diagnosis
  • breast cancer
  • pattern analysis
  • fast marching.
Accesso libero

Recognition of atherosclerotic plaques and their extended dimensioning with computerized tomography angiography imaging

Pubblicato online: 25 Mar 2014
Pagine: 33 - 47

Astratto

Abstract

In this paper the authors raise the issue of automatic discrimination of atherosclerotic plaques within an artery lumen based on numerical and statistical thresholding of Computerized Tomography Angiographic (CTA) images and their advanced dimensioning as a support for preoperative vessel assessment. For the study, a set of tomograms of the aorta, as well as the ilio-femoral and femoral arteries were examined. In each case a sequence of about 130-480 images of the artery cutoff planes were analyzed prior to their segmentation based on morphological image transformation. A crucial step in the staging of atherosclerotic alteration is recognition of the plaque in the CTA image. To solve this problem, statistical and linear fitting methods, including the least-squares approximation by polynomial and spline polynomial functions, as well as the error fitting function were used. Also, new descriptors of atherosclerotic changes, such as the lumen decrease factor, the circumference occupancy factor, and the convex plaque area factor, are proposed as a means of facilitating preoperative vessel examination. Finally, ways to reduce the computational time are discussed. The proposed methods can be very useful for automatic quantification of atherosclerotic changes visualized by CTA imaging.

Parole chiave

  • computed tomography
  • atherosclerotic plaque
  • image processing
  • approximation.
Accesso libero

From the slit-island method to the Ising model: Analysis of irregular grayscale objects

Pubblicato online: 25 Mar 2014
Pagine: 49 - 63

Astratto

Abstract

The Slit Island Method (SIM) is a technique for the estimation of the fractal dimension of an object by determining the area- perimeter relations for successive slits. The SIM could be applied for image analysis of irregular grayscale objects and their classification using the fractal dimension. It is known that this technique is not functional in some cases. It is emphasized in this paper that for specific objects a negative or an infinite fractal dimension could be obtained. The transformation of the input image data from unipolar to bipolar gives a possibility of reformulated image analysis using the Ising model context. The polynomial approximation of the obtained area-perimeter curve allows object classification. The proposed technique is applied to the images of cervical cell nuclei (Papanicolaou smears) for the preclassification of the correct and atypical cells.

Keywords

  • slit island method
  • area-perimeter method
  • Ising model
  • image analysis
  • cervical cancer.
Open Access

Discretization of singular systems and error estimation

Published online: 25 Mar 2014
Pages: 65 - 73

Abstract

This paper proposes a discretization technique for a descriptor differential system. Both triangular first-order-hold and zero-order-hold discretization of the input function are used. Upper bounds for the error between the continuous- and discrete-time solutions are produced for both discretization methods and are shown to be better than those of any other existing method in the literature.
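For the standard (non-singular) special case, zero-order-hold discretization has a closed form: x(k+1) = A_d x(k) + B_d u(k) with A_d = e^{AT} and B_d = (integral of e^{As} ds over [0, T]) B. A minimal sketch of that baseline follows; the paper's descriptor (singular) setting and the triangular first-order hold require the machinery developed there:

```python
import numpy as np

def expm_series(M, terms=40):
    """Matrix exponential by truncated Taylor series (adequate for small ||M||)."""
    E, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

def zoh_discretize(A, B, T):
    """Zero-order hold via the augmented-matrix trick: the top blocks of
    expm([[A, B], [0, 0]] * T) are exactly (A_d, B_d)."""
    n, m = A.shape[0], B.shape[1]
    aug = np.zeros((n + m, n + m))
    aug[:n, :n] = A * T
    aug[:n, n:] = B * T
    E = expm_series(aug)
    return E[:n, :n], E[:n, n:]

# Scalar check: dx/dt = -x + u sampled with T = 0.1.
Ad, Bd = zoh_discretize(np.array([[-1.0]]), np.array([[1.0]]), 0.1)
```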

Keywords

  • descriptor systems
  • discretization
  • truncation error
  • first order hold
  • zero order hold.
Open Access

On attaining the prescribed quality of a controlled fourth order system

Published online: 25 Mar 2014
Pages: 75 - 85

Abstract

In this paper, we discuss a method of auxiliary controlled models and its application to solving some robust control problems for a system described by differential equations. As an illustration, a system of nonlinear differential equations of the fourth order is used. A solution algorithm, which is stable with respect to informational noise and computational errors, is presented. The algorithm is based on a combination of online state/input reconstruction and feedback control methods.

Keywords

  • auxiliary models
  • feedback control
  • online reconstruction.
Open Access

An optimal sliding mode congestion controller for connection-oriented communication networks with lossy links

Published online: 25 Mar 2014
Pages: 87 - 97

Abstract

A new discrete-time sliding-mode congestion controller for connection-oriented networks is proposed. Packet losses which may occur during the transmission process are explicitly taken into account. Two control laws are presented, each obtained by minimizing a different cost functional. The first one concentrates on the output variable, whereas in the second one the whole state vector is considered. Weighting factors for adjusting the influence of the control signal and appropriate (state or output) errors are incorporated in both the functionals. The asymptotic stability of the closed-loop system is proved, and the conditions for 100% bottleneck node bandwidth utilization are derived. The performance of the proposed algorithm is verified by computer simulations.
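The one-step (dead-beat) sliding-mode idea underlying such discrete-time controllers can be sketched in a few lines: choose the control so that the sliding variable s(k) = c x(k) is driven to zero at the next sample. This is a generic illustration for an abstract plant, not either of the two optimal control laws derived in the paper:

```python
import numpy as np

def deadbeat_sliding_control(A, b, c, x):
    """For x(k+1) = A x(k) + b u(k) and s(k) = c @ x(k), choose u so that
    s(k+1) = c @ (A x + b u) = 0, i.e. u = -(c @ A @ x) / (c @ b)."""
    return -float(c @ A @ x) / float(c @ b)

# Hypothetical second-order plant and sliding surface.
A = np.array([[1.0, 0.5], [0.0, 1.0]])
b = np.array([0.125, 0.5])
c = np.array([1.0, 1.0])
x = np.array([2.0, -1.0])
u = deadbeat_sliding_control(A, b, c, x)
x_next = A @ x + b * u
```

The cost functionals in the paper additionally weight the control effort against the state (or output) error, which softens this dead-beat action.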

Keywords

  • optimal control
  • sliding-mode control
  • flow control
  • discrete-time systems.
Open Access

Approximation of a linear dynamic process model using the frequency approach and a non-quadratic measure of the model error

Published online: 25 Mar 2014
Pages: 99 - 109

Abstract

The paper presents a novel approach to approximation of a linear transfer function model, based on dynamic properties represented by a frequency response, e.g., determined as a result of discrete-time identification. The approximation is derived for minimization of a non-quadratic performance index. This index can be determined as an exponent or absolute norm of an error. Two algorithms for determination of the approximation coefficients are considered, a batch processing one and a recursive scheme, based on the well-known on-line identification algorithm. The proposed approach is not sensitive to local outliers present in the original frequency response. Application of the approach and its features are presented on examples of two simple dynamic systems.
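The appeal of the absolute (non-quadratic) norm can be shown on a linear-in-parameters model: minimizing the sum of absolute errors makes the fit insensitive to isolated outliers, which is the property the paper exploits for frequency-response data. Below is a sketch using iteratively reweighted least squares, an illustrative solver rather than the paper's batch or recursive algorithm:

```python
import numpy as np

def l1_fit(X, y, iters=100, eps=1e-8):
    """Least-absolute-deviations fit of y ~ X @ beta via IRLS:
    reweight each residual by 1/|r| and re-solve a weighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        w = np.sqrt(1.0 / np.maximum(np.abs(r), eps))
        beta = np.linalg.lstsq(w[:, None] * X, w * y, rcond=None)[0]
    return beta

# A line y = 2x + 1 with one gross outlier: the L1 fit ignores it.
x = np.arange(10, dtype=float)
X = np.column_stack([x, np.ones_like(x)])
y = 2.0 * x + 1.0
y[5] += 50.0
beta = l1_fit(X, y)
```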

Keywords

  • approximation method
  • frequency domain
  • non-quadratic criterion
  • recursive algorithm.
Open Access

A differential evolution approach to dimensionality reduction for classification needs

Published online: 25 Mar 2014
Pages: 111 - 122

Abstract

The feature selection problem often occurs in pattern recognition and, more specifically, classification. Although these patterns could contain a large number of features, some of them could prove to be irrelevant, redundant or even detrimental to classification accuracy. Thus, it is important to remove these kinds of features, which in turn leads to problem dimensionality reduction and could eventually improve the classification accuracy. In this paper an approach to dimensionality reduction based on differential evolution which represents a wrapper and explores the solution space is presented. The solutions, subsets of the whole feature set, are evaluated using the k-nearest neighbour algorithm. High quality solutions found during execution of the differential evolution fill the archive. A final solution is obtained by conducting k-fold cross-validation on the archive solutions and selecting the best one. Experimental analysis is conducted on several standard test sets. The classification accuracy of the k-nearest neighbour algorithm using the full feature set is compared with the accuracy of the same algorithm using only the subsets provided by the proposed approach and by some other optimization algorithms used as wrappers. The analysis shows that the proposed approach successfully determines good feature subsets which may increase the classification accuracy.
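The wrapper idea can be sketched end to end in NumPy: a DE/rand/1/bin search over continuous vectors in [0,1]^d, where components above 0.5 switch a feature on and each candidate subset is scored by leave-one-out 1-NN accuracy. This is a minimal illustration (no archive, no final k-fold step, k fixed to 1) on hypothetical synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

def loo_1nn_accuracy(X, y):
    """Leave-one-out accuracy of the 1-nearest-neighbour classifier."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)
    return float(np.mean(y[D.argmin(axis=1)] == y))

def de_select(X, y, pop_size=20, gens=30, F=0.8, CR=0.9):
    """DE/rand/1/bin over [0,1]^d; components > 0.5 select a feature."""
    d = X.shape[1]
    pop = rng.random((pop_size, d))

    def fitness(v):
        mask = v > 0.5
        return loo_1nn_accuracy(X[:, mask], y) if mask.any() else 0.0

    fit = np.array([fitness(v) for v in pop])
    for _ in range(gens):
        for i in range(pop_size):
            idx = rng.choice(np.delete(np.arange(pop_size), i), 3, replace=False)
            a, b, c = pop[idx]
            trial = np.where(rng.random(d) < CR,
                             np.clip(a + F * (b - c), 0.0, 1.0), pop[i])
            f = fitness(trial)
            if f >= fit[i]:
                pop[i], fit[i] = trial, f
    return pop[fit.argmax()] > 0.5

# Synthetic data: feature 0 separates the classes, features 1-2 are noise.
y = np.repeat([0, 1], 20)
X = np.column_stack([y + 0.1 * rng.standard_normal(40),
                     rng.standard_normal((40, 2))])
mask = de_select(X, y)
```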

Keywords

  • classification
  • differential evolution
  • feature subset selection
  • k-nearest neighbour algorithm
  • wrapper method.
Open Access

An efficient eigenspace updating scheme for high-dimensional systems

Published online: 25 Mar 2014
Pages: 123 - 131

Abstract

Systems based on principal component analysis have developed from exploratory data analysis in the past to current data processing applications which encode and decode vectors of data using a changing projection space (eigenspace). Linear systems, which need to be solved to obtain a constantly updated eigenspace, have increased significantly in their dimensions during this evolution. The basic scheme used for updating the eigenspace, however, has remained essentially the same: (re)computing the eigenspace whenever the error exceeds a predefined threshold. In this paper we propose a computationally efficient eigenspace updating scheme, which specifically supports high-dimensional systems from any domain. The key principle is a prior selection of the vectors used to update the eigenspace in combination with an optimized eigenspace computation. The presented theoretical analysis proves the superior reconstruction capability of the introduced scheme, and further provides an estimate of the achievable compression ratios.
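The conventional baseline the paper improves on can be sketched compactly: encode each vector in the current eigenspace and recompute the eigenspace from scratch once the reconstruction error crosses a threshold. A minimal NumPy illustration (full SVD recomputation; the paper's contribution is precisely a cheaper update than this):

```python
import numpy as np

class ThresholdEigenspace:
    """k-dimensional eigenspace, recomputed (full SVD over all data seen
    so far) whenever a vector's reconstruction error exceeds tol."""
    def __init__(self, k, tol):
        self.k, self.tol = k, tol
        self.data = []
        self.mean = None
        self.basis = None

    def _recompute(self):
        X = np.asarray(self.data)
        self.mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.basis = Vt[: self.k]

    def encode(self, x):
        self.data.append(x)
        if self.basis is None:
            self._recompute()
        code = self.basis @ (x - self.mean)
        error = np.linalg.norm(x - self.mean - self.basis.T @ code)
        if error > self.tol:
            self._recompute()
            code = self.basis @ (x - self.mean)
        return code

# Vectors on a 1-D subspace of R^3: the basis should align with it.
direction = np.array([1.0, 2.0, 2.0]) / 3.0
es = ThresholdEigenspace(k=1, tol=1e-6)
for t in range(1, 6):
    es.encode(t * direction)
```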

Keywords

  • eigenspace updating
  • projection space
  • data compression
  • principal component analysis.
Open Access

An algorithm for reducing the dimension and size of a sample for data exploration procedures

Published online: 25 Mar 2014
Pages: 133 - 149

Abstract

The paper deals with the issue of reducing the dimension and size of a data set (random sample) for exploratory data analysis procedures. The concept of the algorithm investigated here is based on linear transformation to a space of a smaller dimension, while retaining as much as possible the same distances between particular elements. Elements of the transformation matrix are computed using the metaheuristics of parallel fast simulated annealing. Moreover, elimination of or a decrease in importance is performed on those data set elements which have undergone a significant change in location in relation to the others. The presented method can have universal application in a wide range of data exploration problems, offering flexible customization, possibility of use in a dynamic data environment, and comparable or better performance with regard to principal component analysis. Its positive features were verified in detail for the domain's fundamental tasks of clustering, classification and detection of atypical elements (outliers).
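The core optimization can be sketched: search for a d-by-m projection matrix minimizing the discrepancy between pairwise distances before and after the linear transformation, using simulated annealing. This is a plain serial annealer with a squared-stress objective; the paper's parallel fast simulated annealing and element-weighting steps go well beyond it:

```python
import numpy as np

rng = np.random.default_rng(1)

def stress(X, A):
    """Sum of squared differences between the pairwise distances of X
    and of its linear projection X @ A."""
    DX = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    Y = X @ A
    DY = np.linalg.norm(Y[:, None] - Y[None, :], axis=2)
    return float(np.sum((DX - DY) ** 2))

def anneal_projection(X, A0, iters=2000, T0=1.0, step=0.1):
    """Simulated annealing over the entries of the projection matrix:
    accept worse candidates with probability exp((s - s_cand) / T)."""
    A, s = A0, stress(X, A0)
    best, best_s = A, s
    for i in range(iters):
        T = T0 * (1.0 - i / iters) + 1e-12
        cand = A + step * rng.standard_normal(A.shape)
        cs = stress(X, cand)
        if cs < s or rng.random() < np.exp((s - cs) / T):
            A, s = cand, cs
            if cs < best_s:
                best, best_s = cand, cs
    return best, best_s

# 20 points in R^4 that actually lie in a 2-D subspace.
X = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 4))
A0 = rng.standard_normal((4, 2))
A_best, s_best = anneal_projection(X, A0)
```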

Keywords

  • dimension reduction
  • sample size reduction
  • linear transformation
  • simulated annealing
  • data mining.
Open Access

Center-based l1–clustering method

Published online: 25 Mar 2014
Pages: 151 - 163

Abstract

In this paper, we consider the l1-clustering problem for a finite data-point set which should be partitioned into k disjoint nonempty subsets. In that case, the objective function does not have to be either convex or differentiable, and generally it may have many local or global minima. Therefore, it becomes a complex global optimization problem. A method of searching for a locally optimal solution is proposed in the paper, the convergence of the corresponding iterative process is proved and the corresponding algorithm is given. The method is illustrated by and compared with some other clustering methods, especially with the l2-clustering method, which is also known in the literature as a smooth k-means method, on a few typical situations, such as the presence of outliers among the data and the clustering of incomplete data. Numerical experiments show in this case that the proposed l1-clustering algorithm is faster and gives significantly better results than the l2-clustering algorithm.
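The l1 objective leads to coordinate-wise medians as cluster centres (weighted medians in the paper's general setting). A minimal Lloyd-style alternation illustrates the idea; this unweighted sketch is not the paper's algorithm with its convergence analysis:

```python
import numpy as np

def l1_kmedians(X, centers, iters=100):
    """Alternate l1 (cityblock) assignment with coordinate-wise
    median updates of the centres until they stop moving."""
    centers = centers.astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([np.median(X[labels == j], axis=0)
                        if np.any(labels == j) else centers[j]
                        for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

# Two well-separated groups; centres converge to coordinate-wise medians.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [-0.1, 0.0],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1], [4.9, 5.0]])
centers, labels = l1_kmedians(X, np.array([[0.0, 1.0], [4.0, 4.0]]))
```

Because the median, unlike the mean, is insensitive to extreme values, an outlier pulls an l1 centre far less than an l2 centre, which is the robustness the abstract reports.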

Keywords

  • l1 clustering
  • data mining
  • optimization
  • weighted median problem.
Open Access

Approximation of phenol concentration using novel hybrid computational intelligence methods

Published online: 25 Mar 2014
Pages: 165 - 181

Abstract

This paper presents two innovative evolutionary-neural systems based on feed-forward and recurrent neural networks used for quantitative analysis. These systems have been applied for approximation of phenol concentration. Their performance was compared against the conventional methods of artificial intelligence (artificial neural networks, fuzzy logic and genetic algorithms). The proposed systems are a combination of data preprocessing methods, genetic algorithms and the Levenberg-Marquardt (LM) algorithm used for learning feed-forward and recurrent neural networks. The initial weights and biases of neural networks chosen by the use of a genetic algorithm are then tuned with an LM algorithm. The evaluation is made on the basis of accuracy and complexity criteria. The main advantage of the proposed systems is the elimination of random selection of the network weights and biases, resulting in increased efficiency of the systems.

Keywords

  • soft computing
  • neural networks
  • genetic algorithms
  • fuzzy systems
  • evolutionary-neural systems
  • pattern recognition
  • chemometrics.
Open Access

Cross-task code reuse in genetic programming applied to visual learning

Published online: 25 Mar 2014
Pages: 183 - 197

Abstract

We propose a method that enables effective code reuse between evolutionary runs that solve a set of related visual learning tasks. We start with introducing a visual learning approach that uses genetic programming individuals to recognize objects. The process of recognition is generative, i.e., requires the learner to restore the shape of the processed object. This method is extended with a code reuse mechanism by introducing a crossbreeding operator that allows importing the genetic material from other evolutionary runs. In the experimental part, we compare the performance of the extended approach to the basic method on a real-world task of handwritten character recognition, and conclude that code reuse leads to better results in terms of fitness and recognition accuracy. Detailed analysis of the crossbred genetic material shows also that code reuse is most profitable when the recognized objects exhibit visual similarity.

Parole chiave

  • genetic programming
  • code reuse
  • knowledge sharing
  • visual learning
  • multi-task learning
  • optical character recognition.
Open Access

Survival analysis on data streams: Analyzing temporal events in dynamically changing environments

Published online: 25 Mar 2014
Pages: 199 - 212

Abstract

In this paper, we introduce a method for survival analysis on data streams. Survival analysis (also known as event history analysis) is an established statistical method for the study of temporal “events” or, more specifically, questions regarding the temporal distribution of the occurrence of events and their dependence on covariates of the data sources. To make this method applicable in the setting of data streams, we propose an adaptive variant of a model that is closely related to the well-known Cox proportional hazard model. Adopting a sliding window approach, our method continuously updates its parameters based on the event data in the current time window. As a proof of concept, we present two case studies in which our method is used for different types of spatio-temporal data analysis, namely, the analysis of earthquake data and Twitter data. In an attempt to explain the frequency of events by the spatial location of the data source, both studies use location as a covariate of the sources.
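As a much-reduced illustration of the sliding-window idea (not the paper's adaptive Cox-type model, which also handles covariates), one can maintain the maximum-likelihood event rate of a constant-hazard model over the current window:

```python
from collections import deque

class SlidingHazard:
    """Constant-hazard (exponential) MLE over a sliding time window:
    the estimated rate is (#events in window) / (window length)."""
    def __init__(self, window):
        self.window = window
        self.events = deque()

    def add_event(self, t):
        """Record an event at time t and drop events that left the window."""
        self.events.append(t)
        while self.events and self.events[0] <= t - self.window:
            self.events.popleft()

    def rate(self):
        return len(self.events) / self.window

# Hypothetical event stream: five events, then a long quiet gap.
sh = SlidingHazard(window=10.0)
for t in [1.0, 2.0, 3.0, 4.0, 5.0]:
    sh.add_event(t)
```

The paper replaces this single scalar with the parameters of a proportional-hazard-like model, updated in the same window-by-window fashion.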

Keywords

  • data streams
  • survival analysis
  • event history analysis
  • earthquake data
  • Twitter data.
Open Access

A new lightweight method for security risk assessment based on fuzzy cognitive maps

Published online: 25 Mar 2014
Pages: 213 - 225

Abstract

For contemporary software systems, security is considered to be a key quality factor and the analysis of IT security risk becomes an indispensable stage during software deployment. However, performing risk assessment according to methodologies and standards issued for the public sector or large institutions can be too costly and time consuming. Current business practice tends to circumvent risk assessment by defining sets of standard safeguards and applying them to all developed systems. This leads to a substantial gap: threats are not re-evaluated for particular systems and the selection of security functions is not based on risk models. This paper discusses a new lightweight risk assessment method aimed at filling this gap. In this proposal, Fuzzy Cognitive Maps (FCMs) are used to capture dependencies between assets, and FCM-based reasoning is performed to calculate risks. An application of the method is studied using an example of an e-health system providing remote telemonitoring, data storage and teleconsultation services. Lessons learned indicate that the proposed method is an efficient and low-cost approach, giving instantaneous feedback and enabling reasoning on the effectiveness of the security system.
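FCM-based reasoning is an iterated map over concept activations: each concept is updated from its weighted inputs through a squashing function until a fixed point is reached, and risk is read off the activation of a risk concept. A small sketch follows (a common sigmoid-FCM variant with clamped input concepts; the node names and weights are hypothetical, not taken from the paper's e-health case study):

```python
import numpy as np

def fcm_infer(W, x0, clamped, steps=200, tol=1e-9):
    """Iterate x <- sigmoid(W @ x), holding the concepts in `clamped`
    at their scenario values, until the activations stop changing."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    x = x0.astype(float).copy()
    for _ in range(steps):
        x_new = sigmoid(W @ x)
        x_new[clamped] = x0[clamped]
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical 3-concept map: threat (0) -> vulnerability (1) -> risk (2).
W = np.array([[0.0, 0.0, 0.0],
              [2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0]])
high = fcm_infer(W, np.array([0.9, 0.0, 0.0]), clamped=[0])
low = fcm_infer(W, np.array([0.1, 0.0, 0.0]), clamped=[0])
```

Comparing scenarios this way gives the instantaneous what-if feedback the abstract describes: strengthening a safeguard corresponds to weakening an edge weight and re-running the inference.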

Keywords

  • security
  • risk assessment
  • telemedicine
  • fuzzy cognitive maps.
