This paper presents a new image reconstruction method for spiral cone-beam tomography scanners in which an X-ray tube with a flying focal spot is used. The method is based on principles related to the statistical model-based iterative reconstruction (MBIR) methodology. The proposed approach uses a continuous-to-continuous data model, and the forward model is formulated as a shift-invariant system. This makes it possible to avoid a nutating reconstruction approach, e.g. the advanced single slice rebinning (ASSR) methodology that is usually applied in computed tomography (CT) scanners whose X-ray tubes have a flying focal spot. In turn, the proposed approach significantly accelerates the reconstruction processing and greatly simplifies the entire reconstruction procedure. Additionally, it improves the quality of the reconstructed images in comparison with traditional algorithms, as confirmed by extensive simulations. It is worth noting that the main purpose of introducing statistical reconstruction methods into medical CT scanners is to reduce the impact of measurement noise on the quality of tomographic images and, consequently, to reduce the dose of X-ray radiation absorbed by the patient. A series of computer simulations followed by doctors' assessments has been performed, indicating how great a reduction of the absorbed dose can be achieved using the reconstruction approach presented here.
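To make the role of the shift-invariant forward model concrete, the following is a minimal sketch of an MBIR-style iteration, not the paper's algorithm: it assumes the forward model can be written as a convolution with a known kernel (here called psf, of the same shape as the measurements) and minimizes a regularized least-squares objective by gradient descent, with the convolution applied via the FFT. All names and the simple quadratic regularizer are illustrative assumptions.

```python
import numpy as np

def mbir_reconstruct(measurements, psf, n_iters=200, beta=0.01):
    """Toy MBIR-style reconstruction with a shift-invariant forward model.

    Minimizes (1/2)||H x - b||^2 + (beta/2)||x||^2 by gradient descent,
    where H is a circular convolution with `psf`, applied via the FFT.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))      # frequency response of the model
    B = np.fft.fft2(measurements)
    step = 1.0 / (np.max(np.abs(H)) ** 2 + beta)  # step below the Lipschitz bound
    x = np.zeros_like(measurements, dtype=float)
    for _ in range(n_iters):
        residual = H * np.fft.fft2(x) - B        # forward-model error
        grad = np.real(np.fft.ifft2(np.conj(H) * residual)) + beta * x
        x -= step * grad                         # gradient-descent update
    return x
```

Because the system is shift-invariant, every forward and adjoint application reduces to an element-wise product in the frequency domain, which is what makes this formulation cheap per iteration.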
In this paper a new neural network training algorithm is presented. The algorithm originates from the Recursive Least Squares (RLS) method commonly used in adaptive filtering. It uses QR decomposition in conjunction with Givens rotations to solve the normal equation resulting from minimization of the loss function. Training time is an important consideration for neural networks: many commonly used algorithms require a large number of iterations to achieve a satisfactory outcome, while others are effective only for small networks. The proposed solution is characterized by a very short convergence time compared with the well-known backpropagation method and its variants. The paper contains a complete mathematical derivation of the proposed algorithm. Extensive simulation results are presented for various benchmarks, including function approximation, classification, encoder, and parity problems. The results show the advantages of the proposed algorithm, which outperforms commonly used state-of-the-art neural network training algorithms, including the Adam optimizer and Nesterov's accelerated gradient.
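As an illustration of the core numerical step, the sketch below solves a least-squares problem by triangularizing the coefficient matrix with Givens rotations, i.e. a QR decomposition applied without forming Q explicitly. This is only the generic Givens-QR solver; the paper's RLS-style recursive update and its application to multilayer networks are not reproduced here.

```python
import numpy as np

def givens_qr_solve(A, b):
    """Solve min ||A w - b||_2 (A of shape m x n, m >= n) via Givens rotations.

    Each rotation zeroes one sub-diagonal entry of A, producing R and Q^T b
    simultaneously; R w = Q^T b is then solved by back-substitution.
    """
    R = A.astype(float).copy()
    y = b.astype(float).copy()
    m, n = R.shape
    for j in range(n):
        for i in range(m - 1, j, -1):        # zero column j from the bottom up
            a, c_ = R[i - 1, j], R[i, j]
            if c_ == 0.0:
                continue
            r = np.hypot(a, c_)
            c, s = a / r, c_ / r
            G = np.array([[c, s], [-s, c]])
            R[[i - 1, i], j:] = G @ R[[i - 1, i], j:]   # rotate two rows of R
            y[[i - 1, i]] = G @ y[[i - 1, i]]           # same rotation on b
    return np.linalg.solve(np.triu(R[:n]), y[:n])       # back-substitution
```

Working on the triangular factor rather than the normal-equation matrix avoids squaring the condition number, which is one standard motivation for QR-based RLS variants.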
The paper presents a performance analysis of four selected rough set-based classification systems. They are hybrid solutions designed to process information with missing values. Rough set-based classification systems combine various classification methods, such as support vector machines, k-nearest neighbours, fuzzy systems, and neural networks, with rough set theory. When all input values take the form of real numbers and are available, the structure of the classifier reduces to its non-rough-set version. The performance of the four systems has been analysed on the basis of classification results obtained for benchmark databases downloaded from the machine learning repository of the University of California at Irvine.
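For readers unfamiliar with the rough set component, the following is a minimal, generic sketch of lower and upper approximations with respect to an indiscernibility partition; it illustrates the basic idea only and is not taken from the analysed systems.

```python
def rough_approximations(partition, target):
    """Lower/upper approximation of `target` w.r.t. an indiscernibility partition.

    `partition` is a list of equivalence classes (sets of object ids). Objects
    with missing values tend to fall into coarser classes, widening the gap
    between the two approximations.
    """
    target = set(target)
    lower, upper = set(), set()
    for cls in partition:
        if cls <= target:      # class certainly inside the concept
            lower |= cls
        if cls & target:       # class possibly inside the concept
            upper |= cls
    return lower, upper

# Example: objects 4 and 5 are indiscernible, so neither can be classified
# with certainty into the concept {1, 2, 3, 4}.
print(rough_approximations([{1, 2}, {3}, {4, 5}], {1, 2, 3, 4}))
# -> ({1, 2, 3}, {1, 2, 3, 4, 5})
```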
Data clustering is an important method used to discover naturally occurring structures in datasets. One of the most popular approaches is grid-based clustering. Methods of this kind are characterized by fast processing times and can also discover clusters of arbitrary shapes, properties that make them usable in many different applications. Researchers have created many versions of grid-based clustering; however, the key issue is the right choice of the number of grid cells. This paper proposes a novel grid-based algorithm which determines the number of grid cells automatically. The method is based on the k-dist function, which computes the distance between each element of a dataset and its k-th nearest neighbour. Experimental results obtained for several different datasets confirm the very good performance of the newly proposed method.
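A brute-force version of the k-dist computation is sketched below. How the k-dist values are turned into a grid resolution is not specified in the abstract, so the rule used here (cell edge length equal to the mean k-dist) is purely an assumption for illustration.

```python
import numpy as np

def kdist(data, k):
    """Distance from each point to its k-th nearest neighbour (brute force)."""
    d = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, k]            # column 0 holds the zero self-distance

def grid_cells_from_kdist(data, k=4):
    """Assumed rule: one grid cell per mean-k-dist length along each axis."""
    cell = kdist(data, k).mean()                    # assumed cell edge length
    extent = data.max(axis=0) - data.min(axis=0)    # bounding box per axis
    return np.maximum(1, np.ceil(extent / cell)).astype(int)
```

The brute-force pairwise distances cost O(n^2) memory; a k-d tree or similar index would be the practical choice for large datasets.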
Efficient lead management can substantially enhance online channel marketing programs. In this paper, we classify website traffic into human-origin and bot-origin traffic. We use feedforward neural networks with embedding layers, and one-hot encoding for the remaining categorical data. The mouse-click data come from seven large retail stores, and the lead classification data come from three financial institutions. The data are collected by JavaScript code embedded in HTML pages. The three proposed models achieved relatively high accuracy in detecting artificially generated traffic.
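A minimal PyTorch sketch of the described architecture follows: one high-cardinality categorical feature passes through an embedding layer and is concatenated with dense (e.g. one-hot encoded) features before a small feedforward classifier. The feature layout, class name, and all sizes are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

class TrafficClassifier(nn.Module):
    """Feedforward net: an embedding for one high-cardinality categorical
    feature, concatenated with dense (e.g. one-hot encoded) features."""

    def __init__(self, n_categories, emb_dim=8, n_dense=16, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(n_categories, emb_dim)
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim + n_dense, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),        # single logit: human vs. bot
        )

    def forward(self, cat_ids, dense):
        x = torch.cat([self.embed(cat_ids), dense], dim=-1)
        return self.mlp(x).squeeze(-1)

# Example: a batch of 4 sessions, category ids plus 16 dense features.
model = TrafficClassifier(n_categories=100)
logits = model(torch.randint(0, 100, (4,)), torch.randn(4, 16))
loss = nn.BCEWithLogitsLoss()(logits, torch.tensor([0., 1., 0., 1.]))
```

Embedding layers keep the input dimensionality manageable for features with many levels, while one-hot encoding remains adequate for low-cardinality ones.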