
Introduction

Several factors degrade colour images, and they do not only affect the visual perception of the image: they also hinder the identification and distinction of image features that are relevant for applications such as segmentation or pattern recognition. Noise is one of the most common of these factors, and it can significantly affect the visual quality of images as well as the performance of most image processing tasks. It is the result of errors in the image acquisition process.

In many cases, images are taken under unsuitable conditions: low light, excessive brightness, or poor weather. Deficient equipment can also hamper image acquisition through transmission errors, problems with network cables, signal disturbances, sensor faults, etc. As a result, pixel intensity values do not reflect the true colours of the scene being captured. For these reasons, many methods have been developed to recover lost image information and to enhance image details. Colour image smoothing is a preprocessing technique intended to remove such perturbations without losing image information.

Analogously, sharpening is a preprocessing technique that plays an important role in feature extraction. Even in this case, however, smoothing is needed in order to obtain a robust solution. This has motivated the study and development of methods able to cope with both operations.

The initial approach is usually to consider it as a two-step process: first smoothing and then sharpening, or the other way around. However, this approach leads to several problems. On the one hand, if we first apply a smoothing technique, we may lose information that cannot be recovered in the subsequent sharpening step. On the other hand, if we first apply a sharpening method to a noisy image, we amplify the noise present in it. The ideal way to address this problem is a method able to sharpen image details and edges while removing noise. Nevertheless, this is not a simple task, given the opposing nature of these two operations.

Many methods for both sharpening and smoothing have been proposed in the literature, but if we restrict ourselves to methods that consider both simultaneously, the state of the art is not so extensive. In this work we survey several two-step approaches that intensify the features of an image and reduce its noise, and we also review techniques that address both goals simultaneously.

The paper is organized as follows: Section 2 presents a brief review of image smoothing. In Section 3 we revisit some well-known techniques from the enhancement and sharpening field. In Section 4.1 we introduce two-step methods for smoothing and later sharpening and, alternatively, for sharpening and later smoothing, and we compare both approaches. This motivates the need for techniques that address both processes simultaneously, which are presented in Section 4.2. Finally, in Section 5 we draw some conclusions.

Smoothing

Image smoothing techniques aim to preserve image quality: that is, to remove noise without losing the principal features of the image. However, there are several types of noise; the three main ones are impulsive, additive, and multiplicative. Impulsive noise is usually characterized by a portion of the image pixels being corrupted while the others remain unchanged. Additive noise appears when the values of the original image are modified by adding random values that follow a certain probability distribution. Finally, multiplicative noise is more difficult to remove than additive noise, because its intensity varies with the signal intensity (e.g., speckle noise).

There are different sources of noise and plenty of denoising methods for each kind. Probably the most common source is so-called thermal noise, an impulsive noise caused by CCD sensor malfunction during the image acquisition process.

Another interesting case is Gaussian noise, in which each pixel of the image is changed from its original value by a small amount that follows a Gaussian distribution. This kind of noise is modelled as additive white Gaussian noise, so its presence can be simulated by adding random values drawn from a zero-mean Gaussian distribution to the original pixel intensities in each image channel independently, where the standard deviation σ of the Gaussian distribution characterizes the noise intensity [44].
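To fix ideas, the following minimal NumPy sketch reproduces this corruption model; the intensity range and the value of σ are merely illustrative choices.

import numpy as np

def add_gaussian_noise(img, sigma=10.0, seed=None):
    """Corrupt an image with additive white Gaussian noise of deviation sigma."""
    rng = np.random.default_rng(seed)
    # Independent zero-mean Gaussian samples for every pixel and channel.
    noise = rng.normal(loc=0.0, scale=sigma, size=img.shape)
    # Clip back to the valid intensity range after adding the noise.
    return np.clip(img.astype(np.float64) + noise, 0, 255)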

The elimination of this type of noise is known as smoothing, and it is the type of noise elimination considered in this work. There are plenty of nonlinear methods for smoothing; in the rest of the section we review some of them.

Arithmetic Mean Filter

The first approaches to Gaussian noise smoothing were based on linear strategies. These methods, such as the Arithmetic Mean Filter (AMF) (see for instance [44]), are able to suppress noise because they take advantage of its zero-mean property. However, they blur edges and texture significantly. This motivated the development of non-linear methods that try to alleviate these problems by first detecting image edges and details and then smoothing edges less than other parts of the image.
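As an illustration, the AMF reduces to a sliding-window mean; a minimal SciPy sketch follows, where the 3 x 3 window size is an arbitrary choice.

import numpy as np
from scipy.ndimage import uniform_filter

def arithmetic_mean_filter(img, size=3):
    """Replace each pixel by the mean of its size x size neighbourhood."""
    img = img.astype(np.float64)
    if img.ndim == 3:
        # size = 1 on the last axis: average spatially, never across channels.
        return uniform_filter(img, size=(size, size, 1))
    return uniform_filter(img, size=size)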

Bilateral Filter (BF)

Among nonlinear methods, a wide class uses averaging to take advantage of the zero-mean property of Gaussian noise. This class includes the well-known Bilateral Filter (BF) [59] and its variants [5]. The BF is a non-linear method able to smooth an image while respecting strong edges. This is done by processing each pixel as a weighted average of its neighbours, where the weights depend on the spatial and intensity distances between the pixel and each neighbour. Several variants of the BF have been developed, for instance the integration of a BF with an edge detection algorithm proposed in [16], or an adaptation of the BF with fuzzy metrics, as proposed in [34].
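The weighting scheme can be made explicit with a naive sketch for gray-scale images; the parameter values are illustrative, and in practice an optimized implementation such as OpenCV's cv2.bilateralFilter would be preferred.

import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=20.0):
    """Naive bilateral filter for a 2D gray-scale image."""
    img = img.astype(np.float64)
    h, w = img.shape
    pad = np.pad(img, radius, mode='reflect')
    out = np.zeros_like(img)
    # Spatial (domain) Gaussian weights, computed once.
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g_spatial = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma_s ** 2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weights penalize neighbours with a different intensity,
            # which is what preserves strong edges.
            g_range = np.exp(-(patch - img[i, j]) ** 2 / (2.0 * sigma_r ** 2))
            weights = g_spatial * g_range
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out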

Another non-linear method that respects image structure is the Smallest Univalue Segment Assimilating Nucleus (SUSAN) [56]. Here, a feature extraction algorithm is used to reduce noise using only those parts of the local image structure that have been selected as similar pixels. The original value of each pixel is estimated using a weighted mean of its closest neighbours.

Fuzzy Noise Reduction Filters

A well-known nonlinear filter is the Fuzzy Noise Reduction Method (FNRM) [53]. The core idea behind this method is to denoise each pixel using the pixels within its neighbourhood by means of two sub-filters. FNRM provides very good results; its drawback, however, is that it respects image edges at the expense of removing less noise.

To overcome the shortcomings of this kind of filter, linear and non-linear methods have been combined in order to exploit the benefits of each for denoising colour images while respecting details. In [15], graph theory is used to propose the Soft-Switching Graph Denoising method (SSGD), which combines AMF and FNRM: AMF is given more weight in homogeneous regions, while FNRM is more suitable for processing details. This method has been made computationally more efficient in [40].

The filters introduced in [17] provide detection rules based on differences between the peer group of a pixel and the peer groups of the pixels in its neighbourhood. In [33], an averaging operation over the fuzzy peer group of each pixel is used for processing, which is called Fuzzy Peer Group Averaging (FPGA). Other methods have been developed using fuzzy logic or soft-switching strategies, such as those in [35, 45]. Methods based on different optimizations of weighted averaging are proposed in [27, 54]. Another important family is that of partition-based filters [28, 54], which classify each pixel to be processed into one of several signal activity categories that are, in turn, associated with appropriate processing methods.

Anisotropic Filtering (PM)

Anisotropic filtering was introduced by Perona and Malik (PM) [39]. There, a nonlinear adaptive diffusion process, called anisotropic diffusion, is considered. It consists of adapting the diffusion coefficient with a double goal: to reduce the smoothing effect near edges in order to preserve image details, while smoothing flatter areas. There are several methods inspired by the PM model, such as the one in [62], where a model based on a directional Laplacian is presented.

Guo et al. [8] presented an adaptive PM filter able to segment the noisy image into two different types of region, inner regions and borders; diffusion is then applied, adapted to the region under consideration.
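The basic PM scheme admits a very compact explicit implementation. The following sketch uses the classical diffusivity g(s) = 1/(1 + (s/κ)²) and, for brevity, periodic boundary handling; all parameter values are illustrative.

import numpy as np

def perona_malik(img, n_iter=20, kappa=15.0, dt=0.2):
    """Scalar Perona-Malik diffusion on a 2D image."""
    u = img.astype(np.float64)
    # Diffusivity: close to 1 in flat areas, close to 0 near strong edges.
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Explicit update: u <- u + dt * div(g(|grad u|) grad u).
        u = u + dt * (g(np.abs(dn)) * dn + g(np.abs(ds)) * ds
                      + g(np.abs(de)) * de + g(np.abs(dw)) * dw)
    return u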

Block-Matching and 3D Filtering (BM3D)

In [2], Dabov et al. introduced collaborative filtering strategies, which probably provide the most impressive results among block-matching based denoising methods. The method presented there is called Block-Matching and 3D Filtering (BM3D). It is based on grouping, by matching, similar 2D fragments of the image into a 3D data array in order to apply a different filter to each group. Details on matching algorithms can be found in [12]. More precisely, filtering is achieved by the combination of collaborative non-local means and transform-domain shrinkage. It can be summarized in three steps: first, a 3D transformation of each 3D group; secondly, a shrinkage of the transform spectrum; and lastly, an inverse 3D transformation. This technique is applied to each channel of a luminance-chrominance colour space, such as YCbCr or YIQ.
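The grouping stage can be sketched in a few lines, as below; the block size, search radius, and group size are illustrative, and the 3D transform, shrinkage, and inverse transform stages of BM3D are deliberately omitted.

import numpy as np

def match_blocks(img, ref_xy, bsize=8, search=16, max_matches=16):
    """Group blocks of a gray-scale image similar to a reference block."""
    y0, x0 = ref_xy
    ref = img[y0:y0 + bsize, x0:x0 + bsize].astype(np.float64)
    h, w = img.shape
    candidates = []
    for y in range(max(0, y0 - search), min(h - bsize, y0 + search) + 1):
        for x in range(max(0, x0 - search), min(w - bsize, x0 + search) + 1):
            blk = img[y:y + bsize, x:x + bsize].astype(np.float64)
            candidates.append((np.mean((blk - ref) ** 2), y, x))
    candidates.sort(key=lambda t: t[0])  # most similar blocks first
    group = [img[y:y + bsize, x:x + bsize].astype(np.float64)
             for _, y, x in candidates[:max_matches]]
    # The (k, bsize, bsize) stack is the 3D array on which BM3D would apply
    # its 3D transform, shrinkage, and inverse transform.
    return np.stack(group)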

Principal Component Analysis (PCA)

Methods based on Principal Component Analysis (PCA) in the spatial domain have been applied to image denoising [37, 57]. This technique reduces dimensionality by transforming the input data into the PCA domain and preserving only the most significant components. Muresan and Parks propose dividing the image into patches, which are in turn divided into sub-windows, each of which has an associated vector built from the pixels of the corresponding sub-window. PCA is then applied to these vectors to select a few principal components that are later used for smoothing [36]. This method has been refined in [67], where the training sample is selected by grouping pixels with similar local spatial structures using Local Pixel Grouping (LPG) before performing PCA. Additionally, the method in [36] has also inspired the development of a filter for images obtained from single-sensor digital cameras equipped with a Colour Filter Array (CFA) [68].
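The core PCA truncation step can be sketched as follows, assuming the patches have already been extracted and vectorized; neither the sub-window construction of [36] nor the LPG grouping of [67] is reproduced here.

import numpy as np

def pca_denoise_patches(patches, n_keep=8):
    """Denoise a set of (n, d) vectorized patches by PCA truncation."""
    mean = patches.mean(axis=0)
    centred = patches - mean
    # Principal directions: eigenvectors of the d x d covariance matrix.
    cov = centred.T @ centred / len(patches)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    basis = eigvecs[:, -n_keep:]             # keep the n_keep largest
    # Project onto the retained components and reconstruct; the discarded
    # low-variance components are assumed to carry mostly noise.
    return (centred @ basis) @ basis.T + mean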

Wavelet methods

The wavelet representation has become very popular in the image smoothing field [29]. It consists of decomposing an image signal into multiple scales, which represent its different frequency components. There are many wavelet families, such as the Haar, Daubechies, Coiflet, Symlet, Meyer, Morlet, and Mexican Hat wavelets, among others. In these methods, the image is smoothed by thresholding the detail coefficients; a hard scale-dependent threshold is proposed in [38]. Statistical modeling of the wavelet coefficients can be performed instead of thresholding to suppress noise [31, 43]. The wavelet transform is also useful for data regularization, as proposed in [9].
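A minimal thresholding sketch using the PyWavelets package is given below; it applies a single global hard threshold per scale rather than the scale-dependent threshold of [38], and all parameter values are illustrative.

import numpy as np
import pywt

def wavelet_denoise(gray, wavelet='db4', level=3, thresh=30.0):
    """Smooth a 2D image by hard-thresholding its wavelet detail coefficients."""
    coeffs = pywt.wavedec2(gray.astype(np.float64), wavelet, level=level)
    new_coeffs = [coeffs[0]]  # keep the coarse approximation untouched
    for details in coeffs[1:]:
        # Zero out small detail coefficients at every scale and orientation.
        new_coeffs.append(tuple(
            pywt.threshold(d, value=thresh, mode='hard') for d in details))
    return pywt.waverec2(new_coeffs, wavelet)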

Results

Figures 1 and 2 show the performance of some of the smoothing filters discussed in this section, applied to the classical Lenna and Parrots images corrupted by additive white Gaussian noise. The BM3D method offers impressive results in comparison to the others, as can be seen in the Lenna images of Figure 1. The PM smoothing method also performs well, since it smooths the noise without losing detail and edge information. However, if the noise level is high, PM can produce some artifacts in the image, as can be seen in the PM-filtered Parrots image of Figure 2.

Fig. 1

Results of different smoothing methods applied to the Lenna image corrupted by Gaussian noise with standard deviation σ = 10.

Fig. 2

Results of different smoothing methods applied to the Parrots image corrupted by Gaussian noise with standard deviation σ = 20.

Sharpening

Image enhancement consists of a collection of techniques whose purpose is to improve the visual appearance of an image and to highlight or recover certain details of the image for appropriate analysis by a human or a machine.

During the acquisition process, several factors can influence the quality of the image, such as illumination conditions, ambient pressure, or temperature fluctuations. In order to enhance the image, we try to transform it to reveal details that are obscured, or to sharpen certain features of interest. These techniques have a large number of applications, including medical image analysis, remote sensing, high definition television, and microscopic imaging. Such variety implies that there are also very different goals within image enhancement, according to each particular application: in some cases the purpose is to enhance the contrast, in others to emphasize details and/or edges of the image. We will refer to this last process as sharpening, although the difference is not always clear. The choice of the most suitable technique for each purpose is a function of the specific task to be conducted, the image content, the observer characteristics, and the viewing conditions.

In this section we present a brief overview of the principal sharpening techniques. They can be classified into two groups depending on the image domain in which they operate: spatial-based and frequency-based techniques. In the first case, we operate directly on the pixels, while in the second we operate on the transform (Fourier or wavelet) coefficients of the image. In the latter case, the effect of the manipulation can only be noticed once we recover the image via the inverse transform.

Spatial domain techniques

Spatial domain techniques for sharpening an image are based on manipulations of pixel values. One way to improve an image is to increase the contrast among its different parts.

There are several methods for image sharpening in the spatial domain. One of the most well-known is Histogram Equalization (HE), which adjusts the contrast using the histogram of the input image: the histogram is manipulated in order to separate the intensity levels of higher probability from their neighbouring levels. Figure 3 shows the initial histogram of a gray-scale image of Lenna and the one obtained after applying HE to the image. Figure 4 shows the input and output images corresponding to these histograms, illustrating how HE increases the global contrast of a gray-scale image.
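For an 8-bit gray-scale image, HE amounts to remapping intensities through the normalized cumulative histogram, as the following NumPy sketch illustrates.

import numpy as np

def histogram_equalization(gray):
    """Classical histogram equalization of an 8-bit gray-scale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    # The normalized cumulative histogram is a monotone mapping that
    # spreads the most populated intensity levels apart.
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[gray]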

Fig. 3

Histogram of a gray-scale image of Lenna and the histogram of the resulting image after applying HE.

Fig. 4

Comparison between a gray-scale image of Lenna and the resulting image after applying HE.

Applying this technique to colour images is not a simple task. Histogram equalization is a non-linear process that involves the intensity values of the image and not the colour components. For this reason, splitting the channels and equalizing each one separately is not the proper way to equalize the contrast. Instead, the first step is to convert the image from RGB into another colour space that separates intensity from colour components, such as HSV, YCbCr or Lab, and to apply the equalization to the V, Y or L channel, respectively. Figure 5 shows the result of applying HE to the R, G and B channels separately and to the L channel in the Lab space. There are other approaches that generalize histogram equalization to colour spaces; among the most well-known is the 3D histogram [60].
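A minimal OpenCV sketch of this strategy follows, using the Lab space as in Figure 5 and assuming an 8-bit BGR input image.

import cv2

def equalize_colour(bgr):
    """Equalize contrast without distorting hue: HE on the L channel only."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l_eq = cv2.equalizeHist(l)   # equalize lightness; leave colour untouched
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)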

Fig. 5

Comparison between HE applied to the RGB channels separately and to the L channel in Lab space.

There are many works seeking to improve HE, such as Brightness Bi-Histogram Equalization (BBHE) [19], where the image histogram is divided into two sub-histograms that are then equalized independently. Dualistic Sub-Image Histogram Equalization (DSIHE) [61] is similar to BBHE, but in this case the median intensity value is used as the separation level for dividing the histogram into two sub-histograms.

With Brightness Preserving Dynamic Histogram Equalization (BPDHE) [11], the input histogram is smoothed with a Gaussian kernel, avoiding the re-mapping of peaks that occurs with HE. This technique, however, does not account for the imprecision of gray values, since it processes crisp histograms. In order to address this, a fuzzy version of BPDHE, called Brightness Preserving Dynamic Fuzzy Histogram Equalization (BPDFHE) [55], has been proposed to handle the inaccuracy of gray levels. A more rigorous study of histogram-based methods is presented in [64], where a method aimed at maximizing the expected contrast, called Optimal Contrast-Tone Mapping (OCTM), is also proposed.

The aforementioned methods do not use the spatial information of a pixel's neighbours; they are confined to the intensity values of all pixels of the image. Local histogram equalization methods were introduced to adapt these techniques using local information. Along these lines, the Contrast Limited Adaptive Histogram Equalization (CLAHE) method enhances image contrast by applying contrast-limited HE on small data regions to adjust the local contrast of the image [69]. The locally obtained results are then joined together by bilinear interpolation to produce the output image.
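In OpenCV, CLAHE is available out of the box; in the sketch below the clip limit, tile layout, and file name are illustrative choices.

import cv2

gray = cv2.imread('lenna.png', cv2.IMREAD_GRAYSCALE)
# The clip limit bounds the local contrast amplification; the tiles are the
# small regions whose local equalizations are joined by bilinear interpolation.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
result = clahe.apply(gray)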

Figure 6 shows the result of applying the BPDFHE and CLAHE methods to the Lenna image. As indicated above, the latter improves performance through a local approach that allows more information about the image structure to be extracted.

Fig. 6

Original image of Lenna and the output images obtained after applying BPDFHE and CLAHE methods.

Another well-known spatial-domain sharpening technique is Contrast Stretching (CS), which modifies the dynamic range of the gray levels in the image being processed, i.e., the range between its minimum and maximum intensity values [65]. Linear Contrast Stretch (LCS) is the simplest contrast stretch algorithm: it stretches the pixel values of a low-contrast image by extending the dynamic range across the whole image spectrum. One disadvantage of this method is that some details may be lost due to saturation and clipping.
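A robust variant of LCS maps two percentile end points onto the full intensity range, as the following sketch shows; the percentile choices are illustrative, and the clipping they entail is precisely the saturation loss mentioned above.

import numpy as np

def linear_contrast_stretch(img, low_pct=1, high_pct=99):
    """Linearly stretch pixel values onto the full [0, 255] range."""
    img = img.astype(np.float64)
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img - lo) / (hi - lo) * 255.0          # linear remapping
    return np.clip(out, 0, 255).astype(np.uint8)  # saturate the outliers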

In the unsharp masking (UM) approach [47], an edge image is computed as a fraction of a high-pass filtered version of the original image. This edge image is added to the original one to form the enhanced image. The main advantage of this method is its simplicity; however, it produces a large amplification of noise, which often makes it impractical. Several approaches have been suggested for reducing the noise sensitivity of the linear UM technique, many of them based on the use of nonlinear operators in the correction path. A quadratic filter that can be approximately characterized as a local-mean-weighted adaptive high-pass filter is described in [32, 48]. An approach based on the order-statistics Laplacian operator is described in [20]. An adaptive approach that prevents sharpening in flat regions, making the method more robust in the presence of noise, is proposed in [46].
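The linear UM scheme itself fits in a few lines; in this sketch the high-pass component is obtained as the difference with a Gaussian-blurred copy, and sigma and amount are illustrative parameters.

import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=2.0, amount=1.0):
    """Linear unsharp masking of a gray-scale image."""
    img = img.astype(np.float64)
    # The 'edge image': the original minus a low-pass (blurred) version.
    edge_image = img - gaussian_filter(img, sigma)
    # Adding back a fraction of it sharpens edges, but amplifies noise too.
    return np.clip(img + amount * edge_image, 0, 255)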

Frequency domain techniques

Frequency domain techniques are based on transformations such as the Discrete Fourier (or Cosine) Transform or wavelet transforms. We recall that none of these methods is unique; in fact, each comprises a family of methods that are essentially the same, each with slight differences with respect to the others. They work as follows: first, we apply one of these transforms; then we process the transformed coefficients; and finally, the inverse transform of the processed coefficients gives us the result.

This approach has a great advantage: the ease of distinguishing different kinds of region in an image. Higher frequencies are related to edges and details, while lower frequencies correspond to smooth areas of the image. This easy separation allows the image to be processed appropriately depending on the goal. However, it also means that the details of different regions are processed at the same time, in an indistinguishable way; the same happens with smooth regions.
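As a simple frequency-domain example, the following high-boost sketch emphasizes the high frequencies of a gray-scale image with a Gaussian high-pass mask; cutoff and alpha are illustrative parameters.

import numpy as np

def fft_high_boost(gray, cutoff=30.0, alpha=1.5):
    """Sharpen a 2D image in the Fourier domain (high-boost filtering)."""
    F = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
    h, w = gray.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    # Gaussian high-pass mask: ~0 at the centre (low frequencies), ~1 far away.
    H = 1.0 - np.exp(-(xx ** 2 + yy ** 2) / (2.0 * cutoff ** 2))
    # Transform, emphasize the high frequencies, and invert the transform.
    out = np.fft.ifft2(np.fft.ifftshift((1.0 + alpha * H) * F))
    return np.real(out)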

Wavelet theory has become a powerful image processing tool in recent years, since it provides both spatial and frequency information about the image. An enhancement of the image can be obtained by adding high-pass filtered versions to, or subtracting low-pass filtered versions from, the image [29, 30]. One of the early works on contrast sharpening in the wavelet domain is reported in [26], where a parametrised hyperbolic function is applied to the gradient of the wavelet coefficients. Since then, many works have been developed in the wavelet domain. For instance, Loza et al. proposed a non-linear enhancement method based on the local dispersion of the wavelet coefficients [25]. This algorithm enhances the contrast of images adaptively, based on local statistics of the wavelet coefficients of the image.

A contrast enhancement technique based on scaling the internal noise of a dark image in the Discrete Cosine Transform (DCT) domain is developed in [13, 14]. It is based on a concept from physics called dynamic stochastic resonance (DSR), which uses noise to improve the performance of a system [6]. The proposed algorithm enhances the contrast of colour images by applying the DSR method iteratively to the DCT coefficients of the image. DSR-based methods in the wavelet domain have also been proposed in [1]. DSR-based techniques are mainly centered on enhancement rather than on sharpening edges or details of the image, and they perform best when applied to poorly lit images.

Results

Figure 7 shows the output of the UM and CLAHE methods for the Parrots image, together with enlarged detail regions in which the sharpening effect on edges can be appreciated. This is an example of a sharpening technique, as opposed to the examples shown in Figure 6, which correspond to methods more tied to contrast enhancement; both can be compared in Figure 7, where contrast enhancement using CLAHE is shown next to sharpening using UM.

Fig. 7

First row: the original Parrots image, the image filtered with UM, and the image filtered with CLAHE. Second row: a small detail region of each.

Smoothing and sharpening of colour images

In this section we discuss techniques that jointly consider smoothing and sharpening. The first idea that comes to mind is to process the image in two steps: first performing one operation and then, on the processed image, carrying out the second. Here, the order in which the operations are applied can greatly change the output. If we sharpen before smoothing, we may increase the relevance of the image noise, which complicates the smoothing task. If, by contrast, we smooth before sharpening, we may lose information in the smoothing step that the sharpening method cannot recover. In general, the second approach usually provides better results; however, it is still not an optimal solution. For that reason, techniques able to combine smoothing and sharpening simultaneously have been proposed in the last few years.

Two-step approach

Two-step methods for smoothing and sharpening consist of the sequential application of two methods, one of each type. In Figure 8 we compare two-step methods based on BF for smoothing and CLAHE for sharpening: in the first case we start with BF, and in the second with CLAHE. This last combination is applied to the noisy Lenna and Parrots images in Figure 9.
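Both orderings can be written as short OpenCV pipelines, as in the sketch below; the BF and CLAHE parameters are illustrative.

import cv2

def smooth_then_sharpen(bgr):
    """Two-step pipeline: bilateral filtering first, CLAHE afterwards."""
    smoothed = cv2.bilateralFilter(bgr, d=9, sigmaColor=50, sigmaSpace=50)
    lab = cv2.cvtColor(smoothed, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

def sharpen_then_smooth(bgr):
    """The opposite ordering: CLAHE first, then bilateral filtering.
    Any noise amplified by CLAHE must now be removed by the filter."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    sharp = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
    return cv2.bilateralFilter(sharp, d=9, sigmaColor=50, sigmaSpace=50)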

Fig. 8

First row: the original image, the image corrupted by Gaussian noise with σ = 10, the image filtered with BF, and finally the output of BF followed by CLAHE. Second row: the original and noisy images, the image enhanced with CLAHE, and finally the output of CLAHE followed by BF.

Fig. 9

First row: the original images and the images corrupted by Gaussian noise with σ = 10. Second row: the result of applying CLAHE and subsequently BF to both images, and then the opposite approach, BF and subsequently CLAHE.

We have seen the result of smoothing an image and subsequently applying a sharpening technique to the denoised image. In the first step a considerable amount of image information is lost, and the second step is not able to recover it. To overcome this drawback, we can first apply sharpening and then smooth the image in a second step. The results of both approaches can be seen in Figures 8 and 9.

Another example of a unified two-step method for both smoothing and sharpening of low-light colour images is proposed in [21]. There, two different steps are applied as well: a BM3D filter is combined with a structural filter for smoothing, and afterwards a luminance-adaptive contrast is applied in order to sharpen the details of the smoothed image.

Simultaneous approach

Although smoothing and sharpening are apparently opposite operations, the need to use both techniques at the same time is ever increasing. Both have been extensively studied, and the techniques developed for each process are very different. However, the same cannot be said about performing both operations at once: the state of the art in methods able to sharpen details while removing noise is still relatively limited. In this section we present some of these techniques.

Two smoothing and sharpening techniques, PM and CLAHE, have been combined by means of a synchronization algorithm [4], where an improvement with respect to the corresponding two-step methods based on them can be appreciated. The method draws on the advantages of the original models and combines them to construct a good tool for medical images, more concretely for magnetic resonance imaging.

As we saw in Section 2, PM is based on a non-linear forward diffusion process governed by a diffusion coefficient that makes it possible to control the smoothing effect over the image. It is therefore tempting to use backward diffusion in order to obtain a sharpened image. However, backward diffusion is unstable and an ill-posed problem. Nevertheless, Gilboa et al. showed that it is possible to combine forward and backward nonlinear diffusion processes, giving rise to the Forward-and-Backward (FAB) diffusion process [7]. FAB is able to sharpen details while removing noise. An adaptive control of the local degree of diffusion, depending on the local gradient and inhomogeneity, was later considered to introduce Local Variance-Controlled Forward-and-Backward diffusion (LVCFAB) [63].
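The FAB idea can be sketched schematically by reusing the explicit PM scheme with a diffusivity that changes sign: positive (forward, smoothing) for small gradients and negative (backward, sharpening) in a band of moderate gradients. The diffusivity below is only illustrative and does not reproduce the exact function or parameters of [7].

import numpy as np

def fab_diffusivity(s, kf=10.0, kb=40.0, w=15.0, alpha=0.5):
    """Illustrative forward-and-backward diffusivity."""
    forward = 1.0 / (1.0 + (s / kf) ** 2)           # smoothing term
    backward = alpha / (1.0 + ((s - kb) / w) ** 2)  # sharpening term
    return forward - backward   # negative around s = kb: backward diffusion

def fab_step(u, dt=0.1):
    """One explicit FAB update, with the 4-neighbour scheme used for PM."""
    update = np.zeros_like(u)
    for axis, shift in [(0, -1), (0, 1), (1, -1), (1, 1)]:
        d = np.roll(u, shift, axis=axis) - u
        update += fab_diffusivity(np.abs(d)) * d
    return u + dt * update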

Nevertheless, like the backward diffusion process itself, FAB diffusion is unstable and ill-posed. In order to overcome this drawback, Ratner and Zeevi proposed using the telegraph-diffusion equation (TeD) [49, 50] instead of the diffusion equation, giving rise to a stable smoothing and sharpening method called TeD-FAB.

In [3], the authors proposed combining BM3D with a transform-domain sharpening technique applied to the blocks, in order to sharpen the image while noise is being removed. We will refer to this method as BM3DSharp.

Fuzzy-based methods with this double purpose can also be found. Russo proposed, in [51, 52], a fuzzy neural network technique consisting of a multiple-output processing system that adopts fuzzy networks in order to combine sharpening and smoothing. In particular, three fuzzy networks are combined: the first and third smooth the image, while the second is responsible for the sharpening. The aforementioned methods can be compared in Figures 10, 11, and 12.

Fig. 10

Denoising results for the Lenna image corrupted by Gaussian noise with standard deviation σ = 20.

Fig. 11

Results of smoothing and sharpening an image corrupted by Gaussian noise with σ = 30, using different methods.

Fig. 12

First row: the Lenna image corrupted by Gaussian noise with standard deviations σ = 10, σ = 20 and σ = 30. Second row: the output of the Fuzzy Network filter. Third row: the output of BM3DSharp and, in the last row, the output of TeD-FAB.

As mentioned above, UM has the disadvantages of increasing the noise in homogeneous regions and of not being able to sharpen all details, due to its fixed sharpening strength. With the purpose of overcoming these drawbacks and removing noise at the same time as edges are sharpened, Kim et al. developed an adaptive unsharp mask called the Optimal Unsharp Mask (OUM) [18]. It is based on the classical UM approach, but its parameter changes according to the local edge strength.

In [66], an Adaptive Bilateral Filter (ABF) based on the classical BF is presented. The BF is reformulated by integrating a shift-variant technique to increase the slope of the edges and to smooth the noise. ABF presents a sharpening performance similar to that of OUM, but without producing OUM's artefacts; moreover, it achieves better noise suppression. However, ABF significantly increases the computational complexity, which is proportional to the window size.

To overcome this problem, an Adaptive Guided Image Filtering (AGF) method, which combines a guided filter with the shift-variant technique, has been proposed in [41, 42].

In a few words, the guided filter is a linear translation-variant filter in which each pixel is replaced by a linear transform of a guidance image (the input image or another one). Saini et al. proposed a modification of the ABF that first segments the image into clusters with similar structure [58]. This clustering is based on features that describe the local structure of the image. After segmentation, each pixel is processed with a weighted mean that uses the bilateral weights of the corresponding cluster.

Wavelet-based methods have also been proposed to deal with smoothing and sharpening simultaneously. In [24], the image in the HSV space is transformed into the wavelet domain by the Dual-Tree Complex Wavelet Transform (DT-CWT), where the wavelet coefficients are adjusted in order to obtain a smooth and enhanced image. Along these lines, Li-na et al. applied wavelet methods to colour images in the HSV space [22]. Their method uses the properties of each channel to obtain the desired result: the saturation channel is smoothed according to a simple transformation using the maximum and minimum values of the RGB space; the luminance channel is smoothed using a wavelet threshold and is also enhanced by compressing the low frequencies of the image; finally, the hue channel is kept unchanged.

In [10], the authors apply a smoothing and sharpening process to images in the YIQ space. The method depends on the surface texture around each pixel in order to smooth flat regions of the image while sharpening details. This is done using a Gaussian derivative filter that divides the Y channel into flat and edge areas; the edge areas are then sharpened using a Gaussian derivative operator, and the flat ones are smoothed using the SUSAN method.

A combined method based on the graph Laplacian operator is presented in [23], where the output image is the solution of a minimization problem involving a function with two terms: one is a standard sparse coding formulation for image smoothing, and the other sharpens the image by means of the Laplacian operator.

Conclusions

In this paper, the main techniques for removing white Gaussian noise from colour images have been revisited. We have also reviewed the typical techniques for colour image sharpening, both in the spatial and in the frequency domain.

The two operations are of an opposing nature: the aim of smoothing is to remove noise, while the aim of sharpening is, in a sense, the opposite, since it tries to emphasize details, making variations, details, and edges of the image more visible. We have seen that applying both techniques in two steps, one after the other, produces poor results, either losing relevant information or sharpening the noise.

The reduced number of approaches that simultaneously address both goals stems from the difficulty of combining these apparently contradictory processes. We have reported the most remarkable of these methods.
