
Application of various approaches of multispectral and radar data fusion for modelling of aboveground forest biomass



Introduction

Rational forest management in today's changing world is an important element of sustainable development. Forest ecosystems comprise a significant carbon pool and play a substantial role in the carbon exchange between the land and the atmosphere through the processes of photosynthesis, respiration and decomposition. For this reason, forests have important resource, recreation and conservation functions, and they maintain and stabilize the climate system. This is why the REDD Programme was launched in 2008 under the auspices of the United Nations (http://www.un-redd.org), while the new EU Forest Strategy 2013 also recognizes the important role that forests play in this area (Schepaschenko et al. 2014).

The study and assessment of forest productivity are, therefore, important components of rational forest management under a changing climate (Lyalko et al. 2012). Forest productivity is conventionally assessed using the methods of ground forest inventory. These methods provide a wide set of forest parameters, including the data needed to assess the role of forest ecosystems in biogeochemical cycles. Tables and models of forest growth and productivity form the basis of these methods. Such methods have been applied for a long time, and to date, many guidelines and primary standards for forest biomass estimation have been developed (Shvidenko et al. 1987, 2003, 2007; Shvidenko 2003; Lakyda 2002). Ground methods of forest inventory provide results with quite high accuracy. However, they require considerable time and human effort when vast areas must be inventoried, and they are hard to apply in territories that are difficult to reach geographically. In such situations, remote sensing methods can be very useful. Among all the different methods for studying and measuring forests, only remote sensing provides a spatially continuous distribution of land cover parameters. Ground forest inventory data measured on local calibration plots can be related to land cover parameters calculated from remote sensing; on this basis, mathematical models of aboveground forest biomass can be built and then extrapolated to other areas.

The first attempts at using remote sensing to study vegetation cover were made in the 1970s, after the first natural resource satellite, ERTS (Landsat 1), was launched. The main idea behind passive optical remote sensing of vegetation is to exploit differences in the absorption and reflection of solar radiation by plants in different spectral bands. The normalized difference vegetation index (NDVI) was one of the first vegetation indices (VIs) proposed for the estimation of vegetation (Rouse et al. 1974; Tucker 1979), and it remains the most famous and commonly used index for identifying living green biomass from multispectral remote sensing data. During the last few decades, many other VIs have been proposed for estimating the relationship between the spectral signature of a forest stand and its productivity (Hall et al. 2006; Lu et al. 2004, 2005; Ji et al. 2012; Steininger 2000; Song 2013; Hudak et al. 2002). However, these optical methods have certain limitations: they can only be used to estimate the top of the canopy and cannot capture the structure of the stand, which greatly affects the quality of forest biomass estimation.

In recent years, numerous studies have been undertaken on forest stand structure and bioproductivity using synthetic aperture radar (SAR) systems (Champion et al. 2013, 2014; Balzter et al. 2007; Neumann et al. 2010; Santoro et al. 2011, 2013; Stankevich et al. 2017). The results have shown that radar surveys can be used successfully to improve forest inventory. Unlike optical systems, radar systems are able to penetrate the forest cover and allow us to estimate the forest stand structure. Therefore, combining these two types of systems is very promising for improving the estimation of forest biomass (Cartus et al. 2012; Hyde et al. 2006; Attarchi and Gloaguen 2014). Based on the integration of multi-dimensional models of forest ecosystems, multi-sensor remote sensing concepts and ground data, a comprehensive quantification of forest cover and its parametrization can be provided with uncertainties acceptable for policy-making (Schepaschenko et al. 2014).

This research aimed to study and compare various approaches to merging optical and SAR images for aboveground forest biomass modelling.

Material and methods
Estimation of the aboveground forest biomass with field data

The main source of data for ground-based estimation of Ukrainian forest biomass parameters is the forest inventory carried out by forest enterprises. Previous assessments of Ukrainian forest live biomass have been undertaken by various researchers (Lakyda 2002; Lakyda et al. 2011, 2012). In these cases, the aboveground forest biomass was not measured directly during the forest inventory but was computed using allometric models (Shvidenko et al. 1987, 2007; Lakyda 2002; Lakyda et al. 2012).

Algorithms for modelling aboveground forest biomass have been developed by Lakyda et al. (2011). The models for the assessment of live aboveground biomass and the carbon content of different components of the forest stand are based on the relations between forest productivity and the main inventory parameters of the forest stand (e.g. age, diameter and height) (Lakyda et al. 2012; Cortés et al. 2014; Lu et al. 2004; Singh et al. 2014). It has been shown that the relation between live biomass and the main biometric indicators of forest stands can be expressed as follows: \[M_i = f(A, D, H, P)\] where:

Mi – aboveground biomass (t/ha),

i – a component of the forest stand (stem, branches, leaves, etc.),

A – forest age,

D – average diameter of stems,

H – average height of the forest stand,

P – relative stocking (Lakyda et al. 2012).

For example, aboveground biomass of deciduous forests of the Ukrainian Polissya was estimated using the following equation (Lakyda et al. 2012): \[M_i = a_i \times D^{b_i} \times H^{c_i} \times P^{d_i}\] where:

ai, bi, ci, di – regression coefficients calculated for each biomass fraction of forest (i).

The biomass for the main fractions of trees (stem (st), branches (br) and leaves (lv)) has been calculated using equation (2), while the biomass of the crown (Mcr) and the whole aboveground tree biomass (Mtr) is calculated using the following: \[M_{cr} = M_{br} + M_{lv}\] \[M_{tr} = M_{st} + M_{cr}\]

To convert this biomass into carbon, coefficients of carbon content in the corresponding fraction are utilized. From the literature, Lakyda et al. (2012) have assumed values of 0.5 and 0.45 for wood and leaves respectively. Thus, the equations for calculating carbon stock in the forest stand are given as follows: \[M_{cr}^{c} = 0.5M_{br} + 0.45M_{lv}\] \[M_{tr}^{c} = 0.5(M_{st} + M_{br}) + 0.45M_{lv}\]
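For illustration, the following is a minimal sketch of how equations (2)–(6) chain together. The regression coefficients below are hypothetical placeholders, not the published values; the actual species-specific coefficients must be taken from Lakyda et al. (2012) or fitted to local inventory data.

```python
# Hypothetical coefficients (a_i, b_i, c_i, d_i) per fraction; placeholders only,
# the real species-specific values are published in Lakyda et al. (2012).
COEFFS = {
    "stem":     (0.05, 1.8, 0.9, 0.7),
    "branches": (0.01, 1.5, 0.6, 0.5),
    "leaves":   (0.02, 1.1, 0.3, 0.4),
}

def fraction_biomass(D, H, P, coeffs):
    """Eq. (2): M_i = a_i * D^b_i * H^c_i * P^d_i (t/ha)."""
    a, b, c, d = coeffs
    return a * D ** b * H ** c * P ** d

def aboveground_biomass(D, H, P):
    """Chain Eqs. (2)-(6): fraction biomass, crown/tree totals, carbon stock."""
    M = {f: fraction_biomass(D, H, P, c) for f, c in COEFFS.items()}
    M["crown"] = M["branches"] + M["leaves"]                              # Eq. (3)
    M["tree"] = M["stem"] + M["crown"]                                    # Eq. (4)
    carbon_crown = 0.5 * M["branches"] + 0.45 * M["leaves"]               # Eq. (5)
    carbon_tree = 0.5 * (M["stem"] + M["branches"]) + 0.45 * M["leaves"]  # Eq. (6)
    return M, carbon_crown, carbon_tree

# Example: a stand with D = 24 cm, H = 22 m, relative stocking P = 0.8
M, c_crown, c_tree = aboveground_biomass(24.0, 22.0, 0.8)
```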

Optical remote sensing for forest biomass modelling

Optical remote sensing of forests is based on developing relationships between spectral signatures and measured parameters. Multispectral satellite images provide the spatial distribution of land cover reflectance in the visible, near-infrared and infrared spectral ranges. These reflectance data can be used directly as well as for VI calculations. Depending on their complexity, such indices can be divided into simple ratios (SR), normalized differences (ND) and complex vegetation indices (Lu et al. 2004). A wide set of VIs has been proposed to date; some of these are listed in Table 1.

The theoretical basis for empirical VIs is derived from the typical spectral reflectance signatures of leaves. The reflected radiation in the visible bands is very low as a result of high absorption by photosynthetically active pigments, with maximum absorption in the blue (470 nm) and red (670 nm) wavelengths. Near-infrared (NIR) radiation is scattered (reflected and transmitted) with very little absorption, in a manner dependent upon the structural properties of the canopy (leaf area index (LAI), leaf angle distribution and leaf morphology). As a result, the contrast between the red and near-infrared responses is a sensitive measure of vegetation amount, with maximum contrast over a dense canopy and minimal contrast over sparse or no vegetation.

A major finding on the minimization of atmospheric effects is the use of the difference between blue and red reflectances as an estimator of the level of atmospheric influence. This concept is based on the wavelength dependency of aerosol scattering cross sections: in general, the scattering cross section in the blue band is larger than that in the red band, and the higher the aerosol concentration, the larger the difference between the two bands. This information is used to stabilize the index value against variations in aerosol concentration.

Table 1. Some spectral vegetation indices used for vegetation cover estimation

Index Equation Reference
Simple ratio
TM4/3 TM4/TM3 Jordan, 1969
TM5/3 TM5/TM3 Lu et al., 2004
TM5/4 TM5/TM4 Lu et al., 2004
TM5/7 TM5/TM7 Lu et al., 2004
Normalized difference vegetation indices
NDVI (TM4 – TM3)/(TM4 + TM3) Rouse et al., 1974
NDVI53 (TM5 – TM3)/(TM5 + TM3) Lu et al., 2004
NDVI54 (TM5 – TM4)/(TM5 + TM4) Lu et al., 2004
NDVI57 (TM5 – TM7)/(TM5 + TM7) Lu et al., 2004
NDVI32 (TM3 – TM2)/(TM3 + TM2) Lu et al., 2004
GNDVI (TM4 – TM2)/(TM4 + TM2) Gitelson et al., 1996
NDII (TM4 – TM5)/(TM4 + TM5) Hardisky et al., 1983
NDII7 (TM4 – TM7)/(TM4 + TM7) Hardisky et al., 1983
NDWI (TM2 – TM5)/(TM2 + TM5) Lacaux et al., 2007
NDWI7 (TM2 – TM7)/(TM2 + TM7) Lacaux et al., 2007
Complex vegetation indices
SAVI ((TM4 – TM3)/(TM4 + TM3 + L))(1 + L) Huete, 1988
EVI G(TM4 – TM3)/(TM4 + C1TM3 – C2TM1 + L) Huete, 1997
ARVI (TM4 – 2TM3 + TM1)/(TM4 + 2TM3 – TM1) Lu et al., 2004
GEMI ξ(1 – 0.25ξ) – ((TM3 – 0.125)/(1 – TM3)) Lu et al., 2004
ASVI ((2TM4 + 1) – √((2TM4 + 1)² – 8(TM4 – 2TM3 + TM1)))/2 Lu et al., 2004
MSAVI ((2TM4 + 1) – √((2TM4 + 1)² – 8(TM4 – 2TM3)))/2 Lu et al., 2004

Note: TM(X) is the corresponding Landsat TM/ETM+ spectral band
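As an illustration of how such indices are computed from reflectance bands, the following is a short NumPy sketch of NDVI, SAVI and EVI. The EVI coefficients used here (G = 2.5, C1 = 6, C2 = 7.5, L = 1) are the commonly used standard values and are an assumption, since the table does not specify them.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - RED) / (NIR + RED) (Rouse et al. 1974)."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """SAVI with soil-adjustment factor L (Huete 1988)."""
    return (nir - red) / (nir + red + L) * (1.0 + L)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """EVI with aerosol-resistance coefficients C1, C2 (Huete 1997)."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# Example with surface reflectance values in the range 0..1:
nir = np.array([0.45, 0.30])
red = np.array([0.05, 0.12])
blue = np.array([0.04, 0.08])
print(ndvi(nir, red), savi(nir, red), evi(nir, red, blue))
```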

The enhanced vegetation index (EVI) incorporates this atmospheric resistance concept from the atmospherically resistant vegetation index (ARVI), along with the removal of soil-brightness-induced variations in VI, as in the soil-adjusted vegetation index (SAVI). The EVI additionally decouples the soil and atmospheric influences from the vegetation signal by including a feedback term for simultaneous correction.

Lu et al. (2004) showed that the relationships between TM reflectance and forest stand parameters vary depending on the characteristics of the study areas, and not all vegetation indices are significantly related to forest stand parameters. What is crucial is the selection of suitable TM band(s) and VIs for estimating the relevant biophysical parameter.

SAR data for forest biomass modelling

Unlike optical remote sensing, which senses only the top of the forest cover and provides no information about forest structure (optical waves cannot penetrate the canopy), radar remote sensing uses the microwave portion of the electromagnetic spectrum. Canopy penetration varies with wavelength. Shorter wavelengths (e.g. X-band imagery at 3 cm) are reflected from the top of the canopy, while longer wavelengths (e.g. L-band imagery at 24 cm) penetrate down to the ground and are reflected from there. These properties make it possible to discern information about the canopy structure of a forested area from a multi-wavelength image and thus to estimate the different components of aboveground biomass (AGB).

One more useful feature of radar data for studying forest structure is polarization. Transmitted and received radar signals propagate in a certain plane, usually horizontal (H) or vertical (V). Vertically polarized waves interact with the vertical stems of the forest cover, while horizontally polarized waves penetrate through the canopy. Thus, combining images from different polarization channels can provide additional information about forest structure.

As with optical remote sensing data, two sets of texture features can be calculated using SAR data: the first can be derived from the backscatter (sigma nought, σ0) distribution and the second from the grey-level co-occurrence matrix. The intensity scenes of SAR images are converted into their corresponding backscattering coefficient (σ0) values using the following equation (Attarchi et al. 2014; Shimada et al. 2009): \[\sigma^{0} = 10 \times \log_{10}(I^{2} + Q^{2}) + CF - 32.0\] where:

CF – calibration factor = −83 dB,

I, Q – the real and imaginary parts of the complex SAR image pixel values.
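A minimal sketch of Eq. (7) in NumPy, assuming i and q are arrays holding the real and imaginary parts of the SLC pixel values:

```python
import numpy as np

CF = -83.0  # PALSAR calibration factor (dB), as given in the text

def sigma_naught_db(i, q):
    """Eq. (7): convert the real (i) and imaginary (q) parts of complex SLC
    pixel values into the backscattering coefficient sigma-nought in dB."""
    i = np.asarray(i, dtype=np.float64)
    q = np.asarray(q, dtype=np.float64)
    return 10.0 * np.log10(i ** 2 + q ** 2) + CF - 32.0
```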

To take into account the effects of relief, the topographic normalized backscattering coefficient (the corrected backscatter in gamma-nought γ0) can be obtained from the sigma-nought σ0 value according to Ulander (1996), Castel et al. (2001) and Santoro et al. (2011): \[\gamma^{0} = \sigma^{0} \times \frac{A_{flat}}{A_{slope}} \times \left(\frac{\cos\theta_{ref}}{\cos\theta_{loc}}\right)^{n}\] where:

σ0 – the radar backscattering coefficient,

Aflat – the pixel size for a theoretical flat terrain,

Aslope – the true local pixel size for the mountain terrain,

θloc – the local incidence angle,

θref – the radar incidence angle at the image centre.

The exponent n represents the optical depth of the canopy and ranges between 0 and 1. It is a site-specific factor that is difficult to obtain in practice; therefore, it is set to 1 (Thiel et al. 2009; Kim 2012; Santoro et al. 2011; Attarchi et al. 2014).
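A sketch of the topographic normalization in Eq. (8) follows. It assumes the correction is applied to backscatter in linear power units (an assumption, since the model is multiplicative), with conversion from and back to dB; the angles are in radians and n = 1 as in the text.

```python
import numpy as np

def gamma_naught_db(sigma0_db, a_flat, a_slope, theta_ref, theta_loc, n=1.0):
    """Eq. (8): topographic normalization of sigma-nought (dB in, dB out)."""
    sigma0_lin = 10.0 ** (sigma0_db / 10.0)                            # dB -> linear
    ratio = (a_flat / a_slope) * (np.cos(theta_ref) / np.cos(theta_loc)) ** n
    return 10.0 * np.log10(sigma0_lin * ratio)                         # linear -> dB
```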

Previous studies have shown that the forest backscatter can be described as a function of the growing stock volume, V (Pulliainen et al. 1994; Santoro et al. 2011): \[\sigma_{for}^{0} = \sigma_{gr}^{0} \times e^{-\beta V} + \sigma_{veg}^{0}(1 - e^{-\beta V})\]

The backscatter model in Eq. (9) contains three unknowns that need to be estimated: the ground backscatter $\sigma_{gr}^{0}$, the backscatter of opaque vegetation $\sigma_{veg}^{0}$ and the empirical coefficient β. These can be determined by means of least-squares regression, using a dataset of reference forest growing stock volume (GSV) measurements (Pulliainen et al. 1994; Santoro et al. 2002, 2013). After the backscatter coefficients and texture features have been calculated, the forest GSV can be estimated.
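A minimal sketch of estimating the three unknowns by least squares with SciPy's curve_fit; the reference GSV and backscatter values below are illustrative placeholders, not measurements from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def forest_backscatter(V, sigma_gr, sigma_veg, beta):
    """Eq. (9): forest backscatter (linear units) vs. growing stock volume V."""
    att = np.exp(-beta * V)
    return sigma_gr * att + sigma_veg * (1.0 - att)

# Reference GSV (m3/ha) and backscatter values: illustrative placeholders only.
V_ref = np.array([10.0, 50.0, 120.0, 200.0, 300.0])
s0_ref = np.array([0.030, 0.045, 0.060, 0.066, 0.068])

params, _ = curve_fit(forest_backscatter, V_ref, s0_ref, p0=(0.02, 0.07, 0.01))
sigma_gr, sigma_veg, beta = params
```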

Multi-source remote sensing and field data fusion

Data fusion is a general multi-disciplinary approach. It combines data from multiple sources to improve the potential value and interpretation performance of the source data and to produce a high-quality visual representation of the data.

In general, remote sensing fusion techniques can be classified into three different levels: the pixel/data level, the feature level and the decision level (Pohl and van Genderen 1998; Zhang 2010). Pixel-level fusion is the combination of raw data from multiple sources into single resolution data, which is expected to be more accurate than any of the individual inputs or may reveal changes between data sets acquired at different times (Zhang 2010).

Feature-level fusion extracts various features, e.g. edges, corners, lines and texture parameters, from different data sources and then combines them into one or more feature maps that may be used instead of the original data for further processing. This is particularly important when the number of available spectral bands becomes so large that it is impossible to analyse each band separately. Typically, in image processing, such fusion requires a precise (pixel-level) registration of the available images. Feature maps thus obtained are then used as inputs to pre-processing for image segmentation or change detection (Zhang 2010).

Decision-level fusion combines the results from multiple algorithms to yield a final fused decision. When the results from different algorithms are expressed as confidences (or scores) rather than decisions, it is called soft fusion; otherwise, it is called hard fusion. Methods of decision fusion include voting methods, statistical methods and fuzzy logic-based methods (Zhang 2010).

A set of multi-source remote sensing data fusion methods exists, based on different techniques: Markov random fields (MRF) (Solberg et al. 1996), support vector machines (SVM) (Waske and Benediktsson 2007) and the decision fusion approach for multi-temporal classification (Jeon and Landgrebe 1999). However, the different nature and content of remote sensing imagery and GIS data prevent a direct comparison. Therefore, the integration of data from different applications must address the differences in the object model and in the semantics of the objects themselves (Zhang 2010). Images are usually composed of a raster of pixels representing intensities, whereas GIS data contain artificial objects (points, lines and polygons) with labels representing the objects or region affiliations. To combine segmented objects or primitives from remote sensing images and GIS data at the feature level or decision level, traditional pattern recognition methods can be used and have demonstrated their potential capabilities, e.g. knowledge-based techniques (Amarsaikhan and Douglas 2004), neural network and statistical approaches (Benediktsson and Kanellopoulos 1999), fuzzy set theories (Fauvel et al. 2006), Bayesian techniques and Dempster-Shafer-based methods (Zhang 2010).

Data
Field data (measurement of forest live biomass in situ)

The field data were collected during a detailed forest inventory supported by the National University of Life and Environmental Sciences of Ukraine. The study area is located in the Chernihiv region (Ukrainian Polissya); the centre of the area lies at 52.070556 N, 31.839722 E, and it covers over 60 sq. km. Within this site, forest parameters were measured using ground-based methods. The dataset contains the results of the measurements of forest live biomass structure. The data are presented in the same units and organized as a unified structure. The dataset is designed for studying the biological production of Ukrainian forests under global change. It has been used for (1) modelling the fractional structure of forest live biomass in Ukraine based on data from the forest inventory (State Forest Account) and (2) developing models and tables of the dynamics of the biological productivity of forests. For these reasons, the dataset contains detailed biometric indicators of forest stands within the sample plots.

The dataset contains the following information for each test plot:

Geographical location (administrative region, forestry, plot number, plot area and geographical coordinates (if available))

Forest type

Biometric characteristics of stands (dominant species, species composition, number of model trees, age, average height and average stem diameter, number of trees per 1 ha, absolute and relative stocking, growing stock volume and AGB of different components of the forest stand (stem, branches, leaves, bark and undergrowth))

There are more than 150 samples for coniferous forest and more than 200 for softwood species. This provides an adequate sample for building and validating the models.

Remote sensing data

The RapidEye multispectral image from July 1, 2010, and the PALSAR radar image from August 2, 2009, were selected as sources of remote sensing data.

PALSAR is an L-band SAR sensor. We used imagery of the SLC Fine Beam Double polarization HH/HV product (level 1.1) with a 12-metre resolution. In order to use the SAR data in a quantitative fashion, we applied a range of pre-processing steps. First, radiometric calibration was performed to convert the raw digital numbers of the amplitude imagery into sigma nought values. Second, terrain correction was performed to remove geometry-induced distortions, and geocoding was applied to transform the image from the SAR geometry into the UTM projection. Finally, speckle filtering was applied using the Lee sigma filter.

The RapidEye optical system provides five spectral bands (440–510 nm (blue), 520–590 nm (green), 630–685 nm (red), 690–730 nm (red edge) and 760–850 nm (near infrared)) with a 6.5-metre spatial resolution. We used the RapidEye Ortho Tile Product (Level 3A). In this product, radiometric and sensor corrections have already been applied, and the imagery is orthorectified using the RPCs and an elevation model. Therefore, we additionally performed only atmospheric correction and resampled the image to match the spatial resolution of the SAR image.

Results and discussion

Using the RapidEye multispectral image, the average reflectance in each spectral band was calculated for each forest sample plot; for the PALSAR image, the average amplitude value in the HH and HV polarization modes was calculated in the same way. The obtained values were compared with the aboveground forest biomass data from the field data set. This preliminary analysis showed that forest AGB values have either an exponential or a power-law relation with spectral reflectance and SAR amplitude. It also revealed that the saturation level of the informative signal for the optical spectral bands is reached at a forest AGB of about 50 t/ha. This is due to the above-mentioned limitations of optical systems: at forest AGB values of 50 t/ha and above, the forest canopy is too dense to be penetrated by the optical signal. For the PALSAR image, the saturation level of the radar signal is reached at a forest AGB of about 100–150 t/ha.

However, one of the disadvantages of radar data is that radar measurements over rough surfaces are corrupted by “speckle”, which significantly affects the accuracy of vegetation productivity modelling. To reduce this impact, applying data fusion techniques to combine radar and optical data is a promising approach: it exploits the strengths of both types of data and can increase the accuracy of the estimates. There are a large number of different techniques for remote sensing data fusion. However, many of them cannot be used to merge passive optical and active SAR remote sensing data because of significant differences in the data characteristics and the information content they provide. Pohl and van Genderen (2016) came to the conclusion that wavelet fusion and high-pass filtering approaches are equally suitable for merging optical/SAR data. Some authors (Zhang et al. 2010) suggested using linear regression to combine multispectral and SAR images. Therefore, to compare the effectiveness of different data fusion approaches for modelling forest AGB, we used the following fusion techniques: multiple linear regression (MLR), high-pass filtering (HPF), intensity hue saturation (IHS), wavelet transformation (WT) and the hybrid method WT + IHS.

High-Pass Filtering

HPF is one of the first developed image fusion methods (Schowengerdt 1980, 2007a, b; Chavez et al. 1991; Pohl and van Genderen 2016), and it has been used for image fusion for more than 30 years. It was primarily designed to improve the resolution of multispectral images by transferring spatial details derived from a higher-resolution panchromatic (PAN) or SAR image to a lower-resolution multispectral image. The method is performed in three stages (Pohl and van Genderen 2016):

High-pass filter the high-resolution SAR image

Add the high-pass filtered image to each multispectral band, using individual weights that depend on the standard deviation of the MS bands

Match the histograms of the fused image to the original MS bands

Thus, this method extracts high-frequency information from the SAR image, which is then added to each spectral channel of the multispectral image.
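A simplified sketch of the three HPF stages for a single band follows. The 5 × 5 high-pass kernel and the form of the injection weight are assumptions for illustration; implementations differ in kernel size and weighting (Pohl and van Genderen 2016).

```python
import numpy as np
from scipy.ndimage import convolve

def hpf_fuse(ms_band, sar, weight_scale=0.25):
    """HPF fusion of one multispectral band with a co-registered SAR image:
    1) high-pass filter the SAR image, 2) inject it weighted by the MS band's
    standard deviation, 3) match the result's mean/std back to the original band."""
    kernel = np.full((5, 5), -1.0)
    kernel[2, 2] = 24.0                                     # zero-sum high-pass kernel
    hp = convolve(sar.astype(np.float64), kernel / 25.0)
    w = weight_scale * ms_band.std() / hp.std()             # injection weight (assumed form)
    fused = ms_band + w * hp
    # mean/std matching back to the original band (stands in for histogram matching)
    fused = (fused - fused.mean()) / fused.std() * ms_band.std() + ms_band.mean()
    return fused
```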

Figure 1a shows the result of merging the PALSAR image (HH mode) with the RapidEye multispectral image using the HPF method.

Figure 1.

Visualization of the results of merging the PALSAR band (HH mode) with the RapidEye multispectral image using various data fusion approaches

Intensity Hue Saturation (IHS) Transform

IHS transform is one of the most popular methods of merging remote sensing images. It uses a mathematical colour model based on a cylindrical or spherical coordinate system. This method effectively separates spatial (I) and spectral (H, S) information from a standard RGB image (Pohl and van Genderen 2016).

There are two ways to use the IHS transformation to merge images: direct and substitutional. The first is to directly convert three spectral channels of the image into I, H and S. The second involves converting the three spectral channels from the RGB into the IHS colour model, in which the colour aspects are separated from the average brightness; the hue and saturation in this case are related to the surface reflectivity or composition. The SAR image then replaces one of the components, and the reverse transformation from IHS to RGB converts the data back into the original colour model to produce a new synthesized image.
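A sketch of the substitutional variant, using scikit-image's HSV transform as a stand-in for the IHS colour model (an assumption; the exact IHS formulation differs slightly). The SAR image is matched to the intensity component by mean/std before substitution.

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

def ihs_fuse(rgb, sar):
    """Substitution fusion: replace the intensity component of an RGB composite
    (values in 0..1, shape (H, W, 3)) with the matched SAR image, then invert."""
    hsv = rgb2hsv(rgb)
    intensity = hsv[..., 2]
    sar_m = (sar - sar.mean()) / sar.std()                  # match SAR to intensity
    sar_m = sar_m * intensity.std() + intensity.mean()
    hsv[..., 2] = np.clip(sar_m, 0.0, 1.0)                  # substitute intensity
    return hsv2rgb(hsv)
```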

Figure 1b shows the result of merging the PALSAR image (HH mode) with the RapidEye multispectral image using the IHS method.

Wavelet Transformation (WT)

Wavelet transform is another powerful mathematical tool for signal processing. It decomposes a function or signal into several components (coefficients). When using this method to merge images, the main idea is to decompose the original images into a number of fragments using the direct wavelet transform, merge the information on the basis of the obtained coefficients of these fragments, and apply the inverse wavelet transform to synthesize the new image. This method is suitable for merging images from sources with different physical backgrounds (such as radar and optical images) because it decomposes images into different types of coefficients while preserving the source information. Based on these coefficients, a new image can be synthesized using the inverse wavelet transform (Pajares and Cruz 2004).
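A minimal sketch of a one-level wavelet fusion with PyWavelets. The fusion rule used here, keeping the approximation coefficients of the multispectral band and the detail coefficients of the SAR image, is one common choice, not the only one.

```python
import pywt

def wavelet_fuse(ms_band, sar, wavelet="haar"):
    """One-level 2-D wavelet fusion: keep the approximation coefficients of the
    multispectral band (spectral content) and the detail coefficients of the
    SAR image (spatial/structural content), then invert the transform."""
    cA_ms, _ = pywt.dwt2(ms_band, wavelet)                  # (cA, (cH, cV, cD))
    _, details_sar = pywt.dwt2(sar, wavelet)
    return pywt.idwt2((cA_ms, details_sar), wavelet)
```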

Figure 1c shows the result of merging the PALSAR image (HH mode) with the RapidEye multispectral image using the WT method.

Hybrid Wavelet – IHS Fusion (WIHS)

Hybrid data fusion methods are widely used to achieve a compromise between spatial and spectral optimization. The hybrid wavelet-IHS method (WIHS) combines the IHS transformation approach with methods used to merge data of different spatial resolutions, such as WT (Chibani and Houacine 2002; Gonzalez-Audicana et al. 2004; Zhang and Hong 2005). During the WIHS transformation, the multispectral data are converted to the IHS colour model. Then, the intensity component I is decomposed using the WT method. The SAR image is matched to I and then also decomposed by the WT method. The decomposed component I is replaced by the SAR decompositions, and the inverse IHS transformation is performed (Pohl and van Genderen 2016).
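A sketch of the hybrid scheme, reusing rgb2hsv/hsv2rgb and the wavelet_fuse() helper from the sketches above; image dimensions are assumed even so that the inverse wavelet transform preserves the array shape.

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb
# Reuses wavelet_fuse() from the wavelet sketch above.

def wihs_fuse(rgb, sar, wavelet="haar"):
    """Hybrid WIHS: colour transform, wavelet fusion of the intensity component
    with the matched SAR image, then the inverse colour transform."""
    hsv = rgb2hsv(rgb)
    intensity = hsv[..., 2]
    sar_m = (sar - sar.mean()) / sar.std() * intensity.std() + intensity.mean()
    hsv[..., 2] = np.clip(wavelet_fuse(intensity, sar_m, wavelet), 0.0, 1.0)
    return hsv2rgb(hsv)
```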

Figure 1d shows the result of merging the PALSAR image (HH mode) with the RapidEye multispectral image using the WIHS method.

Modelling of aboveground forest biomass

The field data set of forest AGB was divided into two samples in the proportion 70/30. The first sample (70% of the data) was used to build the models, whose accuracy was then assessed using the second sample (30% of the data). Regression analysis was used to find relationships between forest AGB and various combinations of remote sensing data for each approach. As a final result, the model with the highest accuracy was selected for each approach.
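To make the workflow concrete, the following is a minimal sketch of the 70/30 split and nonlinear regression described above, using scikit-learn and SciPy. The arrays x and y are illustrative placeholders, not the study's field data, and exp_model is just one candidate functional form.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from scipy.optimize import curve_fit

# x: average fused-signal value per sample plot, y: field-measured AGB (t/ha).
# Illustrative placeholder values only.
x = np.array([0.05, 0.07, 0.09, 0.10, 0.12, 0.14, 0.16, 0.18, 0.20, 0.22])
y = np.array([950.0, 370.0, 140.0, 90.0, 35.0, 13.0, 5.0, 2.0, 0.8, 0.3])

# 70/30 split: 70% for model building, 30% held out for accuracy assessment
x_train, x_test, y_train, y_test = train_test_split(x, y, train_size=0.7, random_state=0)

def exp_model(x, a, b):
    return a * np.exp(b * x)       # candidate exponential form, cf. the models below

params, _ = curve_fit(exp_model, x_train, y_train, p0=(1e4, -50.0), maxfev=10000)
y_pred = exp_model(x_test, *params)
```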

When using the HPF approach, the best result was achieved for the HH PALSAR band merged with the 4th RapidEye spectral band. It is described by the following equation (Fig. 2): \[AGB = 9877.69 \times e^{-47.08 \times HPF(HH+B4)}\] where:

HPF(HH+B4) – the signal from the HH PALSAR band fused with the 4th RapidEye spectral band using the HPF approach.

Figure 2.

Relation of forest AGB to the merged RapidEye 4th spectral band and HH PALSAR band using the HPF approach

When using the IHS approach, the best result was achieved for the HH PALSAR band merged with the 4th RapidEye spectral band. It is described by the following equation (Fig. 3): \[AGB = 1.3701 \times 10^{16} \times IHS(HH+B4)^{11.15}\] where:

IHS(HH+B4) – the signal from the HH PALSAR band fused with the 4th RapidEye spectral band using the IHS approach.

Figure 3.

Relation of forest AGB to the merged RapidEye 4th spectral band and HH PALSAR band using the IHS approach

When using the WT approach, the best result was achieved for the HH PALSAR band merged with the 4th RapidEye spectral band. It is described by the following equation (Fig. 4): \[AGB = 35110 \times e^{-59.62 \times WT(HH+B4)}\] where:

WT(HH+B4) – the signal from the HH PALSAR band fused with the 4th RapidEye spectral band using the WT approach.

Figure 4.

Relation of forest AGB to the merged RapidEye 4th spectral band and HH PALSAR band using the WT approach

When using the hybrid WIHS approach, the best result was achieved for the HH PALSAR band merged with the 4th RapidEye spectral band. It is described by the following equation (Fig. 5): \[AGB = 19133.31 \times e^{-53.01 \times WIHS(HH+B4)}\] where:

WIHS(HH+B4) – the signal from the HH PALSAR band fused with the 4th RapidEye spectral band using the WIHS approach.

Figure 5.

Relation of forest AGB to the merged RapidEye 4th spectral band and HH PALSAR band using the WIHS approach

When using the MLR approach, the best result was achieved for the HH PALSAR band combined with the 4th RapidEye spectral band. It is described by the following equation: \[AGB = 308.53 - 2530.68 \times B4 + 211.55 \times HH\] where:

B4 is the reflection coefficient of the 4th spectral band of the RapidEye image and HH is the amplitude of the reflected radar signal in the HH mode.

The accuracy of the obtained models was assessed with a test data set that had not been used for model building. The modelled data were compared with the field measurements, and the correlation between measured and modelled values was calculated for each model (Tab. 2).

Table 2. Parameters of the models for the accuracy estimation

Fusion technique R² Correlation MMA
MLR (RapidEye 4th band + PALSAR HH band) 0.698 0.837 0.575
IHS (RapidEye 4th band + PALSAR HH band) 0.530 0.731 0.591
HPF (RapidEye 4th band + PALSAR HH band) 0.643 0.804 0.652
WT (RapidEye 4th band + PALSAR HH band) 0.370 0.614 0.623
WIHS (RapidEye 4th band + PALSAR HH band) 0.476 0.694 0.604

The correlation shows the relationship between the measured and modelled values. The index lies in the range from −1 to 1; a higher correlation means better model performance, while a correlation of 0 indicates no relationship between the measured and modelled values.

Also, to assess the accuracy of each model, the min–max accuracy (MMA) was calculated: \[MMA = \mathrm{mean}\left(\frac{\min(actuals, predicteds)}{\max(actuals, predicteds)}\right)\]

This parameter shows how close the predicted values are to the actual ones. The MMA value ranges from 0 to 1, where a value of 1 indicates a perfect match between actual and predicted values. So, a higher MMA score means better model performance.
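A minimal sketch of how the two accuracy measures can be computed with NumPy; actuals and predicteds are assumed to be strictly positive AGB values (t/ha).

```python
import numpy as np

def min_max_accuracy(actuals, predicteds):
    """MMA = mean( min(actual, predicted) / max(actual, predicted) );
    assumes strictly positive values."""
    a = np.asarray(actuals, dtype=float)
    p = np.asarray(predicteds, dtype=float)
    return float(np.mean(np.minimum(a, p) / np.maximum(a, p)))

def correlation(actuals, predicteds):
    """Pearson correlation between measured and modelled values."""
    return float(np.corrcoef(actuals, predicteds)[0, 1])
```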

Among the different methods of data fusion used in the research, the HPF method (correlation 0.804, MMA 0.652) showed the best result in terms of min–max accuracy, with a significant advantage over the other methods.

Conclusions

Nowadays, data from both passive and active remote sensing are becoming more and more available. These data have a different nature and, accordingly, different advantages and disadvantages. Therefore, their joint use for studying land cover and assessing vegetation parameters (including forest cover) can significantly improve the results. This research aimed to study and compare various approaches to merging optical and SAR images for forest AGB modelling. Five models for estimating forest AGB were built and analysed using data from a test area in the Chernihiv region (Ukrainian Polissya). The obtained results confirm the conclusions of previous studies (Santoro et al. 2013) about the low accuracy of aboveground biomass modelling using SAR data alone due to the speckle effect. Merging optical and SAR data significantly increases the accuracy of the modelling, and among all the data fusion approaches used in the study, the high-pass filtering (HPF) method showed the greatest efficiency.
