Flower thinning is one of the most important agronomic operations in apple orchard management. Generally, a portion of the flowers must be removed early in the growing season to ensure that the flowers, and the subsequent fruits, are present in appropriate amounts. The quantity to be thinned depends on the bloom intensity. Thinning can be performed manually, mechanically, or through the spraying of chemical products (Xiao et al., 2014; Dias et al., 2018). The last two methods are used to reduce costs. Manual thinning requires more money and time but may be necessary if chemical thinning proves inadequate because of its inability to account for the variability between individual trees (Aggelopoulou et al., 2010). To perform a thinning operation correctly, it is necessary to estimate the density of the blooming charge. Usually, this assessment is carried out by specialized operators trained in the procedure. To reduce the time it requires, ground and remote-sensing technologies can be used to automate it. Indeed, with optical sensors installed on aerial or terrestrial vehicles, it is possible to obtain point-specific information about a crop, such as bloom charge, canopy volume, distance between plants, number of plants, or health status (Rosell et al., 2009; Di Gennaro et al., 2016; Albetis et al., 2017; Gallo et al., 2017; Ristorto et al., 2017). In the search for new solutions for automating crop-monitoring activities, several types of sensors (including LiDAR and OptRx™) have been tested by several authors. The LiDAR (Light Detection and Ranging) sensor can assess the canopy volume via the emission of a laser beam (Bietresato et al., 2016; Vidoni et al., 2017), while active light sensors can provide information about the status of the crop (Mazzetto et al., 2010; D’Auria et al., 2016).
Active optical sensors, such as the OptRx™, emit light at three known wavelengths (670, 730, and 780 nm) and record the light reflected by the target. From the recorded reflectance values, the OptRx™ can compute two vegetation indices (VIs): the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Red Edge Index (NDRE). Recent studies describe the application of optical devices in apple orchards for different purposes. A Charge-Coupled Device (CCD) camera was tested by Gongal et al. (2018) to develop a machine vision system for estimating apple fruit size in the tree canopy. Dias et al. (2018) and Hočevar et al. (2013) used multiple cameras and an industrial color camera, respectively, to detect apple flowers through a deep convolutional network and to estimate the apple flower charge by image analysis. This article describes a mobile laboratory equipped with several sensors, called the ByeLab (Bionic Eye Laboratory), and evaluates its capability to provide objective information about the bloom charge in apple orchards, with the aim of determining whether it could be used to manage flower-thinning activities.
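As a reference for the two indices named above, the standard normalized-difference definitions computed from the three OptRx™ bands can be sketched as follows (the function names and sample reflectance values are illustrative, not taken from the sensor firmware):

```python
def ndvi(red: float, nir: float) -> float:
    """Normalized Difference Vegetation Index from the 670 nm and 780 nm bands."""
    return (nir - red) / (nir + red)

def ndre(red_edge: float, nir: float) -> float:
    """Normalized Difference Red Edge Index from the 730 nm and 780 nm bands."""
    return (nir - red_edge) / (nir + red_edge)

# Illustrative reflectance values for a healthy, leafy target:
# strong NIR reflection and low red absorption give a high NDVI.
leaf_ndvi = ndvi(0.05, 0.45)   # 0.8
leaf_ndre = ndre(0.20, 0.45)   # ~0.385
```

Both indices range from -1 to 1, with higher values indicating denser green vegetation.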
The main equipment used for the crop-monitoring activities in this study is the ByeLab. This prototype mobile laboratory (Figure 1) is a tracked bin-carrier (NEO Alpin by Windegger S.r.l., Lana, Bolzano, Italy) that has been modified with a metal structure on which several sensors are installed. The vehicle is powered electrically, controlled via wireless remote control, and has compact dimensions. These features give it high agility in the inter-rows and low transmitted vibrations. For this experiment, the mobile laboratory was equipped with the following:
a GNSS-RTK system (GEOMAX Zenith 35) with a 20 Hz sampling rate, placed on the top of the vehicle;
three OptRx™ AgLeader optical sensors with a 10 Hz sampling rate, placed 0.8, 1.6, and 2.4 m from the ground;
two LiDARs (SICK LMS111) with a 50 Hz sampling rate, placed 0.95 and 2.5 m from the ground;
an inertial measurement unit (IMU) (LMRK 10 AHRS) with a 10 Hz sampling rate;
a control unit system.
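Because the listed sensors sample at different rates (50 Hz for the LiDARs versus 10 Hz for the OptRx™ and IMU), their streams must be aligned on a common clock before they can be combined. A minimal nearest-timestamp alignment, purely illustrative of this kind of synchronization (the data and function names are invented, not the on-board implementation), could look like:

```python
from bisect import bisect_left

def nearest_sample(timestamps, values, t):
    """Return the value whose timestamp is closest to t (timestamps sorted)."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return values[0]
    if i == len(timestamps):
        return values[-1]
    before, after = timestamps[i - 1], timestamps[i]
    return values[i] if after - t < t - before else values[i - 1]

def synchronize(master_ts, streams):
    """Align each named (timestamps, values) stream to the master clock."""
    return [
        {name: nearest_sample(ts, vals, t) for name, (ts, vals) in streams.items()}
        for t in master_ts
    ]

# A 10 Hz OptRx stream aligned to a 50 Hz LiDAR master clock (illustrative values)
lidar_ts = [i * 0.02 for i in range(5)]     # 50 Hz timestamps, seconds
optrx = ([0.0, 0.1], [0.61, 0.65])          # 10 Hz NDVI readings
records = synchronize(lidar_ts, {"ndvi": optrx})
```

Each output record then carries one value per sensor at a common time step, which mirrors the unified records produced by the LabView® code described below.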
OptRx™ and LiDAR sensors were installed so as to ensure proper canopy coverage up to a 3.5-m height, the common height for apple training systems in productive orchards in South Tyrol. OptRx™ sensors were chosen because, as already demonstrated in other crop-monitoring applications, they are highly useful in precision agriculture thanks to their low cost, robustness, and quick real-time response (Maharlooei et al., 2014). These optical sensors were used to collect reflectance data from the monitored canopy, while the LiDARs were used to investigate canopy thickness. The roll, pitch, and yaw angles acquired by an inertial measurement unit (IMU) were used to correct the ByeLab’s trajectory and to adjust all the acquisitions. Indeed, the terrain roughness (grass, stones, and holes) affected the vehicle tracking, introducing slight noise into the collected data. All the collected data were georeferenced by a GNSS-RTK system.
As the installed sensors had different acquisition frequencies, all the collected data were first synchronized by a procedure implemented in the LabView® code running on the control unit. The recorded data were then post-processed by dedicated interpretative algorithms developed in the MatLab® environment, as explained in the next section. The testing area consisted of one orchard row of plants of the
In order to better evaluate the features of the flower reflectance signatures, some preliminary investigations were performed before the ByeLab field tests. To this aim, a portable spectrophotometer (Jaz Ocean Optics Spectrometers) was used. The information collected by this tool was used to better understand the reflectance behavior of different plant organs (petals, corollas, flowers, leaves, trunks). Because the OptRx™ acquires the same red, near-infrared (NIR), and RedEdge wavelengths, three VIs were tested on these acquisitions: the NDVI, the NDRE, and the White Flower Index (WFI). The WFI is an index able to discriminate the white components, i.e., the flowers, from the green ones (vegetation). It is calculated through the following equation:
The data obtained by the OptRx™ assessment were compared with and validated against the manual flower count.
The information related to the canopy thickness collected by the LiDARs mounted on the ByeLab was validated using a terrestrial laser scanner (TLS) (model CAM2 FARO). The raw data collected by the TLS were processed with FARO Scene 7.1.1 software to obtain a 3D model for comparison with the data obtained by the LiDARs.
Thanks to the LabView synchronization, all data collected by the different sensors were stored automatically in a single combined record rather than in separate single-sensor data files. Each acquired file was then interpreted and translated into information by a post-processing procedure implemented in the MatLab® environment. Generally, data acquisitions were performed separately for each investigated semi-row (up to step 4 below) and merged during the elaboration procedures to compose the entire row. The procedure is described in more detail below:
Removal of partially completed records: The analysis starts by loading and verifying the data stored in the temporary memory of the on-board control unit. The algorithm expects each record to be structured in eight strings. Should some data be missing, the script deletes the entire record and proceeds to the next step, during which specific functions and parameters complete the reading of the files and check the expected size of the measurements from each sensor. At this stage, the file data sets are structured in columns containing:
one time string, where the progressive acquisition time is reported;
two LiDAR strings, where the measurements of angles and distances from both LiDARs (high and low) are reported;
one IMU string, where the pitch, roll, and yaw angles are reported;
three VI strings, where the reflectance values collected for the red, NIR, and RedEdge wavelengths are reported;
one GNSS string, where the data related to the vehicle’s coordinates and speed are reported.
If the processing procedure finds records without the expected number of strings, the entire record is removed, since incomplete records cause the data processing to fail. Point cloud analysis and noise filtering: The second step of the algorithm places each point collected by the two LiDARs in space. Knowing the distances between the LiDARs and the targets as well as the scanning angle, the system builds a point cloud of the scanned semi-row. The obtained data set is corrected using the roll, pitch, and yaw information collected by the IMU. This corrects all the noise caused by terrain unevenness, such as changes in routing or displacements of the metal frame on which the sensors are installed. To reduce the risk of estimation errors, the algorithm then carries out a noise-filtering procedure: the collected point cloud is cut below 0.3 m and above 3.8 m. Consequently, the following procedures consider only the effective canopy wall, without taking into account any herbaceous layer or outliers above the canopy. The goal of this filtering is to remove data that may disturb the analytical procedures and, at the same time, to reduce the computation time required by the process.
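As a rough illustration of the step just described (assuming a simple 2D polar scan and one of the stated mounting heights; the function names are hypothetical, not the MatLab® routines), the projection of a LiDAR scan into space and the 0.3–3.8 m height filter could be sketched as:

```python
import math

def scan_to_points(distances, angles_deg, sensor_height, y_along_row):
    """Project one LiDAR scan (polar readings) into (x depth, y row position, z height)."""
    points = []
    for d, a in zip(distances, angles_deg):
        a_rad = math.radians(a)
        x = d * math.cos(a_rad)                  # horizontal distance to the target
        z = sensor_height + d * math.sin(a_rad)  # height above the ground
        points.append((x, y_along_row, z))
    return points

def height_filter(points, z_min=0.3, z_max=3.8):
    """Keep only points inside the effective canopy wall (drops grass and outliers)."""
    return [p for p in points if z_min <= p[2] <= z_max]

# Lower LiDAR at 0.95 m; the third return lands above 3.8 m and is discarded.
pts = scan_to_points([1.0, 1.0, 4.0], [0.0, 60.0, 80.0],
                     sensor_height=0.95, y_along_row=2.0)
canopy = height_filter(pts)
```

In the real pipeline the IMU roll, pitch, and yaw angles would additionally rotate each point before filtering; that correction is omitted here for brevity.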
Roto-translation of the point cloud to a local reference system and merging of the two semi-canopies: After the point cloud noise correction, the semi-row data sets are plotted in space according to the coordinates collected by the RTK-GNSS unit. To merge the left and right sides of the canopy, a roto-translation of the point cloud is necessary; therefore, the data set is plotted in a local reference system, so that both semi-canopies are displaced along the same direction. To define a proper intersection plane between the two canopy sides, a potential center line of the apple row is defined through a manual selection of common points present in both point clouds; the poles along the row are used as references. The origin of the reconstructed canopy is arbitrarily set at the bottom left corner of the center-line plane, so that the x-, y-, and z-axes correspond to the depth, length, and height of the row, respectively. Finally, the algorithm calculates the distance of each point from the center line to obtain the canopy thickness (Figure 4). Overlay of the OptRx™ acquisitions: Once the point cloud is placed in the local reference system, the algorithm overlays the information collected by the OptRx™ sensors. Thanks to the data synchronization performed during acquisition and to the parameters used for the previous roto-translation, the analytical procedure achieves a precise overlap of the two detections. At the end of this procedure, the points collected by the LiDAR and OptRx™ sensors are merged (Figure 5). Canopy segmentation for data assessment: Using the y- and z-planes of the combined data set, a reference grid is temporarily overlaid on the point cloud. The bottom left corner of the grid is placed at coordinate (0, 0.4) of the local reference system.
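The roto-translation into the local reference system amounts to a 2D rotation and shift of the georeferenced coordinates so that the y-axis runs along the row. A minimal sketch, assuming the row heading is derived from the manually selected center-line points (the names and conventions are illustrative):

```python
import math

def rototranslate(points, origin, row_heading_rad):
    """Map (east, north, z) points into a local frame whose y-axis runs along the row.

    origin: (east, north) of the bottom-left corner of the center line.
    row_heading_rad: row direction, measured clockwise from north.
    """
    ox, oy = origin
    c, s = math.cos(row_heading_rad), math.sin(row_heading_rad)
    local = []
    for e, n, z in points:
        de, dn = e - ox, n - oy
        y = de * s + dn * c   # along-row position (row length)
        x = de * c - dn * s   # signed distance from the center line (depth)
        local.append((x, y, z))
    return local

# A point lying exactly on a row heading 45 deg east of north maps to x = 0.
pts = rototranslate([(1.0, 1.0, 2.0)], origin=(0.0, 0.0),
                    row_heading_rad=math.radians(45))
```

The x component of each transformed point is then directly the point-to-center-line distance used for the canopy thickness.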
The grid is offset so that the barycenter of each cell lies at the same height as the OptRx™ sensors. The grid is therefore composed of three rows of cells, each 1 m long and 0.8 m high, and the total number of cells depends on the length of the monitored row. This grid identifies the samples of the synchronized data set on which the thickness and VI analyses are carried out. Thickness and bloom charge assessment: The last step of the data analysis concerns the assessment of the canopy thickness and the evaluation of the blooming charge. To obtain this information, the average values of the point-to-center-line distances and of the VI inside each cell of the overlaid grid are computed. As a final output, a summary table and a thematic map are obtained for each monitored row. Both outputs consider the left and right sides of the row from its origin: the table reports the numerical detection results for each cell, while the map describes the same information using a qualitative approach. Indeed, thanks to a predefined set of colors and histogram lengths, the information related to canopy thickness and VI can be shown: the colors represent the classification of the VI values (white and yellow represent high and moderate blooming charges, respectively, while cyan and green represent moderate and high amounts of leaves, respectively), and the histogram heights represent the canopy thickness along the monitored row.
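The per-cell averaging over the 1 m × 0.8 m grid, with its (0, 0.4) offset, can be sketched as a simple binning step (a hypothetical illustration, not the MatLab® implementation; the sample values are invented):

```python
from collections import defaultdict

CELL_W, CELL_H, Z0 = 1.0, 0.8, 0.4  # cell length (m), cell height (m), grid z-offset

def grid_averages(samples):
    """Average canopy thickness and NDVI per grid cell.

    samples: iterable of (y_along_row, z_height, thickness, ndvi).
    Returns {(col, row): (mean_thickness, mean_ndvi)}.
    """
    acc = defaultdict(lambda: [0.0, 0.0, 0])
    for y, z, thickness, ndvi in samples:
        row = int((z - Z0) // CELL_H)
        if not 0 <= row < 3:          # three rows of cells, one per OptRx height
            continue
        col = int(y // CELL_W)
        cell = acc[(col, row)]
        cell[0] += thickness
        cell[1] += ndvi
        cell[2] += 1
    return {k: (t / n, v / n) for k, (t, v, n) in acc.items()}

cells = grid_averages([
    (0.2, 0.8, 0.30, 0.55),   # falls in cell (0, 0), centred on the lowest OptRx
    (0.7, 0.9, 0.50, 0.65),   # same cell
    (1.3, 1.7, 0.40, 0.70),   # cell (1, 1)
])
```

Each resulting cell value feeds one entry of the summary table and one colored histogram bar of the thematic map.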
The analysis of the spectral signatures (Figure 6) shows that, for the same plant organs (flowers and leaves) collected from different trees, the spectra always have similar patterns but different values at the same wavelength. This probably occurs because the in-field acquisitions, even if performed under the same operative conditions and using the same procedure, may be affected by human error: slight movements of the probe over the sample or different distances between the probe and the target affected the final acquired spectrum.
By calculating the three VIs from the reflectance records collected by the OptRx™, the NDVI proved the most suitable parameter for discriminating leaves from flowers. Moreover, low and high NDVI values were observed to correspond to flowers and leaves, respectively (Figures 7 and 8).
Considering the NDVI values associated with flowers and leaves and the elaboration of the data collected via the ByeLab, we expect a negative correlation between the NDVI value and the number of flowers: the lower the NDVI value, the higher the blooming charge. Figure 9 reports the expected relation between the number of flowers and the NDVI value. The obtained results (Figure 10), however, showed a very poor correlation between NDVI values and the number of flowers, and the expected negative correlation did not emerge. This is probably because the manual counting of the flowers provides very fine-grained information about the blooming charge, while the OptRx™ sensor returns values of the VI averaged over the monitored surface. Moreover, the NDVI value could be influenced by background noise. Nevertheless, considering the different blooming periods, the NDVI did show differences between the full-blooming period and the others (pre- and post-blooming). Comparing the TLS and ByeLab (LiDAR) scans, there is a slight stretching of the point cloud collected by the ByeLab; consequently, the measurements collected by the system are not fully precise (Figure 11), perhaps because of a time delay in the communications between the LiDAR and the Global Positioning System (GPS) as a result of their different acquisition frequencies. In addition, the characteristics of the two survey systems themselves could cause the abovementioned inaccuracy: the TLS carries out static measurements, while the ByeLab performs dynamic assessments. However, from the point cloud collected by the proposed mobile laboratory, it is possible to obtain, even if approximately, important information, such as the change in training, the thickness of the canopy, the identification of small plants, gaps between plants, and the poles of the irrigation and training systems. All this information can be used to recreate a virtual 3D orchard.
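The correlation check described above corresponds to computing a standard Pearson coefficient between per-cell NDVI values and manual flower counts. A pure-Python sketch with invented, purely illustrative data (which does exhibit the expected negative trend, unlike the field results):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative cell-level values only: NDVI vs. manually counted flowers
ndvi_cells = [0.45, 0.50, 0.60, 0.68, 0.72]
flower_counts = [40, 35, 20, 12, 8]
r = pearson(ndvi_cells, flower_counts)  # strongly negative under the expected trend
```

A value of r near -1 would confirm the hypothesized relation; the field data instead yielded a coefficient close to zero.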
Another output of the analytical procedure is the qualitative descriptive map (Figure 12), which is related to the coverage of the flowers and the thickness of the canopy at three different heights. This remarkable qualitative result was obtained by merging the LiDAR and OptRx™ data. White and yellow colors have been associated with low NDVI values, that is, a high number of flowers, whereas cyan and green colors have been associated with high NDVI values, that is, a low blooming charge. The system’s output highlights that in the pre-blooming period, that is, in the absence of flowers, the predominant color of the map is cyan rather than green. This could be because the leaves are still small and cannot be perceived clearly by the sensors. In full bloom, the predominant colors are white and yellow because the flowers are out and cover a large portion of the green vegetative surface of the plants. Meanwhile, in the post-blooming period, the cyan and green classifications are predominant because the presence of flowers is very low while the leaf volume is very large. Therefore, during this acquisition, the sensors acquire reflectance values that lead the system to calculate NDVI values around 0.7, which represent healthy and green apple vegetation.
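The four-color classification could be implemented as a simple threshold mapping on the per-cell mean NDVI. The thresholds below are illustrative guesses, not the values used by the ByeLab software; the text only states that healthy apple vegetation reads around NDVI 0.7:

```python
def bloom_color(ndvi: float) -> str:
    """Map a cell's mean NDVI to the thematic-map color classes.

    Threshold values are assumed for illustration only.
    """
    if ndvi < 0.30:
        return "white"   # high blooming charge
    if ndvi < 0.50:
        return "yellow"  # moderate blooming charge
    if ndvi < 0.65:
        return "cyan"    # moderate leaf coverage
    return "green"       # high leaf coverage (healthy vegetation, NDVI ~0.7)

classes = [bloom_color(v) for v in (0.2, 0.4, 0.55, 0.7)]
```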
The results obtained from this analysis are not directly comparable with those present in the literature. Several authors (Hočevar et al., 2013; Xiao et al., 2014; Dias et al., 2018) have estimated the blooming charge of apple trees using methods based on image analysis. In their research, the images were usually acquired with mono- and/or multispectral cameras mounted on unmanned aerial vehicle (UAV) systems, tractors, or static supports. These sensors lead to overestimation of the bloom charge because of overlapping trees, strong reflections on leaves, background noise, and low camera resolution. The system described here provides information for the blooming charge assessment using an inexpensive active sensor already available on the market, which returns NDVI values that can be used directly in the computation procedures. Thanks to the use of the OptRx™ sensors, issues related to ambient light conditions have been overcome. Besides this, thanks to the RTK-GNSS installed on the ByeLab, the developed system is able to georeference all the acquisitions and provides highly detailed information on the placement of the blooms.
Despite the homogeneous behavior of the NDVI flower response, the OptRx™ sensors were not able to highlight a significant correlation between the number of flowers and the NDVI values. Thus, mono-dimensional sensors—such as the OptRx™—do not seem suitable for the estimation of the flower charge. To increase the reliability of the system, a higher number of sensors per side (4 or more instead of 3) will be investigated in future field tests. However, the combination of LiDAR and OptRx™ data has improved the quality of detection. In fact, by merging the data collected by the two types of sensors, it is possible to build descriptive maps providing information on canopy thickness and VI values, which can be used as references for automated machines conducting thinning or pruning operations according to a site-specific approach.
In order to propose future applications of a system able to provide information about the condition of the crop in real time, it is necessary to evaluate the computation time required, that is, whether such a system can be used in practice. It would be useful to evaluate different algorithms and to study the correlations obtained by setting filters that give greater weight to the lower NDVI values found in the investigated areas. To overcome the limitation of the mono-dimensional scans obtained through the OptRx™ sensors, bidimensional scans with pixel-matrix generation, such as those provided by a multi-camera set-up, should be considered and tested together with the LiDARs. In this case, we expect the system to be capable of coloring each point of the acquired point cloud with the information collected by the cameras. Thanks to this approach, it should be possible to pinpoint the origin of each point of the cloud and consider only what actually needs to be assessed.