-
Paper 688 - Session title: Land Cover 2
14:30 Mapping CORINE Land Cover with Multitemporal Sentinel-1 SAR using Random Forests
Balzter, Heiko (1); Cole, Beth (1); Thiel, Christian (2); Schmullius, Chris (2); Rodriguez-Veiga, Pedro (1) 1: University of Leicester, United Kingdom; 2: Friedrich-Schiller-Universität Jena, Germany
The European CORINE land cover mapping scheme is a standardized classification system with 44 land cover and land use classes. It is used by the European Environment Agency to report large-scale land cover change with a minimum mapping unit of 5 ha every six years, and is operationally mapped by its member states. The most commonly applied method to map CORINE land cover change is visual interpretation of optical/near-infrared satellite imagery.
The Sentinel-1A satellite carries a C-band Synthetic Aperture Radar (SAR) and was launched in 2014 by the European Space Agency as the first operational Copernicus mission.
This study is the first investigation of Sentinel-1 for CORINE land cover mapping. Two of the first Sentinel-1A images, acquired in May and December 2014 during the ramp-up phase over Thuringia in Germany, are analysed. Twenty-seven hybrid level 2/3 CORINE classes are defined, 16 of which are present at the study site; these are classified using training pixels randomly selected from the CORINE 2006 map. Sentinel-1 HH and HV polarisation (May), VV and VH polarisation (December), and image texture are used as inputs to a Random Forest classification. In addition, a Digital Terrain Model (DTM), a Canopy Height Model (CHM), and slope and aspect maps from the Shuttle Radar Topography Mission (SRTM) are used as input bands to account for geomorphometric features of the landscape.
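As a rough illustration of this type of input stacking and classification, the sketch below trains a Random Forest on a stack of backscatter, texture and terrain layers; the synthetic stand-in arrays, the use of scikit-learn and all parameter values are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch, assuming scikit-learn and synthetic stand-in rasters;
# the band list and random training-pixel sampling follow the abstract,
# but this is not the authors' code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rows, cols = 200, 200
rng = np.random.default_rng(0)

# Stand-ins for co-registered input layers (in practice: Sentinel-1 backscatter,
# image texture, and SRTM-derived DTM, CHM, slope and aspect).
layer_names = ["HH_may", "HV_may", "VV_dec", "VH_dec",
               "texture", "DTM", "CHM", "slope", "aspect"]
layers = [rng.normal(size=(rows, cols)) for _ in layer_names]
X = np.stack([layer.ravel() for layer in layers], axis=1)

# Stand-in for the CORINE 2006 map with 16 hybrid level 2/3 classes.
corine_2006 = rng.integers(1, 17, size=(rows, cols))

# Randomly selected training pixels, as described in the abstract.
train_idx = rng.choice(X.shape[0], size=5000, replace=False)
rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
rf.fit(X[train_idx], corine_2006.ravel()[train_idx])

predicted = rf.predict(X).reshape(rows, cols)  # CORINE-style class map
```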
If a Sentinel-1 convoy mission with a bistatic SAR were launched, such elevation data could in future be delivered by the convoy's bistatic Interferometric Wide-Swath Mode.
When augmented by elevation data from radar interferometry, Sentinel-1 is able to discriminate several CORINE land cover classes, making it useful for monitoring land cover in cloud-covered regions.
-
Paper 864 - Session title: Land Cover 2
13:10 Assessing, comparing and integrating global land cover maps for different user applications
Tsendbazar, Nandin-Erdene (1); de Bruin, Sytze (1); Herold, Martin (1); Mora, Brice (2) 1: Wageningen University, The Netherlands; 2: GOFC-GOLD Land Cover Office, Wageningen University, The Netherlands
Global land cover (GLC) maps and assessments of their accuracy provide important information for different user applications such as climate models, dynamic vegetation models, hydrological models, and carbon (stock) models. Users have different requirements for GLC maps as well as for their accuracy assessments. For example, climate modellers typically use GLC maps at 1-km spatial resolution or coarser, whereas this resolution is too coarse for GLC change studies aiming to detect small-scale changes, e.g. from forest logging. Moreover, confusion between certain classes can have a stronger impact on some applications than on others; for instance, confusion between water and snow/ice may not be as important for biomass estimation as it is for albedo estimation. Therefore, the generation and assessment of GLC maps should account for different user requirements and perspectives.
To date, a number of GLC maps have been produced and reference datasets have been created for their calibration and validation. Despite their great potential, the role of existing reference datasets in applications outside their intended use has been very limited. In efforts to improve their usage, international initiatives such as GOFC-GOLD have released several reference datasets to the public. However, considering the different characteristics of reference datasets arising from various validation strategies, the suitability of such existing reference datasets for different applications needs to be assessed.
This presentation addresses the requirements and perspectives of the users of GLC maps in (1) assessing the suitability of existing reference datasets; (2) assessing and comparing the accuracy of recent GLC maps; and (3) developing multiple GLC maps for different users while integrating existing maps and reference datasets.
First, we analysed the metadata of current GLC reference datasets and assessed potential uses of these datasets in the context of four GLC user groups, i.e., climate modellers, global forest change analysts, the global agricultural monitoring community, and map producers. We identified the LC-CCI, GOFC-GOLD, FAO-FRA and Geo-Wiki reference datasets as the ones supporting the broadest range of applications.
Second, we utilized the existing Globcover-2005 reference dataset to compare the thematic accuracies of three GLC maps for the year 2005 (Globcover, LC-CCI and MODIS). These maps were evaluated for a variety of applications, i.e., general circulation models, dynamic global vegetation models, agriculture assessments, carbon estimation and biodiversity assessments, using a weighted accuracy assessment procedure. Overall accuracies ranged from 61.3 ± 1.5% to 71.4 ± 1.3%. Weighted accuracy assessments resulted in increased overall accuracies (80–93%), since not all class confusion errors matter for a specific application. To determine the fitness for use of GLC maps, accuracy should be assessed per application; no single-figure accuracy estimate expresses map fitness for all purposes.
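As an illustration of the principle behind such a weighted assessment, the minimal sketch below down-weights class confusions that are irrelevant for a hypothetical application; the confusion-matrix values and weights are invented for illustration and are not the figures reported above.

```python
# Minimal sketch of a weighted overall accuracy, assuming a confusion matrix of
# area proportions and an application-specific weight matrix (1 = confusion
# matters fully, 0 = confusion is irrelevant). All numbers are illustrative.
import numpy as np

# Rows: map classes, columns: reference classes (proportions of area).
p = np.array([[0.30, 0.03, 0.02],
              [0.04, 0.35, 0.01],
              [0.02, 0.03, 0.20]])

# Conventional overall accuracy: sum of the diagonal.
oa = np.trace(p)

# Hypothetical application weights: confusion between classes 2 and 3 is
# harmless for this application, so its weight is 0.
w = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])

# Weighted overall accuracy: only the confusions that matter count as error.
weighted_oa = 1.0 - np.sum(w * p)
print(f"OA = {oa:.3f}, weighted OA = {weighted_oa:.3f}")
```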
Third, we integrated recent GLC maps, namely Globcover-2009, LC-CCI-2010, MODIS-2010 and Globeland30, with available reference datasets from the GOFC-GOLD reference data portal and the Geo-Wiki platform to create improved GLC maps. Multiple integrated GLC maps were created, fitting the legends to different user requirements. Up to 13% improvement was obtained for the integrated GLC maps.
Our results demonstrate the necessity of accounting for the requirements and perspectives of user applications both in generating and in assessing GLC maps. Furthermore, we demonstrate the added value of re-using available reference datasets for assessing, comparing, and creating improved GLC maps.
-
Paper 1002 - Session title: Land Cover 2
14:10 Sentinel 1 and 2 data for automatic update of land cover maps at national scale
Mitraka, Zina; Carbone, Francesco; Boutsia, Konstantina; Del Frate, Fabio; Schiavon, Giovanni University of Tor Vergata, Italy
The launch of Sentinel-1 and Sentinel-2, coupled with the free distribution of their data, can have a significant impact on the production and updating of land cover maps. However, for the full exploitation of the data and their application at national scale, robust automatic procedures also need to be available. The effectiveness of standard MLP (Multi-Layer Perceptron) neural networks (NN) for automatic pixel-based classification of satellite images has already been shown in various papers, for both SAR and optical data [1]. More recently, in [2] and [3], PCNNs (Pulse Coupled NNs) have also been introduced for unsupervised change detection applications. The PCNN is a relatively new technique based on the mechanisms underlying the visual cortex of small mammals. In principle, the algorithm generates, step by step in an iterative scheme, a specific signature of the scene, depending both on the values of single pixels and on the contextual information. A measure of the correlation between the preceding and subsequent signatures can indicate changes that have occurred within the selected area of interest.
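The sketch below illustrates the general PCNN idea with a simplified, self-contained formulation: a per-iteration firing-count signature is computed for two dates and compared by correlation. The model parameters, linking kernel and synthetic images are illustrative assumptions, not the exact networks used in [2] and [3].

```python
# Simplified PCNN sketch (one common formulation); not the authors' exact model.
import numpy as np
from scipy.ndimage import convolve

def pcnn_signature(img, iterations=40, beta=0.2,
                   a_f=0.1, a_l=0.3, a_t=0.2, v_f=0.1, v_l=0.2, v_t=20.0):
    """Return the firing-count time series (signature) of a PCNN run on `img`."""
    s = (img - img.min()) / (img.max() - img.min() + 1e-12)  # normalised stimulus
    k = np.ones((3, 3)); k[1, 1] = 0                         # 8-neighbourhood linking kernel
    f = np.zeros_like(s); l = np.zeros_like(s)
    y = np.zeros_like(s); theta = np.ones_like(s)
    signature = []
    for _ in range(iterations):
        link = convolve(y, k, mode="constant")
        f = np.exp(-a_f) * f + s + v_f * link                # feeding input
        l = np.exp(-a_l) * l + v_l * link                    # linking input
        u = f * (1.0 + beta * l)                             # internal activity
        y = (u > theta).astype(float)                        # pulse output
        theta = np.exp(-a_t) * theta + v_t * y               # dynamic threshold
        signature.append(y.sum())                            # firing count per iteration
    return np.array(signature)

# Change indicator: a low correlation between the two dates' signatures
# suggests that something changed within the area of interest.
rng = np.random.default_rng(1)
img_t1 = rng.random((64, 64))
img_t2 = img_t1.copy(); img_t2[20:40, 20:40] += 0.5          # simulated change
r = np.corrcoef(pcnn_signature(img_t1), pcnn_signature(img_t2))[0, 1]
print(f"signature correlation: {r:.3f}")
```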
In this paper we present a new methodology based on the combination of MLP-NN and PCNN for the production and update of land cover maps covering the whole Italian territory using Sentinel data. After dividing the area of interest into 240 tiles of 1500x1500 pixels, the MLP-NN is used to generate the “Master” classification map of each tile. Landsat data have been considered for this first implementation. The PCNN algorithm is subsequently applied to detect the changed areas; to this aim, the PCNN uses multitemporal pairs of either SAR or optical images. A data fusion approach has then been developed to merge the results obtained from the different types of images. Once the changed areas have been detected, a new MLP-NN classification is performed to assign the new land cover classes.
To ensure sufficient robustness and accuracy, a restricted number of land cover classes has been considered so far: forest, built areas, water, and other natural surfaces. The final results are encouraging: first of all, a consistent land cover map of the whole Italian territory with a spatial resolution of 30 m has been produced, with an overall accuracy of about 92%. Moreover, the PCNN procedure allowed us to update the maps with a very high level of automation while keeping the same final accuracy. Finally, it is important to underline that the methodology can easily be extended to very high resolution images in order to improve the detail of the generated maps.
[1] F. Pacifici, F. Del Frate, W. J. Emery, P. Gamba, and J. Chanussot, “Urban mapping using coarse SAR and optical data: outcome of the 2007 GRS-S data fusion contest,” IEEE Geoscience and Remote Sensing Letters, vol. 5, no. 3, pp. 331-335, July 2008.
[2] F. Pacifici and F. Del Frate, “Automatic Change Detection in Very High Resolution Images with Pulse-Coupled Neural Networks,” IEEE Geoscience and Remote Sensing Letters, vol. 7, no. 1, pp. 58-62, January 2010.
[3] C. Pratola, F. Del Frate, G. Schiavon, and D. Solimini, “Toward fully automatic detection of changes in suburban areas from VHR SAR images by combining multiple neural network models,” IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 4, pp. 2055-2066, April 2013.
-
Paper 1068 - Session title: Land Cover 2
13:30 Towards Fully Automated Sentinel-2 Type Based Continuous Land Monitoring
Sturm, Kevin (1); Riffler, Michael (1); Scheckel, Sebastian (1); Vuolo, Francesco (2); Atzberger, Clement (2); Haas, Eva Maria (1) 1: GeoVille, Austria; 2: University of Natural Resources and Life Sciences (BOKU), Austria
The successful launch of the first Sentinel-2 satellite on 23 June 2015 opened a new chapter for European and global land monitoring. To date, most land cover monitoring efforts have relied on detecting changes between single-date “snapshots”, often several years apart, produced to different standards and lacking comparability. Because of the time lag between satellite image acquisition and the provision of the final mapping results, outputs are in most cases already outdated by the time they are published.
New European directives and national legislation demand more regular, more detailed and more consistent information to serve existing and planned obligations. The availability of a homogeneous, Europe-wide operational land monitoring capacity based on a recurrent flow of image data is an indispensable public necessity, needed for political decisions, effective administration and successful governance.
The goal of the Austrian-funded FFG LandMon project is to generate the know-how and technology to enable a fully automated, Sentinel-2-type-based, continuous land monitoring capacity. The land mapping is based on a Sentinel-2-type, bottom-of-atmosphere (BoA), cloud-free, synthetic multi-temporal image test-bed database over a heterogeneous landscape in central Europe. In comparison to traditional approaches that rely solely on static multispectral information, the LandMon approach employs a combination of temporally dense, consistent multispectral index data and biophysical indicators for the fully automated identification and monitoring of primary land cover types (built-up, forest, grassland, bare soil, water, etc.). The processing chains are developed for a computing environment capable of handling high data loads, allowing large-scale automated land cover mapping.
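To illustrate the general idea of classifying from temporally dense index data rather than single-date spectra, the sketch below derives simple temporal descriptors from a synthetic index time series and assigns primary land cover types with hypothetical decision rules; none of the indices, thresholds or class rules are taken from the LandMon processing chain.

```python
# Minimal sketch of rule-based land cover typing from temporal index statistics;
# indices, thresholds and class rules are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
dates, rows, cols = 24, 100, 100

# Stand-ins for per-date indices derived from BoA reflectance
# (e.g. NDVI for vegetation, NDWI for water).
ndvi = rng.uniform(-0.2, 0.9, size=(dates, rows, cols))
ndwi = rng.uniform(-0.6, 0.6, size=(dates, rows, cols))

# Temporal descriptors summarising each pixel's seasonal behaviour.
ndvi_min, ndvi_max = ndvi.min(axis=0), ndvi.max(axis=0)
ndwi_mean = ndwi.mean(axis=0)

# Hypothetical decision rules for primary land cover types
# (later rules override earlier ones in this simple sketch).
land_cover = np.full((rows, cols), 5, dtype=np.uint8)   # 5 = other
land_cover[(ndvi_max > 0.5) & (ndvi_min <= 0.5)] = 3    # grassland / cropland
land_cover[ndvi_min > 0.5] = 2                          # forest (persistently green)
land_cover[ndvi_max < 0.2] = 4                          # bare soil / built-up
land_cover[ndwi_mean > 0.3] = 1                         # water
```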
-
Paper 1374 - Session title: Land Cover 2
13:50 Landscan project
Ronczyk, Levente (1); Czigany, Szabolcs (1); Zavalnij, Bogdan (1); Holecz, Francesco (2) 1: University of Pecs, Hungary; 2: sarmap sa
University of Pécs and sarmap launched the Landscan project last year, which aims to develop an operational remote sensing-based service at national scale for land monitoring purposes, using Sentinel-1/-2 data and taking advantage of the supercomputing facilities in Hungary. In order to provide a cutting-edge service, the dedicated remote sensing solution – based on sarmap technology – takes advantage of the HPC facility of the University of Pécs.
Landscan is organized into four basic building blocks:
I. Identification of stakeholders and definition of stakeholder needs.
This component of the project focuses on a clear definition of end-user needs. The user survey relies on interviews; the main target groups are the water and disaster management authorities and various actors in the farming business. Personal interviews refine the stakeholders’ product demands.
II. Demonstration site definition and ground data collection for calibration/validation.
Ground data for five crop types characteristic of Central and Eastern Europe were collected in SW Hungary during the 2015 growing season. Two farmers were involved in the data collection. Soil types differ between the two farmlands: the one near Mohács has sand-rich Fluvisols, while the one around Bicsérd is characterized by silt-rich Chernozems (Mollisols). At the Bicsérd site, selected ground parameters, including leaf area index, crop height, and soil and crop moisture contents, were measured regularly. Measurement intervals were adjusted to the revisit times of the Sentinel-1 satellite. At both sites the crop calendar was provided by the farmers and was then integrated into a geodatabase together with the SAR image information.
III. Definition of remote sensing processing chain.
Following the identified user needs, dedicated classification schemes were evaluated based on the sarmap solution. Multi-temporal and multi-sensor data analysis methods are used to classify and process the SAR data. The key parameters of the classification are the temporal descriptors, which play a dominant role in assigning raw pixels or objects to meaningful classes (a minimal sketch of such descriptors is given after block IV). Additionally, hard classifiers and combinations of SAR, ground and optical data were defined for the automated image processing chain. The theoretical basis of the classification is the combination of optical and radar data, alongside the appropriate use of statistical methods.
IV. Implementation and testing of the remote sensing processing chain on supercomputer.
The MAPscape software, used for data processing and developed by sarmap, was installed on the infrastructure of the Hungarian National Information Infrastructure Development Institute (NIIF). The image processing chain runs in a fully automated way on the NIIF supercomputers and is optimized for parallel processing and GPU acceleration.
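As an illustration of what such temporal descriptors can look like, the minimal sketch below derives per-pixel statistics from a synthetic Sentinel-1 backscatter stack; the descriptor set and variable names are illustrative assumptions, not part of the MAPscape processing chain.

```python
# Minimal sketch of multi-temporal descriptors from a SAR backscatter time series,
# of the kind used to assign pixels or objects to classes; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
dates, rows, cols = 30, 100, 100
doy = np.linspace(90, 300, dates)                       # acquisition days of year

# Stand-in for a calibrated, terrain-corrected VH backscatter stack (dB).
sigma0_vh = rng.normal(-18.0, 3.0, size=(dates, rows, cols))

descriptors = {
    "min":     sigma0_vh.min(axis=0),                   # seasonal minimum
    "max":     sigma0_vh.max(axis=0),                    # seasonal maximum
    "mean":    sigma0_vh.mean(axis=0),
    "std":     sigma0_vh.std(axis=0),                    # temporal variability
    "doy_max": doy[sigma0_vh.argmax(axis=0)],            # timing of the maximum
    "range":   sigma0_vh.max(axis=0) - sigma0_vh.min(axis=0),
}
# Each descriptor is a (rows, cols) image; stacking them gives per-pixel
# features for a subsequent (hard) classification step.
features = np.stack(list(descriptors.values()), axis=-1)
```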
In summary, our presentation aims to highlight the difficulties in the SAR data processing steps listed above and puts emphasis on the development challenges of novel services primarily based on SAR data and supercomputing. Thanks to the Sentinel programme, stakeholders and end-users will substantially benefit from SAR-based services at the national level.