Just this once, we are ahead of time. Well, nearly: we had promised the 2016 land cover map of France before the end of the first term of 2017. It exists and is available here. Its resolution is 10 m, with the same 17-class nomenclature that we used for the Landsat land cover map of 2014.

The map is mainly based on Sentinel-2 data acquired from the end of 2015 to the end of 2016, but we have also processed the LANDSAT 8 data; we give some details below.

In terms of quality, the statistics are quite similar to what we were getting with Landsat-8. The improvements brought by Sentinel-2 are not insignificant (just look at the map), but they are not visible in the overall statistics for the whole territory. We have also produced regularized vector maps with a minimum mapping unit of 0.1 ha, corresponding to the 20 m resolution.
- Validity: This is the number of valid dates (no cloud, no shadow) used to recognize the class of each pixel. You will see that there are big differences between areas because of the satellite tracks and, of course, because of the weather.
- Confidence: This is the probability of the class chosen for each pixel. The classification algorithm makes a majority vote among 100 individual classifiers; the confidence is the percentage of classifiers that voted for the majority class. With 17 classes, this value could be as low as 6%, but overall it is well above 40%. We note that confidence is lower in the West, around the Mediterranean and in the big cities. Conversely, it is higher on the main crops of the centre of France and on the forests (the Landes forest is easy to recognize on the confidence map).
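The two layers above can be sketched in a few lines of NumPy. This is only an illustration on toy data (the mask stack, the vote array and the 10-classifier ensemble are made up; iota2 uses 100 classifiers), not the actual iota2 code:

```python
import numpy as np

# -- Validity: per-pixel count of valid dates --------------------------
# Hypothetical stack of per-date masks, True where the pixel is valid
# (no cloud, no shadow) on that date. Shape: (n_dates, rows, cols).
masks = np.array([
    [[True, True], [False, True]],
    [[True, False], [False, True]],
    [[False, True], [True, True]],
])
validity = masks.sum(axis=0)  # [[2, 2], [1, 3]]

# -- Confidence: share of votes for the majority class -----------------
# Hypothetical votes for one pixel: 10 classifiers here instead of the
# 100 used in iota2, each voting for one of the 17 class labels.
votes = np.array([3, 3, 3, 3, 3, 3, 7, 7, 12, 3])
counts = np.bincount(votes, minlength=17)
majority_class = int(counts.argmax())    # class 3
confidence = counts.max() / votes.size   # 7 votes out of 10 -> 0.7

print(validity.tolist(), majority_class, confidence)
```

With 17 classes and 100 voters, a majority can in principle be reached with just over 1/17 ≈ 6% of the votes, which is why the theoretical floor mentioned above is so low.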
Below the image display, we find the validation statistics. First, there are the overall percentages:
- OA: Overall Accuracy, which gives the percentage of well-classified pixels
- Kappa: a statistical index similar to the OA, but corrected downwards to take into account that even a random classification would correctly classify some pixels.
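Both quantities come straight from the confusion matrix. The matrix below is a made-up 3-class example, not our validation data:

```python
import numpy as np

# Toy confusion matrix (rows: reference, columns: predicted) for a
# hypothetical 3-class validation set.
cm = np.array([
    [50,  5,  5],
    [10, 40, 10],
    [ 5,  5, 70],
])

total = cm.sum()
# Overall Accuracy: share of well-classified pixels (the diagonal).
oa = np.trace(cm) / total

# Kappa corrects the OA by the agreement expected from a random
# classification with the same class proportions (pe).
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
kappa = (oa - pe) / (1 - pe)

print(round(oa, 3), round(kappa, 3))  # 0.8 0.696
```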
For each class, we also display the FScore, a metric whose optimum is 1 and which combines over-detection and under-detection. As the artificialized surface classes are often confused with each other, we have also added the FScore of the four classes taken together. For annual crops (summer and winter), the classification procedure (described below) does not allow us to validate with the same data set as for the other classes, so we provide the FScore of the two classes taken together, but also measured with separate classes on a restricted data set.
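As a sketch of how these per-class and merged FScores are obtained, here is the computation on the same kind of toy confusion matrix (hypothetical numbers, not our validation results):

```python
import numpy as np

# Toy confusion matrix (rows: reference, columns: predicted) for three
# hypothetical classes. The FScore is the harmonic mean of precision
# (which penalizes over-detection) and recall (under-detection).
cm = np.array([
    [50,  5,  5],
    [10, 40, 10],
    [ 5,  5, 70],
])

tp = np.diag(cm).astype(float)
precision = tp / cm.sum(axis=0)
recall = tp / cm.sum(axis=1)
fscore = 2 * precision * recall / (precision + recall)
# per-class FScores, approximately [0.8, 0.727, 0.848]

# Merging two often-confused classes (here 0 and 1) before scoring,
# in the same spirit as grouping the four artificialized classes:
merged = np.array([
    [cm[:2, :2].sum(), cm[:2, 2].sum()],
    [cm[2, :2].sum(),  cm[2, 2]],
])
print(np.round(fscore, 3), merged.tolist())
```

Merging rows and columns before scoring rewards the classifier for getting the group right even when it confuses members within the group, which is exactly why the grouped FScore is reported for the artificialized classes.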
File download:
We planned to jointly use the Landsat-8 and Sentinel-2 series to obtain a very fine characterization of the temporal evolution of the observed surfaces. Unfortunately, we were confronted with geometrical registration differences and, above all, with loss of definition due to the resolution difference between Landsat (30 m) and Sentinel-2 (10 and 20 m depending on the bands). We had to choose between quantity of data and geometric precision. Since the results obtained with Sentinel-2 alone are satisfactory, we decided not to use the Landsat-8 data as initially planned. They were still used to solve another problem, described below.
CropMix is the nickname given by Arthur Vincent and Vincent Thierion to the technique used in iota2 to classify annual crops without reference data for the current year. The approach was first prototyped during Dawa Derksen's graduation internship.

For the prototype products we used the Land Parcel Identification System (LPIS), which provides a large number of learning samples. The disadvantage of the LPIS is that it becomes available too late for the annual update of the land cover map. For instance, we still do not have the 2015 version. So for the 2016 land cover map, we could not rely on the LPIS of 2016 or even 2015, and the last available version is that of 2014 (edit: in November 2017, we are still waiting for the next versions!). We therefore had to develop a new technique (domain adaptation, in machine learning jargon) to use the 2014 LPIS. An additional difficulty is that in 2014 there were no Sentinel-2 images, since the satellite had not been launched yet. As a result, we used Landsat-8 images from 2014 to transfer the learning to Landsat-8 imagery from 2016, and then made a second transfer to use this learning on the 2016 Sentinel-2 data.

There remains the problem of validating this result, because the LPIS is also used to validate our classifications. For this task, we were able to rely on reference data which, if not available in sufficient quantity for learning, at least allow an independent validation. These are the statistics that accompany the published map. You will see that the results are quite acceptable.
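The two-step transfer described above can be sketched as follows. This is only a rough illustration on synthetic arrays: all the variable names are hypothetical, the labels are a toy 2-class stand-in for the LPIS crop classes, and a trivial nearest-centroid classifier replaces the actual ensemble used in iota2. The real CropMix procedure is more involved than this.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins (all names hypothetical) for the real series:
# landsat_2014 / landsat_2016: Landsat-8 features for the two years,
# s2_2016: Sentinel-2 features for the same 2016 pixels,
# lpis_2014: crop labels taken from the 2014 LPIS.
n = 400
landsat_2014 = rng.normal(size=(n, 8))
lpis_2014 = (landsat_2014[:, 0] > 0).astype(int)   # toy 2-crop labels
landsat_2016 = rng.normal(size=(n, 8))
s2_2016 = rng.normal(size=(n, 20))

def fit_centroids(X, y):
    """Nearest-centroid stand-in for the real classifier ensemble."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1)
                      for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Step 1: learn on the 2014 Landsat series with the 2014 LPIS labels,
# then transfer that model to the 2016 Landsat series to obtain
# pseudo-labels for 2016.
model_2014 = fit_centroids(landsat_2014, lpis_2014)
pseudo_labels_2016 = predict(model_2014, landsat_2016)

# Step 2: train on the 2016 Sentinel-2 features with those
# pseudo-labels; this second model is the one applied to the 2016 map.
model_s2 = fit_centroids(s2_2016, pseudo_labels_2016)
predictions = predict(model_s2, s2_2016)
print(predictions.shape)  # (400,)
```

The key design point is that the 2014 labels never touch the Sentinel-2 data directly: the Landsat-8 sensor, common to both years, serves as the bridge between the 2014 reference data and the 2016 Sentinel-2 series.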