
Three years ago, we started the phase-0 study of a mission named Sentinel-HR: S-HR would be a satellite mission to complement Sentinel-2 New Generation (S2-NG) with
- higher resolution images (2 m),
- a lower revisit frequency (around 20 days),
- four spectral bands (blue, green, red and near-infrared),
- and the capacity to make stereoscopic observations.
In that study, one of our assumptions was that a lower revisit frequency than that of Sentinel-2 would be sufficient. This seems possible because, in most cases, the high-frequency features of the images (the details) do not change as quickly as the low-frequency features (the colors).
During that study, Julien Michel spent quite some time verifying this assumption. To do so, he studied whether it is possible to fuse the time series from S2-NG and from S-HR in order to obtain hybrid time series with the high resolution of S-HR (2 m) and the frequent revisit of S2 (5 days) or S2-NG (3 days). These results were already shown in our report [1] a year ago, but we are not fully sure everyone reads 150-page reports.
In this first article, we present the methodology, the validation data set and the first results; a second post studies the influence of the revisit frequency.
Data sets
Different techniques make it possible to merge time series of S2NG and VHR data, in order to obtain, with good accuracy, both the frequent revisit of S2NG and the very high resolution of S-HR. Our study compared several methods, using VENµS images to simulate the S-HR data and Sentinel-2 images to simulate the S2NG data. As VENµS images are acquired every second day, for each time series, one image every 20 days was used for the data fusion, and the remaining ones were used to validate the fused data.
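To make this simulation protocol concrete, here is a minimal sketch of how such a split between fusion dates and validation dates could be done, assuming the VENµS acquisition dates are available as a sorted Python list. The function name, the 20-day default and the toy dates are purely illustrative, not code from the study:

```python
from datetime import date, timedelta

def split_fusion_validation(acquisition_dates, revisit_days=20):
    """Keep one date every `revisit_days` to simulate the S-HR revisit;
    all other dates are kept aside as reference for validation."""
    fusion, validation = [], []
    next_fusion_date = acquisition_dates[0]
    for d in acquisition_dates:
        if d >= next_fusion_date:
            fusion.append(d)                                  # input to the data fusion
            next_fusion_date = d + timedelta(days=revisit_days)
        else:
            validation.append(d)                              # used to score the fused images
    return fusion, validation

# Toy example: VENµS-like acquisitions every 2 days over 3 months
dates = [date(2021, 6, 1) + timedelta(days=2 * i) for i in range(45)]
fusion_dates, validation_dates = split_fusion_validation(dates)
print(len(fusion_dates), "fusion dates,", len(validation_dates), "validation dates")
```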
The following image series gives an example of the data set we used (which has been published [4]). We have such data for 110 sites, observed by VENµS for two years.

Data fusion methods
For the sake of conciseness, we do not detail the methods here, as they are fully described in the report [1]. We studied five methods:
Two trivial methods for reference (a rough sketch of both is given after the list of methods):
- GF: gap filling of the VHR data, without using any information from the S2 or S2NG sensors. It is just a temporal interpolation, performed pixel-wise.
- Zoom: zooming of the S2NG or S2 data using resampling techniques. In that case, the S-HR data are not used.
Three more methods were tested:
- STARFM [3], or *FM, is a classical multi-linear approach to data fusion that uses information from both missions.
- CARN [2] is an efficient single-image super-resolution method that only uses the S-HR data as reference data for the machine learning, while the super-resolved images are inferred from the S2(NG) data.
- DDI is a guided interpolation approach: neural networks are trained to learn how to interpolate the S2(NG) data at S2(NG) resolution, and the same network is then used to interpolate the VHR data at full resolution. The DDI method was tested with two different normalizations. DDI, developed at CESBIO, has not been published yet, but should be soon.
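Neither STARFM, CARN nor DDI can be summarized in a few lines of code, but the two trivial baselines can. The sketch below, written with illustrative array shapes and function names (it is not the code used in the study), shows what GF and Zoom amount to: a pixel-wise linear interpolation in time for GF, and a purely spatial resampling for Zoom.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.ndimage import zoom as spatial_zoom

def gap_fill(vhr_stack, vhr_days, target_days):
    """GF baseline: pixel-wise linear interpolation in time.
    vhr_stack has shape (T, H, W): T VHR images acquired at vhr_days."""
    interpolator = interp1d(vhr_days, vhr_stack, axis=0, kind="linear")
    return interpolator(target_days)                  # shape (len(target_days), H, W)

def zoom_baseline(s2_image, factor=5):
    """Zoom baseline: spatial resampling of a single S2(NG) image up to the
    VHR grid (cubic splines here); the S-HR data are not used at all."""
    return spatial_zoom(s2_image, factor, order=3)

# Toy example: 4 VHR dates (every 20 days), predictions at 3 intermediate dates
vhr = np.random.rand(4, 64, 64).astype(np.float32)
filled = gap_fill(vhr, vhr_days=[0, 20, 40, 60], target_days=[10, 30, 50])
zoomed = zoom_baseline(np.random.rand(64, 64).astype(np.float32))
print(filled.shape, zoomed.shape)   # (3, 64, 64) (320, 320)
```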
Results
The results displayed in the next figure show that differences lower than 0.01 (in reflectance) with respect to the reference data set were reached in the red, green and blue bands with the DDI methods. The methods were evaluated on uniform pixels, where temporal variations are preponderant, and on the 25% of pixels with the highest gradients, for which spatial features are more important. The results are not as accurate in the near-infrared band because of larger differences in the spectral response functions between VENµS and Sentinel-2. Standard deviations of 0.01 are close to the performance of atmospheric correction on Sentinel-2 images, so this may be considered a really good result.
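The exact evaluation protocol is described in the report [1], but the idea of splitting the statistics between uniform pixels and the 25% of pixels with the highest gradients can be illustrated in a few lines. The sketch below is only an assumption of how such a stratified standard deviation could be computed for one band (thresholding the gradient magnitude of the reference image at its 75th percentile); it is not the code used in the study.

```python
import numpy as np

def stratified_difference_std(fused, reference, gradient_quantile=0.75):
    """Std of (fused - reference) for one band, computed separately on
    'uniform' pixels and on the pixels whose spatial gradient magnitude
    (measured on the reference image) lies in the top 25%."""
    gy, gx = np.gradient(reference)
    gradient_magnitude = np.hypot(gx, gy)
    threshold = np.quantile(gradient_magnitude, gradient_quantile)
    high_gradient = gradient_magnitude >= threshold
    diff = fused - reference
    return diff[~high_gradient].std(), diff[high_gradient].std()

# Toy example with synthetic reflectances and ~0.008 noise
reference = 0.3 * np.random.rand(256, 256).astype(np.float32)
fused = reference + np.random.normal(scale=0.008, size=reference.shape)
std_uniform, std_edges = stratified_difference_std(fused, reference)
print(f"std on uniform pixels: {std_uniform:.4f}, on high-gradient pixels: {std_edges:.4f}")
```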

The next figure shows, for just one case, the differences observed in the fused data depending on the method. The final report includes many more results.

It is interesting to note that the gap filling method predicts the contours very well, but its linear interpolation in time is not able to capture the complex evolution of landscape colors, while the other methods manage to reproduce the low-frequency evolution but do not render the contours as well as the gap filling. The DDI method, however, does quite a good job on both criteria, as shown by the statistics above. These results show that our assumption that a lower revisit is possible for Sentinel-HR is correct. Of course, in terms of cost, a constellation with a 20-day revisit is probably about four times cheaper than one providing a 5-day revisit. In a second post, we study whether the revisit could be further reduced.
[1] Michel, J., Hagolle, O., Delvit, J.-M., Martin, T., Lebègue, L., et al. (2022). Sentinel-HR Phase 0 Report. CNES – Centre national d’études spatiales; CESBIO. https://hal.science/hal-03643411
[2] Li, Y., Agustsson, E., Gu, S., et al. (2018). CARN: Convolutional anchored regression network for fast and accurate single image super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops.
[3] Hilker, T., Wulder, M. A., Coops, N. C., Seitz, N., White, J. C., Gao, F., … & Stenhouse, G. (2009). Generation of dense time series synthetic Landsat data through data blending with MODIS using a spatial and temporal adaptive reflectance fusion model. Remote Sensing of Environment, 113(9), 1988–1999.
[4] Michel, J., Vinasco-Salinas, J., Inglada, J., & Hagolle, O. (2022). SEN2VENµS, a dataset for the training of Sentinel-2 super-resolution algorithms. Data, 7(7), 96.