How did I miss the blog run by Rémi Cresson, from "la maison de la télédétection" (the House of Remote Sensing) in Montpellier? It is well written, very funny and above all it shows impressive results from our favourite optical satellites, exactly what I intended to do on the multitemp blog back when I had time to write my posts. My colleagues know I am quite sceptical about the deep learning buzz of these last years, but this one really impressed me.

Claude Shannon was the first to demonstrate that, in the absence of noise, a sampled signal can be reconstructed exactly if its maximum frequency is lower than half the sampling frequency. If the landscape has frequencies above this Shannon limit, then several different signals can correspond to the same sampled one. An image is a sampled signal, and Sentinel-2 images, for instance, are sampled with a sampling frequency of 1/10 m⁻¹ (one sample every 10 metres). In our classical reconstructions (when we zoom), we suppose that the landscape respects Shannon's criterion and only contains low frequencies, and we therefore produce blurred zoomed images; but there is an infinity of other solutions, even if most of them would correspond to landscapes that cannot exist.

The machine learning procedure used by Rémi tries to find a plausible solution by providing examples of low and high resolution images to a convolutional neural network. For that, he used one of our Sentinel-2 syntheses and compared it to a pan-sharpened SPOT6 image at 1.5 m resolution, acquired during the same period. Rémi shows that the network is able to produce very plausible images even outside the SPOT6 footprint (see the image above).

Although it is possible to spot a few "errors", and the high resolution images appear a bit noisy, the result is almost magical. I am just wondering how far from the high resolution training image such an extrapolation remains possible. So the best thing is to have a look at the images generated by Rémi.
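The ambiguity above the Shannon limit is easy to demonstrate numerically. Here is a minimal sketch (the frequencies and sample count are illustrative, not taken from the satellite data): two different sinusoids, one below and one above half the sampling frequency, produce exactly the same samples, so no reconstruction can tell them apart without extra assumptions about the scene, which is precisely the gap the learned prior fills.

```python
import numpy as np

fs = 10.0                  # sampling frequency (analogous to one sample every 10 m)
n = np.arange(32)          # sample indices
t = n / fs                 # sample positions

f_low = 3.0                # below the Shannon limit fs/2 = 5: recoverable
f_alias = f_low + fs       # above the limit: aliases exactly onto f_low

s1 = np.cos(2 * np.pi * f_low * t)    # samples of the low-frequency signal
s2 = np.cos(2 * np.pi * f_alias * t)  # samples of the high-frequency signal

# On a fine grid, the two underlying signals are clearly different...
t_fine = np.arange(0.0, 1.0, 1e-3)
c1 = np.cos(2 * np.pi * f_low * t_fine)
c2 = np.cos(2 * np.pi * f_alias * t_fine)

# ...yet their samples are identical: two distinct landscapes,
# one and the same sampled image.
print(np.allclose(s1, s2), np.allclose(c1, c2))  # True False
```

The classical blurred zoom simply picks the low-frequency candidate among all the signals consistent with the samples; the network instead picks a candidate that looks like the high resolution images it was trained on.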
Don't forget to bookmark his blog, and maybe try the module yourself using the Orfeo ToolBox. And you can also watch the funny video.