How can brightness temperatures be reconstructed from the signals received by antenna arrays (as in the case of SMOS)?
Two methods have already been presented in a previous post: a widely used method (based on linear algebra and Fourier theory) and a much more recent alternative (based on deep learning techniques, currently being studied within a CNES research and technology project).
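As a rough illustration of the classical approach, the idea is that the interferometer measures "visibilities", which are (to first order) Fourier samples of the brightness temperature scene, so the scene can be recovered by an inverse transform. The 1-D toy model below is a hypothetical sketch of that principle only; the scene, sizes, and the direct use of an FFT are illustrative assumptions, not the actual SMOS processing chain.

```python
import numpy as np

# Toy 1-D scene: a 300 K warm patch on a 0 K background (illustrative values).
n = 64
scene = np.zeros(n)
scene[20:30] = 300.0

# Simplified forward model: visibilities as Fourier samples of the scene.
visibilities = np.fft.fft(scene)

# Reconstruction: inverse Fourier transform of the visibilities.
reconstruction = np.fft.ifft(visibilities).real

# With full, noise-free sampling the scene is recovered exactly.
print(np.allclose(reconstruction, scene))
```

In practice the sampling is irregular and incomplete, which is why the real inversion is posed as a linear-algebra problem rather than a plain inverse FFT.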

Preliminary results have shown that the deep learning approach is very promising, especially with regard to aliases. These are artefacts related to the antenna spacing that reduce the usable field of view. Since antenna spacing is usually subject to technical constraints, the criteria for an alias-free field of view cannot always be met. However, we have found that, using simulated data as input, the deep neural network method significantly extends this alias-free field of view. Many more details in a newly published paper!
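The effect of antenna spacing on aliasing can be sketched with the same toy 1-D Fourier model (a hypothetical illustration, not the SMOS geometry): widening the spacing between antennas coarsens the sampling of the visibility function, and an undersampled spectrum makes the reconstructed field of view fold onto itself.

```python
import numpy as np

# Toy 1-D scene: a warm patch near one edge of the field of view.
n = 64
scene = np.zeros(n)
scene[4:14] = 300.0

# Fully sampled visibilities (simplified forward model).
visibilities = np.fft.fft(scene)

# Doubling the antenna spacing halves the visibility sampling rate:
undersampled = visibilities[::2]

# Reconstructing from the undersampled visibilities folds the image:
# each pixel now mixes two points half a field of view apart.
aliased = np.fft.ifft(undersampled).real

print(np.allclose(aliased, scene[:n // 2] + scene[n // 2:]))
```

Only scenes confined to the unfolded region remain unambiguous, which is the 1-D analogue of the alias-free field of view shrinking as antenna spacing grows.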

The next step is to integrate various sources of error into the performance study, including radiometric noise, radio-frequency interference (RFI), the Faraday rotation angle… Stay tuned.