Objective: a deep learning network that accepts 4D low-resolution light field images and produces 4D high-resolution light field images.
1- Obtain the 4D low-resolution images by running a degradation model on the ground-truth 4D image dataset, estimating various blur kernels as well as real noise distributions. Most current approaches rely on paired low- and high-resolution images to train the network in a fully supervised manner. However, such image pairs are rarely available in real-world applications. Thus, the aim here is instead to learn super-resolution from unpaired data, without any restrictive assumptions on the input image formation.
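The degradation pipeline described above (blur with an estimated kernel, downsample, add noise) can be sketched as follows. This is a minimal PyTorch illustration under assumed conventions: the light field is stored as an (U, V, H, W) tensor (angular x spatial), the function name and the simple Gaussian-noise stand-in for the real noise distribution are assumptions, not the actual implementation.

```python
import torch
import torch.nn.functional as F

def degrade_light_field(lf, kernel, scale=2, noise_sigma=0.01):
    """Classical degradation: blur -> downsample -> add noise (illustrative sketch).

    lf:     light field tensor of shape (U, V, H, W), angular x spatial (assumption)
    kernel: estimated blur kernel of shape (k, k)
    """
    u, v, h, w = lf.shape
    # Treat each sub-aperture view as an independent single-channel image.
    views = lf.reshape(u * v, 1, h, w)
    k = kernel.shape[-1]
    # Blur every view with the shared estimated kernel.
    blurred = F.conv2d(views, kernel.view(1, 1, k, k), padding=k // 2)
    # s-fold spatial downsampling by strided sampling.
    lr = blurred[:, :, ::scale, ::scale]
    # Additive noise; a learned/real noise model would replace this Gaussian.
    lr = lr + noise_sigma * torch.randn_like(lr)
    return lr.reshape(u, v, h // scale, w // scale)
```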
2- Use a GAN (Generative Adversarial Network) framework to perform the up-sampling within the network using transposed convolutions. The aim here is for the network to take low-spatial-resolution images (for example 512x512) produced in objective 1 as input and predict high-spatial-resolution images (for example 1024x1024) as output in a coarse-to-fine fashion. The GAN framework should learn the correlations between the LR and HR 4D image volumes using the following loss functions: pixel loss, perceptual loss, and adversarial loss.
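The three-term generator objective named above could be combined as a weighted sum, sketched below. The weights, the choice of L1/MSE for the pixel and perceptual terms, and the frozen feature extractor `feat_net` used for the perceptual loss are illustrative assumptions (in practice a pretrained VGG is a common choice), not a definitive specification.

```python
import torch
import torch.nn.functional as F

def generator_loss(sr, hr, disc_fake, feat_net,
                   w_pix=1.0, w_perc=0.1, w_adv=1e-3):
    """Weighted sum of pixel, perceptual, and adversarial losses (illustrative).

    sr, hr:    super-resolved and ground-truth tensors
    disc_fake: discriminator logits for sr
    feat_net:  any frozen feature extractor for the perceptual term (assumption)
    """
    pixel = F.l1_loss(sr, hr)                            # pixel loss
    perceptual = F.mse_loss(feat_net(sr), feat_net(hr))  # perceptual loss in feature space
    # Non-saturating adversarial loss: generator wants disc_fake classified as real.
    adversarial = F.binary_cross_entropy_with_logits(
        disc_fake, torch.ones_like(disc_fake))
    return w_pix * pixel + w_perc * perceptual + w_adv * adversarial
```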
I have some code written (implemented but not yet trained) that performs the following:
A degradation model that takes 4D light field images as input and produces downsampled (degraded) 4D light field images.
A GAN (Generative Adversarial Network) framework that accepts a 4D light field dataset and learns the correlations between the LR and HR 4D image volumes.
A high-dimensional (4D) residual network that performs the up-sampling within the network using transposed convolutions on the 4D image dataset.
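Since PyTorch has no native 4D convolution, transposed-convolution up-sampling of a 4D light field is often realized by folding the angular dimensions into the batch and applying a shared 2D transposed convolution to the spatial dimensions. The module below is a minimal sketch of that workaround (class name, layout, and hyperparameters are assumptions); the residual structure of the actual network is omitted for brevity.

```python
import torch
import torch.nn as nn

class SpatialUpsampler(nn.Module):
    """Up-samples the spatial dims of a 4D light field via a shared
    ConvTranspose2d applied per sub-aperture view (illustrative workaround
    for the lack of a native 4D convolution in PyTorch)."""

    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        # kernel=2*scale, stride=scale, padding=scale//2 gives exact x-scale output size.
        self.up = nn.ConvTranspose2d(1, 1, kernel_size=scale * 2,
                                     stride=scale, padding=scale // 2)

    def forward(self, lf):
        # lf: (B, U, V, H, W) -- batch, angular, angular, spatial, spatial (assumption)
        b, u, v, h, w = lf.shape
        x = lf.reshape(b * u * v, 1, h, w)   # fold angular dims into the batch
        x = self.up(x)                       # transposed-conv spatial up-sampling
        return x.reshape(b, u, v, h * self.scale, w * self.scale)
```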