Ph.D. Thesis Defense of Sanjay Viswanath
February 16, 2022 @ 3:00 pm - 5:00 pm UTC+0
Advisor: Prof. Muthuvel Arigovindan
Title: Spatially Adaptive Regularization for Image Restoration
Thesis Examiners: Prof. Suyash Awate, IIT Bombay, and Prof. Ajit Rajwade, IIT Bombay
Defense Examiner: Prof. Ajit Rajwade, IIT Bombay
Date and Time: 16th February 2022 (Wednesday): 3:00 pm – 5:00 pm
Venue: Microsoft Teams Live
Microsoft Teams meeting link: https://teams.microsoft.com/l/meetup-join/19%3ameeting_NjBkZTE1NmEtNzQ5Ny00NzJkLTllNTgtM2ViNWZiZDQzNzA4%40thread.v2/0?context=%7b%22Tid%22%3a%226f15cd97-f6a7-41e3-b2c5-ad4193976476%22%2c%22Oid%22%3a%22d7e91daa-7e70-4e9c-b565-b900dfd5b5b5%22%7d
Summary: Image restoration/reconstruction refers to the estimation of the underlying image from measurements generated by imaging devices. This problem is generally ill-posed because the measurements are corrupted by the physical limitations of the imaging device and by the noise inherent in the measurement process. The current literature contains three main classes of methods. The first class is based on a regularization framework that enforces an ad hoc prior on the restored image. The second class uses regression-based learning paradigms, where a training set of clean images and the corresponding distorted measurements is used to generate a trained prior. The third class adopts trained priors similar to those of the second class, but within the regularization framework. This third class, the trained regularization methods, is receiving increasing attention because these methods have the versatility of regularization methods while also encompassing natural priors obtained from training. However, the need for training data can limit their applicability. In this thesis, we propose spatially adaptive regularization methods in which the adaptation information is retrieved from the measured data being reconstructed. Owing to this adaptation, the enforced prior is more natural than those of existing regularization methods, while at the same time our methods require no training data.
In the first part, we propose a novel regularization method that adaptively combines the well-known second-order Hessian-Schatten (HSN) norm regularization and the first-order total variation (TV-1) functional with spatially varying weights. The relative weight combining the first- and second-order terms is itself an image (one weight per pixel), and it is determined through minimization of a composite cost function, without user intervention.
Our contributions in this part can be summarized as follows:
• We construct a composite regularization functional containing two parts: (i) the sum of TV-1 and HSN with spatially varying relative weights; (ii) an additional regularization term that prevents rapid spurious variations in the relative weights. The total composite cost functional is convex with respect to the required image alone and with respect to the relative weight alone, but it is jointly non-convex.
• We construct a block coordinate descent method that alternates minimization w.r.t. the required image and the relative weight, with the following structure: the minimization w.r.t. the required image is carried out using the Alternating Direction Method of Multipliers (ADMM), and the minimization w.r.t. the relative weight is carried out as a single-step exact minimization using a formula that we derive.
• Since the total cost is non-convex, the reconstruction results depend strongly on the initialization of the block coordinate descent method. We handle this problem using a multi-resolution approach, in which a series of coarse-to-fine reconstructions is performed by minimizing cost functionals defined through upsampling operators. Here, minimization w.r.t. the relative weight and the required image is carried out alternately as we progress from coarse to fine resolution levels. At the final resolution level, the above-mentioned block coordinate descent method is applied.
• Note that the sub-problem of minimization w.r.t. the required image involves spatially varying relative weights. Further, this sub-minimization problem in the above-mentioned multi-resolution loop involves upsampling operators. Hence, the original ADMM method proposed by Papafitsoros et al. turns out to be unsuitable. We propose an improved variable splitting method and computational formulas to handle this issue.
• Using Zangwill’s convergence theorem, we prove that the overall block coordinate descent method converges to a local minimum of the total cost function.
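The alternating structure in the steps above can be illustrated with a minimal 1-D sketch. This is not the thesis algorithm: the nonsmooth TV-1/HSN terms are replaced by quadratic (squared-difference) surrogates so that each sub-problem has a closed-form solve, the exact weight-update formula is replaced by a simple quadratic proxy clipped to [0, 1], and all names and penalty constants (`corosa_sketch`, `lam`, `mu`, `nu`) are illustrative.

```python
import numpy as np

def diff1(n):
    """First-order forward-difference matrix of shape (n-1, n)."""
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    return D

def corosa_sketch(y, lam=0.5, mu=1.0, nu=0.1, iters=20):
    """Toy block coordinate descent: alternate closed-form updates of the
    signal x and the per-site relative weight w in [0, 1]."""
    n = len(y)
    D1 = diff1(n)                # first-order differences, (n-1) x n
    D2 = diff1(n - 1) @ D1       # second-order differences, (n-2) x n
    w = np.full(n - 1, 0.5)      # relative weight, one per first-difference site
    x = y.copy()
    for _ in range(iters):
        # x-step: min_x ||x - y||^2 + lam * sum(w*(D1 x)^2 + (1-w)*(D2 x)^2).
        # (1 - w) lives on first-difference sites; average it onto
        # second-difference sites so the shapes match.
        v = 1.0 - 0.5 * (w[:-1] + w[1:])
        A = (np.eye(n)
             + lam * (D1.T @ (w[:, None] * D1))
             + lam * (D2.T @ (v[:, None] * D2)))
        x = np.linalg.solve(A, y)
        # w-step: the data part of the cost is linear in w; add a smoothness
        # term mu*||D w||^2 and a centering term nu*||w - 1/2||^2, solve the
        # resulting linear system, and clip to [0, 1].
        a = (D1 @ x) ** 2
        b = np.zeros(n - 1)
        b2 = (D2 @ x) ** 2
        b[:-1] += 0.5 * b2
        b[1:] += 0.5 * b2
        Dw = diff1(n - 1)
        H = 2.0 * mu * (Dw.T @ Dw) + 2.0 * nu * np.eye(n - 1)
        w = np.clip(np.linalg.solve(H, nu - lam * (a - b)), 0.0, 1.0)
    return x, w
```

The clip is a crude projection onto [0, 1] rather than an exact constrained minimizer; in the thesis, the weight update is a single-step exact formula and the image update uses ADMM with the improved variable splitting.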
We name our method Combined Order Regularization with Optimal Spatial Adaptation (COROSA). We provide restoration examples involving deconvolution of total internal reflection fluorescence (TIRF) microscopy images and reconstruction of Magnetic Resonance Imaging (MRI) images from under-sampled Fourier data, and demonstrate that COROSA outperforms existing regularization methods and selected deep learning methods.
In the second part, we make COROSA more adaptive by replacing the HSN term with a spatially varying weighted combination of the eigenvalues of the Hessian. The resulting regularization takes the form of a spatially varying weighted sum of three terms involving the gradient and the two eigenvalues of the Hessian. This allows the functional to restore fine image structures through directional weighting in terms of the local eigenvalues. We again adopt a block coordinate descent (BCD) scheme that alternates between estimating the spatially varying weights and computing the image, as in the first part; however, both steps are more complex in the new form. Weight estimation is more complex because it involves three terms, and image computation is more complex because no proximal operator is known for regularization involving unequally weighted Hessian eigenvalues. We solve the first problem by constructing a novel iterative method, and the second by deriving a novel proximal formula. Here too, we adopt a multi-resolution approach to initialize the BCD method. We call our method Hessian Combined Order Regularization with Optimal Spatial Adaptation (H-COROSA), and experimentally compare it with well-known regularization methods and selected learning-based methods for MRI reconstruction from under-sampled Fourier data.
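The directional ingredient of this formulation, the per-pixel eigenvalues of the image Hessian, is easy to illustrate. The sketch below computes them with finite differences and evaluates a spatially weighted penalty of the kind described above; the function names and weight maps are hypothetical, and the optimization itself (the iterative weight estimation and the new proximal formula) is not reproduced here.

```python
import numpy as np

def hessian_eigenvalues(img):
    """Per-pixel eigenvalues of the finite-difference Hessian of a 2-D image.

    For a symmetric 2x2 matrix [[a, c], [c, b]] the eigenvalues are
    (a + b)/2 +/- sqrt(((a - b)/2)^2 + c^2)."""
    gx = np.gradient(img, axis=0)
    gy = np.gradient(img, axis=1)
    Ixx = np.gradient(gx, axis=0)
    Iyy = np.gradient(gy, axis=1)
    Ixy = np.gradient(gx, axis=1)
    half_tr = 0.5 * (Ixx + Iyy)
    disc = np.sqrt((0.5 * (Ixx - Iyy)) ** 2 + Ixy ** 2)
    return half_tr + disc, half_tr - disc

def directional_penalty(img, w0, w1, w2):
    """Spatially weighted sum of a gradient-magnitude term and two Hessian
    eigenvalue terms; w0, w1, w2 are hypothetical per-pixel weight maps."""
    gx, gy = np.gradient(img, axis=0), np.gradient(img, axis=1)
    e1, e2 = hessian_eigenvalues(img)
    return np.sum(w0 * np.hypot(gx, gy) + w1 * np.abs(e1) + w2 * np.abs(e2))
```

On a ridge-like structure one eigenvalue is large and the other small, so unequal weights let the penalty act across the ridge while leaving variation along it cheap; this is the directional behavior the paragraph describes.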
Compressive-sensing-based methods have shown the advantage of l0-based sparsity-enforcing functionals in restoration, and for practical applications, lp functionals with 0 < p ≤ 1 have been found to perform better than l1 functionals. In the last part, we propose an lp-based generalization of the COROSA and H-COROSA formulations, replacing the corresponding l1-based functionals with the lp norm enforced on the combined multi-order functionals. Additionally, for H-COROSA, we consider three forms of penalty for the spatial weights. We construct an iteration scheme that merges the majorization-minimization (MM) method for the lp norm with the BCD method used in the first two parts of the thesis, and again use a similar multi-resolution method for initialization. We demonstrate the advantage of the lp norm using MRI reconstruction examples involving severe undersampling in the Fourier domain.
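The majorization-minimization step for an lp term can be sketched as iteratively reweighted least squares: at each iterate, |t|^p is majorized by a quadratic in t, so the lp-regularized sub-problem reduces to a weighted least-squares solve. The 1-D denoising example below is a minimal sketch under that standard majorizer, with hypothetical names and parameters; it uses only first-order differences and omits the spatial weights and BCD of the actual thesis scheme.

```python
import numpy as np

def lp_irls_denoise(y, p=0.7, lam=1.0, iters=30, eps=1e-6):
    """MM for min_x ||x - y||^2 + lam * sum_i |(D x)_i|^p, with 0 < p <= 1.

    At iterate x_k, |t|^p is majorized by (p/2)|t_k|^(p-2) t^2 + const,
    so each MM step is a single reweighted least-squares solve."""
    n = len(y)
    D = np.zeros((n - 1, n))     # first-order difference matrix
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    x = y.copy()
    for _ in range(iters):
        t = np.abs(D @ x)
        u = (p / 2.0) * np.maximum(t, eps) ** (p - 2.0)  # majorizer weights
        A = np.eye(n) + lam * (D.T @ (u[:, None] * D))   # SPD system
        x = np.linalg.solve(A, y)
    return x
```

Smaller p concentrates the remaining differences at genuine jumps, which is the behavior that favors lp over l1 under severe undersampling; eps guards the weight when a difference is near zero.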
ALL ARE CORDIALLY INVITED