PhD Thesis Colloquium of Pravin Nair

Date: June 7, 2022.

Time: 11 AM - 12 noon.

Venue: MS Teams (online).

Link: https://tinyurl.com/2p8pwa3c

Title: Provably convergent algorithms for denoiser-driven image regularization.

Abstract: Some fundamental reconstruction tasks in image processing can be posed as an inverse problem in which we are required to invert a given forward model. For example, in deblurring and superresolution, the ground-truth image must be estimated from blurred or low-resolution images, whereas in CT and MR imaging, we need to reconstruct a high-resolution image from a few linear measurements. Such inverse problems are invariably ill-posed: they admit non-unique solutions, and direct inversion is unstable. Some form of image model (or prior) on the ground truth is required to regularize the inversion. A classical solution involves minimizing f+g, where the loss term f is derived from the forward model and the regularizer g is used to constrain the search space. The challenge is to come up with a formula for g that yields high-fidelity reconstructions; this has been at the center of research activity in image reconstruction for the past two decades.
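
To make the f+g formulation concrete, here is a minimal sketch in Python in which f(x) = 0.5*||Ax - b||^2 is a quadratic loss and g(x) = lam*||x||_1 is a sparsity prior, minimized by proximal-gradient (ISTA) iterations. The toy forward model, measurements, and parameters are illustrative and not taken from the thesis.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t*||.||_1 (soft thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, b, lam, n_iters=200):
        # Minimize f(x) + g(x), with f(x) = 0.5*||Ax - b||^2 (data fidelity)
        # and g(x) = lam*||x||_1 (regularizer), by proximal gradient descent.
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of grad f
        x = np.zeros(A.shape[1])
        for _ in range(n_iters):
            grad = A.T @ (A @ x - b)             # gradient of the loss f
            x = soft_threshold(x - step * grad, step * lam)
        return x

    # Toy inverse problem: recover a sparse signal from noisy linear measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100)
    x_true[rng.choice(100, size=5, replace=False)] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat = ista(A, b, lam=0.1)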

“Regularization using denoising” is a recent breakthrough in which a powerful denoiser is used for regularization, instead of having to specify a hand-crafted g (the loss f is retained). This was empirically shown to yield significantly better results than standard f+g minimization. In fact, the results are generally comparable to, and often better than, state-of-the-art deep learning methods. In this thesis, we study two such popular models for image regularization: Plug-and-Play (PnP) and Regularization by Denoising (RED). In particular, we focus on the convergence of these iterative algorithms, which is not well understood even for simple denoisers. This is important since the lack of a convergence guarantee can result in spurious reconstructions in imaging applications. The contributions of this thesis in this regard are as follows:
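
As a rough illustration of the idea, the sketch below runs a Plug-and-Play proximal-gradient iteration in which the proximal map of g is replaced by a denoiser D. The forward model, step size, and the moving-average smoother standing in for D are placeholders, not the denoisers studied in the thesis.

    import numpy as np

    def pnp_pgd(grad_f, denoise, x0, step, n_iters=100):
        # Plug-and-Play proximal gradient: the proximal step of ISTA,
        # x <- prox_g(x - step*grad_f(x)), is replaced by a denoiser D,
        # x <- D(x - step*grad_f(x)).
        x = x0
        for _ in range(n_iters):
            x = denoise(x - step * grad_f(x))
        return x

    # Placeholder forward model: 2x downsampling of a 1-D signal.
    A = np.eye(64)[::2]
    b = np.ones(32)                            # toy low-resolution measurements
    grad_f = lambda x: A.T @ (A @ x - b)       # gradient of 0.5*||Ax - b||^2

    def denoise(x):
        # Stand-in linear smoother (3-tap moving average); in practice D would
        # be a kernel denoiser (e.g. nonlocal means) or a trained CNN like DnCNN.
        return np.convolve(x, np.ones(3) / 3.0, mode="same")

    x_hat = pnp_pgd(grad_f, denoise, x0=np.zeros(64), step=0.9)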

(1) We show that for a class of non-symmetric linear denoisers, which includes kernel denoisers such as nonlocal means, one can associate a convex regularizer g with the denoiser. More precisely, we show that any such linear denoiser can be expressed as the proximal operator of a convex function, provided we work with a non-standard inner product (instead of the Euclidean inner product). A direct implication of this observation is that (a simple variant of) the PnP algorithm based on such a linear denoiser amounts to solving an optimization problem of the form f+g, though it was not originally conceived this way. Consequently, if f is convex, both objective and iterate convergence are guaranteed for the PnP algorithm. Beyond the convergence guarantee, we show that this observation has algorithmic value as well. For linear inverse problems such as superresolution, deblurring, and inpainting (where f is quadratic), minimizing f+g reduces to solving a linear system. In particular, we show how Krylov solvers can be used to solve this system efficiently in just a few iterations. Surprisingly, the reconstructions are found to be comparable with state-of-the-art deep learning methods. To the best of our knowledge, the possibility of achieving near state-of-the-art image reconstructions using a linear solver has not been demonstrated before.
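
For intuition on the reduction to a linear system, the following sketch writes down the PnP fixed-point equation for a quadratic loss and a linear denoiser W, and solves it matrix-free with a Krylov method (GMRES from SciPy). The problem sizes, the tridiagonal smoother W, and the step size gamma are illustrative assumptions.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    # For quadratic f(x) = 0.5*||Ax - b||^2 and a linear denoiser W, the PnP
    # fixed point x = W(x - gamma*A^T(Ax - b)) is the linear system
    #     (I - W(I - gamma*A^T A)) x = gamma*W A^T b,
    # which a Krylov solver can handle without forming the matrix.
    n = 128
    rng = np.random.default_rng(1)
    A = rng.standard_normal((64, n)) / np.sqrt(n)   # toy measurement matrix
    b = rng.standard_normal(64)                     # toy measurements
    gamma = 0.3                                     # illustrative step size

    # Tridiagonal smoother standing in for a kernel denoiser.
    W = (np.diag(np.full(n, 0.5))
         + np.diag(np.full(n - 1, 0.25), 1)
         + np.diag(np.full(n - 1, 0.25), -1))

    def matvec(x):
        # Apply I - W(I - gamma*A^T A) without building the matrix explicitly.
        return x - W @ (x - gamma * (A.T @ (A @ x)))

    op = LinearOperator((n, n), matvec=matvec)
    rhs = gamma * (W @ (A.T @ b))
    x_hat, info = gmres(op, rhs)                    # info == 0 on convergence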

(2) In general, state-of-the-art PnP and RED algorithms work with trained CNN denoisers such as DnCNN. Unlike linear denoisers, it is difficult to place PnP and RED with CNN denoisers within an optimization framework. Nonetheless, we can still ask about the convergence of the iterates, i.e., do these algorithms eventually stabilize? Again, for a convex loss f, we show that this question can be resolved using the theory of monotone operators: nonexpansivity of the denoiser is sufficient for iterate convergence of PnP and RED. Using numerical examples, we show that existing CNN denoisers are not nonexpansive and can cause PnP and RED algorithms to diverge. The question then is: can we train nonexpansive denoisers? Unfortunately, this is computationally challenging; simply checking nonexpansivity of a CNN is known to be intractable. As a result, existing algorithms for training nonexpansive CNNs either cannot guarantee nonexpansivity or are computationally intensive. We show that this problem can be solved by moving from CNN denoisers to unfolded deep denoisers. In particular, we construct unfolded networks that are efficiently trainable, come with convergence guarantees for PnP and RED, and whose regularization capacity matches that of CNN denoisers.
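
Since certifying nonexpansivity of a trained network is intractable, a practical empirical check is to probe for expansive directions. The sketch below lower-bounds the local Lipschitz constant of a denoiser using random perturbations; the stand-in denoiser is hypothetical, and a trained CNN wrapped as a callable would be probed the same way.

    import numpy as np

    def lipschitz_lower_bound(denoise, x, n_probes=200, eps=1e-3, seed=0):
        # Probe the denoiser with small random perturbations around x and record
        # the largest observed ratio ||D(x + d) - D(x)|| / ||d||. A ratio above 1
        # proves the denoiser is NOT nonexpansive; staying below 1 is only
        # empirical evidence, since certifying nonexpansivity is intractable.
        rng = np.random.default_rng(seed)
        dx0 = denoise(x)
        worst = 0.0
        for _ in range(n_probes):
            d = rng.standard_normal(x.shape)
            d *= eps / np.linalg.norm(d)
            worst = max(worst, np.linalg.norm(denoise(x + d) - dx0) / eps)
        return worst

    # Hypothetical stand-in denoiser; a trained CNN wrapped as a callable
    # would be probed the same way.
    denoise = lambda x: np.tanh(1.5 * x)      # Lipschitz constant 1.5 near 0
    x = np.zeros(256)
    print(lipschitz_lower_bound(denoise, x))  # prints a value close to 1.5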

We will discuss these results in greater detail during the colloquium and present numerical experiments that validate our theoretical findings.
