PhD Thesis Defense of Mr. Praveen Kumar Pokala @ 11am
January 5, 2022 @ 11:00 am - 12:30 pm UTC+0
Title: Robust Nonconvex Penalties for Solving Sparse Linear Inverse Problems and Applications to Computational Imaging
Thesis Supervisor: Prof. Chandra Sekhar Seelamantula
Examiner: Prof. Suyash Awate, IIT Bombay
Venue: MS Teams (online)
Abstract: Sparse linear inverse problems call for minimizing the ℓ0-regularized least-squares cost, which is computationally intractable. Tractable approximate solutions are obtained by employing convex or nonconvex relaxations of the ℓ0 pseudonorm. One such approximation is the ℓ1 norm, a convex relaxation of the ℓ0 pseudonorm, which comes with the advantage of keeping the regularized least-squares cost convex. However, ℓ1 regularization is known to produce biased estimates because it over-relaxes the ℓ0 pseudonorm. Several nonconvex approximations of the ℓ0 pseudonorm have been proposed to overcome the bias introduced by the ℓ1 norm and to promote sparsity more strongly. However, certain aspects of nonconvex sparse regularization remain unexplored, among them the following:
Nonconvex sparse priors have been explored in the synthesis-sparse framework, but not in the analysis-sparse framework due to the unavailability of proximal operators in closed-form in the analysis setting.
Existing nonconvex approaches attach the same regularization weight to all components of the sparse vector and treat it as a fixed hyperparameter. Assigning different weights to the entries and adapting them iteratively is likely to yield superior performance.
Prior learning networks based on deep-unfolded architectures for solving nonconvex penalties have not been explored.
This thesis addresses the above aspects in three parts and considers applications to various computational imaging problems.
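To make the ℓ1 bias concrete: the proximal operators of the ℓ0 pseudonorm and the ℓ1 norm are elementwise hard and soft thresholding, respectively. The following is a minimal illustrative sketch (function names are ours, not from the thesis); note how soft thresholding shrinks even the large entries, which is precisely the bias that nonconvex penalties aim to remove.

```python
def hard_threshold(x, t):
    # prox of the l0 pseudonorm (elementwise): keep entries with magnitude above t
    return [xi if abs(xi) > t else 0.0 for xi in x]

def soft_threshold(x, t):
    # prox of the l1 norm (elementwise): surviving entries are shrunk by t
    return [max(abs(xi) - t, 0.0) * (1 if xi >= 0 else -1) for xi in x]

x = [5.0, -3.0, 0.4, 0.0]
print(hard_threshold(x, 1.0))  # large entries kept intact
print(soft_threshold(x, 1.0))  # large entries shrunk by 1.0: the l1 bias
```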
Part-1: Nonconvex Analysis-sparse Recovery
In this part, we solve the analysis-sparse recovery problem based on three regularization approaches:
Convexity-preserving nonconvex regularization: We propose analysis variants of the generalized Moreau envelope and the generalized minimax concave penalty (GMCP) over the complex domain. Since the cost is a real-valued function of a complex variable, it is nonholomorphic, i.e., it does not satisfy the Cauchy-Riemann (CR) conditions. To circumvent this problem, we rely on Wirtinger calculus to derive the proximal operator for the analysis ℓ1 prior and develop an efficient optimization strategy employing projected proximal algorithms. The projection transform maps the analysis-sparse recovery problem into an equivalent constrained synthesis-sparse formulation.
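For context, the proximal operator of the ℓ1 norm over the complex domain, which Wirtinger-calculus derivations recover, shrinks the magnitude of each entry while preserving its phase. A minimal illustrative sketch (not the thesis code):

```python
def complex_soft_threshold(z, t):
    # prox of the l1 norm for a complex scalar: shrink the magnitude by t,
    # keep the phase; entries with magnitude at most t are set to zero
    m = abs(z)
    return 0j if m <= t else (1.0 - t / m) * z

# z = 3 + 4j has magnitude 5; shrinking by t = 1 scales it by 4/5
print(complex_soft_threshold(3 + 4j, 1.0))
```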
Nonconvex sparse regularization: We consider the problem of nonconvex analysis-sparse recovery, in which the signal is assumed to be sparse with respect to a redundant analysis operator. Standard nonconvex sparsity-promoting priors do not have a closed-form proximal operator under a redundant analysis operator; therefore, proximal approaches cannot be applied directly. This leads us to develop two alternatives: Moreau-envelope regularization and a projected transformation.
Generalized weighted ℓ1 regularization: We develop a generalized weighted ℓ1 regularization strategy, which allows for efficient weight-update strategies for iteratively reweighted ℓ1 minimization under tight frames. Further, we impose sufficient conditions on the weight function that lead to a reweighting strategy that follows the interpretation originally given by Candès et al. but is more efficient. Since the objective function is nonholomorphic, we resort to Wirtinger calculus to derive the update equations. We develop the generalized iteratively reweighted soft-thresholding algorithm (GIRSTA) and its fast variant, the generalized fast iteratively reweighted soft-thresholding algorithm (GFIRSTA). We provide convergence guarantees for GIRSTA and empirical convergence results for GFIRSTA.
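The classical reweighting rule of Candès et al., which the generalized strategy above builds on, can be sketched as follows (illustrative names and a fixed smoothing constant eps; the thesis admits more general weight functions):

```python
def weighted_soft_threshold(x, w, lam):
    # prox of the weighted l1 penalty lam * sum_i w_i * |x_i|
    return [max(abs(xi) - lam * wi, 0.0) * (1 if xi >= 0 else -1)
            for xi, wi in zip(x, w)]

def update_weights(x, eps=0.1):
    # Candes-style reweighting: small entries get large weights (pushed to zero),
    # large entries get small weights (penalized less, which reduces the bias)
    return [1.0 / (abs(xi) + eps) for xi in x]

x = [2.0, 0.05]
w = update_weights(x)                     # large weight on the small entry
y = weighted_soft_threshold(x, w, 0.1)    # large entry barely shrunk, small entry zeroed
print(y)
```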
We demonstrate the efficacy of the proposed regularization strategies in comparison with benchmark techniques on compressive-sensing magnetic resonance imaging (CS-MRI) reconstruction under a redundant analysis operator, specifically, the shift-invariant discrete wavelet transform (SIDWT).
Part-2: Weighted Minimax Concave p-pseudonorm Minimization
In this part, we develop techniques for accurate low-rank plus sparse matrix decomposition (LSD) and low-rank matrix recovery. We propose the weighted minimax-concave penalty (WMCP) as the nonconvex regularizer and show that it admits an equivalent representation that is more amenable to weight adaptation. Similarly, an equivalent representation of the weighted matrix gamma norm (WMGN) enables weight adaptation for the low-rank part. The optimization algorithms are based on the alternating direction method of multipliers. The optimization frameworks relying on the two penalties, WMCP and WMGN, coupled with a novel iterative weight-update strategy, result in accurate LSD and low-rank matrix recovery techniques. Further, we derive the iteratively reweighted MGN (iReMaGaN) algorithm, which has superior low-rank matrix recovery performance. The proposed algorithms are shown to satisfy descent properties and come with convergence guarantees. On the applications front, we consider foreground-background separation and image denoising. Simulations and validations on standard datasets show that the proposed techniques outperform the benchmark techniques.

Next, we extend the idea to obtain a generalized ℓp penalty, namely, the minimax concave p-pseudonorm (MCpN), based on a novel p-Huber function as the sparsity-promoting function, together with its weighted counterpart, the weighted MCpN (WMCpN), as regularizers for solving the sparse linear inverse problem. WMCpN is a generalization of which several penalties, namely, the ℓ1 norm, the minimax concave penalty (MCP), the ℓp penalty, the weighted ℓ1 norm, and the weighted ℓp penalty, are special cases. However, the MCpN and WMCpN regularizers do not have closed-form proximal operators, which makes the optimization problem challenging. To overcome this hurdle, we develop an equivalent representation that is more amenable to optimization and allows for an analytical weight-update strategy.
MCpN is the special case of WMCpN in which all the weights are fixed and equal. The optimization algorithms are based on the alternating direction method of multipliers. Considering the application of interferometric phase estimation, we demonstrate that MCpN and WMCpN result in accurate phase estimates. Simulations and experimental validations on standard datasets show that the proposed techniques outperform the benchmark techniques.
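For reference, the unweighted MCP in the vector setting does admit a closed-form proximal operator, the well-known firm-thresholding rule: zero below lam, a steeper-than-soft linear segment up to gamma*lam, and the identity beyond, so large entries pass through unbiased. A minimal sketch with illustrative scalar parameters (lam > 0, gamma > 1):

```python
def firm_threshold(x, lam, gamma):
    # prox of the (unweighted) minimax concave penalty, elementwise
    out = []
    for xi in x:
        a = abs(xi)
        s = 1 if xi >= 0 else -1
        if a <= lam:
            out.append(0.0)                                  # dead zone
        elif a <= gamma * lam:
            out.append(s * gamma * (a - lam) / (gamma - 1))  # linear ramp
        else:
            out.append(float(xi))                            # identity: no bias
    return out

print(firm_threshold([0.5, 1.5, 5.0], 1.0, 2.0))  # [0.0, 1.0, 5.0]
```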
Part-3: Nonconvex Sparse Regularization and Deep-Unfolding
In the final part, we transition from fixed analytical priors to data-driven priors. To begin with, we develop a deep-unfolded architecture, namely, FirmNet, for sparse recovery. FirmNet has two parameters: one controls the noise variance and the other allows for explicit sparsity control. We show that FirmNet outperforms Learned-ISTA (LISTA) by at least a factor of three in terms of the probability of error in support (PES) and by about 2 to 4 dB in reconstruction SNR. As an application, we consider seismic reflectivity inversion, which deals with estimating subsurface structure from seismic data. We demonstrate the efficacy of FirmNet over the benchmark techniques by testing on synthetic 1-D seismic traces and 2-D wedge models, and report validations on the simulated 2-D Marmousi2 model and real data from the Penobscot 3D survey off the coast of Nova Scotia, Canada. Next, we propose convolutional FirmNet (ConFirmNet), an extension of FirmNet to the problem of convolutional sparse coding. As an application, we build a ConFirmNet-based sparse autoencoder (ConFirmNet-SAE) and demonstrate its suitability for image denoising and inpainting. We also show that training ConFirmNet-SAE with the Huber loss imparts robustness to outliers, and that ConFirmNet-SAE is more robust to mismatch between training and test noise conditions than the convolutional learned iterative soft-thresholding algorithm (CLISTA). Finally, we propose a sparse recovery formulation that employs a nonuniform, nonconvex synthesis-sparse model comprising a combination of convex and nonconvex regularizers, which results in accurate approximations of the ℓ0 pseudonorm. The resulting iterative optimization employs proximal averaging.
When unfolded, the iterations give rise to a nonuniform sparse proximal average network (NuSPAN) that can be optimized in a data-driven fashion. We demonstrate the efficacy of NuSPAN on the seismic reflectivity inversion problem as well.
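Proximal averaging replaces the (generally unavailable) proximal operator of a combination of regularizers with the convex combination of their individual proximal operators. A minimal sketch mixing an ℓ1 prox (soft thresholding) with an MCP prox (firm thresholding); the mixing weight alpha and the scalar helpers are illustrative, not the NuSPAN parameterization:

```python
def soft(x, t):
    # prox of the l1 norm for a scalar
    return max(abs(x) - t, 0.0) * (1 if x >= 0 else -1)

def firm(x, lam, gamma):
    # prox of the MCP for a scalar (gamma > 1)
    a, s = abs(x), (1 if x >= 0 else -1)
    if a <= lam:
        return 0.0
    if a <= gamma * lam:
        return s * gamma * (a - lam) / (gamma - 1)
    return float(x)

def prox_average(x, lam, gamma, alpha):
    # proximal average of a convex (l1) and a nonconvex (MCP) regularizer:
    # apply each prox separately, then take the convex combination
    return [alpha * soft(xi, lam) + (1 - alpha) * firm(xi, lam, gamma) for xi in x]

# a large entry lands between the biased l1 output and the unbiased MCP output
print(prox_average([5.0, 0.5], 1.0, 2.0, 0.5))
```

In a deep-unfolded network such as NuSPAN, quantities like the thresholds and mixing weights become learnable per-layer parameters.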
Biography of the candidate: Praveen Kumar Pokala received the B.Tech. degree in Electronics and Telecommunication Engineering from Jawaharlal Nehru Technological University, Hyderabad, India, in 2006, and the M.Tech. degree in Signal Processing from the Indian Institute of Technology (IIT) Guwahati, India, in 2009. Subsequently, he worked as an Assistant Professor at LPU, Jalandhar, India, and at GITAM University, Hyderabad, India. He is currently pursuing the Ph.D. degree in the Department of Electrical Engineering, Indian Institute of Science, Bangalore. His research interests are machine learning, deep learning, and nonconvex optimization algorithms, with applications to inverse problems in computational imaging. He is presently a Senior Lead Engineer at Qualcomm R&D, Bangalore.
All are invited.