
PhD Thesis Colloquium of Nareddy Kartheek Kumar Reddy

June 24 @ 4:30 PM - 5:30 PM IST

Name of the Candidate: Mr. Nareddy Kartheek Kumar Reddy
Research Supervisor: Prof. Chandra Sekhar Seelamantula
Date and time: June 24, 2024; 4:30 PM
Venue: Multimedia Classroom (MMCR), Hall C 241, 1st Floor, EE Department, IISc.
Coffee/tea will be served during the talk.
Title: Tight Frames, Non-convex Regularizers, and Quantized Neural Networks for Solving Linear Inverse Problems
Abstract:
The recovery of a signal/image from compressed measurements involves formulating an optimization problem and solving it using an efficient algorithm. The optimization objective comprises a data-fidelity term, which ensures conformity of the reconstructed signal to the measurements, and a regularization term, which enforces desired priors on the signal. More recently, optimization-based solvers have been replaced by deep neural networks.
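As a concrete illustration of this formulation, the following sketch minimizes a least-squares data-fidelity term plus an l1 regularizer using the iterative shrinkage-thresholding algorithm (ISTA). The matrix A, measurements y, and parameter values are illustrative placeholders, not taken from the thesis.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1 (soft thresholding)
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, y, lam=0.1, n_iters=200):
        # Minimize 0.5 * ||A x - y||_2^2 + lam * ||x||_1 via ISTA
        L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iters):
            grad = A.T @ (A @ x - y)       # gradient of the data-fidelity term
            x = soft_threshold(x - grad / L, lam / L)
        return x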
This thesis considers three aspects of inverse problems in computational imaging: (i) Choice of data-fidelity term for compressed-sensing image recovery; (ii) Non-convex regularizers in the context of linear inverse problems; and (iii) Explainable deep-unfolded networks and the effect of quantization of model parameters.
Part 1: Tight-Frame-Based Data Fidelity for Compressed Sensing
The choice of the sensing matrix is crucial in compressed sensing. Random Gaussian sensing matrices satisfy the restricted isometry property, which is essential for solving the sparse recovery problem using convex optimization techniques. However, tight-frame sensing matrices result in minimum mean-squared-error recovery given oracle knowledge of the support of the sparse vector. If the sensing matrix is not tight, could one achieve the recovery performance assured by a tight frame by suitably designing the recovery strategy? This is the key question addressed in this part of the thesis. We consider the analysis-sparse l1-minimization problem with a generalized l2-norm-based data-fidelity term and show that it effectively corresponds to using a tight-frame sensing matrix. The new formulation offers improved performance bounds when the number of non-zeros is large. One could develop a tight-frame variant of a known sparse recovery algorithm using the proposed formalism. We solve the analysis-sparse recovery problem in an unconstrained setting using proximal methods. Within the tight-frame sensing framework, we rescale the gradients of the data-fidelity loss in the iterative updates to further improve the accuracy of analysis-sparse recovery. Experimental results show that the proposed algorithms offer superior analysis-sparse recovery performance. Proceeding further, we also develop deep-unfolded variants, with a convolutional neural network as the sparsifying operator. On the application front, we consider compressed-sensing image recovery. Experimental validations on the Set11, BSD68, Urban100, and DIV2K datasets show that the proposed techniques outperform the state-of-the-art techniques, where the performance is measured in terms of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM).
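The "effectively tight" idea can be illustrated numerically: weighting the residual by (A A^T)^(-1) turns the sensing matrix A into the tight frame (A A^T)^(-1/2) A. The SVD-based construction below is one standard way to form this factor and is a sketch under assumed dimensions, not the thesis's exact procedure.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 200))       # non-tight Gaussian sensing matrix

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    S = U @ Vt                               # equals (A A^T)^(-1/2) A

    # S is a tight frame: S S^T = I, i.e., all singular values equal one
    print(np.allclose(S @ S.T, np.eye(50)))  # True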
Part 2: Proximal Averaging Methods for Image Restoration and Recovery
Sparse recovery methods are iterative, and most techniques rely on proximal-gradient methods. While the commonly used sparsity-promoting penalty is the l1-norm, which is convex, alternatives such as the minimax concave penalty (MCP) and the smoothly clipped absolute deviation (SCAD) penalty have also been employed to obtain superior results. Combining various penalties to achieve robust sparse recovery is possible, but the challenge lies in optimal parameter selection. Given the connection between deep networks and the unrolling of iterative algorithms, it is possible to unify the unfolded networks arising from different formulations. We propose an ensemble of proximal networks for sparse recovery, where the ensemble weights are learnt in a data-driven fashion. The proposed network performs on par with or better than the individual networks in the ensemble for synthetic data under various noise levels and sparsity conditions. We demonstrate an application to image denoising based on the convolutional sparse coding formulation.
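A minimal sketch of the proximal-averaging idea: the proximal operators of the l1-norm (soft thresholding) and the MCP (firm thresholding) are combined with a convex weight. In the proposed network the ensemble weights are learned from data; here, w and the penalty parameters are fixed illustrative values.

    import numpy as np

    def prox_l1(v, lam):
        # Soft thresholding: proximal operator of lam * ||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

    def prox_mcp(v, lam, gamma=3.0):
        # Firm thresholding: proximal operator of the MCP (requires gamma > 1)
        return np.where(np.abs(v) <= gamma * lam,
                        gamma * prox_l1(v, lam) / (gamma - 1.0),
                        v)

    def prox_average(v, lam, w=0.5):
        # Convex combination of the individual proximal operators;
        # in the proposed network, the weight w is learned
        return w * prox_l1(v, lam) + (1.0 - w) * prox_mcp(v, lam)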
Part 3: Deep Unfolded Networks, Quantization, and Explainability
Deep-unfolded networks (DUNs) have set new performance benchmarks in compressed sensing and image restoration. DUNs are built from conventional iterative algorithms, where an iteration is transformed into a layer/block of a network with learnable parameters. This work focuses on enhancing the explainability of DUNs by investigating potential reasons behind their superior performance over traditional iterative methods. Our findings reveal that the learned matrices in DUNs are unstable because their singular values exceed unity. Nevertheless, the overall DUN achieves higher recovery accuracy than the optimization techniques. This shows that although the linear/affine components of the DUN are unstable, the overall network is stable, which leads us to conclude that it is the nonlinearities, more precisely, the activation functions, that are responsible for restoring stability. This study illustrates an intriguing property of deep-unfolded networks that is not observed in standard optimization schemes.
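The instability check described above amounts to inspecting the largest singular value (spectral norm) of each learned linear layer. A hedged PyTorch sketch, where model is a placeholder for a trained deep-unfolded network:

    import torch

    def layer_spectral_norms(model):
        # Largest singular value of every 2-D (linear/affine) weight matrix
        norms = {}
        for name, param in model.named_parameters():
            if param.ndim == 2:
                norms[name] = torch.linalg.matrix_norm(param.detach(), ord=2).item()
        return norms

    # A value above 1 means that layer, in isolation, expands some inputs;
    # per the findings above, the activation functions restore overall stability.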
We also consider quantization of the network weights for efficient model deployment on resource-constrained devices. Quantization makes neural networks efficient in terms of both memory and computation during inference and also renders them compatible with low-precision hardware. Our learning algorithm is based on a variant of the ADAM optimizer in which the quantizer is part of the forward pass. The gradients of the loss function are evaluated with respect to the quantized weights, while a high-precision copy of the weights is maintained for bookkeeping. We demonstrate applications to compressed image recovery and magnetic resonance image reconstruction. The proposed approach offers better reconstruction accuracy and quality than state-of-the-art unfolding techniques, and the performance degradation is minimal even when the weights are subjected to extreme quantization.
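A minimal sketch of such quantization-in-the-forward-pass training, assuming uniform weight quantization and a straight-through gradient estimator; the bit width, toy model, and data below are illustrative assumptions, not the thesis's exact scheme.

    import torch

    def quantize_ste(w, bits=4):
        # Uniform quantizer with a straight-through gradient (identity backward)
        qmax = 2 ** (bits - 1) - 1
        scale = w.abs().max() / qmax + 1e-12
        wq = torch.round(w / scale).clamp(-qmax, qmax) * scale
        return w + (wq - w).detach()         # forward: wq; backward: identity in w

    model = torch.nn.Linear(200, 50)         # stand-in for one unfolded layer
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x, y = torch.randn(32, 200), torch.randn(32, 50)

    for _ in range(100):
        w_q = quantize_ste(model.weight)     # quantizer sits in the forward pass
        loss = torch.nn.functional.mse_loss(x @ w_q.t() + model.bias, y)
        opt.zero_grad()
        loss.backward()                      # gradients w.r.t. quantized weights
        opt.step()                           # Adam updates the high-precision copy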
Impact of the research: The novel techniques proposed in this thesis led to improved accuracy in linear inverse problems: sparse signal recovery, compressed image recovery, image deconvolution, and image denoising. The tight-frame-based algorithms require fewer iterations to converge, thus reducing the reconstruction time. The quantized neural networks, on the other hand, improved the inference time and reduced the model footprint for efficient deployment on the edge. Analysis of deep-unfolded networks has shown that the learnt weights follow a Gaussian distribution, suggesting more efficient initialization schemes than weights derived from ISTA. We also identified potential local instabilities in a deep learning setting, which are avoided in a conventional optimization setting; the role of the nonlinearity is to restore stability. The analysis showed that while deep-unfolded networks have potential instabilities, they can still be useful for solving inverse problems.
Biography of the Candidate:
Nareddy Kartheek Kumar Reddy is a PhD student in the Spectrum Lab, Department of Electrical Engineering, at the Indian Institute of Science (IISc). He received a Bachelor of Technology (Honors) degree from the Indian Institute of Technology Kharagpur in 2016. Subsequently, he worked as a Senior Engineer at Honeywell Technology Solutions from 2016 to 2018, where he focused on developing device drivers for SD-card and NAND-flash devices that went into production in Honeywell's flagship weather radar, the RDR7000.
Kartheek joined IISc as a Master's student in Signal Processing and subsequently upgraded to the PhD programme after receiving the prestigious Prime Minister's Research Fellowship in 2019. He is a two-time recipient of the Qualcomm Innovation Fellowship, in 2020 and again in 2023. Kartheek enjoys traveling, reading books and manga, watching anime, and playing video games in his leisure time.
