
# PhD Thesis Defense of Dhruv Jawali

## January 25, 2023 @ 4:30 PM - 5:30 PM IST

Advisors: Prof. Chandra Sekhar Seelamantula (EE) & Prof. Supratim Ray (CNS)

Examiner: Prof. Vikram M. Gadre (EE), IIT Bombay

Title of the thesis:

**Learning Filters, Filterbanks, Wavelets, and Multiscale Representations**

Date & Time: January 25, 2023; 11:00 AM onward (coffee will be served during the defense)

Venue: Multimedia Classroom (MMCR), Department of Electrical Engineering, IISc

**Abstract:**

The problem of filter design is ubiquitous. Frequency-selective filters are used in speech/audio processing, image analysis, and convolutional neural networks for tasks such as denoising, deblurring/deconvolution, enhancement, and compression. While traditional filter design methods use a structured optimization formulation, the advent of deep learning techniques and the associated tools and toolkits enables the learning of filters through data-driven optimization. In this thesis, we consider the filter design problem in a learning setting, in both data-dependent and data-independent flavors. Data-dependent filters have properties governed by a downstream task, for instance, filters in a convolutional dictionary used for image denoising. On the contrary, data-independent filters have constraints imposed on their frequency responses, such as being lowpass, having diamond-shaped support, satisfying the perfect reconstruction property, or the ability to generate wavelet functions.

The contributions of this thesis are four-fold: (i) the formulation of filter, filterbank, and wavelet design as regression problems, allowing them to be designed in a learning framework; (ii) the design of contourlet-based scattering networks for image classification; (iii) the design of a deep unfolded network using composite regularization techniques for solving inverse problems in image processing; and (iv) a multiscale dictionary learning algorithm that learns one or more multiscale generator kernels to parsimoniously explain certain neural recordings. We begin by developing learning approaches for designing filters having data-independent specifications, for instance, filters with a specified frequency response, including an ideal filter. The problem of designing such filters is formulated as a regression problem, using a training set comprising cosine signals with frequencies sampled uniformly at random. The filters are optimized using the mean-squared error loss, and generalization bounds are provided. We demonstrate the applicability of our approach for filters such as lowpass, bandpass, and highpass in 1-D, and diamond, fan, and checkerboard support filters in 2-D. We then show how the methodology extends easily to the design of 1-D and 2-D cosine modulated filterbanks.
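The regression view of data-independent filter design can be illustrated with a minimal numpy sketch (not the thesis implementation): random cosines are the training inputs, the ideal lowpass output is the regression target, and the FIR taps that minimize the MSE are obtained by least squares. The filter length, cutoff, and sample counts below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, wc = 41, 128, 0.5 * np.pi      # filter length (odd), signal length, cutoff

rows, targets = [], []
for _ in range(80):                  # training cosines, frequencies ~ U(0, pi)
    w = rng.uniform(0.0, np.pi)
    n = np.arange(N)
    x = np.cos(w * n)
    t = x if w < wc else np.zeros(N) # ideal (zero-phase) lowpass output
    for m in range(L // 2, N - L // 2):      # interior samples only
        rows.append(x[m + L // 2 - np.arange(L)])
        targets.append(t[m])

A = np.array(rows)
b = np.array(targets)
h, *_ = np.linalg.lstsq(A, b, rcond=None)   # MSE-optimal filter taps

H = np.abs(np.fft.rfft(h, 1024))            # learnt magnitude response
print(H[102], H[410])                       # near 1 in passband, near 0 in stopband
```

The learnt taps approximate a truncated sinc: the passband gain (here sampled at 0.2π) is close to unity and the stopband gain (at 0.8π) is close to zero, mirroring the ideal-filter specification used as the regression target.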

Second, we consider the problems of 1-D filterbank and wavelet design through learning. Wavelets have proven to be highly successful in several signal and image processing applications. Wavelet design has been an active field of research for over two decades, with the problem often being approached analytically. We draw a parallel between convolutional autoencoders and wavelet multiresolution approximation and show how the learning angle provides a coherent computational framework for solving the design problem. We design data-independent wavelets by interpreting the corresponding perfect reconstruction filterbanks as autoencoders (what we refer to as “filterbank autoencoders”), which precludes the need for customized datasets. In fact, we show that it is possible to design them efficiently using high-dimensional Gaussian vectors as training data. Generalization bounds show that a near-zero training loss implies that the learnt filters satisfy the perfect reconstruction property with a very high probability. We show that desirable properties of a wavelet such as orthogonality, compact support, smoothness, symmetry, and vanishing moments can all be incorporated into the proposed framework by means of architectural constraints or by introducing suitable regularization functionals to the MSE cost. Notably, our approach not only recovers the well-known Daubechies family of orthogonal wavelets and the Cohen-Daubechies-Feauveau (CDF) family of symmetric biorthogonal wavelets, which are used in JPEG-2000 compression, but also learns new wavelets outside these families.
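The filterbank-autoencoder idea can be sketched in its simplest linear form (an illustrative toy, not the thesis architecture): a 2x2 "analysis" operator and a 2x2 "synthesis" operator are trained by gradient descent on Gaussian vectors to minimize the reconstruction MSE, and a near-zero loss forces the product of the two operators toward the identity, i.e., perfect reconstruction.

```python
import numpy as np

rng = np.random.default_rng(1)
# 2x2 "polyphase" analysis and synthesis operators, randomly initialized
A = 0.5 * rng.standard_normal((2, 2))
S = 0.5 * rng.standard_normal((2, 2))

lr, B = 0.05, 128
for _ in range(8000):
    x = rng.standard_normal((2, B))       # i.i.d. Gaussian training vectors
    r = x - S @ (A @ x)                   # reconstruction error of the autoencoder
    S -= lr * (-2.0 / B) * r @ (A @ x).T  # gradient step on the MSE loss
    A -= lr * (-2.0 / B) * S.T @ r @ x.T  # (S is updated first; still a descent step)

# near-zero training loss forces S @ A ~ I, i.e., perfect reconstruction
print(np.linalg.norm(S @ A - np.eye(2)))
```

This mirrors the claim in the abstract: no customized dataset is needed, since Gaussian training data already makes the reconstruction loss a proxy for the perfect reconstruction property.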

Third, we extend the ideas used for 1-D filterbank and wavelet learning to 2-D filterbank and wavelet design. A variety of efficient representations of natural images, such as wavelets and contourlets, can be formulated as corresponding filterbank design problems. The design constraints on the continuous-domain wavelets have corresponding filter-domain manifestations. While most learning problems require specialized datasets, we employ 2-D random Gaussian matrices as training data and optimize the filter coefficients under the MSE loss. Design specifications such as orthogonality of the filterbank, the perfect reconstruction property, symmetry, and vanishing moments are enforced through an appropriate parameterization of the convolutional units. We demonstrate several examples of learning biorthogonal and orthogonal filterbanks and wavelets having a specified number of vanishing moments, both point vanishing moments and directional vanishing moments, together with symmetry constraints.
The data-independent filter design technique is employed to learn a contourlet transform used within a hybrid scattering network. Hybrid scattering networks are convolutional neural networks (CNNs) whose first few layers implement a fixed windowed scattering transform, while the rest of the network is learned. Scattering networks outperform state-of-the-art deep learning models for limited-data classification tasks, although the gains diminish for large datasets. The 2-D Morlet filterbank used in Mallat's scattering network is replaced by a contourlet filterbank, which provides sparser representations and better frequency-domain directional separation. The contourlet transform comprises a multiresolution pyramidal filterbank cascaded with directional filters. We construct directional filters using diamond-shaped quincunx filterbanks and consider two pyramidal filter variants: square-shaped filters and filters with radially isotropic frequency-domain support. The performance of all variants is evaluated for natural image classification tasks on the CIFAR-10 and ImageNet datasets. We show that the radial contourlet variant achieves competitive performance compared with the Morlet scattering transform on large-dataset classification tasks while performing better in the limited-dataset scenario. We then switch over to the problem of learning data-dependent filters for sparse recovery by employing a combination of sparsity-promoting regularizers. Sparse recovery via such composite regularization approaches is an interesting framework proposed recently in the literature. One could design non-convex regularizers through a convex combination of sparsity-promoting penalties with known proximal operators.
We develop a new algorithm, namely, the convolutional proximal-averaged thresholding algorithm (C-PATA), for composite-regularized convolutional sparse coding (CR-CSC) based on proximal averaging. We develop an autoencoder structure based on the deep unfolding of C-PATA iterations into neural network layers, which results in the composite-regularized neural network (CoRNet) architecture. The convolutional learned iterative soft-thresholding algorithm becomes a special case of CoRNet. We demonstrate the efficacy of CoRNet considering applications to image denoising and inpainting and compare the performance with state-of-the-art techniques such as BM3D, convolutional LISTA, and fast and flexible convolutional sparse coding (FFCSC). Finally, we conclude by developing a data-dependent method to learn filters generating a multiscale convolutional dictionary. First, the multiscale convolutional dictionary learning (MCDL) algorithm is proposed to extract a representative waveform shape from a given dataset. The proposed algorithm is based on the widely used convolutional dictionary learning formulation with a crucial difference: we assume that the learned atoms are scaled versions of a single generator kernel. We evaluate kernel recovery for synthetic data under noiseless and noisy conditions. A smoothness regularizer on the learned atom is used to aid better kernel recovery under noisy conditions. Kernel recovery is shown to be robust to model choices of scales and the assumed support size of the kernel without any restrictive assumptions. The proposed approach is applied to visualizing the typical patterns present within human electrocorticogram (ECoG) measurements. The validation is carried out using publicly available ECoG data recorded from a single Parkinson's disease patient. This thesis thus presents a cogent framework for learning filters, filterbanks, wavelets, and convolutional and multiscale dictionaries.
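The proximal-averaging idea behind C-PATA can be sketched in a non-convolutional toy setting (an illustrative approximation, not the thesis algorithm): the proximal operator of a convex combination of penalties is replaced by the same convex combination of the individual proximal operators, here soft thresholding (for the l1 penalty) and hard thresholding (for the l0 penalty), inside ISTA-style iterations. The dictionary size, sparsity level, and weights below are arbitrary choices.

```python
import numpy as np

def prox_l1(v, t):
    # soft thresholding: proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_l0(v, t):
    # hard thresholding: proximal operator of t * ||.||_0
    return np.where(np.abs(v) > np.sqrt(2.0 * t), v, 0.0)

def prox_avg(v, t, w=0.5):
    # proximal average: convex combination of the individual proximal
    # operators, used in place of the prox of the combined penalty
    return w * prox_l1(v, t) + (1.0 - w) * prox_l0(v, t)

rng = np.random.default_rng(2)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms
z_true = np.zeros(50)
z_true[[3, 17, 41]] = [1.5, -2.0, 1.0]    # 3-sparse ground-truth code
y = D @ z_true                            # noiseless measurements

lam = 0.05
eta = 1.0 / np.linalg.norm(D, 2) ** 2     # step size 1/(Lipschitz constant)
z = np.zeros(50)
for _ in range(500):                      # ISTA-style iterations, averaged prox
    z = prox_avg(z - eta * D.T @ (D @ z - y), eta * lam)

print(np.linalg.norm(D @ z - y))          # small residual: sparse code recovered
```

Unfolding a fixed number of such iterations into layers, with the filters, step sizes, and combination weights made learnable, is the deep-unfolding step that yields the CoRNet-style architecture described above.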

**Biography of the candidate:**

Dhruv Jawali received the Bachelor of Technology (B.Tech.) degree from the Department of Computer Science and Engineering, National Institute of Technology Goa, India, in 2014. He worked as a software developer at the Samsung Research Institute, Bangalore, during 2014-2015. He enrolled in the PhD program at the IISc Mathematics Initiative (IMI) Department, Indian Institute of Science (IISc), in August 2015, and has been working at the Spectrum Lab, Department of Electrical Engineering, ever since. His research interests include wavelet theory, deep neural networks, and sparse signal processing. He is currently employed as an instructor at Scaler Academy, specializing in Data Science and Machine Learning.

**All are invited.**