### Lecturer

Jared Tanner

School of Mathematics

University of Edinburgh

Jared.Tanner@ed.ac.uk

### Schedule and place

**ATTENTION: new auditorium and new schedule: the lecture of November 29 is cancelled!**

This course will take place on December 1, 3, 6, 8, and 10, 2010 (9:15-12:15) at K.U.Leuven:

Dec. 1, 6, 8, and 10: Auditorium Tweede Hoofdwet, Thermotechnisch Instituut, Kasteelpark Arenberg 41, 3001 Heverlee (number 41 on the map)

Dec. 3: Auditorium Arenberg Kasteel, Kasteelpark Arenberg 1, 3001 Heverlee (number 1 on the map)

See the auditorium map.

## Description

A revolution is underway in how information is collected and analysed. Traditionally, vast amounts of data are compiled, the essential features are extracted, and the remaining data is discarded. For instance, the resolution of consumer-grade digital cameras is rated in terms of *millions* of pixels (points) measured, whereas the resulting images are typically smaller by a factor of hundreds. To remedy this highly inefficient model, adaptive techniques have been developed that learn from their earlier measurements and, in doing so, try to determine the essential features with fewer measurements. Unfortunately, adaptive measurements must be taken sequentially, which limits the usefulness of such techniques in our modern real-time distributed computing environment.

In 2004 the Compressed Sensing paradigm was introduced by Donoho, and by Candès and Tao. From completely non-adaptive measurements, this technique recovers the dominant information in data at the minimum possible sampling rate achievable by learning algorithms. This is made possible by replacing traditional point samples with randomised sampling and a novel reconstruction algorithm. The underlying principles of this technique have since been extended to other settings where some form of underlying simplicity can be exploited. For instance, matrix completion is a new technique in which the unknown entries of a matrix can be predicted from a small subset of its entries, provided the matrix is well approximated by a low-rank matrix.

This course will: review methods from numerical harmonic analysis and approximation theory that give sparse/simple representations, introduce compressed sensing and methods by which the fundamental results can be proven, and explore extensions of compressed sensing and sparse approximation to matrix completion.

**Session 1:** A tour of numerical harmonic analysis. Review classical approximation properties of Fourier series and orthogonal polynomials. Introduce multiscale techniques, wavelets, and higher-dimensional extensions such as curvelets and shearlets.
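As a quick illustration of the classical approximation properties in this session, the following sketch (not from the course materials; the test function is our own choice) truncates the Fourier series of a smooth periodic function and shows the error decaying rapidly as modes are added:

```python
import numpy as np

def fourier_trunc_error(f, n_terms, n_grid=512):
    """L2 error of approximating f on [0, 2*pi) by its Fourier modes
    with |k| <= n_terms, coefficients computed via the FFT."""
    x = np.linspace(0.0, 2 * np.pi, n_grid, endpoint=False)
    c = np.fft.fft(f(x)) / n_grid                    # Fourier coefficients
    k = np.fft.fftfreq(n_grid, d=1.0 / n_grid)       # integer frequencies
    c_trunc = np.where(np.abs(k) <= n_terms, c, 0.0) # drop high modes
    f_approx = np.real(np.fft.ifft(c_trunc * n_grid))
    return np.sqrt(np.mean((f(x) - f_approx) ** 2))

f = lambda x: np.exp(np.sin(x))   # smooth, 2*pi-periodic test function
errs = [fourier_trunc_error(f, n) for n in (2, 4, 8)]
# for analytic f the coefficients decay super-exponentially,
# so the error drops sharply with each doubling of n_terms
```

For smooth functions this spectral decay is what makes Fourier representations "sparse" in the sense exploited later in the course.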

**Session 2:** Basis Pursuit as a method to construct data adaptive representations. Introduction to generic methods of analysis from
sparse approximation: coherence, cumulative coherence, and the restricted isometry property. Prove basic results for linear programming and
one-step thresholding.
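A minimal NumPy sketch of two of the quantities above, assuming a Gaussian dictionary with unit-norm columns (sizes are illustrative): the mutual coherence, and the one-step thresholding decoder that keeps the k columns most correlated with the measurements and refits by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 50, 200, 3

# Gaussian dictionary with unit-norm columns
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)

# mutual coherence: largest inner product between distinct columns
G = A.T @ A
coherence = np.max(np.abs(G - np.eye(n)))

# a k-sparse signal and its measurements
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = 1.0
y = A @ x

# one-step thresholding: select the k columns most correlated with y,
# then refit by least squares on the selected support
selected = np.argsort(np.abs(A.T @ y))[-k:]
coef, *_ = np.linalg.lstsq(A[:, selected], y, rcond=None)
x_hat = np.zeros(n)
x_hat[selected] = coef
```

The coherence-based results proved in this session give sufficient conditions under which `selected` coincides with the true support.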

**Session 3:** Introduction to compressed sensing and encoder/decoder pairs. Analyze the greedy decoders Orthogonal Matching Pursuit and Iterated Hard Thresholding, as well as two-step greedy methods.
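As a sketch of the first of these decoders (our own illustrative implementation, not course code), Orthogonal Matching Pursuit greedily grows the support one column at a time, re-solving a least-squares problem at each step:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: at each step select the column most
    correlated with the current residual, then re-solve the least-squares
    problem restricted to the selected support."""
    n = A.shape[1]
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat, support

rng = np.random.default_rng(1)
m, n, k = 60, 200, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x
x_hat, support = omp(A, y, k)
```

By construction the residual after each least-squares step is orthogonal to every column already selected, which is the property the session's analysis exploits.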

**Session 4:** The phase transition framework for decoders. Establish bounds on the restricted isometry constants for matrix ensembles: Gaussian, Fourier, and sparse discrete matrices.
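The restricted isometry constant is a worst-case quantity over all k-sparse vectors and is combinatorially hard to compute; the sketch below (an illustration under our own parameter choices, not a computation of the constant itself) only shows the underlying concentration phenomenon for the Gaussian ensemble, namely that a properly scaled Gaussian matrix nearly preserves the norm of random sparse vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 200, 1000, 10

# Gaussian ensemble scaled so that E ||A z||^2 = ||z||^2
A = rng.standard_normal((m, n)) / np.sqrt(m)

# norm distortion on randomly drawn k-sparse vectors
ratios = []
for _ in range(100):
    z = np.zeros(n)
    z[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    ratios.append(np.linalg.norm(A @ z) / np.linalg.norm(z))
# the ratios concentrate near 1; the RIP bounds proved in this session
# control the worst case over all k-sparse vectors simultaneously
```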

**Session 5:** Geometric models of sparsity. Analysis of the linear programming decoder, uniqueness under nonnegativity, bound constraints, and nonconvex models.
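The linear programming decoder (basis pursuit, minimizing the l1 norm subject to the measurement constraints) can be posed as a standard LP via the split x = u - v with u, v >= 0. A minimal sketch, assuming SciPy's `linprog` is available and with illustrative problem sizes of our own choosing:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
m, n, k = 20, 40, 3

A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x

# basis pursuit: min ||x||_1 subject to Ax = y, as an LP via the split
# x = u - v with u, v >= 0, so that ||x||_1 = 1^T u + 1^T v
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
x_lp = res.x[:n] - res.x[n:]
```

Since the true sparse vector is feasible, the LP optimum never has larger l1 norm; the geometric analysis in this session characterizes when the two coincide.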

**Session 6:** Introduction to matrix completion. Low rank approximation, sampling and subspace coherence, and applications.
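Matrix completion rests on low-rank structure. The sketch below is not a completion algorithm (recovering missing entries requires, e.g., nuclear-norm minimization, covered in this session); it only illustrates the underlying low-rank approximation fact, namely that truncating the SVD of an exactly rank-2 matrix at rank 2 reproduces it:

```python
import numpy as np

rng = np.random.default_rng(3)

# an exactly rank-2 matrix, built from thin factors
L, R = rng.standard_normal((30, 2)), rng.standard_normal((2, 40))
M = L @ R

U, s, Vt = np.linalg.svd(M, full_matrices=False)
M2 = (U[:, :2] * s[:2]) @ Vt[:2]  # truncated SVD: best rank-2 approximation
# only two singular values are (numerically) nonzero, so the rank-2
# truncation reproduces M to machine precision
```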