Short Course: SC-1b: Low-Dimensional Models for High-Dimensional Data: From Linear to Nonlinear, Convex to Nonconvex, and Shallow to Deep (Part 2 of 3)
Sam Buchanan, Yi Ma, Qing Qu, John Wright, Yuqian Zhang, Zhihui Zhu
The course will start by introducing fundamental linear low-dimensional models (e.g., basic sparse and low-rank models) and convex relaxation approaches, together with motivating engineering applications, followed by a suite of scalable and efficient optimization methods. Based on these developments, we will introduce nonlinear low-dimensional models for several fundamental learning and inverse problems (e.g., dictionary learning and sparse blind deconvolution), along with nonconvex approaches viewed from a symmetry and geometric perspective, their correctness guarantees, and efficient nonconvex optimization methods. Building upon these results, we will discuss strong conceptual, algorithmic, and theoretical connections between low-dimensional structures and deep models, providing new perspectives for understanding state-of-the-art deep models and leading to new principles for designing deep networks that learn low-dimensional structures, with both clear interpretability and practical benefits.
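As a concrete illustration of the convex-relaxation theme mentioned above (a minimal sketch of our own, not the course's code), the prototypical sparse model recovers a sparse vector x from underdetermined measurements y = Ax by solving the LASSO, min_x 0.5*||Ax - y||^2 + lam*||x||_1, here with ISTA (proximal gradient), one of the basic scalable first-order methods such a course typically covers. The function names `ista` and `soft_threshold` and the parameter choices are illustrative assumptions:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (entrywise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.01, n_iters=2000):
    """Proximal gradient descent (ISTA) for the LASSO objective."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)            # gradient of the smooth least-squares term
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Synthetic instance: high-dimensional, k-sparse signal, few Gaussian measurements.
rng = np.random.default_rng(0)
m, n, k = 60, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

x_hat = ista(A, y)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {err:.3f}")
```

Under standard conditions on A (e.g., Gaussian measurements with m on the order of k log(n/k)), this convex relaxation provably recovers the sparse signal; accelerated variants (FISTA) improve the convergence rate.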