  • SPS Members: Free
  • IEEE Members: Free
  • Non-members: Free
  • Length: 4:14:55
Short Course, 26 May 2022

The course will start by introducing fundamental linear low-dimensional models (e.g., basic sparse and low-rank models) and convex relaxation approaches, motivated by engineering applications, together with a suite of scalable and efficient optimization methods.

Building on these developments, we will introduce nonlinear low-dimensional models for several fundamental learning and inverse problems (e.g., dictionary learning and sparse blind deconvolution), along with nonconvex approaches viewed from a symmetry and geometric perspective, their correctness guarantees, and efficient nonconvex optimization algorithms.

Building upon these results, we will discuss strong conceptual, algorithmic, and theoretical connections between low-dimensional structures and deep models, providing new perspectives for understanding state-of-the-art deep models and new principles for designing deep networks that learn low-dimensional structures, with both clear interpretability and practical benefits.
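As a concrete instance of the sparse models and convex relaxation approaches the course opens with, the sketch below (not from the course materials; all parameter choices are illustrative) recovers a sparse vector from compressed measurements by solving the ℓ1-regularized least-squares (LASSO) problem with iterative soft-thresholding (ISTA):

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=1000):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 via iterative soft-thresholding."""
    # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant of the gradient.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)     # gradient of the smooth least-squares term
        z = x - step * grad          # gradient descent step
        # Proximal step for the l1 penalty: soft-thresholding.
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

# Illustrative setup: recover a 5-sparse vector in R^100 from 50 random
# Gaussian measurements (no noise).
rng = np.random.default_rng(0)
n, m, k = 100, 50, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = ista(A, y)
```

Although the measurement system is underdetermined (50 equations, 100 unknowns), the convex ℓ1 relaxation exploits the low-dimensional (sparse) structure of the signal to recover it, which is the basic phenomenon the linear part of the course formalizes.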