IEEE Members: $11.00
Non-members: $15.00
Length: 02:26:30
Automated Machine Learning (AutoML) is an emerging field that has the potential to impact how we build models in Speech and Language Processing (SLP). As an umbrella term that includes topics like hyperparameter optimization and neural architecture search, AutoML has recently become mainstream at major conferences such as NeurIPS, ICML, and ICLR. The inaugural AutoML Conference was held in 2022, and with this community effort, we expect that deep learning software frameworks will begin to include AutoML functionality in the near future.

What does this mean for SLP? Currently, models are often built through an ad hoc process: we might borrow default hyperparameters from previous work and try a few variant architectures, but there is no guarantee that the final trained model is optimal. Automation can introduce rigor into this model-building process. For example, hyperparameter optimization can help SLP researchers find reasonably accurate models under a limited computation budget, leading to fairer comparisons between proposed and baseline methods. Similarly, neural architecture search can help SLP developers discover models with the desired speed-accuracy tradeoffs for deployment.

This tutorial will summarize the main AutoML techniques and illustrate how to apply them to improve the SLP model-building process. The goal is to provide the audience with the necessary background to follow and use AutoML research in their own work. In the first part of the tutorial, we will explain the basics of hyperparameter optimization and neural architecture search, covering representative algorithms such as Bayesian Optimization, Evolutionary Strategies, Asynchronous Hyperband, and DARTS. In the second part, we will discuss practical issues of applying AutoML to SLP, including evaluation, multiple objectives, carbon footprint, and software design best practices.
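
As a concrete, minimal sketch of the hyperparameter-optimization idea described above, the Python snippet below runs a search under a fixed trial budget with Optuna. The choice of library, the search space, and the synthetic objective are illustrative assumptions and not part of the tutorial itself; in practice the objective would train an SLP model with the suggested hyperparameters and return its validation metric.

    # Minimal hyperparameter-optimization sketch (assumes the Optuna library).
    import math
    import optuna

    def objective(trial):
        # Hypothetical search space for a speech/language model.
        lr = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)
        dropout = trial.suggest_float("dropout", 0.0, 0.5)
        layers = trial.suggest_int("num_layers", 2, 12)

        # Placeholder "validation score": replace with real training and evaluation.
        score = (1.0
                 - 0.1 * abs(math.log10(lr) + 3.5)
                 - abs(dropout - 0.2)
                 - 0.02 * abs(layers - 6))
        return score

    # A fixed trial budget makes comparisons between methods fairer.
    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=25)
    print(study.best_params, study.best_value)

The fixed n_trials budget mirrors the point made above: when proposed and baseline methods are tuned under the same computation budget, the resulting comparison is more informative than one based on hand-picked defaults.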