Follow the Regularized Leader and Mirror Descent

Rian Neogi

Abstract

In previous talks, we have seen how the multiplicative weights method and gradient descent solve the regret minimization problem. In this talk we will go over a meta-algorithm called Follow the Regularized Leader (FTRL). We will show how FTRL generalizes both multiplicative weights and gradient descent. We will also talk about the Mirror Descent meta-algorithm, and show its equivalence with FTRL.
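As a preview of the generalization claim, here is a minimal sketch (not from the talk itself; function names and the simple loop structure are illustrative) of FTRL, which plays the point minimizing the cumulative linear loss plus a regularizer. With a quadratic regularizer over R^n it reduces to (lazy) gradient descent, and with the negative-entropy regularizer over the simplex it reduces to multiplicative weights:

```python
import numpy as np

def ftrl_quadratic(grads, eta):
    # FTRL with R(x) = ||x||^2 / (2*eta) over R^n.
    # argmin_x [ sum_s <g_s, x> + R(x) ] solves to x = -eta * (sum of past gradients),
    # which coincides with the lazy form of gradient descent (dual averaging).
    G = np.zeros_like(grads[0])
    iterates = []
    for g in grads:
        G = G + g
        iterates.append(-eta * G)
    return iterates

def ftrl_entropic(grads, eta):
    # FTRL with the negative-entropy regularizer over the probability simplex.
    # The minimizer has coordinates x_i proportional to exp(-eta * cumulative gradient_i),
    # which is exactly the multiplicative weights update.
    G = np.zeros_like(grads[0])
    iterates = []
    for g in grads:
        G = G + g
        w = np.exp(-eta * G)
        iterates.append(w / w.sum())
    return iterates
```

For example, feeding the loss gradients (1, 0) and then (0, 1) to `ftrl_quadratic` with step size 0.5 yields the iterate (-0.5, -0.5), the lazy gradient-descent point; `ftrl_entropic` on the same gradients returns points on the simplex that downweight coordinates with larger cumulative loss.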

Date
Jan 26, 2024 12:00 PM
Location
MC6029