Regret bounds for FTRL and Mirror Descent

Rian Neogi

Abstract

In the previous talk, I introduced FTRL and Mirror Descent and showed how they generalize two well-known algorithms: Multiplicative Weights and Gradient Descent. In this talk, I will show that FTRL and Mirror Descent are in fact equivalent, in the sense that they produce the same sequence of predictions. Moreover, I will go over some regret bounds for these algorithms, which generalize the regret bounds we get for Multiplicative Weights and Gradient Descent.
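For reference, the two update rules being compared can be sketched as follows (a standard formulation, not taken from the talk itself; here $K$ is the decision set, $g_t$ the loss gradient at round $t$, $R$ a strongly convex regularizer, and $\eta > 0$ a step size):

```latex
% FTRL: minimize cumulative linearized loss plus regularization
x_{t+1} = \arg\min_{x \in K} \left\{ \eta \sum_{s=1}^{t} \langle g_s, x \rangle + R(x) \right\}

% Mirror Descent: gradient step in the dual, then Bregman projection onto K
\nabla R(y_{t+1}) = \nabla R(x_t) - \eta\, g_t, \qquad
x_{t+1} = \arg\min_{x \in K} D_R(x, y_{t+1})
```

With $R(x) = \tfrac{1}{2}\|x\|_2^2$ these recover Gradient Descent, and with the negative-entropy regularizer on the simplex they recover Multiplicative Weights, which is the generalization the abstract refers to.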

Date
Feb 2, 2024 12:00 PM
Location
MC6029