In previous talks, we saw how the multiplicative weights method and gradient descent solve the regret minimization problem. In this talk we introduce a meta-algorithm called Follow the Regularized Leader (FTRL) and show how it generalizes both multiplicative weights and gradient descent. We then discuss the Mirror Descent meta-algorithm and show its equivalence with FTRL.
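To preview the claim that FTRL generalizes both algorithms, here is a minimal sketch (not from the talk itself) for linear losses: FTRL plays the point minimizing the cumulative loss plus a regularizer. With a quadratic regularizer the minimizer is a closed-form gradient-descent-style update, and with a negative-entropy regularizer over the simplex it is the multiplicative weights update. The function names, the step size `eta`, and the example gradients are illustrative assumptions.

```python
import math

def ftrl_quadratic(grads, eta):
    """FTRL with regularizer R(x) = ||x||^2 / (2*eta) over R^n.
    For linear losses <g_t, x>, the minimizer has the closed form
    x = -eta * (sum of past gradients): lazy online gradient descent."""
    cum = [0.0] * len(grads[0])
    for g in grads:
        cum = [c + gi for c, gi in zip(cum, g)]
    return [-eta * c for c in cum]

def ftrl_entropy(grads, eta):
    """FTRL with the negative-entropy regularizer (1/eta) * sum_i x_i log x_i
    over the probability simplex. The minimizer is a softmax of the negated
    cumulative gradient: the multiplicative weights distribution."""
    cum = [0.0] * len(grads[0])
    for g in grads:
        cum = [c + gi for c, gi in zip(cum, g)]
    w = [math.exp(-eta * c) for c in cum]
    z = sum(w)
    return [wi / z for wi in w]

# Two rounds of linear losses over two coordinates (toy example).
grads = [[1.0, 0.0], [0.0, 2.0]]
print(ftrl_quadratic(grads, 0.5))  # gradient-descent-style point
print(ftrl_entropy(grads, 0.5))    # multiplicative-weights distribution
```

Only the regularizer changes between the two functions; this is the sense in which FTRL is a meta-algorithm containing both methods as special cases.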