In the previous talk, I introduced FTRL and Mirror Descent and showed how they generalize two well-known algorithms: Multiplicative Weights and Gradient Descent. In this talk, I will show that FTRL and Mirror Descent are in fact equivalent, in the sense that they produce the same sequence of predictions. Moreover, I will go over some regret bounds for these algorithms that generalize the regret bounds we obtained for Multiplicative Weights and Gradient Descent.
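To make the equivalence claim concrete, here is one standard way to write the two updates for linear losses; the notation is my own choice, not fixed by the talk ($\eta$ for the step size, $g_s$ for the loss (sub)gradients, $\mathcal{K}$ for the decision set, $R$ for the regularizer, and $D_R$ for its Bregman divergence):

$$\text{FTRL:}\qquad x_{t+1} = \arg\min_{x \in \mathcal{K}} \Big\{ \eta \sum_{s=1}^{t} \langle g_s, x \rangle + R(x) \Big\},$$

$$\text{Mirror Descent:}\qquad \nabla R(y_{t+1}) = \nabla R(x_t) - \eta g_t, \qquad x_{t+1} = \arg\min_{x \in \mathcal{K}} D_R(x \,\|\, y_{t+1}).$$

The equivalence to be shown is that, under suitable conditions on $R$ and the losses, these two rules generate the same iterates $x_1, x_2, \ldots$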