The purpose of this talk is to examine specific algorithms for online convex optimization that attain sublinear regret. Along the way, we become more familiar with the online learning setup and with the ingredients needed to attain sublinear regret. We will look at two examples in particular: Online Subgradient Descent, following Chapter~2 of Orabona's text, and Follow-the-Leader with quadratic cost functions, following Section~2.2 of Shalev-Shwartz's text.
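As a concrete warm-up for the first of these, here is a minimal sketch of Online Subgradient Descent on a toy one-dimensional problem. The setup is entirely illustrative (not taken from either text): losses $f_t(x) = |x - z_t|$ on the feasible set $V = [-1, 1]$, alternating targets $z_t = \pm 0.5$, and step sizes $\eta_t \propto 1/\sqrt{t}$, after which we measure regret against the best fixed point in hindsight.

```python
import math

def project(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the interval [lo, hi]."""
    return min(max(x, lo), hi)

def osd(zs, eta0=1.0):
    """Online Subgradient Descent on the losses f_t(x) = |x - z_t|.

    At each round we play x_t, observe z_t, take a subgradient step,
    and project back onto [-1, 1]. Returns the played iterates.
    """
    x = 0.0
    xs = []
    for t, z in enumerate(zs, start=1):
        xs.append(x)
        # A subgradient of f_t(x) = |x - z| at the current iterate.
        g = 1.0 if x > z else (-1.0 if x < z else 0.0)
        x = project(x - (eta0 / math.sqrt(t)) * g)
    return xs

# Hypothetical adversary: targets alternate between -0.5 and +0.5.
zs = [(-1.0) ** t * 0.5 for t in range(1, 201)]
xs = osd(zs)

# Cumulative loss of the algorithm vs. the best fixed point in
# hindsight (approximated over a fine grid of [-1, 1]).
loss = sum(abs(x - z) for x, z in zip(xs, zs))
best = min(sum(abs(u - z) for z in zs) for u in [i / 100 - 1 for i in range(201)])
regret = loss - best
```

With $T = 200$ rounds, the standard OSD analysis bounds the regret by roughly $DG\sqrt{T} \approx 28$ here (diameter $D = 2$, subgradient bound $G = 1$), i.e. sublinear in $T$, which the measured `regret` is consistent with.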