Lectures | Date | Subjects Covered | Lecture Supplementary Material |
Lecture 27-end | Mar. 16 - Apr. 3 | Penalty, Barrier, Augmented Lagrangian Methods; SQP Methods | |
Lecture 24-26 | Mar. 9, 11, 13 | Interior-Point Methods | Derive the dual (D) and the dual of the dual (DD) for the Max-Cut problem; derive a log-barrier method for solving the dual (D), which gives rise to the primal-dual interior-point method; present examples using MATLAB. (One common form of the pair (D)/(DD) is sketched below.) |
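One common form of this pair, hedged since the lecture's scaling and notation may differ: with L the graph Laplacian and e the all-ones vector, the Lagrangian dual of max { (1/4) x^T L x : x in {+1,-1}^n } and the dual of that dual are
\[
(D)\quad \min_{y}\ e^{T}y \ \ \text{s.t.}\ \ \operatorname{Diag}(y) - \tfrac14 L \succeq 0,
\qquad
(DD)\quad \max_{X}\ \tfrac14\langle L, X\rangle \ \ \text{s.t.}\ \ \operatorname{diag}(X) = e,\ X \succeq 0 .
\]
Applying a log-barrier to the semidefinite constraint in (D) and writing the perturbed optimality conditions for the pair yields the primal-dual interior-point iteration.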
Lecture 23 | Mar. 6 | Duality | Dual for an abstract program using partial order induced by a cone, K. |
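For reference, one standard abstract pair (the lecture's notation may differ): with K a closed convex cone, \(\succeq_K\) the partial order \(u \succeq_K v \iff u - v \in K\), and \(K^{+} = \{\, y : \langle y, k\rangle \ge 0 \ \forall k \in K \,\}\),
\[
(P)\quad \min\ c^{T}x \ \ \text{s.t.}\ \ Ax \succeq_K b,
\qquad
(D)\quad \max\ b^{T}y \ \ \text{s.t.}\ \ A^{T}y = c,\ y \in K^{+} .
\]
Taking K = R^m_+ recovers ordinary LP duality.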
Lecture 22 | Mar. 4 | KKT optimality conditions and CQs | Derive the KKT optimality conditions using the weakest constraint qualification, T(F,x) = L(x), i.e. the tangent and linearizing cones are equal. The KKT conditions then follow from the geometric optimality condition that the gradient of f(x) lies in the polar of T(F,x) (this follows since, by the Farkas Lemma, the polar of L(x) is the cone generated by the gradients); a common explicit statement is given below. |
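A common explicit statement, assuming the problem is written min f(x) s.t. g_i(x) <= 0, i = 1, ..., m, with feasible set F (sign conventions may differ from the lecture's): if x* is a local minimizer at which T(F,x*) = L(x*), then there exist multipliers with
\[
\nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^*) = 0,
\qquad \lambda_i \ge 0,
\qquad \lambda_i\, g_i(x^*) = 0,\ \ i = 1, \dots, m .
\]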
Lecture 21 | Mar. 2 | Hyperplane separation theorem and applications | Basic Separation Theorem (and proof); application to proving the Lemma: K = K^{++} iff K is a closed convex cone (ccc) |
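With the (nonnegative) polar written \(K^{+} = \{\, \phi : \langle \phi, k\rangle \ge 0 \ \forall k \in K \,\}\) and \(K^{++} = (K^{+})^{+}\) (the lecture's sign convention may differ), the Lemma states
\[
K = K^{++} \iff K \text{ is a closed convex cone;}
\]
the nontrivial inclusion \(K^{++} \subseteq K\) is where the Basic Separation Theorem is used.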
Lecture 19-20 | Feb. 25-27 | Optimality Conditions for Constrained Problems | Special case of the Lagrange multiplier theorem for equality constraints, proved using the Implicit Function Theorem and compared to the simplex method; extension of Fermat (geometric optimality conditions); tangent cones; Farkas Lemma |
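One statement of the special case (notation assumed): for min f(x) s.t. h(x) = 0 with h : R^n -> R^m smooth, if x* is a local minimizer and the Jacobian h'(x*) has full row rank, then there is a multiplier vector with
\[
\nabla f(x^*) + h'(x^*)^{T}\lambda = 0, \qquad h(x^*) = 0 .
\]
The full-rank hypothesis is what allows the Implicit Function Theorem to parametrize the feasible set near x*.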
Lecture 18 | Feb. 23 | Optimality Conditions for Constrained Problems | Extension of Fermat (Geometric optimality conditions); tangent cones; Farkas Lemma |
Lecture 16-17 | Feb. 9-11 | Numerical Methods for large-scale nonlinear optimization | Conjugate Gradient Methods (outline); Inexact Newton Methods (outline); Derivative Free Methods (outline); Nonlinear Least Squares Problems; Nonlinear Equations (Inexact Newton Methods, Homotopy Methods) |
Lecture 15 | Feb. 6 | Numerical Methods for large-scale nonlinear optimization | Conjugate Gradient Methods (outline); Inexact Newton Methods (outline); Derivative Free Methods (outline); Nonlinear Least Squares Problems |
Lecture 14 | Feb. 4 | Trust Region Algorithms | The hard case for the TRS (see the characterization below); solving linear systems of equations / Cholesky factorization paradox; files: runAs.m; Asolve.m; runeps.m; Aoneseps.m. |
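For reference, one standard characterization of the trust-region subproblem min { g^T p + (1/2) p^T B p : ||p|| <= Delta } (notation assumed): p* is a global minimizer iff there is a multiplier lambda >= 0 with
\[
(B + \lambda I)\,p^* = -g, \qquad \lambda\,(\Delta - \|p^*\|) = 0, \qquad B + \lambda I \succeq 0 .
\]
The hard case is when g is orthogonal to the eigenspace of the smallest eigenvalue of B and no root of \(\|p(\lambda)\| = \Delta\) exists with \(\lambda > -\lambda_{\min}(B)\); the solution is then completed by adding a suitable multiple of an eigenvector for \(\lambda_{\min}(B)\).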
Lecture 13 | Feb. 2 | Trust Region Algorithms | Details on the trust region subproblem algorithm (a survey paper), including modification of the root-finding equation and an outline of the hard case. |
Lecture 10-12 | Jan. 26-30 | Unconstrained Minimization-Rn | Proof of quadratic convergence for Newton's method (statement sketched below); line search algorithms and trust region methods. |
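The statement being proved, in one common form (constants and smoothness assumptions may differ from the lecture's): if ∇f(x*) = 0, ∇²f(x*) is positive definite, and ∇²f is Lipschitz continuous near x*, then for x_0 close enough to x* the Newton iterates are well defined and satisfy
\[
\|x_{k+1} - x^*\| \le C\,\|x_k - x^*\|^{2}
\]
for some constant C > 0, i.e. the convergence is q-quadratic.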
Lecture 9 | Jan. 23 | Unconstrained Minimization-Rn | Line search algorithms: convergence using sufficient decrease and sufficiently long steps (conditions stated below); rates of convergence; start of the quadratic convergence proof for Newton's method. |
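These two requirements are commonly written, for a step \(x_{k+1} = x_k + \alpha p_k\) along a descent direction \(p_k\) and constants \(0 < c_1 < c_2 < 1\) (values assumed), as
\[
f(x_k + \alpha p_k) \le f(x_k) + c_1\,\alpha\,\nabla f(x_k)^{T} p_k
\quad\text{(sufficient decrease)},
\qquad
\nabla f(x_k + \alpha p_k)^{T} p_k \ge c_2\,\nabla f(x_k)^{T} p_k
\quad\text{(sufficiently long step)} .
\]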
Lecture 8 | Jan. 21 | Unconstrained Minimization-Rn | Derivation of quasi-Newton updates (and differentiation of a function of a matrix variable); sufficient decrease in line search methods (Wolfe condition I). |
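For reference, the secant equation and one widely used update that satisfies it (BFGS; the lecture may derive a different member of the family): with \(s_k = x_{k+1} - x_k\) and \(y_k = \nabla f(x_{k+1}) - \nabla f(x_k)\), require \(B_{k+1} s_k = y_k\), e.g.
\[
B_{k+1} = B_k - \frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k} + \frac{y_k y_k^{T}}{y_k^{T} s_k} .
\]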
Lecture 7 | Jan. 19 | Unconstrained Minimization-Rn (Supplementary NOTES) | Derivation of Newton's method using the quadratic model (see below); quasi-Newton methods and the secant equation. |
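A worked line for the derivation (notation assumed): the quadratic model of f about \(x_k\) is
\[
m_k(p) = f(x_k) + \nabla f(x_k)^{T} p + \tfrac12\, p^{T}\, \nabla^2 f(x_k)\, p,
\]
and setting \(\nabla_p m_k(p) = 0\) gives the Newton step \(\nabla^2 f(x_k)\, p = -\nabla f(x_k)\). Replacing \(\nabla^2 f(x_k)\) by an approximation \(B_k\) updated to satisfy the secant equation \(B_{k+1} s_k = y_k\) leads to the quasi-Newton methods of Lecture 8.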
Lecture 6 | Jan. 16 | Unconstrained Minimization-Rn (Supplementary NOTES) | Deflected/scaled steepest descent (SD); best scaling of SD; derivation of Newton's method. MATLAB example of Newton's method and its scale-free behaviour (a minimal sketch follows). |
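A minimal MATLAB sketch of Newton's method on a convex quadratic (illustrative only; the data A, b, starting point and tolerance are assumptions, and this is not one of the posted course files). On a quadratic the Newton step solves the model exactly, and the iterates are unchanged under a rescaling of the variables, which is the scale-free behaviour discussed in lecture.

    % Newton's method on f(x) = 0.5*x'*A*x - b'*x (assumed test data)
    A = [10 0; 0 0.1];            % deliberately ill-conditioned Hessian
    b = [1; 1];
    grad = @(x) A*x - b;          % gradient of f
    hess = @(x) A;                % Hessian of f (constant for a quadratic)
    x = [5; -5];                  % starting point
    for k = 1:10
        g = grad(x);
        if norm(g) < 1e-10, break; end
        p = -hess(x) \ g;         % Newton direction from the quadratic model
        x = x + p;                % unit step; no line search in this sketch
    end
    disp(x)                       % agrees with A \ b after one Newton step

Steepest descent on the same ill-conditioned data converges much more slowly, which is the kind of contrast the scaling discussion addresses.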
Lecture 5 | Jan. 14 | Unconstrained Minimization-Rn (Supplementary NOTES) | Definitions: condition numbers; ill-conditioned problems; convex sets/functions. Characterizations of convex functions: using first/second derivatives and using the epigraph; the cone of convex functions. Convex functions: stationary points; global minima; convex level sets. Overview of algorithms: (i) line search; (ii) trust region. Method of Steepest Descent, with derivation using Lagrange multipliers (see below). |
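For reference (standard characterizations; the ordering in the notes may differ): a differentiable f is convex iff
\[
f(y) \ge f(x) + \nabla f(x)^{T}(y - x) \quad \text{for all } x, y,
\]
and a twice differentiable f is convex iff \(\nabla^2 f(x) \succeq 0\) for all x. The steepest descent direction solves \(\min\{\, \nabla f(x)^{T} d : \|d\| = 1 \,\}\); introducing a Lagrange multiplier for the constraint \(\|d\|^2 = 1\) gives \(d = -\nabla f(x)/\|\nabla f(x)\|\).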
Lecture 4 | Jan. 12 | Unconstrained Minimization-Rn (Supplementary NOTES); (Complete Supplementary course notes I; Complete Supplementary course notes II) | WWW links to NEOS (WWW Form for unconstr NMTR); LP and NLP FAQs. Application: prove (outline only) the arithmetic-geometric mean (AGM) inequality using unconstrained minimization (one route is sketched below); summary: first- and second-order necessary/sufficient optimality conditions. |
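One route to the outline via a one-variable unconstrained minimization (the lecture's argument may differ): minimizing \(\varphi(t) = e^{t} - 1 - t\) over R gives \(\varphi'(t) = e^{t} - 1 = 0\) at t = 0 with \(\varphi'' > 0\), so \(e^{t} \ge 1 + t\) for all t. For \(x_1, \dots, x_n > 0\) with arithmetic mean \(A = \tfrac1n \sum_i x_i\), apply this with \(t = x_i/A - 1\):
\[
\frac{x_i}{A} \le e^{x_i/A - 1}
\ \Longrightarrow\
\frac{\prod_i x_i}{A^{n}} \le e^{\sum_i x_i/A - n} = 1
\ \Longrightarrow\
\Big(\prod_i x_i\Big)^{1/n} \le A .
\]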
Lecture 3 | Jan. 9 | Unconstrained Minimization - Rn (WIKI!!) | Directional derivative, curvature, linear model, direction of steepest descent; first- and second-order necessary optimality conditions; second-order sufficient optimality conditions; convexity and global minima; characterizations of convex functions; optimality conditions and attainment for a quadratic function on Rn (see below). Application: prove the arithmetic-geometric mean (AGM) inequality using unconstrained minimization. |
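For the quadratic case (conventions assumed): \(q(x) = \tfrac12 x^{T} A x - b^{T} x\) with A symmetric is bounded below on R^n iff \(A \succeq 0\) and \(b \in \operatorname{Range}(A)\), in which case the minimizers are exactly the solutions of
\[
\nabla q(x) = Ax - b = 0 .
\]
Otherwise q decreases without bound along a direction of negative curvature, or along a null-space direction of A not orthogonal to b.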
Lecture 2 | Jan. 7 | Unconstrained Minimization | Fundamentals of Unconstrained Optimization cont. (Chapter 2, complete chapter except for R-Rates of Convergence, pgs 11-17, 19-24, 26-29). Recognizing solutions. Definitions: Fréchet derivative, gradient, Hessian, Taylor's Theorem, order notation (big and little O). Example of data fitting using nonlinear least squares (a sketch follows). Definitions of local/global/strict minima. |
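A small MATLAB sketch of a data-fitting nonlinear least squares problem (illustrative only: the exponential model, the synthetic data, and the Gauss-Newton iteration are assumptions, not necessarily the lecture's example). It minimizes f(x) = (1/2) sum_j r_j(x)^2 for residuals r_j(x) = x(1)*exp(x(2)*t_j) - y_j.

    % Gauss-Newton for fitting y approx x(1)*exp(x(2)*t) (assumed model and data)
    t = (0:9)';                                  % sample points
    y = 2*exp(-0.3*t) + 0.01*randn(10,1);        % synthetic noisy observations
    x = [1; 0];                                  % initial parameter guess
    for k = 1:20
        r = x(1)*exp(x(2)*t) - y;                % residual vector r(x)
        J = [exp(x(2)*t), x(1)*t.*exp(x(2)*t)];  % Jacobian of r(x)
        if norm(J'*r) < 1e-8, break; end         % gradient of f is J'*r
        x = x - (J'*J) \ (J'*r);                 % Gauss-Newton step
    end
    disp(x)                                      % roughly [2; -0.3]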
Lecture 1 | Jan. 5 | Introduction to Continuous Optimization | Introduction (Chapter 1, pgs 1-4, 6, 8); examples of applications; mathematical formulation: example, level sets. Dichotomies: continuous and discrete optimization; local and global optima; linear and nonlinear optimization; convex and nonconvex optimization (new paradigm); stochastic and deterministic optimization. Definitions: convexity (sets, functions). |