# Performance of first-order methods for smooth convex minimization: a novel approach

@article{Drori2014PerformanceOF,
  title   = {Performance of first-order methods for smooth convex minimization: a novel approach},
  author  = {Yoel Drori and Marc Teboulle},
  journal = {Mathematical Programming},
  year    = {2014},
  volume  = {145},
  pages   = {451--482}
}

We introduce a novel approach for analyzing the worst-case performance of first-order black-box optimization methods. We focus on smooth unconstrained convex minimization over the Euclidean space. Our approach relies on the observation that by definition, the worst-case behavior of a black-box optimization method is by itself an optimization problem, which we call the performance estimation problem (PEP). We formulate and analyze the PEP for two classes of first-order algorithms. We first apply…
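The PEP idea above can be illustrated numerically. The sketch below runs the fixed-step gradient method (step 1/L) on a Huber-style L-smooth convex function, a standard worst-case instance for this setting, and compares the final objective gap against the tight bound L‖x0 − x*‖²/(4N + 2) derived in the paper. This is a hedged illustration of the worst-case notion, not the PEP derivation itself; the function and constants are chosen so the bound is attained.

```python
import math

def gradient_method(grad, x0, step, num_steps):
    """Fixed-step gradient descent: x_{k+1} = x_k - step * grad(x_k)."""
    x = x0
    for _ in range(num_steps):
        x = x - step * grad(x)
    return x

L, R, N = 1.0, 1.0, 5  # smoothness constant, ||x0 - x*||, iteration count

# Huber-style function: quadratic near 0, linear beyond the kink at delta.
# It is L-smooth and convex, minimized at x* = 0 with f(x*) = 0.
delta = R / (2 * N + 1)

def f(x):
    if abs(x) <= delta:
        return 0.5 * L * x * x
    return L * delta * (abs(x) - 0.5 * delta)

def grad_f(x):
    if abs(x) <= delta:
        return L * x
    return math.copysign(L * delta, x)

x_N = gradient_method(grad_f, R, 1.0 / L, N)
gap = f(x_N)                     # objective gap f(x_N) - f*
bound = L * R * R / (4 * N + 2)  # tight worst-case bound for this method
print(gap, bound)                # the gap attains the bound on this instance
```

On this instance the iterates move by delta per step while in the linear region, so the final gap matches the worst-case bound exactly; a strongly curved function such as f(x) = Lx²/2 would instead reach the optimum in one step, far below the bound.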

#### 133 Citations

Efficient first-order methods for convex minimization: a constructive approach

- Mathematics, Computer Science
- Math. Program.
- 2020

We describe a novel constructive technique for devising efficient first-order methods for a wide range of large-scale convex minimization settings, including smooth, non-smooth, and strongly convex…

Convex interpolation and performance estimation of first-order methods for convex optimization

- Computer Science, Mathematics
- 2017

This thesis formulates a generic optimization problem that seeks the worst-case scenarios of first-order methods in convex optimization, and transforms PEPs into solvable finite-dimensional semidefinite programs, from which one obtains worst-case guarantees and worst-case functions, along with the corresponding explicit proofs.

Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization

- Mathematics, Computer Science
- SIAM J. Optim.
- 2017

A new analytical worst-case guarantee is presented for the proximal point algorithm that improves on the previously best known guarantee by a factor of two, and the standard worst-case guarantee for the conditional gradient method is improved by more than a factor of two.

Smooth strongly convex interpolation and exact worst-case performance of first-order methods

- Mathematics, Computer Science
- Math. Program.
- 2017

We show that the exact worst-case performance of fixed-step first-order methods for unconstrained optimization of smooth (possibly strongly) convex functions can be obtained by solving convex…

An optimal gradient method for smooth (possibly strongly) convex minimization

- Computer Science
- ArXiv
- 2021

We present an optimal gradient method for smooth (possibly strongly) convex optimization. The method is optimal in the sense that its worst-case bound exactly matches the lower bound on the oracle…

Performance estimation toolbox (PESTO): Automated worst-case analysis of first-order optimization methods

- Computer Science
- 2017 IEEE 56th Annual Conference on Decision and Control (CDC)
- 2017

A MATLAB toolbox that automatically computes tight worst-case performance guarantees for a broad class of first-order methods for convex optimization, which includes those performing explicit, projected, proximal, conditional and inexact (sub)gradient steps.

Analysis of Optimization Algorithms via Sum-of-Squares

- Mathematics, Computer Science
- J. Optim. Theory Appl.
- 2021

A new framework for unifying and systematizing the performance analysis of first-order black-box optimization algorithms for unconstrained convex minimization is introduced, based on sum-of-squares optimization, which allows one to introduce a hierarchy of semidefinite programs (SDPs) that give increasingly better convergence bounds for higher levels of the hierarchy.

Worst-case convergence analysis of gradient and Newton methods through semidefinite programming performance estimation

- Mathematics
- 2017

We provide new tools for worst-case performance analysis of the gradient (or steepest descent) method of Cauchy for smooth strongly convex functions, and Newton's method for self-concordant…

Exact Worst-Case Convergence Rates of the Proximal Gradient Method for Composite Convex Minimization

- Mathematics, Computer Science
- J. Optim. Theory Appl.
- 2018

We study the worst-case convergence rates of the proximal gradient method for minimizing the sum of a smooth strongly convex function and a non-smooth convex function, whose proximal operator is…

Potential-based analyses of first-order methods for constrained and composite optimization

- Mathematics
- 2019

We propose potential-based analyses for first-order algorithms applied to constrained and composite minimization problems. We first propose "idealized" frameworks for algorithms in the strongly and…

#### References

Showing 1-10 of 27 references.

Primal-dual first-order methods with O(1/ε) iteration-complexity for cone programming

- Mathematics
- 2011

In this paper we consider the general cone programming problem, and propose primal-dual convex (smooth and/or nonsmooth) minimization reformulations for it. We then discuss first-order methods…

Fine tuning Nesterov’s steepest descent algorithm for differentiable convex programming

- Mathematics, Computer Science
- Math. Program.
- 2013

The first-order algorithm for convex programming described by Nesterov in his book is modified, and an adaptive procedure for estimating a strong convexity constant for the function is developed.

Lectures on modern convex optimization - analysis, algorithms, and engineering applications

- Computer Science, Mathematics
- MPS-SIAM series on optimization
- 2001

The authors present the basic theory of state-of-the-art polynomial time interior point methods for linear, conic quadratic, and semidefinite programming as well as their numerous applications in engineering.

Introductory Lectures on Convex Optimization - A Basic Course

- Computer Science
- Applied Optimization
- 2004

It was in the middle of the 1980s when the seminal paper by Karmarkar opened a new epoch in nonlinear optimization; it then became more and more common that new methods were provided with a complexity analysis, which was considered a better justification of their efficiency than computational experiments.

A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems

- Mathematics, Computer Science
- SIAM J. Imaging Sci.
- 2009

A new fast iterative shrinkage-thresholding algorithm (FISTA) is presented that preserves the computational simplicity of ISTA but has a global rate of convergence proven to be significantly better, both theoretically and practically.

Lectures on modern convex optimization

- Computer Science
- 1987

A comprehensive introduction to the subject, this book shows in detail how such problems can be solved in many different fields.

Some methods of speeding up the convergence of iteration methods

- Mathematics
- 1964

For the solution of the functional equation P(x) = 0, where P is an operator, usually linear, from B into B, and B is a Banach space, iteration methods are generally used. These consist…

Quadratic Matrix Programming

- Mathematics, Computer Science
- SIAM J. Optim.
- 2007

This work constructs a specially devised semidefinite relaxation (SDR) and dual for the QMP problem and shows that under some mild conditions strong duality holds for QMP problems with at most $r$ constraints.

Improved Algorithms for Convex Minimization in Relative Scale

- Mathematics, Computer Science
- SIAM J. Optim.
- 2011

Two modifications to Nesterov's algorithms for minimizing convex functions in relative scale are proposed: the first is based on a bisection technique and leads to improved theoretical iteration complexity, and the second is a heuristic for avoiding restarting behavior.