Sharp worst-case evaluation complexity bounds for arbitrary-order nonconvex optimization with inexpensive constraints

Philippe Toint, Coralia Cartis, N. I. M. Gould

Research output: Contribution to journal › Article

Abstract

We provide sharp worst-case evaluation complexity bounds for nonconvex minimization problems with general inexpensive constraints, i.e., problems where the cost of evaluating/enforcing the (possibly nonconvex or even disconnected) constraints, if any, is negligible compared to that of evaluating the objective function. These bounds unify, extend, or improve all known upper and lower complexity bounds for nonconvex unconstrained and convexly constrained problems. It is shown that, given an accuracy level $\epsilon$, a degree $p$ of the highest available Lipschitz-continuous derivative, and a desired optimality order $q$ between one and $p$, a conceptual regularization algorithm requires no more than $O(\epsilon^{-\frac{p+1}{p-q+1}})$ evaluations of the objective function and its derivatives to compute a suitably approximate $q$-th-order minimizer. With an appropriate choice of the regularization, a similar result also holds if the $p$-th derivative is merely Hölder rather than Lipschitz continuous. We provide an example showing that the above complexity bound is sharp for unconstrained problems and for a wide class of constrained problems; we also give reasons for the optimality of regularization methods, from a worst-case complexity point of view, within a large class of algorithms that use the same derivative information.
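
For illustration only (these specializations follow arithmetically from the bound above and match well-known unconstrained results; they are not part of this record), the exponent in the complexity bound takes the following values:

\[
\frac{p+1}{p-q+1} =
\begin{cases}
2, & (p,q) = (1,1) \ \text{(first-order points with Lipschitz gradient)},\\
3/2, & (p,q) = (2,1) \ \text{(first-order points via cubic regularization)},\\
3, & (p,q) = (2,2) \ \text{(second-order points)},\\
p+1, & q = p \ \text{(optimality order matching the available derivatives)}.
\end{cases}
\]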
Original language: English
Number of pages: 31
Journal: SIAM Journal on Optimization
Publication status: Accepted/In press - 2020


Keywords

  • nonlinear optimization
  • complexity theory

Cite this

@article{f3e62413808747898e259354be18f31b,
title = "Sharp worst-case evaluation complexity bounds for arbitrary-order nonconvex optimization with inexpensive constraints",
abstract = "We provide sharp worst-case evaluation complexity bounds for nonconvexminimization problems with general inexpensive constraints, i.e. problemswhere the cost of evaluating/enforcing of the (possibly nonconvex or evendisconnected) constraints, if any, is negligible compared to that of evaluating the objective function. These bounds unify, extend or improve all known upper and lower complexity bounds for nonconvex unconstrained and convexly-constrained problems. It is shown that, given an accuracy level$\epsilon$, a degree of highest available Lipschitz continuous derivatives $p$and a desired optimality order $q$ between one and $p$, a conceptualregularization algorithm requires no more than$O(\epsilon^{-\frac{p+1}{p-q+1}})$ evaluations of the objective function andits derivatives to compute a suitably approximate $q$-th order minimizer. Withan appropriate choice of the regularization, a similar result also holds ifthe $p$-th derivative is merely Holder rather than Lipschitzcontinuous. We provide an example that shows that the above complexity bound is sharp for unconstrained and a wide class of constrained problems; we also give reasons for the optimality of regularization methods from a worst-casecomplexity point of view, within a large class of algorithms that use the samederivative information.",
keywords = "nonlinear optimization, complexity theory",
author = "Philippe Toint and Coralia Cartis and Gould, {N. I. M.}",
year = "2020",
language = "English",
journal = "SIAM Journal on Optimization",
issn = "1052-6234",
publisher = "Society for Industrial and Applied Mathematics",

}
