Published online by Cambridge University Press: 14 June 2019
In many optimization problems arising from scientific, engineering and artificial intelligence applications, objective and constraint functions are available only as the output of a black-box or simulation oracle that does not provide derivative information. Such settings necessitate the use of methods for derivative-free, or zeroth-order, optimization. We provide a review of and perspectives on developments in these methods, with an emphasis on recent advances and on unifying the treatment of such problems across the non-linear optimization and machine learning literatures. We categorize methods based on the assumed properties of the black-box functions, as well as on features of the methods themselves. We first give an overview of the primary setting: deterministic methods applied to unconstrained, non-convex optimization problems in which the objective function is defined by a deterministic black-box oracle. We then discuss developments in randomized methods, methods that assume additional structure in the objective (including convexity, separability and general non-smooth compositions), methods for problems where the output of the black-box oracle is stochastic, and methods for handling different types of constraints.
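To make the zeroth-order setting concrete, the following is a minimal sketch (not taken from the survey itself) of one randomized derivative-free method: a two-point gradient estimator along a random Gaussian direction, used inside a plain descent loop. The function `f`, the smoothing parameter `mu`, and the step size are illustrative choices; the oracle exposes only function values, never derivatives.

```python
import numpy as np

def two_point_grad_estimate(f, x, mu=1e-4, rng=None):
    """Estimate the gradient of f at x from function values only:
    draw a random direction u and form the forward difference
    (f(x + mu*u) - f(x)) / mu, scaled along u. In expectation over
    standard Gaussian u this recovers the true gradient (up to O(mu))."""
    rng = rng if rng is not None else np.random.default_rng(0)
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def zeroth_order_descent(f, x0, step=0.1, iters=2000, seed=0):
    """Gradient descent driven entirely by the zeroth-order oracle f."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * two_point_grad_estimate(f, x, rng=rng)
    return x

# Smooth test objective with minimizer at (1, -2); only values are queried.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x_star = zeroth_order_descent(f, [5.0, 5.0])
```

On this smooth convex test problem the iterates contract toward the minimizer despite the noisy gradient estimates; for non-smooth, stochastic or constrained oracles, the survey's later sections describe the substantially different machinery required.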