Delaunay-based Derivative-free Optimization via Global Surrogates (Δ-DOGS)
(work supported by AFOSR grant FA 9550-12-1-0046 in collaboration with Prof David Meyer)
Alongside derivative-based methods, which scale better to higher-dimensional problems, derivative-free methods play an essential role in the optimization of many practical engineering systems, especially those in which function evaluations are obtained by statistical averaging, and those for which the function of interest is nonconvex in the adjustable parameters. We are developing an entirely new family of surrogate-based derivative-free optimization schemes. The idea unifying this efficient and (under appropriate assumptions) provably globally convergent family of schemes is the minimization of a search function that linearly combines a computationally inexpensive “surrogate” with a synthetic piecewise-quadratic “uncertainty function”. The surrogate, which summarizes the trends evident in the data available thus far, is an interpolation (or, in some cases, a regression) of recent function evaluations; we generally favor some variant of polyharmonic splines for this purpose. The uncertainty function, built on the framework of a Delaunay triangulation of the existing datapoints, characterizes the reliability of the surrogate by quantifying the distance from any given point in parameter space to the nearest function evaluations.
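As a concrete illustration of this search function, the minimal Python sketch below combines a polyharmonic-spline surrogate with a Delaunay-based uncertainty function on a toy two-dimensional dataset. The SciPy routines used (scipy.spatial.Delaunay, scipy.interpolate.RBFInterpolator with a cubic kernel), the tradeoff constant K, and the toy data are illustrative assumptions; the published schemes include additional machinery (constraint handling, grid or lattice coordination, and convergence safeguards) not shown here.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import RBFInterpolator  # polyharmonic (cubic) spline surrogate

def search_function(x, pts, vals, tri, K=1.0):
    """Evaluate s(x) = p(x) - K*e(x) at the rows of x.

    p(x): polyharmonic-spline interpolant of the function evaluations so far.
    e(x): piecewise-quadratic uncertainty function; inside each Delaunay simplex
          it equals r^2 - ||x - z||^2, where z and r are the circumcenter and
          circumradius of that simplex, so it vanishes at every datapoint and
          grows with distance from the data.
    K:    assumed tradeoff constant between exploitation and exploration.
    """
    p = RBFInterpolator(pts, vals, kernel="cubic")
    x = np.atleast_2d(x)
    simplex = tri.find_simplex(x)
    e = np.zeros(len(x))
    for i, si in enumerate(simplex):
        if si < 0:
            continue                          # outside the convex hull of the data
        verts = pts[tri.simplices[si]]
        # circumcenter z satisfies ||v_j - z||^2 = r^2 for every vertex v_j
        A = 2.0 * (verts[1:] - verts[0])
        b = np.sum(verts[1:] ** 2 - verts[0] ** 2, axis=1)
        z = np.linalg.solve(A, b)
        r2 = np.sum((verts[0] - z) ** 2)
        e[i] = r2 - np.sum((x[i] - z) ** 2)
    return p(x) - K * e

# Toy 2-D usage (hypothetical data); in the actual schemes the search function is
# minimized over the feasible domain and the objective is then evaluated there.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
vals = np.array([1.0, 0.8, 0.9, 0.7, 0.3])
tri = Delaunay(pts)
print(search_function([[0.25, 0.25]], pts, vals, tri, K=2.0))
```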
Schemes developed in the family thus far include:
- Δ-DOGS, the original scheme of this class, designed for bound or linear constraints [BCB16],
- Δ-DOGS(C), designed for convex feasible domains defined by computable constraint functions [BB16],
- Δ-DOGS(Ω), designed for nonconvex (even disconnected!) feasible domains defined by computable constraint functions [ABB_omega],
- Δ-DOGS(Z) and Δ-DOGS(ΩZ), which accelerate convergence by coordinating function evaluations, restricting them at each iteration to lie on a Cartesian grid that is successively refined as convergence is approached (a rough sketch of this idea appears after this list) [BB_Z],
- Δ-DOGS(Λ), which modifies the Δ-DOGS(Z) idea by using a dense lattice (derived from an n-dimensional sphere packing), instead of a Cartesian grid, to coordinate the search [ABB_Lambda],
- α-DOGS, which is specifically designed to efficiently handle approximate function evaluations obtained via statistical sampling of an ergodic function, the uncertainty of which is reduced as the sampling time T is increased (that is, the scheme automatically increases the accuracy of the function evaluations performed as convergence is approached) [BB_alpha], and
- α-DOGS(extended), designed to simultaneously increase the sampling time T (as in α-DOGS) and refine the numerical approximation (reducing dx and/or dt) as convergence is approached [bab_alphaX].
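To make the grid-coordination idea in the Z variants mentioned above more concrete, the rough sketch below quantizes each candidate point onto a dyadic Cartesian grid over the unit box and refines the grid when the search stalls. The unit-box domain, the dyadic refinement, and the refinement trigger are illustrative assumptions, not the published rules.

```python
import numpy as np

def quantize(x, level):
    """Snap a point in [0, 1]^n onto a Cartesian grid with 2**level cells per axis."""
    N = 2 ** level
    return np.round(np.asarray(x) * N) / N

level = 3
x_search = np.array([0.4137, 0.7261])   # hypothetical minimizer of the search function
x_eval = quantize(x_search, level)      # point actually evaluated: [0.375, 0.75]

# If quantization maps the new candidate onto a point that has already been
# evaluated, the search is considered stalled on the current grid, and the grid
# is refined before the search is repeated.
already_evaluated = {tuple(x_eval)}
if tuple(quantize(x_search, level)) in already_evaluated:
    level += 1
print(x_eval, level)
```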
Related efforts in this project include:
- gradient-based acceleration of the (globally-convergent) algorithms in the Δ-DOGS family, combining derivative-free global exploration with derivative-based local refinement [abb17],
- the development of the Multivariate Adaptive Polyharmonic Splines (MAPS) method, which scales the parameter domain under consideration based on the variation seen in the data computed thus far in the optimization, thereby obtaining significantly smoother interpolants [abmb17] (see the sketch after this list),
- the judicious use of MAPS to identify and, for a number of iterations, neglect the less significant parameters in a given optimization problem, thereby speeding convergence [ABB_reduction],
- the introduction of practical data-based approaches to Uncertainty Quantification (UQ), and
- the development of methods to optimize an objective function within a feasible domain defined only via binary oracle calls (feasible or not), rather than by computable constraint functions.
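As a rough illustration of the coordinate-rescaling idea behind MAPS mentioned in the list above, the sketch below estimates a per-coordinate sensitivity from the data via an ordinary linear least-squares fit and then fits the polyharmonic spline in the rescaled coordinates. The sensitivity estimate and scaling rule shown are illustrative stand-ins, not the published weighting.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def maps_like_fit(pts, vals):
    """Rescale each coordinate by a crude estimate of how strongly the objective
    varies along it, then fit a polyharmonic (cubic) spline in the rescaled
    coordinates (illustrative stand-in for the MAPS weighting)."""
    X = np.column_stack([pts, np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(X, vals, rcond=None)
    sens = np.abs(coeffs[:-1]) + 1e-12      # per-coordinate sensitivity (illustrative)
    scale = sens / sens.max()               # compress the less active directions
    surrogate = RBFInterpolator(pts * scale, vals, kernel="cubic")
    return surrogate, scale

# Toy data that varies strongly in the first coordinate and weakly in the second.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
vals = np.array([0.0, 2.0, 0.1, 2.1, 1.0])
surrogate, scale = maps_like_fit(pts, vals)
print(scale)   # the weakly varying second coordinate is compressed
```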
Ongoing practical applications of this effort being explored in our lab include:
- the design of airfoils and hydrofoils [abmb17], and
- the design of low-storage implicit/explicit Runge–Kutta (IMEXRK) schemes for high-performance computing (HPC) problems such as the numerical simulation of turbulence [ACBB_IMEXRK_via_DDOGS].