This work was funded in part by the US Department of Energy, MICS Division.
Global optimization problems are solved with either exact methods
(e.g., branch-and-bound), which guarantee that an optimal solution is
found in finite time, or with heuristic methods, which
typically provide weak asymptotic guarantees but often generate
near-optimal solutions quickly. Heuristic methods are important
because we lack robust rigorous methods that can efficiently optimize
broad classes of problems. For example, the structure of many
global optimization problems is unknown or too complex to apply
analytic optimization approaches. These two classes illustrate the
trade-off between the ability to guarantee a globally optimal solution
and the ability to generate near-optimal solutions quickly; both
capabilities are essential for practical global optimization methods.
Project Objectives
We aim to develop new, innovative methods that represent
better algorithmic trade-offs between the ability to find a global
optimum and the ability to generate near-optimal solutions
quickly. Our focus will be on a variety of algorithmic factors that
fundamentally affect the practical application of these methods,
including (a) termination rules that provide practical confidence
guarantees, (b) effective use of domain-specific information, (c)
methods to handle constraints, (d) the effective use of all function
evaluations (when their cost dominates the cost of optimization),
and (e) the role of parallelism (for large scale optimization
problems). The algorithms that we develop will be incorporated into
the SGOPT Global Optimization Library.
Our algorithmic development will focus on domain-independent
methods that are applicable to a wide range of applications. We plan
to develop and evaluate global optimization methods on applications
with continuous and combinatorial search domains in order to provide
a practical evaluation of our analyses. Our present plans are to
focus on protein folding and scheduling. We have extensive experience
with protein folding through our prior work with the computational
biology project funded by MICS. This project has recently begun to
focus on practical folding methods, including methods for off-lattice
models, so we will collaborate closely with that project. With
Cindy Phillips, we have developed a close collaboration with
the Pantex Processing Modeling Team, which has developed
scheduling algorithms for stockpile maintenance. We will closely
collaborate with the MICS Robust Discrete Optimization
Project and the members of the PICO Project on this application.
We have also developed new hybrid algorithms between EAs and direct
search methods. In particular, we have experimentally evaluated
hybrids with pattern search methods, which appear superior to hybrids
that use the Solis-Wets search method. Additionally, we have developed
new hybridization methods that set the initial step length for the
local search method using population statistics from the EA. We have
begun applying these methods to a drug docking application and
a production planning application for Pantex.
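As an illustration of this hybridization idea, the following is a minimal Python sketch, not the SGOPT implementation: the names `pattern_search` and `hybrid_step` are illustrative, and the population statistic chosen here (mean per-coordinate standard deviation) is one plausible choice among several.

```python
import numpy as np

def pattern_search(f, x, step, tol=1e-6, max_iter=200):
    """Simple coordinate pattern search: poll +/- step along each axis,
    halve the step when a full sweep fails, stop when step < tol."""
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, fx

def hybrid_step(f, population):
    """Seed a local pattern search from the best EA individual, with the
    initial step length taken from the population's spread (a hypothetical
    population statistic; the actual hybrids may use a different one)."""
    scores = np.array([f(p) for p in population])
    best = population[int(np.argmin(scores))].copy()
    # Population statistic: mean per-coordinate standard deviation.
    step0 = float(np.mean(np.std(population, axis=0)))
    return pattern_search(f, best, max(step0, 1e-3))
```

The intuition is that a spread-out population suggests the search is still exploring, so the local search should start with long steps; a converged population suggests short steps suffice.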
These global optimization methods were developed within the SGOPT
global optimization library. We have redesigned the optimization
framework in SGOPT to enable the formulation of constrained optimization
problems. Additionally, this framework enables the transparent
parallelization of function evaluations, so parallelism can be
provided without affecting the algorithmic design.
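The idea can be sketched as a batch-evaluation wrapper; this is an assumption-laden toy in Python, not SGOPT's API (the names `ParallelObjective` and `evaluate` are invented for illustration), using threads where the real library would use a parallel machine.

```python
from concurrent.futures import ThreadPoolExecutor

class ParallelObjective:
    """Wrap a scalar objective so an optimizer requests a whole batch
    of evaluations at once; the batch is farmed out to worker threads.
    The optimizer's logic is untouched -- it only calls evaluate()."""

    def __init__(self, f, workers=4):
        self.f = f
        self.workers = workers

    def evaluate(self, points):
        # Evaluations run concurrently; results come back in input order.
        with ThreadPoolExecutor(max_workers=self.workers) as pool:
            return list(pool.map(self.f, points))
```

An optimizer written against `evaluate()` behaves identically whether the pool has one worker or many, which is the sense in which the parallelism is "transparent" to the algorithmic design.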
Other recent optimization work: we developed new
approximation algorithms for simple protein lattice models that include
both hydrophobic attraction and hydrophobic repulsion energy terms. We
also developed a new IP formulation for the HP protein lattice model
that provides a tighter formulation than previous IP formulations. We
applied nonlinear optimization methods to solve a classification
problem for retinal image processing. New publications: 3 journal
articles, 3 refereed conference papers, 1 invited conference paper.
Recent Work (FY98)
We have analyzed a new class of evolutionary algorithms called
evolutionary pattern search that provably converge to stationary points
on continuously differentiable functions. These evolutionary
algorithms (EAs) carefully adapt the step length of the mutation
operator in response to the success of prior mutation steps. Our
analysis offers the first proof of stationary-point convergence for EAs.
We have performed a preliminary empirical analysis which confirms their
robust convergence properties and indicates potential limitations for
their use in global optimization.
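A toy sketch of the step-length control described above, assuming a (1+1)-style scheme with coordinate mutations; the analyzed algorithms differ in details such as the population and the contraction test, so this only illustrates the mechanism.

```python
import random

def evolutionary_pattern_search(f, x0, step=1.0, min_step=1e-6,
                                max_gens=20000, seed=0):
    """Toy (1+1) evolutionary pattern search: mutate along a random
    coordinate direction with the current step length, keep the mutant
    only if it improves, and contract the step after a run of failures
    long enough that all +/- coordinate moves have likely failed."""
    rng = random.Random(seed)
    x = list(x0)
    fx = f(x)
    n = len(x)
    fails = 0
    for _ in range(max_gens):
        i = rng.randrange(n)
        sign = rng.choice((-1.0, 1.0))
        trial = list(x)
        trial[i] += sign * step
        ft = f(trial)
        if ft < fx:
            x, fx = trial, ft
            fails = 0            # success: keep the current step length
        else:
            fails += 1
            if fails >= 2 * n:   # heuristic contraction test
                step *= 0.5
                fails = 0
        if step < min_step:
            break
    return x, fx
```

The key point is that the step length shrinks only in response to repeated mutation failure, which is what ties the EA's behavior to pattern search and underlies the stationary-point convergence argument.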
Last modified: Mon Jan 4 23:29:03 MST 1999