-
The first figure shows a typical objective function. The second shows the
information available to the optimization algorithm.
-
The discovery of the local maxima at A and C results in a modification of
the objective function such that the global maximum at B is no longer visible.
-
The idea of iterative deepening: The algorithm has a reasonable solution
whenever the user wants to terminate. When more time is available, it keeps
exploring the space in ever-increasing detail.
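To make the anytime property concrete, here is a minimal Python sketch (my own illustration, not the thesis's code): the loop always holds a usable best solution, and extra time only refines it. The names f, sample, and budget are hypothetical stand-ins, and plain random sampling stands in for the thesis's more systematic exploration.

    import random

    def anytime_optimize(f, sample, budget=10_000):
        """Anytime search: the best point so far is always available.

        f, sample, and budget are hypothetical stand-ins: f is the
        objective to maximize, sample() draws a candidate, and budget
        plays the role of 'until the user decides to stop'.
        """
        best_x, best_val = None, float("-inf")
        for _ in range(budget):
            x = sample()
            val = f(x)
            if val > best_val:
                best_x, best_val = x, val
            # best_x can be returned at any point; more time = finer search
        return best_x, best_val

    # Example: maximize -(x - 3)^2 over random samples in [0, 10]
    print(anytime_optimize(lambda x: -(x - 3) ** 2,
                           lambda: random.uniform(0, 10)))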
-
Voronoi diagrams: Each region consists of the points that are closer to the
site inside that region than to any other site.
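A brute-force sketch of that membership test, with hypothetical site coordinates: each grid point is labeled by its nearest site, which is exactly the Voronoi region it belongs to.

    import numpy as np

    # Hypothetical sites; each grid point is assigned to its nearest site,
    # which is the Voronoi region membership the figure depicts.
    sites = np.array([[0.2, 0.3], [0.7, 0.8], [0.5, 0.1]])

    def voronoi_region(point, sites):
        """Index of the site whose Voronoi region contains `point`."""
        return int(np.argmin(np.linalg.norm(sites - point, axis=1)))

    xs, ys = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
    labels = np.array([voronoi_region(p, sites) for p in grid])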
-
Execution of the distant point algorithm in one dimension. First the largest
interval between the existing points is found; then a random point within
that interval is returned.
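A sketch of this one-dimensional case in Python; treating lo and hi as implicit boundary points is my assumption.

    import random

    def distant_point_1d(points, lo, hi):
        """Return a random point inside the largest gap between points.

        Sketch of the one-dimensional case the figure shows; using lo
        and hi as implicit endpoints is an assumption.
        """
        pts = sorted([lo] + list(points) + [hi])
        gaps = [(pts[i + 1] - pts[i], i) for i in range(len(pts) - 1)]
        _, i = max(gaps)                            # widest interval
        return random.uniform(pts[i], pts[i + 1])   # random point inside it

    print(distant_point_1d([0.2, 0.9], 0.0, 1.0))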
-
The comparison of spheres and half-spheres in different numbers of dimensions.
-
Starting with large steps is helpful for ignoring small local maxima at the
initial stage of the search.
-
The probability of success of a random step in two dimensions as a function of
the step size and the distance to the peak. Dashed circles are contour lines;
the solid circle shows the potential points the step can take.
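Under the idealized setting the figure suggests (a spherically symmetric objective, current point at distance d from the peak, step of length r in a uniformly random direction), the two-dimensional success probability works out to arccos(r/(2d))/pi; this derivation and the Monte Carlo check below are my reconstruction, not taken from the thesis.

    import math, random

    def p_success(r, d):
        """P(a step of length r from distance d lands closer to the peak).

        Assumes a spherically symmetric objective in 2-D: success iff the
        step angle theta to the peak direction satisfies
        cos(theta) > r / (2*d).
        """
        if r >= 2 * d:
            return 0.0   # the whole step circle lies outside the contour
        return math.acos(r / (2 * d)) / math.pi

    def mc_check(r, d, n=100_000):
        """Monte Carlo check: peak at the origin, current point at (d, 0)."""
        hits = 0
        for _ in range(n):
            t = random.uniform(0.0, 2.0 * math.pi)
            x, y = d + r * math.cos(t), r * math.sin(t)
            hits += x * x + y * y < d * d
        return hits / n

    print(p_success(0.5, 1.0), mc_check(0.5, 1.0))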
-
How shrinking helps on ridges. The heavy arcs show the points that are better
than the current point. When the radius is smaller, as shown in the blow-up on
the right, this arc covers a larger fraction of the circle.
-
The ridge problem: The concentric ellipses are the contour map of an objective
function. A hill-climbing algorithm restricted to moves in a single dimension
has to go through many zig-zags to reach the optimum.
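The zig-zag behavior is easy to reproduce with coordinate-wise hill climbing on a correlated quadratic; the test function and step rule below are my own illustration, not the thesis's example.

    def f(x, y):
        """A tilted, elongated quadratic: its contours are the ellipses."""
        return -((x + y) ** 2 + 0.05 * (x - y) ** 2)

    def coordinate_hill_climb(x, y, step=0.1, iters=200):
        """Greedy hill climbing restricted to one axis per move."""
        path = [(x, y)]
        for i in range(iters):
            for s in (step, -step):           # try both signs on this axis
                nx, ny = (x + s, y) if i % 2 == 0 else (x, y + s)
                if f(nx, ny) > f(x, y):
                    x, y = nx, ny
                    break
            path.append((x, y))
        return path

    path = coordinate_hill_climb(1.0, -0.6)
    print(len(path), path[-1])  # many small alternating moves toward (0, 0)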
-
The number of combinations of main directions increases exponentially with the
number of dimensions.
-
Rosenbrock's saddle. The function has a ridge along y = x^2 that rises
slowly as it is traversed from (-1, 1) to (1, 1). The arrows show the steps
taken by the improved Local-Optimize. Note how the steps shrink in order to
discover a new direction, and grow to speed up once the direction is
established.
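For reference, the classic Rosenbrock function is 100(y - x^2)^2 + (1 - x)^2, whose curved valley follows y = x^2 and whose optimum is at (1, 1); negating it to match the maximization framing of these captions is my assumption.

    def rosenbrock_saddle(x, y):
        """Negated Rosenbrock: ridge along y = x**2, maximum 0 at (1, 1).

        The standard (minimization) form is 100*(y - x**2)**2 + (1 - x)**2;
        negating it matches the maximization framing used in these captions.
        """
        return -(100.0 * (y - x * x) ** 2 + (1.0 - x) ** 2)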
-
An illustration of how Procedure 4-1 uses the vectors u and v. In (A) the
step vector v keeps growing in the same direction as long as better points
are found; u accumulates the moves made in this direction. Once v fails,
the algorithm tries new directions. In (B) a new direction has been found.
The combination u+v is tried and found to give a better point; in this case
u starts accumulating moves in this new direction. In (C) u+v did not give
a better point, so we start moving in the last direction we know worked, in
this case v.
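A minimal sketch of that u/v bookkeeping as the caption describes it; this is a paraphrase in code, not the thesis's actual Procedure 4-1 (in particular the random choice of new directions and the grow/shrink factors are placeholders).

    import numpy as np

    def uv_step(f, x, u, v, grow=2.0, shrink=0.5):
        """One iteration of the u/v scheme described in the caption.

        f is maximized; u accumulates successful moves, v is the current
        step. The new-direction choice and the grow/shrink factors are
        placeholders, not the thesis's actual procedure.
        """
        if f(x + v) > f(x):            # (A) v works: move, accumulate, grow
            return x + v, u + v, v * grow
        d = np.random.randn(len(x))    # v failed: probe a new direction
        v = d / np.linalg.norm(d) * shrink * np.linalg.norm(v)
        if f(x + v) <= f(x):
            return x, u, v             # nothing found yet; keep probing
        if f(x + u + v) > f(x + v):    # (B) u+v is even better: u absorbs v
            return x + u + v, u + v, v
        return x + v, v, v             # (C) u+v failed: move by v alone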
-
Comparison of the two versions of Local-Optimize from chapters 3 and 4 on
Rosenbrock's saddle. Note that both axes are logarithmic. The old version
converges in 8782 iterations; the new version finds a better value in 197
iterations.
-
De Jong's functions. The graphs are rotated and modified by order-preserving
transforms for clarity.
-
Comparison of five algorithms on De Jong's functions. The vertical bars
represent the number of function evaluations to find the global optimum. The
scale is logarithmic and the numbers are averaged over ten runs. No bar was
drawn when an algorithm could not find the global optimum consistently.
-
The coordinate registration problem in image-guided surgery. The figures show
two examples of the laser scan data (horizontal lines) overlaid on the MRI data
(the skulls). The problem is to find the transformation that will align these
two data sets.
-
The comparison of Procedure 2-1 with Powell's method on the coordinate
registration problem. The y axis shows the root-mean-squared distance in
millimeters; the x axis is the number of function evaluations. The results
are averaged over ten runs.
-
The Cycloheptadecane molecule has 51 atoms. The three coordinates for each
atom are the input to the objective function. Thus the problem has 153
dimensions.
-
Comparison of Procedure 2-1 with Powell's method on the molecule energy
minimization problem. The first example starts from a near optimal
configuration. The second example starts with a random configuration. The
lower lines are the traces of Procedure 2-1.
-
The problem of non-separability in refraction tomography. The variation in
just the value at node A affects a part of the domain for one source and the
remaining part for the other.
-
Two possible uses of Local-Optimize. In the first example it is used to
refine the search after the genetic algorithm got stuck. In the second
example it is started after the genetic algorithm found a good region in a
few hundred iterations.
-
The performance of my algorithm compared to simulated annealing on the
traveling salesman problem, averaged over ten runs. The x axis shows the
number of operations tried and the y axis shows the distance of the best
discovered path.