
How to use the local maxima

There is no way to avoid local maxima (after all, the global maximum we are looking for is among them). Thus the optimization algorithm should make use of the information gained from the ones it discovers. The main objective of adding diversity to the search, or of modifying the evaluation function, is to mark the discovered local maxima and avoid them. Doing this directly, without modifying the objective function, gives better results.

There are several ways to use the positions of local maxima. The algorithm can keep them for use in future crossovers. This gives good results on problems where combining several good solutions leads to a better one; however, as the number of local maxima grows, it results in a large number of function evaluations. Alternatively, the algorithm can use the locations of local maxima only to avoid getting trapped in them again. A simple approach is to take a random step when stuck at a local maximum, much as in simulated annealing, and hope that the search will move into a different region of the space instead of converging back to the same point. This has the advantage of staying near good solutions, but it is not clear how far to jump when stuck, and it ignores all but the most recent local maximum.

The method I will describe in this section uses the locations of all local maxima in order to keep away from them. This is achieved by recording the local maxima as they are discovered and directing the search toward unexplored regions of the space. This suggests Procedure 2-1.
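The procedure listing itself did not survive the conversion of this document. The following is only a reconstruction, consistent with the surrounding text (the subroutines Local-Optimize and Distant-Point, the set X of recorded maxima, and the later reference to a non-terminating loop between lines 2 and 4):

    Procedure 2-1 (reconstruction)
    1.  X := {}
    2.  loop
    3.      x := Local-Optimize(Distant-Point(X))
    4.      X := X U {x}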

  

  
Figure: The idea of iterative deepening. The algorithm has a reasonable solution ready whenever the user wants to terminate; when there is more time available, it keeps exploring the space in ever increasing detail.

Local-Optimize can be any popular optimization procedure that converges to local maxima; one such procedure is the subject of the next few chapters. Distant-Point selects a new point in an unexplored region of the space, using the local maxima discovered so far, which are stored in X. How this can be done efficiently is the topic of the next section.
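To make the control flow concrete, here is a minimal Python sketch of this scheme. Everything in it is an illustrative assumption rather than the thesis's method: the stochastic hill-climber standing in for Local-Optimize, the brute-force farthest-candidate rule standing in for Distant-Point (the next section is about doing this efficiently), the search domain [-1, 1]^dim, and all parameter values.

    import random

    def local_optimize(f, x, step=0.1, iters=1000):
        # Stochastic hill-climber standing in for Local-Optimize:
        # keep a random perturbation whenever it improves f.
        for _ in range(iters):
            y = [xi + random.gauss(0, step) for xi in x]
            if f(y) > f(x):
                x = y
        return x

    def distant_point(maxima, dim, candidates=100):
        # Brute-force stand-in for Distant-Point: among random
        # candidates, return the one farthest from its nearest
        # recorded local maximum.
        cands = [[random.uniform(-1, 1) for _ in range(dim)]
                 for _ in range(candidates)]
        if not maxima:
            return cands[0]
        def nearest(p):
            return min(sum((a - b) ** 2 for a, b in zip(p, m))
                       for m in maxima)
        return max(cands, key=nearest)

    def procedure_2_1(f, dim, restarts=20):
        # Restart local search from unexplored regions, recording
        # every local maximum in X.  The original loops forever;
        # this version is bounded so the example terminates.
        X = []
        for _ in range(restarts):
            x = local_optimize(f, distant_point(X, dim))
            X.append(x)
        return max(X, key=f), X

For example, procedure_2_1(lambda p: -(p[0]**2 + p[1]**2), dim=2) should return a point near the origin, the unique maximum of that function.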

Note that this algorithm does not modify the evaluation function in any way; it simply restarts the search in unexplored regions of the space. The figure above shows how this results in a uniform exploration of the search space.

There is one more little mystery in the algorithm: the loop between lines 2 and 4 never terminates. This is reminiscent of iterative deepening in the computer chess literature. Contemporary chess programs are designed to keep searching until the time is up, so they must always have a reasonable move ready based on what they have discovered so far. Global optimization, like chess, is a problem of searching a very large space, so one can never be sure of the answer until the whole space is exhausted, which is practically impossible in many cases. A good search algorithm should save the reasonable solutions it has found and keep searching for better ones as time permits.
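In code, this anytime behavior amounts to never leaving the loop while always keeping the best solution found so far available to the caller. A sketch reusing the illustrative stand-ins above, again only an assumption about how one might package it:

    def anytime_search(f, dim):
        # Never terminates: yields the best solution found so far
        # after every restart, so the caller can stop whenever
        # time is up and still have a reasonable answer.
        X, best = [], None
        while True:
            x = local_optimize(f, distant_point(X, dim))
            X.append(x)
            if best is None or f(x) > f(best):
                best = x
            yield best

A caller with a time budget would draw values from this generator until the deadline and use the last one yielded.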






Deniz Yuret
Tue Apr 1 21:38:29 EST 1997