
How does nature do it?

There are various other approaches to the problem of local maxima inspired by natural phenomena. In this section two of them are discussed: Hillis' co-evolving parasites [Hillis, 1990], inspired by biology, and the simulated annealing algorithm [Kirkpatrick et al., 1983], inspired by physics.

Hillis used simulated evolution to study the problem of finding optimum sorting networks. To tackle the local maxima problem he proposed co-evolving parasites: in his implementation, evolution is not limited to the sorting networks; the sorting problems themselves also evolve to become more difficult. This may be part of how nature manages to keep its diversity. The problem facing each species is never constant: the environment keeps changing and, more importantly, rival species keep adapting. When a predator develops a better weapon, its prey evolves better defense mechanisms. As a result, nature presents an adaptive sequence of problems and solutions. The domain Hillis chose is particularly suitable for this approach because it provides many problem instances. However, his method is not applicable in general. For example, if we want to optimize the parameters for the design of an aircraft, there is nothing corresponding to multiple problem instances that could evolve.
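Hillis' actual system evolved sorting networks; the following is only a toy sketch of the co-evolutionary loop itself, using a hypothetical bit-matching task in place of sorting. Hosts are rewarded for solving the current tests, while parasites are rewarded for posing tests the current hosts fail, so the problem population keeps moving as the solution population improves. All names and parameters here are illustrative assumptions, not Hillis' implementation.

```python
import random

random.seed(0)
N, POP, GENS = 20, 30, 100   # genome length, population size, generations

def mutate(g):
    # flip one random bit of a genome
    i = random.randrange(N)
    return g[:i] + [1 - g[i]] + g[i + 1:]

def score(host, parasite):
    # fraction of the parasite's "test" the host handles correctly
    return sum(h == p for h, p in zip(host, parasite)) / N

def step(pop, fitness):
    # truncation selection: keep the better half, refill with mutants
    ranked = sorted(pop, key=fitness, reverse=True)
    survivors = ranked[:POP // 2]
    return survivors + [mutate(g) for g in survivors]

hosts = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
parasites = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]

for gen in range(GENS):
    # hosts chase the current tests; parasites chase the hosts' weaknesses
    hosts = step(hosts, lambda h: sum(score(h, p) for p in parasites))
    parasites = step(parasites, lambda p: -sum(score(h, p) for h in hosts))
```

Because the parasites' fitness is the negation of the hosts' score, the landscape each population climbs is reshaped every generation by the other, which is the mechanism that discourages permanent convergence to a single local maximum.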

Simulated annealing is one of the popular methods for optimization in spaces with many local maxima. It allows non-optimal moves in a probabilistic fashion. Many other optimization algorithms belong to the class of greedy algorithms [Cormen et al., 1990]: they always make the choice that looks best at the moment, hold on to the best point found so far, and never step down while climbing a hill. This is exactly why greedy algorithms get stuck at local maxima. Simulated annealing, on the other hand, by not taking the greedy approach, is able to jump out of local maxima. This flexibility, however, costs efficiency: probabilistically allowing non-optimal moves does not seem like the best use of the information gained from the search.
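The contrast between the two acceptance rules can be made concrete. In the sketch below (the objective function f and the step size are illustrative assumptions), the greedy rule discards any candidate that decreases f, while the standard Metropolis rule of simulated annealing accepts a worse candidate with probability exp(delta / T), where delta is the (negative) change in f and T is the current temperature.

```python
import math
import random

def f(x):
    # hypothetical objective with several local maxima
    return math.cos(3 * x) - 0.1 * x * x

def greedy_step(x, step=0.5):
    # greedy: keep the candidate only if it improves f; never step down
    cand = x + random.uniform(-step, step)
    return cand if f(cand) > f(x) else x

def metropolis_step(x, T, step=0.5):
    # simulated annealing: a worse candidate is still accepted with
    # probability exp(delta / T), so the search can leave a local maximum
    cand = x + random.uniform(-step, step)
    delta = f(cand) - f(x)
    if delta > 0 or random.random() < math.exp(delta / T):
        return cand
    return x
```

Iterating greedy_step can only converge to the nearest local maximum, because f never decreases along its trajectory; iterating metropolis_step with T > 0 occasionally moves downhill and can therefore cross the valleys between maxima.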

There is another interesting aspect of simulated annealing. It uses a temperature schedule that determines a gradual change in the acceptance probability. Optimization starts in a very ``hot'' state, in which there is a high probability of making non-optimal moves, and slowly ``cools down'' as the optimization proceeds, turning into a hill climbing algorithm in the limit. The result is a gradual shift from a coarse-grained search in the beginning to a fine-grained search at the end. The idea of moving from a coarse-grained to a fine-grained search is the key to the following chapter. There, however, the progression is achieved by decreasing the size of the steps taken in the search space, rather than by allowing steps that take the evaluation function down, as simulated annealing does.
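A complete annealing loop with a geometric cooling schedule might look as follows. This is a minimal sketch under stated assumptions: the objective f, the starting temperature, the cooling factor, and the step size are all illustrative choices, not prescribed values.

```python
import math
import random

random.seed(1)

def f(x):
    # hypothetical multimodal objective; its global maximum is at x = 0
    return math.cos(3 * x) - 0.1 * x * x

def simulated_annealing(x, T0=2.0, cooling=0.995, steps=2000, size=0.5):
    T = T0
    for _ in range(steps):
        cand = x + random.uniform(-size, size)
        delta = f(cand) - f(x)
        # hot: frequent downhill moves (coarse-grained search);
        # cold: nearly pure hill climbing (fine-grained search)
        if delta > 0 or random.random() < math.exp(delta / T):
            x = cand
        T *= cooling          # geometric cooling schedule
    return x
```

Early on, with T near T0, exp(delta / T) is close to 1 even for sizeable losses, so the search roams across basins; as T shrinks toward zero the same rule degenerates into greedy hill climbing, which is the coarse-to-fine progression described above.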






Deniz Yuret
Tue Apr 1 21:38:29 EST 1997