Iterated local search

Figure: Iterated local search kicks a solution out of a local optimum.

Iterated Local Search[1][2] (ILS) is a term in applied mathematics and computer science for a modification of local search or hill climbing methods used to solve discrete optimization problems.

Local search methods can get stuck in a local minimum, where no improving neighbors are available.

A simple modification consists of iterating calls to the local search routine, each time starting from a different initial configuration. This is called repeated local search, and it implies that the knowledge obtained during the previous local search phases is not used. Learning, in contrast, implies that the previous history, for example the memory of previously found local minima, is mined to produce better and better starting points for local search.

The implicit assumption is that of a clustered distribution of local minima: when minimizing a function, determining good local minima is easier when starting from a local minimum with a low value than when starting from a random point. The only caveat is to avoid confinement in a given attraction basin, so the kick that transforms a local minimizer into the starting point for the next run has to be appropriately strong, but not so strong that the method reverts to memory-less random restarts.

Iterated Local Search is based on building a sequence of locally optimal solutions by:

  1. perturbing the current local minimum;
  2. applying local search, starting from the modified solution.

The perturbation strength has to be sufficient to lead the trajectory into a different attraction basin, and hence to a different local optimum.
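The resulting loop can be sketched as follows. This is a generic outline only, assuming that problem-specific local_search, perturb and cost functions are supplied by the user; the names are illustrative and not taken from any particular library.

```python
def iterated_local_search(initial_solution, local_search, perturb, cost, iterations=1000):
    """Generic ILS sketch: perturb the current local optimum, re-run
    local search, and keep the better of the two solutions."""
    best = local_search(initial_solution)      # first local optimum
    for _ in range(iterations):
        candidate = perturb(best)              # kick the solution out of its attraction basin
        candidate = local_search(candidate)    # descend to a (hopefully different) local optimum
        if cost(candidate) < cost(best):       # simple acceptance criterion: keep improvements only
            best = candidate
    return best
```

Other acceptance criteria are possible, for example occasionally accepting a slightly worse local optimum, which trades intensification for diversification.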

Perturbation Algorithm

Finding a good perturbation scheme for ILS is not an easy task. The main aim is not to get stuck at the same local minimum, and to ensure this property the undo operation is forbidden. Despite this, a good perturbation still has to consider many possible strengths, since there exist two kinds of bad perturbations (an illustrative sketch follows the list below):

  1. too weak: the search falls back into the same local minimum
  2. too strong: the kick amounts to a random restart
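As an illustration (not part of the original formulation), a perturbation on a bit-string solution might flip a number of randomly chosen bits, where that number controls the strength: flipping a single bit is usually too weak, while flipping almost all bits amounts to a random restart.

```python
import random

def perturb_bits(solution, strength):
    """Flip `strength` randomly chosen bits of a 0/1 list.
    A small strength risks falling back into the same local minimum,
    while a very large one degenerates into a random restart."""
    flipped = list(solution)
    for i in random.sample(range(len(flipped)), strength):
        flipped[i] = 1 - flipped[i]
    return flipped
```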

Benchmark Perturbation

The procedure consists in fixing a set of candidate values for the perturbation strength that are significant for the instance class, that is, values met with average probability rather than rarely. At runtime it is then possible to inspect the benchmark plot in order to get an average picture of behaviour over the instances processed.
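One possible reading of this scheme, sketched below under the assumption that a small set of candidate strengths and a collection of benchmark instances are available (run_ils and cost are hypothetical problem-specific helpers), is to run ILS with each fixed strength on every instance and record the average final cost, the quantity that would then be inspected in the benchmark plot.

```python
def benchmark_strengths(instances, strengths, run_ils, cost):
    """Run ILS once per (instance, strength) pair with a fixed perturbation
    strength and record the average final cost for each strength."""
    averages = {}
    for k in strengths:
        finals = [cost(run_ils(instance, strength=k)) for instance in instances]
        averages[k] = sum(finals) / len(finals)   # value to plot against k
    return averages
```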

Adaptive Perturbation

Since no a priori function tells which value of the perturbation strength is the most suitable, the best criterion is to make it adaptive. For instance, Battiti and Protasi proposed[3] a reactive search algorithm for MAX-SAT which fits perfectly into the ILS framework. They perform a "directed" perturbation scheme, implemented by a tabu search algorithm, and after each perturbation they apply a standard local descent algorithm. Another way of adapting the perturbation is to deterministically change its strength during the search.
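A simple way to make the strength adaptive, shown below as a sketch rather than Battiti and Protasi's exact reactive scheme, is to increase it whenever an iteration fails to improve the incumbent and to decrease it after an improvement (perturb, local_search and cost are again user-supplied, and k_min/k_max are illustrative bounds).

```python
def adaptive_ils(initial_solution, local_search, perturb, cost,
                 iterations=1000, k_min=1, k_max=20):
    """ILS with a reactively adjusted perturbation strength k:
    increase k when the search stagnates, decrease it after an improvement."""
    best = local_search(initial_solution)
    k = k_min
    for _ in range(iterations):
        candidate = local_search(perturb(best, k))
        if cost(candidate) < cost(best):
            best = candidate
            k = max(k_min, k - 1)   # progress made: gentler kicks are enough
        else:
            k = min(k_max, k + 1)   # no progress: kick harder next time
    return best
```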

Optimizing Perturbation

Another procedure is to perturb by optimizing a sub-part of the problem while keeping the no-undo property active. If this is possible, all solutions generated after the perturbation tend to be very good, and the newly modified parts are optimized as well.
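In spirit this resembles a destroy-and-reoptimize step: a sub-part of the current solution is selected and, instead of being randomized, it is re-optimized while the rest is held fixed. The sketch below assumes hypothetical problem-specific helpers select_part, solve_part and reinsert.

```python
def optimizing_perturbation(solution, select_part, solve_part, reinsert):
    """Perturb by re-optimizing a sub-part of the current solution
    while the remainder stays fixed, then plug the result back in."""
    part, rest = select_part(solution)   # choose the sub-part to re-optimize
    improved = solve_part(part, rest)    # optimize it with the remainder held fixed
    return reinsert(rest, improved)      # a new, already partially optimized starting point
```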

Applications

The method has been applied to several combinatorial optimization problems including the Job Shop Scheduling problems,[4][5] Flow-Shop Problems,[6] Vehicle Routing Problems[7] as well as many others.

References

  1. Lourenço, H.R.; Martin O.; Stützle T. (2010). "Iterated Local Search: Framework and Applications". Handbook of Metaheuristics. Kluwer Academic Publishers, International Series in Operations Research & Management Science. 146. 363–397. doi:10.1007/978-1-4419-1665-5_12. ISBN 978-1-4419-1663-1. https://www.springer.com/business+%26+management/operations+research/book/978-1-4419-1663-1. 
  2. Lourenço, H.R.; Martin O.; Stützle T. (2003). "Iterated Local Search". Handbook of Metaheuristics. Kluwer Academic Publishers, International Series in Operations Research & Management Science 57: 321–353. https://www.springer.com/mathematics/book/978-1-4020-7263-5. 
  3. Battiti, Roberto; Protasi, Marco (1997-01-01). "Reactive search, a history-sensitive heuristic for MAX-SAT". ACM Journal of Experimental Algorithmics 2: 2–es. doi:10.1145/264216.264220. ISSN 1084-6654. 
  4. Lourenço, H.R.; Zwijnenburg M. (1996). "Combining the Large-Step Optimization with Tabu-Search: Application to the Job-Shop Scheduling Problem". Meta-Heuristics. Kluwer Academic Publishers. Springer. 219–236. doi:10.1007/978-1-4613-1361-8_14. ISBN 9780792397007. https://www.springer.com/business+%26+management/operations+research/book/978-0-7923-9700-7. 
  5. Lourenço, H.R. (1995). "Job-Shop Scheduling: computational study of local search and large-step optimization methods". European Journal of Operational Research 83 (2): 347–364. doi:10.1016/0377-2217(95)00012-F. 
  6. Juan, A.A.; Lourenço, H.; Mateo, M.; Luo, R.; Castella, Q. (2013). "Using Iterated Local Search for solving the Flow-Shop Problem: parametrization, randomization and parallelization issues". International Transactions in Operational Research. http://www.wiley.com/WileyCDA/WileyTitle/productCd-ITOR.html. 
  7. Penna, Puca H.V.; Satori Ochi, L.; Subramanian, A. (2013). "An Iterated Local Search heuristic for the Heterogeneous Fleet Vehicle Routing Problem". Journal of Heuristics 19 (2): 201–232. doi:10.1007/s10732-011-9186-y. 
