[AIM] Move Acceptance in Metaheuristics and Parameter Setting Issues

Day 3

Outline

  • Performance Analysis/Comparison of Algorithms
  • Move Acceptance
    • Nature of components
      • Stochastic vs non-stochastic
      • basic vs threshold
    • Nature of parameter setting
      • static move acceptance
      • dynamic move acceptance
      • adaptive move acceptance
  • Parameter tuning vs Parameter control

Performance analysis


Can we say A is much better than B and C?
No. Even though, for instance 1, the average, standard deviation, median and minimum of algorithm A are all the best, a single instance is not enough evidence.

With results on more instances, a somewhat stronger conclusion can be drawn. To be fully convincing, however, you would need to run experiments on every possible instance, which of course nobody actually does.

Different classification of move acceptance


Move acceptance methods can be classified along two dimensions.
By the nature of their components, they are either non-stochastic or stochastic; non-stochastic methods are further divided into basic and threshold acceptance.
By the nature of their parameter setting, they can be classified as static, dynamic or adaptive move acceptance.

Non-stochastic basic move acceptance

Compares the quality of the current solution directly to that of another solution.

static move acceptance

Static move acceptance methods either have no parameters or, if they do, their parameter settings are fixed throughout the search:
  • All Moves
  • Improving moves only
  • Improving and Equal
  • Late acceptance: compares the quality of the candidate solution s' with that of the solution accepted/visited L iterations earlier, s_{t-L}, and accepts the move if and only if f(s') ≤ f(s_{t-L}) (see the sketch below)
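A minimal Python sketch of the late acceptance rule above (minimisation assumed; `f`, `s0` and `neighbour` are hypothetical caller-supplied names for the objective function, initial solution and move operator). Note that the full Late Acceptance Hill-Climbing algorithm also accepts moves that improve on the current solution; the sketch follows the simpler rule stated above:

```python
def late_acceptance(f, s0, neighbour, L=50, max_iters=10_000):
    # Late acceptance (minimisation), following the rule stated above.
    s, fs = s0, f(s0)
    history = [fs] * L            # costs of solutions visited up to L steps ago
    for t in range(max_iters):
        s_new = neighbour(s)
        f_new = f(s_new)
        # Accept iff not worse than the solution visited L iterations earlier
        if f_new <= history[t % L]:
            s, fs = s_new, f_new
        history[t % L] = fs       # record the cost of the current solution
    return s, fs
```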

Dynamic move acceptance

Dynamic move acceptance introduces a (set of) parameter(s) whose value(s) change according to a predefined scheme with respect to time/steps during the search process.
  • For example, the list size of Late Acceptance could be increased linearly over time, between a lower and an upper limit (see the sketch below)
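A minimal sketch of such a predefined scheme; the growth rate and the limits are illustrative assumptions:

```python
def list_size(t, L_min=10, L_max=200, rate=0.01):
    # Linearly growing Late Acceptance list size as a function of the
    # iteration count t, clipped to the interval [L_min, L_max].
    return int(min(L_max, L_min + rate * t))
```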

Adaptive move acceptance

Adaptive move acceptance uses an adaptation mechanism that receives feedback during the search process to control the setting(s) of the parameter(s).
  • For example, use a fuzzy control system to decide on the list length for Late Acceptance, adaptively changing the length to the value suggested by the fuzzy system whenever appropriate (a simplified sketch follows)
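A full fuzzy controller is beyond a short example, so the sketch below uses a deliberately simpler stand-in adaptive rule (not the fuzzy method mentioned above): it grows or shrinks the list length based on the recent acceptance rate. The target rate and the step factors are illustrative assumptions:

```python
def adapt_list_size(L, acceptance_rate, target=0.3, L_min=5, L_max=500):
    # Feedback rule: too few acceptances -> grow L (diversify the search),
    # too many acceptances -> shrink L (intensify the search).
    if acceptance_rate < target:
        return min(L_max, int(L * 1.1) + 1)
    return max(L_min, int(L * 0.9))
```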

Non-stochastic threshold move acceptance

Static – Record-to-Record Travel (RRT)


Dynamic – Great Deluge or Flex Deluge


Adaptive – Extended Great Deluge or Modified Great Deluge
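For concreteness, here is a sketch of the two classic threshold rules (minimisation and the parameter names are assumptions): RRT accepts any candidate within a fixed deviation of the best cost seen so far, while Great Deluge accepts any candidate at or below a "water level" that is lowered over time.

```python
def rrt_accept(f_new, f_best, deviation):
    # Record-to-Record Travel (static threshold): accept any candidate whose
    # cost is within a fixed deviation of the best cost recorded so far.
    return f_new <= f_best + deviation

def great_deluge_accept(f_new, level):
    # Great Deluge (dynamic threshold): accept any candidate at or below the
    # current water level; the caller lowers the level each step,
    # e.g. level -= decay_rate (for minimisation).
    return f_new <= level
```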


Stochastic move acceptance


A random number r is drawn and compared with an acceptance probability P. If r is smaller than P, the candidate solution is accepted regardless of whether its fitness value is worse than the current (or best) fitness value.

static

Naive Acceptance: P is fixed, e.g. P = 1.0 if the move is improving, else P = 0.5 (see the sketch below)
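A minimal sketch of naive acceptance with the fixed probabilities above (minimisation assumed; the function name is illustrative):

```python
import random

def naive_accept(f_new, f_current, p_worse=0.5):
    # Always accept improving or equal moves (P = 1.0);
    # accept worsening moves with a fixed probability P = p_worse.
    if f_new <= f_current:
        return True
    return random.random() < p_worse
```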

Dynamic

Simulated Annealing: P changes over time and depends on the difference in quality between the candidate and the current solution

Adaptive

Simulated Annealing with reheating: P is modified from time to time, causing a partial restart by increasing the probability of accepting non-improving solutions

Simulated Annealing

  • A stochastic local search algorithm that is easy to implement
  • Achieves good performance given sufficient running time
  • Requires a good parameter setting for improved performance

Accepting moves using SA

  • Improving moves are accepted
  • Worsening moves are accepted using the Metropolis criterion at a given temperature T
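A minimal sketch of the Metropolis criterion for minimisation; the function name is illustrative:

```python
import math
import random

def metropolis_accept(f_new, f_current, T):
    # Metropolis criterion (minimisation): improving moves are always
    # accepted; a worsening move with delta = f_new - f_current > 0 is
    # accepted with probability exp(-delta / T).
    delta = f_new - f_current
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / T)
```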

Cooling/Annealing

Temperature T is slowly decreased:
  • When T is high, many inferior moves are accepted
  • When T is low, inferior moves are nearly always rejected
  • As the temperature T decreases, the probability of accepting worsening moves decreases.
  • Why? Because the Metropolis probability exp(-Δ/T) tends to 0 for any worsening move (Δ > 0) as T approaches 0.

Cooling strategy – Geometric cooling

The temperature is multiplied by a constant factor at each step: T_{i+1} = α × T_i

α is typically in the interval [0.9, 0.99]
Other cooling schedules:

Reheating: if the search gets stuck for a while, increase the current temperature at a certain rate (see the sketch below).
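A minimal sketch combining geometric cooling with a simple reheating step; the `stuck` signal and the reheat factor are illustrative assumptions:

```python
def update_temperature(T, alpha=0.95, stuck=False, reheat_factor=2.0, T_min=1e-8):
    # Geometric cooling T <- alpha * T, with an optional reheating step.
    # `stuck` is assumed to be raised by the caller, e.g. after a number
    # of iterations without improvement; the reheat factor is illustrative.
    if stuck:
        return T * reheat_factor   # partial restart: worse moves accepted again
    return max(T_min, alpha * T)
```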

Metaheuristics and Setting of parameters

Almost all metaheuristics introduce a set of parameters which either need to be tuned or controlled.
  • Parameter tuning: finding the best initial settings for a set of parameters before the search process starts (offline), e.g. fixing the mutation strength in ILS or the tabu tenure in Tabu Search
  • Parameter control: managing the settings of parameters during the search process (online)

Parameter tuning methods

  • No tuning (random decision)
  • Sequential tuning: fix parameter values successively
  • Design of experiments
  • Meta-optimisation: use a metaheuristic to obtain “optimal” parameter settings
  • There are software tools for automated parameter tuning of metaheuristics (a simple sketch follows)
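As a minimal illustration of automated tuning, the sketch below performs random-search tuning: it samples configurations from assumed parameter ranges and keeps the best-scoring one. `run_algorithm` and `param_space` are hypothetical placeholders; real tools (e.g. irace) use far more sophisticated strategies:

```python
import random

def random_search_tuning(run_algorithm, param_space, budget=30):
    # Sample `budget` random configurations and keep the best one.
    # `run_algorithm(params)` is assumed to return a score to minimise,
    # e.g. the mean cost over a set of benchmark instances.
    best_params, best_score = None, float("inf")
    for _ in range(budget):
        params = {name: random.uniform(lo, hi)
                  for name, (lo, hi) in param_space.items()}
        score = run_algorithm(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score
```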

Design of experiments


Summary

It is not trivial to predict which metaheuristic optimisation/search algorithm will perform better than another on a given problem domain.
Move acceptance methods, as a part of local search metaheuristics, can be used for escaping from local optima by enabling the acceptance of non-improving solutions.
Many (meta)heuristic optimisation/search algorithms come with parameters which often require an initial setting (tuning)
  • Parameter tuning is possible; however, it is time-consuming
  • There is a range of different techniques, varying from manual/semi-automated experimental design methods to automated tuning
  • Parameter control, as an alternative to parameter tuning, changes parameter values during the run of the algorithm
  • There is no guidance indicating which method is the best; however, many studies show that parameter tuning/control often improves the performance of an algorithm compared to the variant where it is not used
