In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, or select a heuristic (partial search algorithm) that may provide a sufficiently good solution to an optimization problem, especially with incomplete or imperfect information or limited computation capacity.^{[1]}^{[2]} Metaheuristics sample a subset of a solution space that is otherwise too large to be enumerated completely. Metaheuristics may make few assumptions about the optimization problem being solved, and so they may be usable for a variety of problems.^{[3]}
Compared to optimization algorithms and iterative methods, metaheuristics do not guarantee that a globally optimal solution can be found on some class of problems.^{[3]} Many metaheuristics implement some form of stochastic optimization, so that the solution found is dependent on the set of random variables generated.^{[2]} In combinatorial optimization, by searching over a large set of feasible solutions, metaheuristics can often find good solutions with less computational effort than optimization algorithms, iterative methods, or simple heuristics.^{[3]} As such, they are useful approaches for optimization problems.^{[2]} Several books and survey papers have been published on the subject.^{[2]}^{[3]}^{[4]}^{[5]}^{[6]}
Most literature on metaheuristics is experimental in nature, describing empirical results based on computer experiments with the algorithms. But some formal theoretical results are also available, often on convergence and the possibility of finding the global optimum.^{[3]} Many metaheuristic methods have been published with claims of novelty and practical efficacy. While the field also features high-quality research, many of the publications have been of poor quality; flaws include vagueness, lack of conceptual elaboration, poor experiments, and ignorance of previous literature.^{[7]}
Properties
These are properties that characterize most metaheuristics:^{[3]}
 Metaheuristics are strategies that guide the search process.
 The goal is to efficiently explore the search space in order to find near-optimal solutions.
 Techniques which constitute metaheuristic algorithms range from simple local search procedures to complex learning processes.
 Metaheuristic algorithms are approximate and usually nondeterministic.
 Metaheuristics are not problem-specific.
Classification
There are a wide variety of metaheuristics^{[2]} and a number of properties with respect to which to classify them.^{[3]}
Local search vs. global search
One approach is to characterize the type of search strategy.^{[3]} One type of search strategy is an improvement on simple local search algorithms. A well-known local search algorithm is the hill climbing method, which is used to find local optima. However, hill climbing does not guarantee finding globally optimal solutions.
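As a minimal sketch of the idea, hill climbing can be written as follows (the objective and neighborhood are toy examples chosen for illustration, not part of any standard library):

```python
def hill_climb(objective, neighbors, start, max_iters=1000):
    """Greedy local search: move to a better neighbor until none exists.

    Stops at a local optimum, which need not be the global optimum.
    """
    current = start
    for _ in range(max_iters):
        better = [n for n in neighbors(current) if objective(n) > objective(current)]
        if not better:          # no improving neighbor: local optimum reached
            return current
        current = max(better, key=objective)   # steepest-ascent variant
    return current

# Toy problem: maximize -(x - 3)^2 over the integers, stepping by +/- 1.
best = hill_climb(lambda x: -(x - 3) ** 2,
                  lambda x: [x - 1, x + 1],
                  start=10)
# best == 3 here, but only because this toy objective is unimodal
```

On a multimodal objective the same procedure would stop at whichever local optimum is nearest the starting point, which is exactly the weakness the metaheuristics below address.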
Many metaheuristic ideas have been proposed to improve on local search heuristics in order to find better solutions. Such metaheuristics include simulated annealing, tabu search, iterated local search, variable neighborhood search, and GRASP.^{[3]} These metaheuristics can be classified as local search-based or as global search metaheuristics.
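For example, a bare-bones simulated annealing loop (toy objective, neighborhood, and cooling schedule, all hypothetical choices for illustration) occasionally accepts worsening moves so the search can escape local optima:

```python
import math
import random

def simulated_annealing(objective, neighbor, start,
                        temp=5.0, cooling=0.99, iters=3000):
    """Accept a worse candidate with probability exp(-delta / temp);
    as temp cools, the search behaves more and more like hill climbing."""
    rng = random.Random(0)          # fixed seed for a reproducible toy run
    current = best = start
    for _ in range(iters):
        cand = neighbor(current, rng)
        delta = objective(current) - objective(cand)   # > 0 if cand is worse
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            current = cand
        if objective(current) > objective(best):
            best = current
        temp *= cooling
    return best

# Multimodal toy objective with several local maxima.
f = lambda x: math.sin(x) - 0.1 * (x - 5) ** 2
result = simulated_annealing(f, lambda x, rng: x + rng.uniform(-1, 1), start=0.0)
```

Because `best` is only ever replaced by a strictly better point, the returned solution is never worse than the starting point, while the acceptance rule lets the walk cross valleys that would trap plain hill climbing.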
Other global search metaheuristics that are not local search-based are usually population-based metaheuristics. Such metaheuristics include ant colony optimization, evolutionary computation, particle swarm optimization, and genetic algorithms.^{[3]}
Single-solution vs. population-based
Another classification dimension is single-solution vs. population-based searches.^{[3]}^{[6]} Single-solution approaches focus on modifying and improving a single candidate solution; single-solution metaheuristics include simulated annealing, iterated local search, variable neighborhood search, and guided local search.^{[6]} Population-based approaches maintain and improve multiple candidate solutions, often using population characteristics to guide the search; population-based metaheuristics include evolutionary computation, genetic algorithms, and particle swarm optimization.^{[6]} Another category of metaheuristics is swarm intelligence, which is the collective behavior of decentralized, self-organized agents in a population or swarm. Ant colony optimization,^{[9]} particle swarm optimization,^{[6]} social cognitive optimization, Harris hawks optimization,^{[10]} penguins search optimization algorithm, and artificial bee colony^{[11]} algorithms are examples of this category.
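The population-based idea can be illustrated with a minimal genetic algorithm on bit strings (a hypothetical sketch, with the classic OneMax toy problem as fitness; all parameter values are arbitrary illustrative choices):

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=60,
                      mutation_rate=0.02, seed=1):
    """Minimal population-based search: tournament selection,
    one-point crossover, and bit-flip mutation on bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # Tournament selection of size 2, run twice, picks two parents.
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            cut = rng.randrange(1, length)             # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation: each bit flips with small probability.
            nxt.append([b ^ (rng.random() < mutation_rate) for b in child])
        pop = nxt
    return max(pop, key=fitness)

# OneMax: maximize the number of ones in the string.
best = genetic_algorithm(sum)
```

Unlike the single-solution methods above, the search state here is the whole population, and selection pressure plus recombination, not a single trajectory, drives the improvement.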
Hybridization and memetic algorithms
A hybrid metaheuristic is one which combines a metaheuristic with other optimization approaches, such as algorithms from mathematical programming, constraint programming, and machine learning. Both components of a hybrid metaheuristic may run concurrently and exchange information to guide the search.
On the other hand, memetic algorithms^{[12]} represent the synergy of evolutionary or any population-based approach with separate individual learning or local improvement procedures for problem search. An example of a memetic algorithm is the use of a local search algorithm instead of a basic mutation operator in evolutionary algorithms.
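That combination can be sketched as follows (hypothetical helper names, with OneMax as the toy fitness): crossover produces an offspring, and a local search pass takes the place of the usual mutation step.

```python
import random

def local_improve(bits, fitness):
    """First-improvement bit-flip local search: the 'individual learning' step."""
    improved = True
    while improved:
        improved = False
        for i in range(len(bits)):
            flipped = bits[:i] + [1 - bits[i]] + bits[i + 1:]
            if fitness(flipped) > fitness(bits):
                bits, improved = flipped, True
    return bits

def memetic_offspring(p1, p2, fitness, rng):
    """One-point crossover followed by local improvement instead of mutation."""
    cut = rng.randrange(1, len(p1))
    return local_improve(p1[:cut] + p2[cut:], fitness)

rng = random.Random(2)
p1 = [rng.randint(0, 1) for _ in range(12)]
p2 = [rng.randint(0, 1) for _ in range(12)]
child = memetic_offspring(p1, p2, sum, rng)   # OneMax fitness
# On OneMax the local search drives every offspring to the all-ones string
```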
Parallel metaheuristics
A parallel metaheuristic is one which uses the techniques of parallel programming to run multiple metaheuristic searches in parallel; these may range from simple distributed schemes to concurrent search runs that interact to improve the overall solution.
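The simplest such scheme, independent parallel runs whose best result is kept, can be sketched with Python's standard thread pool (toy objective; the function names and parameter values are illustrative assumptions):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def random_search(objective, seed, iters=5000):
    """One independent search run: pure random sampling over [-10, 10]."""
    rng = random.Random(seed)
    best_x = rng.uniform(-10, 10)
    for _ in range(iters):
        x = rng.uniform(-10, 10)
        if objective(x) > objective(best_x):
            best_x = x
    return best_x

f = lambda x: -(x - 2.0) ** 2   # toy objective with its maximum at x = 2

# Four independent runs in parallel; keep the best solution found overall.
with ThreadPoolExecutor(max_workers=4) as ex:
    results = list(ex.map(lambda seed: random_search(f, seed), range(4)))
best = max(results, key=f)
```

Cooperative schemes go further than this embarrassingly parallel setup by letting the concurrent runs exchange promising solutions during the search.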
Nature-inspired metaheuristics
A very active area of research is the design of nature-inspired metaheuristics. Many recent metaheuristics, especially evolutionary computation-based algorithms, are inspired by natural systems. Nature acts as a source of concepts, mechanisms and principles for designing artificial computing systems to deal with complex computational problems.^{[13]} Such metaheuristics include simulated annealing, evolutionary algorithms, ant colony optimization and particle swarm optimization. A large number of more recent metaphor-inspired metaheuristics have started to attract criticism in the research community for hiding their lack of novelty behind an elaborate metaphor.
Applications
Metaheuristics are used for combinatorial optimization, in which an optimal solution is sought over a discrete search space. An example problem is the travelling salesman problem, where the search space of candidate solutions grows faster than exponentially as the size of the problem increases, which makes an exhaustive search for the optimal solution infeasible. Additionally, multidimensional combinatorial problems, including most design problems in engineering^{[14]}^{[15]}^{[16]} such as form-finding and behavior-finding, suffer from the curse of dimensionality, which also makes them infeasible for exhaustive search or analytical methods. Metaheuristics are also widely used for job-shop scheduling and job selection problems.^{[17]} Popular metaheuristics for combinatorial problems include simulated annealing by Kirkpatrick et al.,^{[18]} genetic algorithms by Holland et al.,^{[19]} scatter search^{[20]} and tabu search^{[21]} by Glover. A literature review on metaheuristic optimization^{[22]} suggested that it was Fred Glover who coined the word metaheuristics.^{[23]}
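The growth of the travelling salesman search space can be made concrete: the number of distinct tours on n cities is (n − 1)!/2 (fixing the starting city and identifying the two travel directions), which rules out exhaustive enumeration even for modest n.

```python
import math

def num_tours(n):
    """Distinct undirected TSP tours on n cities: fix the start city,
    then divide by 2 because a tour and its reversal are the same."""
    return math.factorial(n - 1) // 2

for n in (5, 10, 20):
    print(n, num_tours(n))
# 5 cities: 12 tours; 10 cities: 181,440 tours; 20 cities: about 6.1e16 tours
```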
Contributions
Many different metaheuristics are in existence and new variants are continually being proposed. Some of the most significant contributions to the field are:
 1952: Robbins and Monro work on stochastic optimization methods.^{[24]}
 1954: Barricelli carries out the first simulations of the evolution process and uses them on general optimization problems.^{[25]}
 1963: Rastrigin proposes random search.^{[26]}
 1965: Matyas proposes random optimization.^{[27]}
 1965: Nelder and Mead propose a simplex heuristic, which was shown by Powell to converge to nonstationary points on some problems.^{[28]}
 1965: Ingo Rechenberg discovers the first Evolution Strategies algorithm.^{[29]}
 1966: Fogel et al. propose evolutionary programming.^{[30]}
 1970: Hastings proposes the Metropolis–Hastings algorithm.^{[31]}
 1970: Cavicchio proposes adaptation of control parameters for an optimizer.^{[32]}
 1970: Kernighan and Lin propose a graph partitioning method, related to variable-depth search and prohibition-based (tabu) search.^{[33]}
 1975: Holland proposes the genetic algorithm.^{[19]}
 1977: Glover proposes scatter search.^{[20]}
 1978: Mercer and Sampson propose a metaplan for tuning an optimizer's parameters by using another optimizer.^{[34]}
 1980: Smith describes genetic programming.^{[35]}
 1983: Kirkpatrick et al. propose simulated annealing.^{[18]}
 1986: Glover proposes tabu search, first mention of the term metaheuristic.^{[21]}
 1989: Moscato proposes memetic algorithms.^{[12]}
 1992: Dorigo introduces ant colony optimization in his PhD thesis.^{[9]}
 1995: Wolpert and Macready prove the no free lunch theorems.^{[36]}^{[37]}^{[38]}^{[39]}
See also
 Stochastic search
 Metaoptimization
 Matheuristics
 Hyperheuristics
 Swarm intelligence
 Genetic algorithms
 Simulated annealing
 Workforce modeling
References
 ^ R. Balamurugan; A.M. Natarajan; K. Premalatha (2015). "Stellar-Mass Black Hole Optimization for Biclustering Microarray Gene Expression Data". Applied Artificial Intelligence: An International Journal. 29 (4): 353–381. doi:10.1080/08839514.2015.1016391.
 ^ ^{a} ^{b} ^{c} ^{d} ^{e} Bianchi, Leonora; Marco Dorigo; Luca Maria Gambardella; Walter J. Gutjahr (2009). "A survey on metaheuristics for stochastic combinatorial optimization". Natural Computing. 8 (2): 239–287. doi:10.1007/s11047-008-9098-4.
 ^ ^{a} ^{b} ^{c} ^{d} ^{e} ^{f} ^{g} ^{h} ^{i} ^{j} ^{k} Blum, C.; Roli, A. (2003). "Metaheuristics in combinatorial optimization: Overview and conceptual comparison". ACM Computing Surveys. 35 (3): 268–308.
 ^ Goldberg, D.E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Kluwer Academic Publishers. ISBN 9780201157673.
 ^ Glover, F.; Kochenberger, G.A. (2003). Handbook of metaheuristics. 57. Springer, International Series in Operations Research & Management Science. ISBN 9781402072635.
 ^ ^{a} ^{b} ^{c} ^{d} ^{e} Talbi, E-G. (2009). Metaheuristics: from design to implementation. Wiley. ISBN 9780470278581.
 ^ Sörensen, Kenneth (2015). "Metaheuristics—the metaphor exposed" (PDF). International Transactions in Operational Research. 22: 3–18. CiteSeerX 10.1.1.470.3422. doi:10.1111/itor.12001. Archived from the original (PDF) on 2013-11-02.
 ^ Classification of metaheuristics
 ^ ^{a} ^{b} M. Dorigo, Optimization, Learning and Natural Algorithms, PhD thesis, Politecnico di Milano, Italie, 1992.
 ^ Heidari, Ali Asghar; Mirjalili, Seyedali; Faris, Hossam; Aljarah, Ibrahim; Mafarja, Majdi; Chen, Huiling (2019). "Harris hawks optimization: Algorithm and applications". Future Generation Computer Systems. 97: 849–872. doi:10.1016/j.future.2019.02.028. ISSN 0167-739X.
 ^ Karaboga, Dervis (2010). "Artificial bee colony algorithm". Scholarpedia. 5 (3): 6915. Bibcode:2010SchpJ...5.6915K. doi:10.4249/scholarpedia.6915.
 ^ ^{a} ^{b} Moscato, P. (1989). "On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms". Caltech Concurrent Computation Program (report 826).
 ^ Harifi, Sasan; Khalilian, Madjid; Mohammadzadeh, Javad; Ebrahimnejad, Sadoullah (2019-02-25). "Emperor Penguins Colony: a new metaheuristic algorithm for optimization". Evolutionary Intelligence. doi:10.1007/s12065-019-00212-x. ISSN 1864-5917.
 ^ Tomoiagă B, Chindriş M, Sumper A, Sudria-Andreu A, Villafafila-Robles R. Pareto Optimal Reconfiguration of Power Distribution Systems Using a Genetic Algorithm Based on NSGA-II. Energies. 2013; 6(3):1439–1455.
 ^ Ganesan, T.; Elamvazuthi, I.; Ku Shaari, Ku Zilati; Vasant, P. (2013-03-01). "Swarm intelligence and gravitational search algorithm for multi-objective optimization of synthesis gas production". Applied Energy. 103: 368–374. doi:10.1016/j.apenergy.2012.09.059.
 ^ Ganesan, T.; Elamvazuthi, I.; Vasant, P. (2011-11-01). Evolutionary normal-boundary intersection (ENBI) method for multi-objective optimization of green sand mould system. 2011 IEEE International Conference on Control System, Computing and Engineering (ICCSCE). pp. 86–91. doi:10.1109/ICCSCE.2011.6190501. ISBN 9781457716423.
 ^ Mohammad Hossein Zarei, Mehdi Davvari, Farhad Kolahan, and Kuan Yew Wong, "Simultaneous Selection and Scheduling with Sequence-Dependent Setup Times, Lateness Penalties, and Machine Availability Constraint: Heuristic Approaches", International Journal of Industrial Engineering Computations, vol. 7 no. 1, pp. 147–160, 2016. DOI: 10.5267/j.ijiec.2015.7.001
 ^ ^{a} ^{b} Kirkpatrick, S.; Gelatt Jr., C.D.; Vecchi, M.P. (1983). "Optimization by Simulated Annealing". Science. 220 (4598): 671–680. Bibcode:1983Sci...220..671K. CiteSeerX 10.1.1.123.7607. doi:10.1126/science.220.4598.671. PMID 17813860.
 ^ ^{a} ^{b} Holland, J.H. (1975). Adaptation in Natural and Artificial Systems. University of Michigan Press. ISBN 9780262082136.
 ^ ^{a} ^{b} Glover, Fred (1977). "Heuristics for Integer programming Using Surrogate Constraints". Decision Sciences. 8 (1): 156–166. CiteSeerX 10.1.1.302.4071. doi:10.1111/j.1540-5915.1977.tb01074.x.
 ^ ^{a} ^{b} Glover, F. (1986). "Future Paths for Integer Programming and Links to Artificial Intelligence". Computers and Operations Research. 13 (5): 533–549. doi:10.1016/0305-0548(86)90048-1.
 ^ X. S. Yang, Metaheuristic optimization, Scholarpedia, 6(8):11472 (2011).
 ^ Glover F., (1986). Future paths for integer programming and links to artificial intelligence, Computers and Operations Research, 13, 533–549 (1986).
 ^ Robbins, H.; Monro, S. (1951). "A Stochastic Approximation Method". Annals of Mathematical Statistics. 22 (3): 400–407. doi:10.1214/aoms/1177729586.
 ^ Barricelli, N.A. (1954). "Esempi numerici di processi di evoluzione". Methodos: 45–68.
 ^ Rastrigin, L.A. (1963). "The convergence of the random search method in the extremal control of a many parameter system". Automation and Remote Control. 24 (10): 1337–1342.
 ^ Matyas, J. (1965). "Random optimization". Automation and Remote Control. 26 (2): 246–253.
 ^ Nelder, J.A.; Mead, R. (1965). "A simplex method for function minimization". Computer Journal. 7 (4): 308–313. doi:10.1093/comjnl/7.4.308.
 ^ Rechenberg, Ingo (1965). "Cybernetic Solution Path of an Experimental Problem". Royal Aircraft Establishment, Library Translation.
 ^ Fogel, L.; Owens, A.J.; Walsh, M.J. (1966). Artificial Intelligence through Simulated Evolution. Wiley. ISBN 9780471265160.
 ^ Hastings, W.K. (1970). "Monte Carlo Sampling Methods Using Markov Chains and Their Applications". Biometrika. 57 (1): 97–109. Bibcode:1970Bimka..57...97H. doi:10.1093/biomet/57.1.97.
 ^ Cavicchio, D.J. (1970). "Adaptive search using simulated evolution". Technical Report. University of Michigan, Computer and Communication Sciences Department. hdl:2027.42/4042.
 ^ Kernighan, B.W.; Lin, S. (1970). "An efficient heuristic procedure for partitioning graphs". Bell System Technical Journal. 49 (2): 291–307. doi:10.1002/j.1538-7305.1970.tb01770.x.
 ^ Mercer, R.E.; Sampson, J.R. (1978). "Adaptive search using a reproductive metaplan". Kybernetes. 7 (3): 215–228. doi:10.1108/eb005486.
 ^ Smith, S.F. (1980). A Learning System Based on Genetic Adaptive Algorithms (PhD Thesis). University of Pittsburgh.
 ^ Wolpert, D.H.; Macready, W.G. (1995). "No free lunch theorems for search" (PDF). Technical Report SFI-TR-95-02-010. Santa Fe Institute.
 ^ Igel, Christian; Toussaint, Marc (Jun 2003). "On classes of functions for which No Free Lunch results hold". Information Processing Letters. 86 (6): 317–321. arXiv:cs/0108011. doi:10.1016/S0020-0190(03)00222-9. ISSN 0020-0190.
 ^ Auger, Anne; Teytaud, Olivier (2010). "Continuous Lunches Are Free Plus the Design of Optimal Optimization Algorithms". Algorithmica. 57 (1): 121–146. CiteSeerX 10.1.1.186.6007. doi:10.1007/s00453-008-9244-5. ISSN 0178-4617.
 ^ Stefan Droste; Thomas Jansen; Ingo Wegener (2002). "Optimization with Randomized Search Heuristics – The (A)NFL Theorem, Realistic Scenarios, and Difficult Functions". Theoretical Computer Science. 287 (1): 131–144. CiteSeerX 10.1.1.35.5850. doi:10.1016/s0304-3975(02)00094-4.
Further reading
 Sörensen, Kenneth; Sevaux, Marc; Glover, Fred (2017-01-16). "A History of Metaheuristics" (PDF). In Martí, Rafael; Panos, Pardalos; Resende, Mauricio (eds.). Handbook of Heuristics. Springer. ISBN 9783319071237.
External links
 Fred Glover and Kenneth Sörensen (ed.). "Metaheuristics". Scholarpedia.
 EU/ME forum for researchers in the field.