Reward-based selection

From Wikipedia, the free encyclopedia

Reward-based selection is a technique used in evolutionary algorithms for selecting potentially useful solutions for recombination. The probability of being selected for an individual is proportional to the cumulative reward obtained by the individual. The cumulative reward can be computed as a sum of the individual reward and the reward inherited from parents.
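The selection rule described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the cited paper; the function name and the uniform fallback for a zero total reward are assumptions of the sketch.

```python
import random

def reward_proportional_select(population, cumulative_rewards, k):
    """Select k individuals, each with probability proportional to its
    cumulative reward (own reward plus reward inherited from parents)."""
    total = sum(cumulative_rewards)
    if total == 0:
        # No reward signal yet: fall back to uniform random selection
        # (an assumption of this sketch, not part of the definition).
        return random.choices(population, k=k)
    return random.choices(population, weights=cumulative_rewards, k=k)
```

For example, with rewards `[0.0, 0.0, 10.0]` all selection probability mass falls on the third individual, so it is chosen every time.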

Description

Reward-based selection can be used within a multi-armed bandit framework for multi-objective optimization to obtain a better approximation of the Pareto front.[1]

The newborn $a^{(g+1)}$ and its parents receive a reward $r^{(g)}$, if $a^{(g+1)}$ was selected for the new population $P^{(g+1)}$, otherwise the reward is zero. Several reward definitions are possible:

  • 1. $r^{(g)} = 1$, if the newborn individual $a^{(g+1)}$ was selected for the new population $P^{(g+1)}$.
  • 2. $r^{(g)} = 1 - \frac{\operatorname{rank}(a^{(g+1)})}{\mu}$, where $\operatorname{rank}(a^{(g+1)})$ is the rank of the newly inserted individual in the population of $\mu$ individuals. Rank can be computed using the well-known non-dominated sorting procedure.[2]
  • 3. $r^{(g)} = \Delta\mathcal{H}(a^{(g+1)}, P^{(g+1)})$, where $\Delta\mathcal{H}(a^{(g+1)}, P^{(g+1)})$ is the hypervolume indicator contribution of the individual $a^{(g+1)}$ to the population $P^{(g+1)}$. The reward $r^{(g)} > 0$ if the newly inserted individual improves the quality of the population, which is measured as its hypervolume contribution in the objective space.
  • 4. A relaxation of the above reward, involving a rank-based penalization for points from the $k$-th dominated Pareto front: $r^{(g)} = 0.5^{k}\,\Delta\mathcal{H}(a^{(g+1)}, P^{(g+1)})$.
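The rank-based reward (definition 2) and the accumulation of reward across generations can be sketched as follows. This is a hypothetical illustration: the helper names and the decay factor applied to the inherited parental reward are assumptions of the sketch, not taken from the cited paper.

```python
def rank_based_reward(rank, mu):
    """Reward definition 2: r = 1 - rank/mu, where rank is the position
    of the newly inserted individual (1 = best) in a population of mu
    individuals, e.g. as given by non-dominated sorting."""
    return 1.0 - rank / mu

def cumulative_reward(own_reward, parent_cumulative, decay=0.5):
    """Cumulative reward of an individual: its own reward plus a decayed
    share of the reward inherited from its parent. The decay factor is
    an assumption of this sketch."""
    return own_reward + decay * parent_cumulative
```

For instance, an individual ranked 1st of 10 receives a reward of 0.9, while one ranked 10th of 10 receives 0.0; a child then adds its own reward to a decayed copy of its parent's total.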

Reward-based selection can quickly identify the most fruitful directions of search by maximizing the cumulative reward of individuals.

References

  1. ^ Loshchilov, I.; M. Schoenauer; M. Sebag (2011). "Not all parents are equal for MO-CMA-ES" (PDF). Evolutionary Multi-Criterion Optimization 2011 (EMO 2011). Springer Verlag, LNCS 6576. pp. 31–45. Archived from the original (PDF) on 2012-06-04.
  2. ^ Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. (2002). "A fast and elitist multi-objective genetic algorithm: NSGA-II". IEEE Transactions on Evolutionary Computation. 6 (2): 182–197. CiteSeerX 10.1.1.17.7771. doi:10.1109/4235.996017.
This page was last edited on 28 September 2023, at 21:02.
The basis of this page is Wikipedia. Text is available under the CC BY-SA 3.0 Unported License. Non-text media are available under their specified licenses. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc. WIKI 2 is an independent company and has no affiliation with the Wikimedia Foundation.