Conference Paper

Constructing Low Star Discrepancy Point Sets with Genetic Algorithms

MPS-Authors

Doerr, Carola
Algorithms and Complexity, MPI for Informatics, Max Planck Society;

Citation

Doerr, C., & De Rainville, F.-M. (2013). Constructing Low Star Discrepancy Point Sets with Genetic Algorithms. In C. Blum & E. Alba (Eds.), GECCO '13 (pp. 789-796). New York, NY: ACM.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0015-7910-5
Abstract
The recently active research area of black-box complexity has revealed that for many optimization problems the best possible black-box optimization algorithm is significantly faster than all known evolutionary approaches. While a general-purpose heuristic cannot be expected to compete with a problem-tailored algorithm, it still makes sense to look for the reasons behind this discrepancy. In this work, we exhibit one possible reason: most optimal black-box algorithms also profit from solutions that are inferior to the previous best one, whereas evolutionary approaches guided by the "survival of the fittest" paradigm often ignore such solutions. To overcome this shortcoming, we design a simple genetic algorithm that first creates λ offspring from a single parent by mutation, with a mutation probability that is k times larger than the usual one. From the best of these offspring (which is often worse than the parent) and the parent itself, we generate further offspring via a uniform crossover operator that takes bits from the winner offspring with probability 1/k only. A rigorous runtime analysis proves that, for suitable parameter choices, our new algorithm on the OneMax test function class is asymptotically faster (in terms of the number of fitness evaluations) than what has been shown for (μ+λ) and (μ,λ) EAs. This is the first time that crossover is shown to give an advantage for the OneMax class that is larger than a constant factor. Using a fitness-dependent choice of k and λ, the optimization time can be reduced further to linear in n. Our experimental analysis on several test function classes shows advantages already for small problem sizes and broad parameter ranges. Also, a simple self-adaptive choice of these parameters gives surprisingly good results.
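
For illustration, the following is a minimal Python sketch of the mutation-then-crossover procedure described in the abstract, evaluated on OneMax. It is a reading of the abstract, not code from the paper: all names (run_ga, biased_crossover), the static parameter values, and the elitist final selection are illustrative assumptions, and the fitness-dependent and self-adaptive parameter choices mentioned at the end of the abstract are omitted.

import random


def onemax(x):
    """OneMax fitness: the number of one-bits in the string."""
    return sum(x)


def mutate(parent, p):
    """Flip each bit of `parent` independently with probability p."""
    return [1 - b if random.random() < p else b for b in parent]


def biased_crossover(parent, winner, c):
    """Uniform crossover taking each bit from `winner` with
    probability c and from `parent` otherwise."""
    return [w if random.random() < c else b for b, w in zip(parent, winner)]


def run_ga(n, k, lam, max_evals=1_000_000):
    """Sketch of the GA from the abstract: mutation phase with rate k/n,
    crossover phase with bias 1/k, then selection against the parent."""
    parent = [random.randint(0, 1) for _ in range(n)]
    evals = 1
    while onemax(parent) < n and evals < max_evals:
        # Mutation phase: lam offspring with a k-times-larger mutation
        # rate; keep the best of them, even if it is worse than the parent.
        winner = max((mutate(parent, k / n) for _ in range(lam)), key=onemax)
        evals += lam
        # Crossover phase: lam offspring of the parent and the winner,
        # taking winner bits with probability 1/k only.
        best = max((biased_crossover(parent, winner, 1 / k)
                    for _ in range(lam)), key=onemax)
        evals += lam
        # Elitist selection (an assumption): replace the parent
        # only if the crossover offspring is no worse.
        if onemax(best) >= onemax(parent):
            parent = best
    return parent, evals


if __name__ == "__main__":
    x, evals = run_ga(n=100, k=4, lam=8)
    print(f"best fitness {onemax(x)} after {evals} evaluations")

Because the mutation winner may be worse than the parent, the crossover phase is what recombines its few useful bits back into the parent; this is the mechanism the abstract credits for beating mutation-only (μ+λ) and (μ,λ) EAs on OneMax.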