# Item

Released

Conference Paper

#### Optimizing Monotone Functions Can Be Difficult

##### Citation

Doerr, B., Jansen, T., Sudholt, D., Winzen, C., & Zarges, C. (2010). Optimizing Monotone Functions Can Be Difficult. In R. Schaefer, C. Cotta, J. Kolodziej, & G. Rudolph (Eds.), *Parallel Problem Solving from Nature, PPSN XI* (pp. 42-51). Berlin: Springer. doi:10.1007/978-3-642-15844-5_5.

Cite as: https://hdl.handle.net/11858/00-001M-0000-000F-16AC-C

##### Abstract

Extending previous analyses on function classes like linear functions, we
analyze how the simple (1+1) evolutionary algorithm optimizes pseudo-Boolean
functions that are strictly monotone.
Contrary to what one would expect, not all of these functions are easy to
optimize. The choice of the constant $c$ in the mutation probability $p(n) =
c/n$ can make a decisive difference.
We show that if $c < 1$, then the (1+1) EA finds the optimum of every such
function in $\Theta(n \log n)$ iterations. For $c = 1$, we can still prove an
upper bound of $O(n^{3/2})$.
However, for $c > 33$, we present a strictly monotone function such that the
(1+1) EA with overwhelming probability does not find the optimum within
$2^{\Omega(n)}$ iterations. This is the first observation that a
constant-factor change of the mutation probability can change the running
time by more than a constant factor.
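
The (1+1) EA discussed in the abstract can be sketched in a few lines: keep a single bit string, flip each bit independently with probability $p(n) = c/n$, and accept the offspring if it is at least as fit. The sketch below is illustrative only (the function names, the default $c = 0.9$, and the use of OneMax as the strictly monotone test function are this writer's assumptions, not taken from the paper).

```python
import random

def one_plus_one_ea(f, n, f_opt, c=0.9, max_iters=200_000, seed=0):
    """Minimal (1+1) EA sketch on length-n bit strings.

    Each iteration flips every bit independently with probability c/n
    and accepts the offspring if it is at least as fit (ties accepted).
    Returns the iteration at which the optimum value f_opt was reached,
    or None if max_iters is exhausted first.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for t in range(1, max_iters + 1):
        # Standard bit mutation with probability p(n) = c/n per bit.
        y = [b ^ (rng.random() < c / n) for b in x]
        fy = f(y)
        if fy >= fx:
            x, fx = y, fy
        if fx == f_opt:
            return t
    return None

def onemax(x):
    """OneMax, a simple strictly monotone pseudo-Boolean function."""
    return sum(x)

# With c < 1 the abstract's Theta(n log n) bound applies, so the
# optimum is typically found quickly for moderate n.
iterations = one_plus_one_ea(onemax, n=100, f_opt=100, c=0.9)
```

For $c < 1$ this run finishes well within the iteration budget on OneMax; the paper's hard instance for $c > 33$ is a specially constructed monotone function, not reproduced here.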