When to use Evolutionary Algorithms?

Introduction


In many optimization problems, the derivatives of the objective or the constraints are hard to obtain (there is no closed-form formulation, only a simulation or a very complex formula), the number of dimensions is so large that finite-difference estimates of the derivatives are uneconomical, or the search space is not continuous (e.g., discrete optimization problems). In such settings, an evolutionary algorithm (EA) can be beneficial. The main idea behind almost all EAs is to sample the search space and move the sampled solutions to potentially better locations.
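The sample-then-move idea can be sketched in a few lines of Python. Everything below (the function names, the sphere test function, the population size, and the fixed perturbation size) is my own illustrative choice, not from the article:

```python
import random

random.seed(0)  # fix the seed so the run is reproducible

def evolutionary_search(objective, dim, pop_size=30, generations=200,
                        lower=-5.0, upper=5.0, sigma=0.3):
    """Minimal derivative-free EA: sample, keep the best, perturb them."""
    # Sample an initial population uniformly in the box [lower, upper]^dim.
    pop = [[random.uniform(lower, upper) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by objective value -- only evaluations, no gradients.
        pop.sort(key=objective)
        elite = pop[: pop_size // 2]
        # Replace the worst half with Gaussian perturbations of the best half.
        pop = elite + [[x + random.gauss(0.0, sigma)
                        for x in random.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=objective)

# Sphere function: global minimum 0 at the origin.
sphere = lambda x: sum(v * v for v in x)
best = evolutionary_search(sphere, dim=3)
```

Note that nothing here requires the objective to be differentiable or even continuous; the same loop works if `objective` is a black-box simulation.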


History

Perhaps the first algorithm in the family was invented by John Henry Holland, inspired by how chromosomes share information and develop (evolve) through an evolutionary process to improve. The genetic algorithm was the outcome of that research. The idea is simple: at each iteration, candidate solutions of the optimization problem at hand are "combined" and "mutated" to generate potentially better solutions. After all, this process has worked fine for animals and evolution. Because the algorithm is quite simple and doesn't need any objective formula or derivatives, it can easily be applied to many optimization problems. After the first model, many more operators and techniques were added to the main algorithm to deal with constraints, multiple objectives, hybridization, and so on.

Types

There are many types of evolutionary algorithms, such as the genetic algorithm, evolution strategies, particle swarm optimization, ant colony optimization, and many others. The best algorithm depends on the optimization problem at hand. For example, for rather small continuous-space optimization problems without constraints or with linear constraints, CMA-ES might be one of the best choices.

When to be used

Evolutionary algorithms are known to be effective at exploration of the search space, while for exploitation there are many other alternatives. Exploration is the act of searching regions of the search space where potentially high-quality solutions can be found (i.e., global search). Exploitation, in contrast, focuses on local information.

I have put together the following guideline to choose an appropriate optimization method for different problems.




A: an Evolutionary Algorithm
B: Gradient descent
C: Use A for exploration and then B
D: Finite-difference method to estimate derivatives, then gradient descent
E: Use A for exploration and then D

Higher-order derivatives can also be helpful for better exploitation in combination with gradient descent, and hence may boost B and D.
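Option D above can be sketched as follows: estimate the gradient with central finite differences, then take ordinary gradient-descent steps. Note that each gradient estimate costs 2n objective evaluations for n variables, which is exactly why this option becomes uneconomical in high dimensions. The function names and the toy quadratic are mine:

```python
def fd_gradient(f, x, h=1e-6):
    """Central finite-difference estimate of the gradient of f at x."""
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2 * h))  # 2 evaluations per dimension
    return grad

def fd_gradient_descent(f, x0, lr=0.1, steps=200):
    """Gradient descent using only finite-difference gradients."""
    x = list(x0)
    for _ in range(steps):
        g = fd_gradient(f, x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Toy objective with minimum at (1, -2).
quadratic = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x_opt = fd_gradient_descent(quadratic, [0.0, 0.0])
```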

Which EA to use

The diagram shows which method may be the best choice under which conditions.



When to use which?

Note that if the constraints and the objectives are all linear or quadratic, then linear programming or quadratic programming is more effective than any other method, including all EA variants.
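As a small illustration of why a quadratic problem doesn't call for an EA: an unconstrained convex quadratic is minimized exactly by setting the gradient to zero and solving one linear system, with no search at all. The 2x2 example below is my own sketch, not from the article:

```python
def solve_2x2(Q, b):
    """Solve the 2x2 linear system Q x = b by Cramer's rule."""
    det = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]
    x0 = (b[0] * Q[1][1] - b[1] * Q[0][1]) / det
    x1 = (Q[0][0] * b[1] - Q[1][0] * b[0]) / det
    return [x0, x1]

# Minimize f(x) = 0.5 x^T Q x - b^T x: the gradient is Q x - b,
# so the unique minimizer (Q positive definite) solves Q x = b.
Q = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x_star = solve_2x2(Q, b)
```

An EA would need hundreds of evaluations to approximate what this closed-form step gets exactly.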

Algorithms details

Evolution Strategy (ES) is an EA for continuous-space optimization problems (optimization problems with continuous variables). It uses a population of candidate solutions that are evolved according to an adaptive normal distribution: the adaptation process updates the mean and the variance of the distribution according to the quality of the solutions in the population. Matlab source code can be found here. See this article for more information.
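A minimal concrete instance of the adaptive-normal-distribution idea is the classic (1+1)-ES with the 1/5 success rule: sample one Gaussian offspring per iteration, keep it if it improves, and grow or shrink the step size depending on how often offspring succeed. The sketch below is mine (the adaptation constants are common textbook choices, not from the article):

```python
import random

random.seed(1)  # reproducible run

def one_plus_one_es(f, x0, sigma=1.0, iterations=500):
    """(1+1)-ES with 1/5-success-rule step-size adaptation (minimization)."""
    x, fx = list(x0), f(x0)
    for _ in range(iterations):
        # Sample one offspring from a normal distribution around x.
        y = [xi + random.gauss(0.0, sigma) for xi in x]
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
            sigma *= 1.22   # success: take bolder steps
        else:
            sigma *= 0.95   # failure: shrink the step size
    return x, fx

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = one_plus_one_es(sphere, [3.0, -2.0, 1.0])
```

A full (mu, lambda)-ES evolves a whole population the same way; the step-size adaptation is what distinguishes ES from a plain random search.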

Covariance Matrix Adaptation ES (CMA-ES): rather than adapting variances only, the algorithm learns the covariance matrix of the solutions to find the best paths towards better solutions. This is equivalent to estimating second-order information about the search space; hence, CMA-ES is slower per iteration than a plain ES. A very good Matlab code can be found here. Python code can be found here.

Particle swarm optimization (PSO) is a population-based optimization method. It is inspired by swarm intelligence: each candidate solution "flies" over the search space with some velocity, and the direction and magnitude of that velocity are controlled to (hopefully) move the "particle" towards better solutions. The method also shares information: each particle knows the best solution the others have found so far. PSO is quite fast, but it has some theoretical issues (see this paper). For Matlab, here is a good place to start. For Python, see this. See also this library, which can come in handy.
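The velocity update described above can be sketched as a global-best PSO in Python. The coefficient values (inertia w and attraction weights c1, c2) are common defaults from the literature, and the test function is my own choice:

```python
import random

random.seed(2)  # reproducible run

def pso(f, dim, n_particles=20, iterations=200,
        w=0.7, c1=1.5, c2=1.5, lower=-5.0, upper=5.0):
    """Minimal global-best PSO (minimization)."""
    pos = [[random.uniform(lower, upper) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's shared best
    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + pull towards own best + pull towards swarm best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)
g_pos, g_val = pso(sphere, dim=2)
```

The `gbest` term is the information-sharing mechanism mentioned above; removing it turns each particle into an independent local searcher.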

Differential Evolution (DE) is another EA. I found this tutorial useful.
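For concreteness, here is a sketch of the standard DE/rand/1/bin variant (my choice of variant; the article does not specify one): each candidate is mutated by adding a scaled difference of two other candidates to a third, then mixed with the original by binomial crossover:

```python
import random

random.seed(3)  # reproducible run

def differential_evolution(f, dim, pop_size=20, generations=200,
                           F=0.8, CR=0.9, lower=-5.0, upper=5.0):
    """DE/rand/1/bin: difference-vector mutation + binomial crossover."""
    pop = [[random.uniform(lower, upper) for _ in range(dim)]
           for _ in range(pop_size)]
    fitness = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct candidates, none of them i.
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                      for d in range(dim)]
            # Binomial crossover; j_rand guarantees at least one mutant gene.
            j_rand = random.randrange(dim)
            trial = [mutant[d] if (random.random() < CR or d == j_rand)
                     else pop[i][d]
                     for d in range(dim)]
            ft = f(trial)
            if ft <= fitness[i]:          # greedy one-to-one replacement
                pop[i], fitness[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fitness[i])
    return pop[best], fitness[best]

sphere = lambda x: sum(v * v for v in x)
de_pos, de_val = differential_evolution(sphere, dim=2)
```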

Genetic Algorithm (GA) is the chief among these algorithms and the pioneering one. See this page.
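The "combine and mutate" loop of the classic binary GA can be sketched as follows. The operator choices (tournament selection, one-point crossover, bit-flip mutation) are the textbook defaults, and the OneMax test problem is mine:

```python
import random

random.seed(4)  # reproducible run

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=100,
                      p_cross=0.9, p_mut=0.02):
    """Binary GA: tournament selection, one-point crossover, bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]

    def tournament():
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament()[:], tournament()[:]
            if random.random() < p_cross:
                cut = random.randrange(1, n_bits)   # one-point crossover
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for d in range(n_bits):
                    if random.random() < p_mut:     # bit-flip mutation
                        child[d] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

# OneMax: maximize the number of 1-bits in the chromosome.
best_chrom = genetic_algorithm(sum)
```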


Like always, this is only my opinion based on my experience and only a point of discussion. I would love to discuss this topic further in the comments.


By R. Bonyadi



