Evolutionary Algorithms – Directing the undirected


This is a follow-up to my previous post on the same topic: http://rogeralsing.com/2010/07/29/evolutionary-algorithms-problems-with-directed-evolution/

I started thinking about possible solutions after I published my last post, and I think I might have something that could work.

In order to harness the full power of evolution, we need to be able to simulate undirected evolution.
Undirected evolution requires more than one environment, so that organisms can evolve and niche into specific environments instead of just competing for the same environment.

In our case, the “environment” is the “problem” that we want to solve.
The more fit an organism is in our environment, the better it is at solving our given problem.

So far, I think everyone has been applying evolutionary/genetic algorithms to individual problems, evolving algorithms/solutions for a single purpose, and thus experiencing the problem of irreducible complexity.

But what if we were to introduce an entire “world” of problems?

If we have a shared “world” where we can introduce new problems, each problem would be like an island in this world, and each island would be a new environment that existing organisms can niche into.
This way, we could see organisms re-use solutions from other problems, and with crossover we could see combinations of solutions to multiple problems.

The solutions would of course have to be generic enough to express pretty much any kind of algorithm, so I guess the DNA of the organisms needs to be very close to a real general-purpose language.
Possibly something like serialized Lisp/Clojure, running in a sandboxed environment…
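To make that a bit more concrete, here is a minimal sketch of what such a genome could look like, using nested Python lists as a stand-in for serialized Lisp. The operator set, terminals, and function names are all illustrative assumptions, not an existing implementation:

```python
import random

# Genomes as S-expressions, modeled here as nested Python lists.
# The operator set and terminals are placeholders.
OPS = ["+", "-", "*"]
TERMINALS = ["x", 1, 2]

def random_expr(depth=2):
    """Grow a random expression tree."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return [random.choice(OPS), random_expr(depth - 1), random_expr(depth - 1)]

def evaluate(expr, x):
    """Walk the tree; a stand-in for a sandboxed Lisp eval."""
    if expr == "x":
        return x
    if isinstance(expr, (int, float)):
        return expr
    op, lhs, rhs = expr
    a, b = evaluate(lhs, x), evaluate(rhs, x)
    return {"+": a + b, "-": a - b, "*": a * b}[op]

def mutate(expr, depth=2):
    """Replace a random subtree with a freshly grown one."""
    if not isinstance(expr, list) or random.random() < 0.2:
        return random_expr(depth)
    child = list(expr)
    i = random.randrange(1, 3)
    child[i] = mutate(child[i], depth)
    return child

def crossover(a, b):
    """Naive subtree crossover: operator and left child from a, right child from b."""
    if isinstance(a, list) and isinstance(b, list):
        return [a[0], a[1], b[2]]
    return random.choice([a, b])
```

Because every genome is just a tree in the same language, a subtree evolved on one island can be grafted into an organism on another, which is exactly the kind of re-use described above.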

The more problems we add to this “world”, the better it would become at solving hard problems, since it could reuse existing solutions.

The structure of it all would be something like:

The “World” is the container for “Problems”.
“Problems” contain input/output samples and populations of organisms; thus, each problem has its own ecosystem.
Organisms evolve by mutation and genetic crossover, and they can also migrate to other “problems” from time to time.
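A minimal sketch of that structure, assuming Python dataclasses; the names World, Problem, and Organism and the migration rate are my own labels for illustration, not an existing implementation:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Organism:
    genome: object           # e.g. a serialized Lisp-like expression tree
    fitness: float = 0.0

@dataclass
class Problem:               # an "island" with its own ecosystem
    name: str
    samples: list            # (input, expected output) pairs for fitness scoring
    population: list = field(default_factory=list)

@dataclass
class World:                 # the container for all problems
    problems: list = field(default_factory=list)

    def migrate(self, rate=0.01):
        """Occasionally move an organism to another randomly chosen island."""
        if len(self.problems) < 2:
            return
        for source in self.problems:
            for org in list(source.population):
                if random.random() < rate:
                    target = random.choice(
                        [p for p in self.problems if p is not source])
                    source.population.remove(org)
                    target.population.append(org)
```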

This way, an organism from the “SSH Encryption” island may migrate over to the “Bank authentication login code generator” island and possibly be used as a module in one of the branches of one of the organisms there, thus removing “irreducible complexity” from the equation.
Evolution would be locally directed and globally undirected…

I think this could work, at least to some extent, no?

//Roger


Scaling Clustered Evolution: 1 + 1 = 4


This is a follow-up to: http://rogeralsing.com/2009/01/01/clustering-evolution-we-have-lift-off/

Yesterday I got the chance to test run the cluster version of EvoLisa on a 64-bit 4-core machine.
(Thanks to my colleague Ulf Axelsson for the help on this one.)

The results from the cluster version are quite interesting.
One could expect a maximum of 200% performance from using two nodes instead of one.
However, this is not the case; we are getting 400% performance.

We double the CPU capacity and get four times as much work done.

How is this possible?

This really confused me for a while.
But the reason is quite obvious once you figure it out.

Let’s assume the following:

* We use one core.
* We are running 1000 generations with 50 polygons over 10 seconds.
* 1 out of 10 mutations is positive.

This gives us approx 100 positive mutations over 10 seconds.

If we add one more core we would get:

* We use two cores in parallel.
* We are running a total of 2000 generations, with 50 polygons per node over 10 seconds.
* 1 out of 10 mutations is positive.

This would give us approx 200 positive mutations over 10 seconds. 
Thus, this would give the expected 200% performance.

BUT:

We are NOT rendering 50 polygons per core in this case.
Each core only renders 25 polygons.

Since rendering time is roughly proportional to the number of polygons, halving the polygon count per core roughly doubles the generation rate.
During those 10 seconds, each core can therefore run 2000 generations instead of 1000, for a total of 4000 generations over 10 seconds, which in turn results in approx 400 positive mutations in the same time span.

We have doubled the CPU capacity and halved the work that needs to be done per node.
Thus, we get a 2 × 2 performance boost.
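Here is a back-of-the-envelope check of this reasoning in Python. It assumes rendering cost scales linearly with polygon count; the constants mirror the numbers used above:

```python
SECONDS = 10
GENS_PER_SEC_AT_50_POLYS = 100   # 1000 generations over 10 s on one core
POSITIVE_RATE = 0.10             # 1 out of 10 mutations is positive

def positive_mutations(cores, polygons_per_core):
    # Halving the polygons per core roughly halves the render time per
    # generation, so the generation rate scales with 50 / polygons_per_core.
    gens_per_sec = GENS_PER_SEC_AT_50_POLYS * (50 / polygons_per_core)
    return cores * gens_per_sec * SECONDS * POSITIVE_RATE

print(positive_mutations(1, 50))   # ~100: the single-core baseline
print(positive_mutations(2, 50))   # ~200: the naive expectation
print(positive_mutations(2, 25))   # ~400: halved work per core, the 2 x 2 boost
```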

Pretty slick :-)

PS.
The 4-core machine was able to render the Mona Lisa image with the same quality as the last image in the evolution series in 1 min 34 sec!

//Roger