Evolutionary Algorithms – Directing the undirected


This is a follow-up to my previous post on the same topic: http://rogeralsing.com/2010/07/29/evolutionary-algorithms-problems-with-directed-evolution/

I started thinking about possible solutions after I published my last post, and I think I might have something that could work.

In order to harness the full power of evolution, we need to be able to simulate undirected evolution.
Undirected evolution requires more than one environment, so that organisms can evolve and niche into specific environments instead of just competing for the same environment.

In our case, the “environment” is the “problem” that we want to solve.
The more fit an organism is in our environment, the better it is at solving our given problem.

So far, I think everyone has been applying evolutionary/genetic algorithms to individual problems, evolving algorithms/solutions for a single purpose, and thus running into the problems of irreducible complexity.

But what if we were to introduce an entire “world” of problems?

If we have a shared “world” where we can introduce our new problems, each problem would be like an island in this world, and this island would be a new environment that existing organisms can niche into.
This way, we could see organisms re-use solutions from other problems, and with crossover we could see combinations of multiple solutions for other problems.

The solutions would of course have to be generic enough to handle pretty much every kind of algorithm, so I guess the DNA of the organisms needs to be very close to a real general-purpose programming language.
Possibly something like serialized Lisp/Clojure, running in a sandboxed environment…
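Since I don’t have a Lisp/Clojure sandbox running here, a rough Python sketch of the idea might look like this: nested lists standing in for S-expressions, with a point mutation that swaps out random sub-trees. The primitive and terminal sets are made up purely for illustration, not taken from any existing implementation.

import random

# Made-up primitive and terminal sets, just to show the shape of a genome.
PRIMITIVES = ["add", "sub", "mul", "max"]
TERMINALS = ["x", "y", 0, 1]

def random_expr(depth=3):
    # Grow a random expression tree (a nested list in S-expression style).
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(PRIMITIVES)
    return [op, random_expr(depth - 1), random_expr(depth - 1)]

def mutate(expr, rate=0.1):
    # Point mutation: occasionally replace a sub-tree with a fresh one.
    if random.random() < rate:
        return random_expr(depth=2)
    if isinstance(expr, list):
        return [expr[0]] + [mutate(sub, rate) for sub in expr[1:]]
    return expr

genome = random_expr()
print(genome)
print(mutate(genome))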

The more problems we add to this “world”, the better it would become at solving harder problems, since it can reuse existing solutions.

The structure of it all would be something like this (see the sketch after the list):

* The “World” is the container for “Problems”.
* “Problems” contain input/output sampling and populations of organisms; thus, each problem has its own ecosystem.
* “Organisms” evolve by mutation and genetic crossover; they can also migrate to other “problems” from time to time.
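A rough Python sketch of that structure could look something like the following; the class names, the example problem names and the migrate() helper are placeholders for illustration, not an actual implementation.

import random
from dataclasses import dataclass, field

@dataclass
class Organism:
    genome: list            # e.g. a serialized expression tree
    fitness: float = 0.0

@dataclass
class Problem:
    # One "island": input/output samples plus its own population.
    name: str
    samples: list                       # (input, expected_output) pairs
    population: list = field(default_factory=list)

@dataclass
class World:
    problems: list = field(default_factory=list)

    def migrate(self, rate=0.01):
        # Occasionally move an organism to a randomly chosen other island.
        if len(self.problems) < 2:
            return
        for source in self.problems:
            if source.population and random.random() < rate:
                target = random.choice([p for p in self.problems if p is not source])
                target.population.append(source.population.pop())

world = World(problems=[Problem("image compression", samples=[]),
                        Problem("sorting", samples=[])])
world.migrate()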

This way, an organism from the “SSH encryption” island may migrate over to the “bank authentication login code generator” island and possibly be used as a module in one of the branches of one of the organisms there, removing “irreducible complexity” from the equation.
Evolution would be locally directed and globally undirected…

I think this could work, at least to some extent, no?

//Roger


Evolutionary Algorithms – Problems with Directed Evolution


Creationists often use “irreducible complexity” as an argument against evolution.
i.e. you need all parts in place and functioning before the “whole” can do its work and thus reproduce and spread its features.

The bacterial flagellum is one such feature that has been attributed with “irreducible complexity”.
You need the “tail” and some “engine” parts in place before it can be used as a propeller and drive the bacterium forward.

Evolutionists have, however, shown this to be wrong: each of these parts had other purposes before being re-used/combined for propulsion, so every part was already present.

The key here is that evolution in reality is not directed toward a “final goal”; it simply makes organisms adapt to their current environment.
e.g. an organism might evolve a single pair of legs, and later generations might get more of those legs if that is beneficial.
The front pair of legs might even later evolve into a pair of arms that allows the organism to grab food while it eats, and so on.

In short, existing features can be reused and refined in order to reach a higher fitness level in the current environment.

As far as I know, we still have not managed to accomplish this sort of “undirected evolution” in computer programs in the same sense.
If we make a program that is supposed to come up with a solution for a given problem, we would use “directed evolution” and try to breed solutions that are better and better at solving that problem.
So if our program was supposed to come up with a propulsion system for a body, it would fail at evolving the bacterial flagellum, since we run into the effects of irreducible complexity: our program is unable to evolve the individual parts for any reason other than moving the body forward.
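To illustrate, a directed setup is basically a loop around a single fitness function. The toy Python sketch below (with a placeholder fitness that just sums the genes) shows the point: nothing survives selection unless it pays off against that one goal.

import random

def fitness(candidate):
    # Toy placeholder for "how well does this solve the given problem":
    # here we simply maximize the sum of the genes.
    return sum(candidate)

def directed_evolution(pop_size=50, genome_len=20, generations=200):
    population = [[random.random() for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the half that scores best on the ONE goal.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Reproduction: mutated copies of survivors. Nothing is ever kept
        # "for other reasons", which is exactly the limitation above.
        children = [[g + random.gauss(0, 0.1) for g in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = directed_evolution()
print(fitness(best))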

In order to harness the full power of evolution in computer programs, we need to be able to simulate “undirected evolution” so that we can evolve all these parts that later can be re-used for other purposes.

Is there any research going on in this area at all?

I know that the old “Tierra” simulation was sort of undirected: the only goal was to consume as much CPU time as possible, but it could certainly use undirected evolution to get to that goal.

But other than that, anything?

Scaling Clustered Evolution: 1 + 1 = 4


This is a follow up on: http://rogeralsing.com/2009/01/01/clustering-evolution-we-have-lift-off/

Yesterday I got the chance to test run the cluster version of EvoLisa on a 64 bit 4 core machine.
(Thanks to my colleague Ulf Axelsson for the help on this one.)

The results from the cluster version are quite interesting.
One would expect a maximum of 200% performance from using two nodes instead of one.
However, this is not the case; we are getting 400% performance.

We double the CPU capacity and get 4 times as much work done.

How is this possible?

This really confused me for a while.
But the reason is quite obvious once you figure it out.

Let’s assume the following:

* We use one core.
* We are running 1000 generations with 50 polygons over 10 seconds.
* 1 out of 10 mutations are positive.

This gives us approx 100 positive mutations over 10 seconds.

If we add one more core we would get:

* We use two cores in parallel.
* We are running a total of 2000 generations, with 50 polygons per node over 10 seconds.
* 1 out of 10 mutations are positive.

This would give us approx 200 positive mutations over 10 seconds. 
Thus, this would give the expected 200% performance.

BUT:

We are NOT rendering 50 polygons per core in this case.
Each core is only rendering 25 polygons.

During those 10 seconds, we are actually able to run 2000 generations per core instead of 1000, for a total of 4000 generations over 10 seconds.
Which in turn results in approx 400 positive mutations during the same time span.

We have doubled the CPU capacity and halved the work that needs to be done per node.
Thus, we get a 2 × 2 = 4× performance boost.
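The back-of-the-envelope numbers above can be put into a tiny model. The function below just encodes the assumptions from the example (generation time proportional to the polygons a node renders, 1 in 10 mutations positive); the constants are from the example, not measurements.

# Encodes the example above: generation time is assumed to be proportional
# to the number of polygons a node has to render, and 1 in 10 mutations
# is positive. The constants come from the example, not from measurements.
def positive_mutations(cores, total_polygons=50, seconds=10,
                       gens_per_sec_at_50_polys=100, hit_rate=0.1):
    polys_per_core = total_polygons / cores
    gens_per_sec = gens_per_sec_at_50_polys * (50 / polys_per_core)
    total_generations = cores * gens_per_sec * seconds
    return total_generations * hit_rate

print(positive_mutations(cores=1))  # ~100 positive mutations
print(positive_mutations(cores=2))  # ~400 positive mutations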

Pretty slick :-)

PS.
The 4 core machine was able to render the Mona Lisa image with the same quality as the last image in the evolution series in: 1 min 34 sec!

//Roger