Genetic Programming: Mona Lisa FAQ

This is a follow-up to:

Mona Lisa FAQ

Here are some answers to the most frequently asked questions:

Q) What is the use of this? It is nothing but a fun toy.

A) Exactly… It was a three-hour fun hack for my friends and readers…

Q) What is your population size?

A) Two. A parent and a child compete against each other each generation.

Q) Is this Genetic Programming? I think it is a GA, or even a hill-climbing algorithm.

A) I will claim that this is GP, because the application clones and mutates an executable Abstract Syntax Tree (AST).

Even if the population is small, there is still competition between the parent and the child; the fitter of the two survives.



    DrawPolygon (
        Point(554,93), Point(675,45), Point(2345,5356), Point(4,5)
    )

    DrawPolygon (
        Point(66,112), Point(12,99), Point(542,8756)
    )

    DrawPolygon (
        // dynamic number of points
        Point(423,342), Point(3,432), Point(12354,1234), Point(0,23)
    )
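To make the cloning-and-mutation step concrete, here is a minimal Python sketch of how such a polygon "DNA" might be represented and mutated. The representation, canvas size, mutation rate, and point limit are illustrative assumptions, not values from the original program:

```python
import copy
import random

WIDTH, HEIGHT = 200, 200  # assumed canvas size

def random_polygon():
    """A polygon gene: a list of (x, y) points plus an RGBA color."""
    points = [(random.randrange(WIDTH), random.randrange(HEIGHT))
              for _ in range(3)]  # start as a triangle
    color = tuple(random.randrange(256) for _ in range(4))
    return {"points": points, "color": color}

def mutate(dna, add_point_rate=0.05, max_points=10):
    """Clone the parent DNA and randomly change one polygon."""
    child = copy.deepcopy(dna)
    poly = random.choice(child)
    if random.random() < add_point_rate and len(poly["points"]) < max_points:
        # structural mutation: grow the polygon (dynamic number of points)
        poly["points"].append((random.randrange(WIDTH), random.randrange(HEIGHT)))
    else:
        # value mutation: move one existing point
        i = random.randrange(len(poly["points"]))
        poly["points"][i] = (random.randrange(WIDTH), random.randrange(HEIGHT))
    return child

dna = [random_polygon() for _ in range(5)]
child = mutate(dna)  # the parent stays intact; the child differs by one small change
```

The parent and child would then each be rendered and scored, with the fitter of the two kept for the next generation.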


Q) What is your fitness function?

A) The fitness function is a pixel-by-pixel comparison, where the fitness for each pixel is summed and the total is compared to the parent's.

Pseudo Sample:

double fitness = 0
for y = 0 to height - 1
    for x = 0 to width - 1
        color c1 = GetPixel(sourceImage, x, y)
        color c2 = GetPixel(generatedImage, x, y)

        // get the delta per color channel
        double deltaRed = c1.Red - c2.Red
        double deltaGreen = c1.Green - c2.Green
        double deltaBlue = c1.Blue - c2.Blue

        // measure the squared distance between the colors in 3D space
        double pixelFitness = deltaRed * deltaRed +
                              deltaGreen * deltaGreen +
                              deltaBlue * deltaBlue

        // add the pixel fitness to the total fitness (lower is better)
        fitness += pixelFitness

return fitness
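The pseudocode above maps directly to runnable code. Here is a minimal Python version, using plain 2D lists of (r, g, b) tuples in place of bitmaps:

```python
# A direct translation of the fitness pseudocode. Images are plain 2D lists
# of (r, g, b) tuples here to keep the sketch dependency-free; the original
# uses GDI+ bitmaps.
def fitness(source, generated):
    total = 0.0
    height = len(source)
    width = len(source[0])
    for y in range(height):
        for x in range(width):
            r1, g1, b1 = source[y][x]
            r2, g2, b2 = generated[y][x]
            dr, dg, db = r1 - r2, g1 - g2, b1 - b2
            # squared Euclidean distance between the colors in RGB space
            total += dr * dr + dg * dg + db * db
    return total  # lower is better

a = [[(255, 0, 0), (0, 0, 0)]]
b = [[(250, 0, 0), (0, 10, 0)]]
print(fitness(a, b))  # 5*5 + 10*10 = 125.0
```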

Q) How long did the generation take?

A) About 3 hours on my laptop.

Q) Where can I get the source code?

A) Here:

Adding more soon…

112 thoughts on “Genetic Programming: Mona Lisa FAQ”

  1. Could I have a pic with just the polygons (outline) in the final stage? I want to see side-by-side the final picture and the final polygons. my address is

    are you evolving just the colors or also the geometry?

  2. Will the polished version allow for users to select a picture file other than Mona Lisa?

  3. Roger, thanks for the clarifications. The only other question I would have would be how did you mutate the drawing? What probabilities did you use?

  4. Fascinating. This reminds me a lot of fractal compression using IFSs from “back in the day”. It shares the attribute of being resolution independent; a nice property for image compression.

    What sort of effective compression ratio do you see at the default image resolution? That is, if you use a standard compression routine (e.g. ZIP) to compress your presumably non-optimal DNA language, what is the output file size compared to the source?

  5. Pingback: Genetic Programming: Evolution of Mona Lisa « Roger Alsing Weblog

  6. I don’t know if there’d be a benefit but it’d be interesting fun to play with replacing pixelFitness() with a Delta E colour difference measure, i.e.

    RGB colourspace isn’t good for measuring colour differences – so there might be faster convergence and a better visual result with Delta E 1976, Delta E 1994 or Delta E 2000. Though the potential trade off is the computational expense of Delta E calcs and RGB to LAB colourspace conversion.

    Very neat btw.

  7. For a performance boost, I’d take out the sqrt() in the fitness function and just leave the multiplications:

    double pixelFitness = deltaRed * deltaRed
    + deltaGreen * deltaGreen
    + deltaBlue * deltaBlue

    While the sqrt() would give the proper Euclidean distance, omitting it still gives a valid indicator of fitness.

  8. I’m going to echo the comments of somebody else in the other thread… the program needs to also output in SVG.

  9. @Daniel ~ the fitness function would probably work without the Sqrt(), but its presence is still mathematically significant because we’re looking at the picture as a whole.

    With the Sqrt(), the match will get worse in a linear fashion when a region of the picture gets worse. Without it, the match will get worse quadratically.

    The question boils down to whether, for example, a single pixel that’s off by 20 colour units (measured in 3D space as it was in the function) is better or worse than two pixels that are off by 10 colour units each (according to the function presently, they’re equivalent, 20 vs. 20; take away the Sqrt and the one pixel is much worse, 400 vs. 200). If the one bad pixel is worse than the two less bad ones, the Sqrt() slows down the convergence.

    You’d probably get the fewest iterations if you determined the ideal exponent (instead of 1/2) to describe what the relationship between one “really bad” pixel and two “less bad” ones was, and used that exponent instead. Then you’d waste more time calculating the power, though, so it’s probably not worth it.
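    The trade-off can be checked with a couple of lines of Python, scoring one pixel off by 20 colour units against two pixels off by 10 each, with and without the per-pixel square root:

```python
import math

# One pixel off by 20 "colour units" versus two pixels off by 10 each,
# scored per pixel with and without the square root.
one_bad = [20]
two_less_bad = [10, 10]

def score(pixel_errors, use_sqrt):
    return sum(math.sqrt(e * e) if use_sqrt else e * e for e in pixel_errors)

print(score(one_bad, True), score(two_less_bad, True))    # 20.0 20.0 (a tie)
print(score(one_bad, False), score(two_less_bad, False))  # 400 200 (the single bad pixel counts as worse)
```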

  10. This is a great compression method! Did you think about making it 3D to compress a movie?

  11. Have you considered trying HSI instead of RGB for each pixel? This might reduce the noise more quickly as it would simplify it from the RGB version and the distance would be a bit more precise.

  12. If you are willing to share the source code to your app, regardless of how ‘clean looking’ it is, I can rework it so it will probably run many orders of magnitude faster as that is kind of my specialty.

    I would enjoy messing with the algorithm.


  13. To make further speed ups beyond the removal of sqrt() you may find that sum of absolute difference is an equivalent, and faster still fitness function. Where:

    pixelFitness=abs(deltaRed) + abs(deltaGreen) + abs(deltaBlue);

  14. The performance problem is not really in the comparer but in the renderer.

    The renderer uses .NET GDI+, which is a software-based renderer and fairly slow.

    So that's where I'm wasting CPU at the moment.

  15. This is fantastic. In thinking of this as an asymmetric compression algorithm, the basis function is a semi-transparent colored polygon and the encoding algorithm (transform) is the GP itself. As an Audio DSP engineer I have to wonder if audio compression could be achieved with a new basis function (not A(n)*sin(…)) and a GP used to encode.

  16. I wonder what the total node count (all of the corners) is? This might be a good general mechanism to vectorize a bitmap. More CPU -> better compression for a given rendering quality.

  17. About the first sample code:

    How are the values for the polygons/brushes set? Are they just random values? And after each iteration do they change randomly a bit, to see if the new version is better? And what exactly is “a bit”? I think randomly changing everything would be too much.

    Why is the last polygon made up of a dynamic number of points (>3), while the first two polygons have just 3 points?

  18. This is neat, but it's *not* Genetic Programming. The individuals of the population are not executed and are not programs. You call it an executable abstract tree, but I would contend that it's a simple data structure that has no power to do computation on its own.

    >> You call it an executable abstract tree, but I would contend that it's a simple data structure that has no power to do computation on its own.

    Would you say there is a difference if the root of the AST returned an INT or a Bitmap?

    What makes the INT version more of a program than an AST that returns a Bitmap?

    It’s not like an “Add” node in a computational tree is more complex than a “DrawPolygon” node.

    Call it Genetic declarative programming if you like ;-)

    At what stage do you consider a program to be a program?

    string x = "hello world"
    print x

    Is that a program?
    It does not compute anything; it merely renders the data stored in x onto the screen.

  20. Very cool, but you should get better results if you always kill off the parents and have more children to choose from. If you keep the parents, you can quite often get stuck in a place where the parent is “good enough” but none of its one-mutation-step-away children are better; that is, “stuck in a valley”, as people say when talking about hill climbing. If you always kill off the parents, you won't get this problem.
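    A minimal Python sketch of this "always kill the parents" idea (comma selection), on a toy one-dimensional problem; the target value, mutation step, and number of children are arbitrary choices for illustration:

```python
import random

# "Comma" selection: the parent is always discarded and the best of LAM
# children replaces it, instead of the original parent-vs-child scheme.
# Toy fitness: distance to a target number (lower is better).
TARGET = 42.0

def toy_fitness(x):
    return abs(x - TARGET)

def perturb(x):
    return x + random.uniform(-1, 1)  # small random mutation

def evolve_comma(x, generations=1000, lam=8):
    for _ in range(generations):
        children = [perturb(x) for _ in range(lam)]
        x = min(children, key=toy_fitness)  # the parent always dies
    return x

random.seed(1)
result = evolve_comma(0.0)  # ends up very close to TARGET
```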

  21. What if, instead of comparing pixels against the Mona Lisa to determine fitness, you allowed random internet users to vote on the “beauty” of each successive generation (say on a scale of -1 to 1) where users would compare the new generation to the old generation and vote on the improvement (or lack thereof). The goal of the program would be to obtain a consistently high rating. Given enough time, would we end up with a beautiful piece of abstract art or a muddled mess? Or what if the human evaluators were asked to judge not the beauty of the painting but whether each successive generation looked more or less like particular figure, such as a face. Would we be able to “crowd-source” our way toward something recognizable?

  22. Can't resist implementing this in OpenGL + CUDA or DirectX + CUDA. Expect a speedup of 1000x compared to a software renderer.

    It might even become feasible to apply this to movie clips, taking the polygons from the previous frame as a basis to start mutating for the current frame. Encode the movement of the polygon edges (and any removed/additional ones) and you’ve got a resolution-independent video codec.

    Source code, please ;)

  23. I think it's more simulated annealing than genetic programming, if anything, but it isn't really SA either, since it is easy to get stuck in a local optimum. I'd suspect that actually using simulated annealing (sometimes accepting a child that is worse than the parent) would give a better result. Check Wikipedia or something for a description. I'm really interested to see whether using it makes the picture better.

  24. Very cool. Could you explain what the random-update function is? How are the “genes” mutated? Your other page says you “mutate it slightly”. I assume your configuration space involves n polygons each with some number of vertices and with a color. Are you just adding Gaussian random numbers to those continuous variables, or what?

    It looks like this is a random down-hill walk through state space. Have you tried something like simulated annealing instead? It seems like that might get you a more-optimal solution.

    Alternately, there are many other optimization algorithms you might use that might work better/faster. One that I haven’t tried but that looks interesting for cases in which an analytical derivative does not exist is this:

    Happy optimization.

  25. Hi Roger,

    Firstly, this is very interesting stuff. I think the debate about whether it is really genetic programming, or evolution, or whatnot, is a bit silly… the point is: it is interesting, and it does apply the basic premise of genetic programming and genetic algorithms.

    When I saw this yesterday, I was inspired to tinker with it myself. I am in the process of making a C# app that does a similar, but not identical, thing, trying to match any picture. It is a great thought exercise.

    With regards to speed: I too am using GDI, because I am more interested in the concept than in making it super pretty, but I am not rendering each generation to the screen; I am rendering them to non-visible pictures. This seems to be quite high performance. Is that what you are also doing?

  26. Cool. I had the same idea decades ago, back on the Commodore 64, but that machine wasn't capable of such things. I was sure it would be too slow even on modern machines (hours per frame IS slow) and/or the compression ratio wouldn't be really impressive at acceptable quality. And some academic people (I'm just a game programmer / demoscener) had already axed the algorithm, so there was no reason to toy with it.

    Anyway, it looks impressive. I wonder how fast it can be after optimizations, and what its compression ratio is. It would be interesting to test it on real-life photos (lena.bmp) with only triangles (more of them) in a different colorspace. Unfortunately, even if its compression ratio is really good, I'm afraid it wouldn't be useful in video compression.

  27. This is really quite amazing. I’m curious to see how well it gets results with other photos – the “enigmatic” quality of the smile is particularly well represented here.

    The chief difficulty of the GP is the selection of the fitness function. Another is the creation of a large enough population to find the best fit. Given the (almost) zero cost per child, what would be the most efficient number of children to spawn before selection? Would there be a benefit to merging the top two (three, four… n) fittest in some way to introduce the changes, rather than random mutations? Could other qualities benefit the selection? For example, have one trait determined by edge detection to create a line drawing, and another trait selected by colour fidelity. The more traits, the better the return.

  28. … I also love the way that when someone does something cool like this, you get a million and one comments saying how it could be done better. They’re just jealous. I am too.

  29. This is really cool, I just wish I had thought of it first ;)

    I’d like to point out this: which is a JavaScript-based version of this which runs right in your browser. Couldn’t find it linked here before, but I could’ve missed it, so sorry if it’s been mentioned.

    @Rob Agar: And if you wrote some snippet of code, wouldn’t you want to learn if you could do it better? I would. Of course, I didn’t read all the “how to do it better” comments and some of them aren’t probably useful, but you get the point.

  30. Cool stuff, but it should be clarified that this is not a genetic or evolutionary algorithm, “only” a stochastic optimum search. To quote Mi5ke, who already explained this on Roger's about page:

    Mi5ke: This is not Genetic Programming, nor is it even a Genetic Algorithm! It is just stochastic hill-climbing. Why? You have a population of one, and only have random changes. Think about it! Random movement followed by evaluation.

    For a Genetic Algorithm you create a population of initially random “DNA” strings, and score each by how well it re-creates the Mona Lisa. You then pick two of them, with a probability in proportion to the score, and allow them to “breed”. That means, pick one of them at random, start copying it, but at randomly chosen moments, swap to copy the other string instead. If you wish, make *extraordinarily* rare copy errors. Score the new child, and add it to the population, while removing the lowest scoring string. Then go back and pick another pair for breeding. And so on.

    The key realisations necessary are that:

    A: all the needed randomness was in the original population, not in the copy “errors” – they just keep things from becoming too sterile;

    B: the two DNA strings can be thought of as vertices of a hyper-volume; the “child” is just another vertex of that same hyper-volume, not an in-between point.

    That would be a true GA, which should converge in a couple of hundred generations, unlike the nonsense presented!
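    The breeding step described above, copying from one parent and switching to the other at randomly chosen moments, can be sketched in a few lines of Python (the flat-list genome and switch probability are illustrative assumptions):

```python
import random

# Crossover breeding: copy genes from one parent and, at randomly chosen
# moments, switch to copying from the other parent instead.
def breed(parent_a, parent_b, switch_prob=0.1):
    child = []
    current, other = parent_a, parent_b
    for i in range(len(parent_a)):
        if random.random() < switch_prob:
            current, other = other, current  # crossover point: swap the source
        child.append(current[i])
    return child

random.seed(0)
child = breed(["A"] * 10, ["B"] * 10)
print("".join(child))  # runs of A's and B's, one run per crossover segment
```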

  31. It *does* serve a purpose. We can now ask a creationist: “If the Mona Lisa could be painted by random chance, would you believe organisms can evolve by random chance?”

    Anyway, I’m a bit disappointed that you’re working on threading when everybody wants the source code and you don’t want to release it because it looks messy. Also, you could get someone to refactor it for you.

    Heck, I’ll do it myself if you want.

    Last but not least, the worst possible reason for not releasing source code is embarrassment. Also, if you really want people to like the looks of your code, you'll gain much self-discipline by publishing it and cleaning it as quickly as possible. ;)

  32. Helgi,

    How is this an argument against creationism? You’ve got mutations happening as determined by an algorithm designed by an intelligent creator, and they are mutating based on their closeness to a predetermined, already-created image.

    Don’t turn this interesting code example into some ridiculous weapon against other people.

  33. First of all, about evolution vs. chance, nothing will ever happen by chance. When a creationist says “how can that have happened by chance”, the answer is that it could not. Evolution needs another driving source, a selection different from random chance. I have a short article (Swedish) on this here:

    I was wondering about the resolution of the fitness number generation. Would you get to the final result faster if the number of sample points increased a bit each generation? The first generation might need just 4 sample points, the next might need just 16, and so on.

  34. I’d like to point out, to those people who think this relates to Creationism/Evolution, that it requires knowledge of the endpoint. If you can define a “fitness” function that doesn’t know what the Mona Lisa looks like, and the algorithm still creates a work of art, then you have something.

    In the meantime, if you used a copyrighted picture as the goal, the result of this algorithm would be considered a “derivative work” (because you used the original to create it) and it’d violate the copyright. That’s not “random”.

  35. Interesting, but is it really that impressive?

    Would you find the following impressive?

    Take the 26 letters of the alphabet plus punctuation (e.g. '.', ',', ';', '?', '!', ' ', '(', ')').

    26+8=34. Generate a string that is as long as the entire novel War and Peace. Compare the string to the string of letters in War and Peace. Iteratively modify the string. If the string is closer to War and Peace (e.g. an additional letter or punctuation mark matches on this iteration), replace the prior string with this one; otherwise don't.

    In the worst case, in 34 * the number of letters in War and Peace iterations, the novel would be exactly recreated. Would that indicate any intelligence on the part of the program?

    Here colors are substituted for letters and the result is an approximation. Note the background didn't change much from iteration 99531 onward. It looks like the matching was focusing on the eyes and mouth.
  36. Further to the “this does not represent evolution” point — this exercise is actually DISanalogous to evolution because this process is teleological, while evolution is not.

    One of the flawed arguments against evolution is the “Paley Watchmaker” argument, also told as the “747 in a Junkyard” argument. The argument goes, for instance, that if a million tornadoes hit a junkyard in succession, you’d never expect them to assemble a 747 by chance.

    One major flaw (of several) in this argument is that it is not analogous to how evolution works. Evolution is not teleological — there is no “ladder” of evolution and no end target toward which to aim. Instead, natural selection selects for any modification which favors passing that modification on to descendants.

    This experiment, on the other hand, has a specific end goal in mind, and generational “fitness” is evaluated based on whether it represents a step closer to that pre-determined end goal.

    So this exercise, while fascinating, does not illustrate any aspect of evolution. Rather, it demonstrates what some use to MISrepresent evolution.

    That doesn’t make this exercise any less interesting or relevant for doing what it does (which I agree is more of a stochastic hill climbing algorithm). Let’s just not confuse it for what it’s not.

  37. Nice application of a GA. The fitness function could be improved if it became more stringent as generations passed. What kind of selection are you using? Ranked? Tournament? It'd be interesting to see your representation, i.e. the chromosomes. It seems it falls into a local minimum. The solution is good enough, but it could be much better! We want the sauce plz, who cares if it is chunky!

  38. Regarding Rob’s comment above:
    “… I also love the way that when someone does something cool like this, you get a million and one comments saying how it could be done better. They’re just jealous. I am too.”

    You left out the people who drag religion and politics into it, the people who argue about the semantics of your description, and the one or two jerks calling it a fake and calling you a fraud.

    Personally, I think it’s really cool whether it’s SA, GA, GP, or whatever and I’m looking forward to the cleaned up code…

  39. Very nice. I started with GAs when I was 16,
    but at that time (the 80s) all I could do was play with simple math problems, due to limited CPU and memory.
    So after a year or two I moved on to other stuff and forgot about it. Your article reminded me of this fascinating subject and shows that with today's PCs you can achieve very nice results.
    I was so impressed that I immediately started coding my own version in C#; sadly it's not producing anything near Mona Lisa yet. But when it does I'll post a link to the source here.

    Also looking forward to your source code (ugly or neat, doesn't matter) so I can see what I'm doing wrong :-)

  40. This is all fascinating — but I must say I’m even more fascinated by your comment icons. Hence I’m posting to see what I get.

  41. Unless the copyright was transferred to a relative post-mortem, the copyright entitlement dies once the creator does. Hence, the Mona Lisa would be considered part of public domain.


  42. Ok,

    First the creationist issue… This is a useful if somewhat blunt (in this implementation) weapon against the creationist myth that evolution is “random”. Ok, in this example the fitness function is very specific – though this need not be the case for all evolutionary computation. The car example posted above for instance is simply using “how far does this car go?” as the fitness function – yet selection pressures still “design” the car perfectly fine.

    The issue of which type of evolutionary computation this is, is moot. OK, there aren't as many evolutionary mechanisms in this as there could be (which probably puts it, as others have said, in the stochastic hill-climbing box), but it does create a tree structure (though it doesn't really make use of that for crossover and other genetic operators), much like Koza's “automated design” programs.

    Roger, I would suggest re-running the simulation with a larger population size and some true genetic operators and see what effect that has on the convergence rate. As for the fitness function – I would keep it as is for now as it is unlikely to be a major factor in the convergence rate (at least not as much as the other suggestions).

  43. Although impressive, there is also the possibility that Mr. Alsing has reversed the results. I can imagine a program taking a picture, putting in random polygons colored by the background, and slowly deconstructing the results until only a few polygons remain. I'm not saying this is what has happened; who am I to know. I'm only pointing out that it's also possible.

  44. Would you be interested in publishing a raw unoptimized version of your code?

    It would be nice to see it run on a distributed platform (my current toy: a distributed platform where users use a browser to contribute, with some other network-navigating tricks for the local network's idiosyncrasies and draconian security policies).

    And it would be a cool test of the distributed platform (I've never tested it with a practical application, just a simple factoring program for pseudoprimes).

    Maybe make this into a SourceForge / open-source project?

  45. Pingback: Extremely Late Renaissance » Mona Lisa Underdrawn

  46. Add me to the list of folks who’d love to see the source code.

    I, too, started my own project after seeing this. I’m taking a population of 50 individuals, each with 50 triangles, through 20,000 generations.

    Currently, I’m not at all confident that my selection function is appropriate. (Nor my fitness function. Or my breeding algorithm.) I’m not getting results close to yours yet…. but it sure is fun playing with this!

  47. You say the slowness is in the rendering; it's not necessary to render on each iteration, is it? If it is necessary, for the fitness method for example, then just render to screen every 100th or 1000th iteration or whatever, rendering only to memory on the other frames.

    Maybe it's the lack of sleep, but I seem to remember C# and Java image-generation slowness being in the “serialization” of the image to the display device, not necessarily the actual assembly of the image. No need to throw 3D hardware at a 2D image of 200×200 pixels.

  48. Pingback: A single snowflake in the fractal explosion of technology

  49. This reminded me of a 1991 project from Karl Sims:

    “It might be interesting to attempt to automatically evolve a symbolic expression that could generate a simple specific goal image. An image differencing function could be used to calculate a fitness based on how close a test image was to the goal, and an expression could be searched for by automatic selection.”

    Guess he was right :)
    I’m a bit underwhelmed by the progress in the last 17 years though..

  50. @jeremiah

    I am in the process of writing an implementation of Roger’s wonderful idea using a generation improvement / breeding approach.

    Like Roger I am using .NET (C#). Roger is right that the slowdown is in GDI+. I am using a threaded approach with the worker threads separate from the GUI management thread, so I am rendering off screen, yet this off-screen rendering, which is necessary for the fitness check, is one of the bottlenecks. The other bottleneck is actually reading from the rendered images. GDI+ is not optimised for this type of work. Having said that, GDI+ is very easy to use, and like (I suspect) Roger, I am more interested in the algorithm than the specifics of implementation, so the trade-off in performance is well worth the ability to focus on the evolution code.

  51. First of all, let me compliment you on the end result! That Mona Lisa is very distinguishable!

    I really wonder about the minimum number of polygons it would take to make a pixel-perfect representation of the source image. If that number were sufficiently low, it might amount to a very good compression algorithm. The algorithm would be very processor-intensive in encoding, but the resulting decoding should be very light.

  52. @Kristin: No, of course the Mona Lisa isn’t copyrighted. I was merely pointing out that if you did use a copyrighted image, the polygon graph produced by this algorithm would be a derivative work. Yes, the graph was created by random changes, but you then compared it to the original to decide whether the random change made it “better” or “worse”. The comparison introduces non-randomness (which is why we get something recognizable, obviously).

    For instance, suppose you create a random string of base pairs with the same length as the human genome. Applying the same algorithm, you change one pair at random, and if the mutated DNA is closer to the human genome you keep the mutation; if it’s not closer you discard it. You could repeat this process until you have the complete human genome, but what you’ve observed wasn’t “evolution”.

  53. First of all, I love the elegance of this GA application. A truly brilliant and illustrative example of GA.

    I was trying to roll my own in Flash by using polygons to render a webcam snapshot, but I’m having difficulty with the mutations — it’s prematurely settling on a result that doesn’t resemble the source image.

    I would love to see your source code for inspiration, and share my own once it works.

    In my application, I’m taking a snapshot from the camera object into a BitmapData. Then I render a series of random polygons into a BitmapData and do a compare() and convert to greyscale using an applyFilter(). This produces a B&W difference map between the two images. To any Flash developers out there: Is there any way to convert a BitmapData to a Matrix or a way to sum the total RGB color data in a BitmapData? I’m doing it pixel by pixel which is painfully slow.

    Thank you.

  54. @Tony: I’m not a flash developer, but I’d look into whether you could get a histogram of that greyscaled image. Then you’d simply have to sum the histogram elements times their indexes… e.g. H[1] would be the number of pixels which were off by 1, so that represents H[1]*1, whereas H[2] is the number of pixels which were off by 2, so you’d multiply it by 2, and you’d end up with fitness = H[1]*1 + H[2]*2 + … + H[255]*255.

    The other thing you could try is averaging the image. (Perhaps Flash has a pixelate function… e.g. instead of summing 40k pixels, you could reduce it to several hundred rectangles representing the average colour over the pixels they cover. You’d lose resolution, but it’d probably be faster.)
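    The histogram trick reduces the fitness sum to a weighted sum over 256 bins. A Python sketch, assuming a histogram of the greyscale difference image is available:

```python
# Given a 256-bin histogram of the greyscale difference image, the summed
# fitness is just each bin's pixel count weighted by its delta value.
def fitness_from_histogram(hist):
    return sum(count * delta for delta, count in enumerate(hist))

# Toy histogram: 100 pixels match exactly, 5 are off by 1, 2 are off by 3.
hist = [0] * 256
hist[0], hist[1], hist[3] = 100, 5, 2
print(fitness_from_histogram(hist))  # 5*1 + 2*3 = 11
```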

  55. “For instance, suppose you create a random string of base pairs with the same length as the human genome. Applying the same algorithm, you change one pair at random, and if the mutated DNA is closer to the human genome you keep the mutation; if it’s not closer you discard it. You could repeat this process until you have the complete human genome, but what you’ve observed wasn’t “evolution”.”

    Yes it is, the DNA changes at random and then selection pressures determine whether it survives or not. This kind of example with a fixed goal has significantly more specific selection pressures than in real life, but the basic process is still there.

    For an example without a fixed goal, check out the video “Evolution IS a Blind Watchmaker” by Chuck Kopec (cdk007), evolving a watch, with selection favouring structures that are good at telling the time, but without any specific target in mind.

  56. I whipped up a flash version this afternoon. It just uses triangles, but it mutates vertex position and each color channel (for 50 triangles). Also, the image size is smaller to speed up fitness calculation.

    There’s a link on the page to download a zip file of the code and stuff if anyone’s interested.

  57. Wow, this is the first time I have seen something like this, and it's very cool! Nice job ^^ Do you have any other pictures with those triangles?

  58. As a thought for further speed-up:
    Mutate multiple polygons. Much of the area will be the same in many of them, which means that the fitness check doesn't have to calculate it multiple times.

    Now take the N best of this generation, and construct N(N-1)/2 “cross-breeds” where, for the values in which they differ, the child's value is somewhere between the two. Each one of these is in the next generation. In addition, construct N more that are just the original N plus a mutation.

    Track the performance of the descendants compared to the parent. Any that are worse than the parent are replaced by the parent. This conserves good “genes”.

    If you decide to use a power function as someone earlier suggested, you may get an additional speed-up by using a lookup table rather than using the FPU all the time (fractional roots are computationally expensive).

  59. I have got my implementation to a workable state.

    My implementation uses breeding, rather than mutating a single parent. It also has various configurations (dominance, mutation, etc.).

    At this early stage, because I am using generational improvements, my algorithm does not resolve a match as fast as some of the other implementations, but given enough run time, its ultimate result seems to be very good.

    I will continue to improve it, and feedback is very welcome.

    You can find the source, and a pre-built binary at:

    It is C# .NET 2.0 – it may work with Mono, I don’t really know much about Mono.

    @Sherwood Botsford

    My implementation actually has something similar to this. I have a breeder limit: every generation, the total population is culled down to the X most-fit candidates. Children compete with parents.

  60. Just a quick note to add to my last post. The current default “Mutation Model” in my application is “Rampant”. This is actually the latest model I added, and it is very experimental, and is not great at this stage. For faster results I recommend using one of the others.

  61. Hey Roger,

    I was looking through your code (thanks for releasing it) to see if there was anything I could learn from it to help my implementation. The trick with the ‘unsafe’ bypass of the GDI GetPixel is very nice.

    I have implemented it in all my FitnessModels. Combined with some tweaks I made to my favored fitness model I now see visible improvement every update. In under 3 minutes I can usually see the picture taking shape.

    Still probably not as efficient as your implementation, but it goes to show that your idea works just as well implemented as an evolutionary model (multiple breeding generations).

    My favored fitness model, by the way, is an alternating “quick scan”. It is basically the pixel-delta method you put forward, except that it skips data: on the first turn it scans every Y row but only every 5th X column, then on the next turn it scans every X column but only every 5th Y row.

    Works quite nicely.
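    The alternating quick scan could look something like this. A Python sketch, with images assumed to be 2D grids of (r, g, b) tuples rather than GDI bitmaps:

```python
def quick_scan_fitness(source, generated, turn, step=5):
    # Sampled pixel-delta fitness ("quick scan"): on even turns scan every
    # row but only every `step`-th column; on odd turns scan every column
    # but only every `step`-th row.
    fitness = 0.0
    for y in range(len(source)):
        for x in range(len(source[0])):
            skip = (x % step != 0) if turn % 2 == 0 else (y % step != 0)
            if skip:
                continue
            (r1, g1, b1), (r2, g2, b2) = source[y][x], generated[y][x]
            # squared color distance, as in the original pixel-delta method
            fitness += (r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2
    return fitness
```

    Each turn touches only 1/`step` of the pixels, and alternating the sampled axis means no pixel is permanently invisible to selection.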

  62. @Tom: Ah, I got beaten to the punch it seems. I gotta get back to my paid job, but when I have some free time I’ll finish and put up my own.

    @Steve: Yes, the histogram() function does the trick with a 256-element array. Thank you for your suggestion. Actually I’m not a Flash developer either, but I thought it would be a good way to learn Flash, as genetic programming is relatively simple compared to, say, collision detection or whatever. I mostly do PHP, but it’s not dynamic enough to really illustrate the power of genetic algorithms :(

  63. Great program. But I was wondering… why is it that you chose the Mona Lisa and not Lena? (I know that the Mona Lisa is very distinguishable, but I would have imagined that Lena is even more so to your target audience ;) That, and the Utah teapot.

    An interesting variation might be not to compare to a concrete image for the fitness, but rather to some parameters that define fine art (like colour composition, etc.). Obviously it’s difficult to quantify such parameters even for existing pieces, but the parameters themselves could also evolve over time.

  64. This is really great, Roger. I’ve toyed with genetic algorithms myself in the past, but this really gives a nice graphic illustration. When I first saw this last week I decided I’d give it a go myself; I’ve put the source code for my Java version up at

  65. It’s not been active for about four years (though just now when I checked, it looks like one can once again vote), but perhaps see it for an approach that could be combined with this to achieve picture creation governed only by the biome/landscape of ‘visitor aesthetics’.

    The basic recombination technique used there is that the string of words from a ‘fit’ member of the population is taken and randomly cross-linked with another (there’s an emerging ‘species’ bias) to create offspring of each complementary mix (and a little extra mutation). This could be applied (with some modifications/improvements, to taste) to the set of polygon ‘chromosome’ units.

    Images and word-strings do have different priorities: a good combination of adjacent words naturally keeps sitting together, with a correspondingly low chance of being split compared to words placed at greater distance from each other, whereas adjacent features in a polygon creation could be intrinsically defined across any distance within the equivalent ‘poly-string’ and not gain the same degree of ‘strength by proximity’ (and that’s not the least of the variations). Still, the remaining parallels are overwhelming.

    The problem with using people for establishing fitness (marking up and down the most and least preferred of a pair) is of course the low speed of environmental emulation, the need to keep it active, and the difficulty of re-running the scenario without wasting much invested personal energy. Something along the lines of the ReverseBoggle concept mentioned elsewhere (with a vast ‘lexicon’ of images, possibly swapped and changed to emulate a changing landscape/ecosystem) might be a good way around that issue.

  66. A simple but powerful optimization might be to start with a dramatically scaled-down source image and canvas. Then, once fitness reaches a certain proximity (or a user interjects with a button), step up the source image and canvas size to allow finer granularity in the changes once the basic foundations are in place. Rendering and per-pixel analysis would be blazingly fast at first.

    Maybe even start with fewer polygons, and allow new ones to be introduced at each scaling-up. This might encourage a more optimal “throw the primer on the canvas, THEN start worrying about the details” design pattern.

    For example, given a 200×200 source image, at first we might want to scale that down to 50×50 for blazingly fast, yet rough, approximation, using say 30 polygons. Run for a while until fitness is decent. Then we take a 100×100 version of the source image and multiply all coordinate values by 2 to scale them up to the new image space. Run the program, allowing new polygons to be added (if they improve fitness) up to 40 polygons, until a new fitness target is reached. Use the full source image, multiply coordinates by 2 again, and run until the user stops it, allowing up to the 50th polygon to be added.
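    The staged plan above reduces to a schedule plus a coordinate upscaling step. A minimal Python sketch (the stage sizes and polygon caps mirror the 50/100/200 example; everything else about the evolver is assumed):

```python
def scale_schedule(full_size=200, factors=(4, 2, 1), polygon_caps=(30, 40, 50)):
    # One (canvas_size, max_polygons) pair per stage: start on a
    # scaled-down canvas with a low polygon cap, then grow both.
    return [(full_size // f, cap) for f, cap in zip(factors, polygon_caps)]

def upscale_dna(dna, scale=2):
    # When moving to the next stage, multiply every vertex coordinate so
    # the evolved polygons map onto the larger canvas. `dna` is a list of
    # polygons, each a list of (x, y) vertices.
    return [[(x * scale, y * scale) for (x, y) in polygon] for polygon in dna]
```

    The evolver would then run each stage until its fitness target is hit, call `upscale_dna` on the current best DNA, and continue on the next-larger canvas.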

  67. Hey Roger,

    I think this is an awesome project! I’m working on a Haskell port of your program, so I’d like to know if there are any licensing issues with the source code that I should consider. I couldn’t find any licensing hints in the source code.

    My Haskell code can be found here: . It’s not working yet; I’m still working on the rendering part.

  68. Well, amazing! But how did you calculate the final image? It seems too hard. How many days did it take to get the final result, approximately?

  69. Pingback: Russian Coding 4 Fun: My Mona Lisa turned out quite well

  70. I must say, amazing project! =D

    To all the nay-sayers who whine about this being a hill-climbing algorithm, GE, GP or GA: who cares? When push comes to shove, there is a population, and the fittest member goes on to reproduce. The author of the program chose asexual reproduction (mitosis + mutation), a perfectly valid option. After all, the numbers of bacteria, amoebae and others aren’t exactly suffering from the lack of sex now, are they?

    Inspired by this, I first wrote a PHP version, using the limited “2 entities per generation” approach. It produced an OK picture but (I suspect) got stuck in a local maximum.

    My next project had 100 entities per generation, and the top-10 performers got to reproduce asexually. I also had options for adding and removing polygons, adding/removing/moving polygon points, and changing colors. The result was this after (I think) roughly 70,000 generations:

    Mind, I didn’t in any way try to affect the colors, polygon count etc. and thus it looks quite horrible. =P

  71. Pingback: mental_floss Blog » Evolving Mona Lisa

  72. Superb.

    I have made something which on a much smaller scale resembles your application. It’s an Excel macro which is kind of a graphic implementation of Richard Dawkins’s Weasel program. You can find it in the following links (due to file-size limits, the containing rar file has been divided into two volumes):

    Part 1
    Part 2

    Hope you like it, and congratulations for a very inspiring work.

  73. @ Marvinbek

    Yes, you can have the fitness reach zero. I’ve written something similar which is significantly more optimized than the above program (it takes seconds rather than days to run), and I’ve run it long enough to get perfect representations (after a while it could only improve single pixels, and then it improved itself out of them).

    With the way it’s currently written it would take years. But rasterization is really just an image with pixels, and pixels are little squares, so clearly it *could* be done if the six vertices of a polygon end up covering a single pixel. The diminishing returns make it rather worthless, though: possible, but it would take years.

  74. Just for the record, the above posters who convinced Roger Alsing to convert his fitness routine from Euclidean to Euclidean-squared were wrong.

    There’s a significant difference between Euclidean and Euclidean-squared fitness routines. Squaring looks like a harmless monotone change (4 + 5 + 10 versus 16 + 25 + 100), but it changes which candidate wins.

    Take the pixel deltas 0, 0, 0, 10 and compare them with 5, 5, 5, 4.

    While the former adds up to 10 and the latter to 19, the former’s squares sum to 100 and the latter’s to only 91. Single random pixels with high color distances get a veto power rather than being averaged out. Squaring may seem a reasonable approximation (and with a population size of two, where any improvement is accepted, it likely does no harm), but with anything more you’re shooting yourself in the foot, and the generated images bear this out: they look significantly sloppier if you ramp up the population size to get better images without realizing that the speed-up assumed a population of two.

    While there are a lot of different things to try, one shouldn’t assume you can just drop that square root without any consequences. If your population size is larger than two, this speed-up is a shot in the foot. The sum of the squares of the color distances is different from the sum of the color distances.
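    The ordering flip in the 0,0,0,10 versus 5,5,5,4 example is easy to verify. A small Python check (the two metrics below are illustrative stand-ins for the per-pixel fitness sums discussed above):

```python
def summed(deltas):
    # plain summed color distances
    return sum(abs(d) for d in deltas)

def summed_squares(deltas):
    # Euclidean-squared: the per-pixel square root has been dropped
    return sum(d * d for d in deltas)

outlier = [0, 0, 0, 10]   # one pixel far off, the rest perfect
spread = [5, 5, 5, 4]     # the same total error spread out evenly

# Summed distances rank `outlier` as the better candidate (10 < 19),
# but summed squares flip the ordering (100 > 91): squaring hands
# single high-distance pixels a veto.
assert summed(outlier) < summed(spread)
assert summed_squares(outlier) > summed_squares(spread)
```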

  75. I have to say that I believe a larger population would be beneficial. The real problem with a larger population is deciding which candidates to weed out, so instead have them compete over computation time: the more effective ones get more CPU time/refinement, and if one of the outliers getting, say, 1/100th of the time happens upon a freak polygon that makes things much better, it will gain more computation time.

    In doing this, you lose the pitfalls of a hill-climbing algorithm. Honestly, despite your objections, this is still a hill-climbing algorithm: it will develop along one path and come out with one output. There is no competition at the end, there is no pruning, even if you try to weasel and say “parents compete with children.” OK, so the parent wins; the parent just makes another child. It is serial hill-climbing: oops, I turned right when I should have gone straight, so now I will go straight instead. Still stuck in the valley, when there could be others exploring other parts of the map efficiently, as I outlined above (give more CPU time to the better-seeming ones).
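    The fitness-proportional CPU allocation suggested above could be sketched like this. A hypothetical Python illustration (the inverse-fitness weighting is one possible choice, not something from the original program):

```python
import random

def allocate_cpu(population, fitness, total_steps=1000):
    # Hand out refinement steps in proportion to inverse fitness (lower
    # fitness is better), so stronger candidates get most of the CPU time
    # while outliers still get an occasional shot at a lucky polygon.
    weights = [1.0 / (1.0 + fitness(ind)) for ind in population]
    counts = [0] * len(population)
    for i in random.choices(range(len(population)), weights=weights, k=total_steps):
        counts[i] += 1
    return counts
```

    Because the allocation is stochastic rather than winner-take-all, weak lineages are starved gradually instead of being pruned outright.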

    The algorithm itself presupposes that the first few thousand generations are even important. The fitness function does not work on them properly; it preselects polygons that make up a sort of base and paints on top of them, which is a little absurd, considering that they make up broad details that would not survive in a finished picture unless it has a small tonal range like the Mona Lisa. It becomes a somewhat additive process (even though that base is moved around and changed a little). You could pick an almost random collection of polygons, start from there, and get roughly the same result; there is a lot of waste near the beginning. There should be a non-genetic pre-processing step, or multiple competitors that each get a fixed number of generations before any pruning is done.

    I would apportion the populations into the following divisions:
    1) A population that just works on the image
    2) A population that at first works on a blurred or simplified version
    3) A population that at first works on a contrast-enhanced version
    4) A population that at first works on an edge-detection composite
    5) A population that at first works on a tone-mapped version of the picture (much like HDR tone-mapping, but in reverse: in this case, taking data and reducing it).

    Finally, another fitness test that could be added is compressibility of the output. This could probably be performed efficiently by sorting polygons according to a weight function over their spatial location and average RGBA values, and then using differential encoding (next vertex is +4,+2 with color -8,+120,-100,+2 relative to the last). If you’re looking to compress images instead of just making pretty ones, this could possibly turn into a decent method, because of how easy it is to compress polygonal data on a 2D plane.

    You know, I’m inspired to implement all of this myself. I wonder how it might do.
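    The differential-encoding idea in the previous comment is straightforward to sketch. A Python illustration of delta-encoding a polygon’s vertex list (the sorting/weight function it would be combined with is left out):

```python
def delta_encode(vertices):
    # Store each vertex as an offset from the previous one; after sorting
    # polygons for spatial coherence, the deltas stay small and compress
    # well with any entropy coder.
    out, px, py = [], 0, 0
    for (x, y) in vertices:
        out.append((x - px, y - py))
        px, py = x, y
    return out

def delta_decode(deltas):
    # Inverse: accumulate offsets back into absolute coordinates.
    out, x, y = [], 0, 0
    for (dx, dy) in deltas:
        x, y = x + dx, y + dy
        out.append((x, y))
    return out
```

    The same scheme extends to the RGBA channels, exactly as the “+4,+2 with color -8,+120,-100,+2” example suggests.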

  76. In your tools:

    public static bool WillMutate(int mutationRate)
    {
        if (GetRandomNumber(0, mutationRate) == 1)
            return true;
        return false;
    }

    could be reduced to:

    public static bool WillMutate(int mutationRate)
    {
        return GetRandomNumber(0, mutationRate) == 1;
    }

  77. Hi,
    Interesting. I’ve been looking at using DNA alignments as pictures, looking for pictorial signals. Different alignments of diseases in pictorial form give interesting pictures, which at the moment I don’t understand.

  78. Pingback: Thought this was cool: Genetic Programming: Evolution of Mona Lisa | Roger Alsing Weblog « CWYAlpha

  79. Pingback: a sort of daily dump of links i have checked.. | imagine art
