The Panda’s Thumb: Target? TARGET? We don’t need no stinkin’ Target!

Well, today has just been a panoply of wonderful science, lol. This article is NOT for the faint of heart, as it's a VERY in-depth look at genetic algorithms and the difference between targeted and non-targeted algorithms. It's a bit dense in places, but that's only because the subject material is fairly complex … I actually need to give the author, Dave Thomas, credit for making a very complex subject even THIS understandable.

The short story is that the article looks into criticisms, from Intelligent Design proponents, of the genetic algorithms currently used to simulate evolution. ID proponents argue that genetic algorithms have their target conditions built into them, and so have no real choice but to come up with the expected solution.

For example, ID critics tend to focus on some of the simplest genetic algorithm examples in the world of science. Richard Dawkins used the WEASEL program to demonstrate basic issues in genetic algorithms. Without going into too much detail (more can be found here if you want), the WEASEL program is a simple computer program that randomly generates strings of characters and applies a fitness test to them: in this case, how close the string is to “Methinks it is like a weasel” … those closest to this string are kept and replicated into the next generation, while the rest are discarded (or, in evolutionary terms, left to die out).
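To make that concrete, here's a minimal WEASEL-style sketch in Python. This is my own toy version, not Dawkins' original code; the population size, mutation rate, and uppercase alphabet are all parameters I made up for illustration:

```python
import random

# Toy WEASEL: the target string IS the fitness test, which is exactly
# the point ID critics seize on. (Uppercased to keep the alphabet small.)
TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Count how many characters already match the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Copy the string, occasionally swapping in a random character.
    return "".join(c if random.random() > rate else random.choice(CHARS)
                   for c in s)

random.seed(0)
parent = "".join(random.choice(CHARS) for _ in range(len(TARGET)))
generation = 0
while parent != TARGET:
    # Breed mutated copies; keep the fittest. Including the parent
    # itself guarantees fitness never goes backwards.
    candidates = [mutate(parent) for _ in range(100)] + [parent]
    parent = max(candidates, key=fitness)
    generation += 1
```

Notice that `fitness` literally compares against `TARGET` — the end condition is baked into the test, which is the weakness the next paragraph describes.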

The obvious problem with this simple example is that you are testing against your end condition. Evolution has no clue what the final result is supposed to look like, so any realistic fitness test must not contain the final conditions. ID proponents have seized on this simple fact and attempt to use it to discredit genetic algorithms as a whole. The problem is, Dawkins himself characterized the WEASEL program, even when he first described it, as an extremely simplified example intended to demonstrate the PROCEDURE of genetic algorithms in a simple, easily understandable way. Dawkins never intended the program to be taken as a true analog of ANY real-world evolutionary process, but merely to demonstrate the process by which a genetic algorithm might work.

The fact is, there are genetic algorithms out there that are designed as evolutionary models and that use very simple fitness tests which do not include the target condition being searched for. One graphical representation of an algorithm solving the Travelling Salesman Problem illustrates this well … because the ‘city’ locations are selected randomly, it's impossible for the ‘shortest route’ to be pre-programmed into the code as a target condition. All the fitness test can measure is how long a route the salesman has to take, and the solution will differ depending on where and how you group your cities. Ultimately, the simple fitness test of “Am I shorter than the shortest path so far?” is all that's required to produce any number of acceptable solutions for randomly placed cities.
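Here's a rough sketch of that idea (my own toy code, not the program from the article): a simple mutate-and-select loop where the only test is route length, and the best route appears nowhere in the code:

```python
import math
import random

# Random city positions: no 'correct answer' can possibly be hard-coded.
random.seed(1)
cities = [(random.random(), random.random()) for _ in range(8)]

def tour_length(order):
    # Total distance of the closed route visiting cities in this order.
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def mutate(order):
    # Swap two random cities in the tour.
    a, b = random.sample(range(len(order)), 2)
    child = order[:]
    child[a], child[b] = child[b], child[a]
    return child

best = list(range(len(cities)))
for _ in range(2000):
    candidate = mutate(best)
    # "Am I shorter than the shortest path so far?" is the whole test.
    if tour_length(candidate) < tour_length(best):
        best = candidate
```

The resulting tour depends entirely on where the random cities landed, which is the point: the fitness test steers the search without ever containing the answer.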

Thomas goes into a lengthy description of an even more complex genetic algorithm solving the Steiner Problem, another variation on the “shortest distance between many points” problem. For this one, there are very elegant solutions to be found in nature, which use sub-nodes that nearly always meet at 120° angles … they can be demonstrated physically using soap-bubble films connecting the various nodes. In these real-world examples, the elegant ‘Steiner solution’ is the one that comes up by default nearly all the time.

Thomas has created a genetic algorithm to work this problem, and the results he got from that algorithm are remarkable. While his genetic algorithm did find the elegant Steiner solution 0.5% of the time, far more often it settled on any one of a number of ‘MacGyver’ solutions that were functionally almost as good, but nowhere near as elegant. Such a result is interesting in itself … it shows that a variety of solutions can come out of a single algorithm, solutions that are essentially functionally equivalent. This proves fairly conclusively that no target conditions were in place … a target condition would produce said target every time. Instead, this algorithm produces the intended ‘target’ less than 1% of the time, and comes up with other solutions that fit the conditions the rest of the time.
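To get a feel for how a length-only fitness test can discover Steiner-like geometry, here's a hypothetical sketch (emphatically not Thomas' algorithm): four terminals at the corners of a unit square, two movable interior points, a fixed wiring topology, and a fitness test that measures nothing but total wire length:

```python
import math
import random

# Four fixed terminals at the corners of a unit square.
terminals = [(0, 0), (0, 1), (1, 0), (1, 1)]

def total_length(p, q):
    # p joins the two left corners, q the two right, and p joins q.
    # This is the ENTIRE fitness test: total wire length, nothing else.
    return (math.dist(p, terminals[0]) + math.dist(p, terminals[1]) +
            math.dist(q, terminals[2]) + math.dist(q, terminals[3]) +
            math.dist(p, q))

random.seed(2)
p = (random.random(), random.random())
q = (random.random(), random.random())
for _ in range(20000):
    # Nudge each interior point; keep the change only if it's shorter.
    cand_p = (p[0] + random.gauss(0, 0.02), p[1] + random.gauss(0, 0.02))
    if total_length(cand_p, q) < total_length(p, q):
        p = cand_p
    cand_q = (q[0] + random.gauss(0, 0.02), q[1] + random.gauss(0, 0.02))
    if total_length(p, cand_q) < total_length(p, q):
        q = cand_q
# For this topology the known optimum is 1 + sqrt(3) ≈ 2.732 (the
# 120° Steiner tree), versus 2*sqrt(2) ≈ 2.828 for crossing diagonals.
```

The 120° angles are never specified anywhere; they emerge purely from the "shorter is fitter" test, which is the whole argument in miniature.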

Even more fascinating, though, is that once the “MacGyver” solutions were discovered, nearly all of them could also be replicated by physical means … but one solution, containing a node where four edges meet (something that is highly unstable in the real world of soap bubbles), proved impossible to reproduce as a stable real-world film. Conversely, once Thomas started to play with the conditions for creating his bubble solutions (to create the unusual solutions he pulled the plates apart at angles slightly off horizontal), he found a solution that could be produced in the real-world soap-bubble simulator but was never discovered by the algorithm. These results show that reality and theory contain overlapping, but not necessarily identical, sets of solutions, and many bear no resemblance to the theoretical ‘target’ solution.

Ultimately, this shows pretty conclusively that genetic algorithms can produce workable results with no target specified beyond a very simple fitness test. In the real world of evolution, the fitness test is equally simple, with no ‘target condition’ built into it … the test is merely “Do I survive long enough to reproduce?” or “Do I reproduce more efficiently than those around me?” Thomas’ research shows there’s no need for evolution to know what the expected result is supposed to be; it is sufficient to apply the simple fitness test. Those organisms that survive go on to pass their mutations to the next generation, and through multiple generations (Thomas’ Steiner algorithm typically runs 100-200 generations before reaching a final solution) you end up with a complex, specific solution with no complex target to begin with. It’s not proof of evolution exactly … but it is proof that genetic algorithms can provide a mechanism that explains how we get from random conditions to a highly complex ‘solution’ without any idea what the end product will look like.

Filed under: Elron Steele, Evolution, Global Paradigms, science, Science & Technology, steeletech, View From The Edge |