Category Archives: Evolutionary Computing

Neural Darwinism – An Idea Reborn?

Years ago, I read a book by neuroscientist William H. Calvin called How Brains Think. In it, he outlines a theory in which consciousness emerges through a myriad of super-fast ‘microevolutionary processes’ inside our brains. Put simply, every thought you have and every decision you make is the result of an ultra-quick competition among a vast ‘population’ of candidate ideas. This theory is known as Neural Darwinism, and was first put forward as early as 1978.

This idea seemed fascinating to me. It provided a lot of answers to questions that I had about my own creative process, and also seemed to suggest that we could make ourselves better thinkers by providing the most suitable mental environments for ideas to evolve within. I’ve written about some of these ideas in previous posts on this blog. Like the best theories, it also seemed to have a certain elegance to it – it makes sense that one of the most powerful optimization mechanisms known – evolution – would be at work inside of our brains.

Unfortunately, there was a problem. In order for any kind of true evolution to occur in the brain, there needed to be some mechanism for replication. The mind didn’t seem to operate this way – from what we knew, ideas (or neural patterns) weren’t copied. Evolution won’t work if the finches can’t lay eggs. It seemed like an interesting theory might be dead in the water.

A week ago, though, a research team from Hungary and the UK posted a paper titled ‘The Neuronal Replicator Hypothesis’ which suggests that replication of neuronal patterns can (and does) occur within the brain using known neurophysiological processes. This would mean that, true to the ideas of Neural Darwinism, evolution could indeed play a role in cognition and consciousness. Furthermore, the paper also suggests that in combination with another known neural mechanism, Hebbian learning, this brain-based evolutionary process could be more powerful than the traditional Darwinist model.

This new development is exciting. Not only does it revive a once-promising theory, it also adds to it – perhaps giving us a workable model of how complex things like consciousness and creativity might arise. A better understanding of these processes is valuable not only at a scientific level, but also for anyone involved in creative endeavors. In the long run, it may be possible to actively ‘optimize’ our thinking processes – to have better ideas, to solve bigger problems – and be more creative.

7 Days of Source Day #7: Variance – Better Design Through Genetics?

Project: Variance
Date: January, 2007
Language: ActionScript 2
Key Concepts: Genetic Algorithms


I’ll admit – this is some fairly dusty code. But I wanted to end this 7 (or 25) days of source release with this project for a reason: I think it’s a pretty good idea. I took it to a prototype stage but not much further – maybe someone will find the time to move it forward into something more functional.

Variance is a tool that lets you evolve graphics, from pre-built ‘sets’ of elements and typefaces. It combines some simple editing tools (you can manually change colour, position, rotation, etc.) with some evolutionary magic – you can select two different compositions and hybridize them to get a new set of graphics that share some of the properties of their parents. It works fairly well; usually within about 6 generations you can get a graphic that starts to look pretty good (starting from some jumbled random compositions).

When I was building this tool, I imagined how nice it would be if this kind of functionality were built into a ‘traditional’ tool like Illustrator. Working on a logotype and feeling a bit stuck? Have the application generate 9 mutations of your current design, then evolve those for a few generations until you get a result you like. Just getting started on a poster design? Pick some assets and have the Variance plug-in generate some possibilities. In both of these cases, the designer is still being a designer – picking the best results, tweaking layouts – but they are letting the genetic algorithms do the grunt work of coming up with possible variations. At the same time, they are allowing for mutations – often modifications they may not have tried themselves. This is a very direct application of an Evolutionary Creative Process (ECP), which I’ve talked about before on this blog.
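The hybridization step described above can be sketched in a few lines. This is a minimal illustration in Python rather than the actual ActionScript 2 code – the property names and data structure are my own assumptions. A composition is treated as a list of elements, each a dict of editable properties; the child inherits each property from one parent at random, with occasional mutation of numeric values:

```python
import random

# Sketch of Variance-style hybridization (property names and structure
# are illustrative assumptions, not the actual tool's code).
def hybridize(parent_a, parent_b, mutation_rate=0.1):
    """Each element of the child inherits each property from one parent
    at random; numeric properties occasionally mutate slightly."""
    child = []
    for elem_a, elem_b in zip(parent_a, parent_b):
        elem = {}
        for prop in elem_a:
            value = random.choice([elem_a[prop], elem_b[prop]])
            if isinstance(value, (int, float)) and random.random() < mutation_rate:
                value *= random.uniform(0.8, 1.2)  # small random tweak
            elem[prop] = value
        child.append(elem)
    return child

a = [{"x": 10, "y": 20, "rotation": 0, "colour": "#ff0000"}]
b = [{"x": 50, "y": 20, "rotation": 45, "colour": "#0000ff"}]
print(hybridize(a, b))
```

Repeating this over a few generations, with the designer picking the parents each time, is all the ‘evolutionary magic’ a tool like this needs.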

Utility aside, it’s fun to play with.

Getting Started:

You can see Variance in action here:

To read more about the project and to get a walk-through, you can read this post (you may want to do this before you try the application)

You’ll need Flash to get into the source and to get working with it.

Download: VarianceSource (4.12 MB)


This software is licensed under the CC-GNU GPL version 2.0 or later.

7 Days of Source Day #5: smart.rockets

Project: smart.rockets
Date: Summer, 2006
Language: ActionScript 2
Key Concepts: Evolutionary computing, genetic algorithms, rocket science


For me, 2009 has undoubtedly been the Year of Data Visualization. Three years ago, I was engrossed in the somewhat different world of evolutionary computing. Having come from a background in genetics, I was very interested in exploring how evolutionary techniques could be applied in a programmatic fashion, particularly to the kinds of creative systems that I had been building over the previous years.

In preparation for a talk about my explorations at FlashBelt that year, I built a system in which a set of rockets evolved their way towards a target, navigating through obstacles by evolving their own individual firing patterns over time. It was a good demo to show how the general concept of evolutionary computing worked, and let me talk about mutation rates, hybridization algorithms, and all kinds of other interesting things.
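A stripped-down sketch of that system might look like the following. This is a Python simplification under my own assumptions (the physics, parameters, and names are not from the original ActionScript 2 project): each rocket’s genome is a fixed-length sequence of thrust vectors, fitness is closeness of the final position to a target, and each generation is bred from the fittest rockets:

```python
import random

# Toy version of the smart.rockets idea (my simplification, not the
# original code): evolve firing patterns that steer rockets to a target.
TARGET = (0.0, 100.0)
GENOME_LENGTH = 50
POP_SIZE = 100

def random_genome():
    return [(random.uniform(-1, 1), random.uniform(0, 2))
            for _ in range(GENOME_LENGTH)]

def fly(genome):
    """Integrate the firing pattern to get the rocket's final position."""
    x = y = vx = vy = 0.0
    for tx, ty in genome:
        vx += tx
        vy += ty - 0.5              # constant 'gravity' pulling down
        x += vx
        y += vy
    return x, y

def fitness(genome):
    fx, fy = fly(genome)
    return -((fx - TARGET[0]) ** 2 + (fy - TARGET[1]) ** 2)  # higher is better

def breed(a, b, mutation_rate=0.02):
    """Single-point crossover plus occasional random gene replacement."""
    cut = random.randrange(GENOME_LENGTH)
    return [((random.uniform(-1, 1), random.uniform(0, 2))
             if random.random() < mutation_rate else gene)
            for gene in a[:cut] + b[cut:]]

population = [random_genome() for _ in range(POP_SIZE)]
initial_best = max(map(fitness, population))
for generation in range(30):
    parents = sorted(population, key=fitness, reverse=True)[:20]
    # Elitism: carry the parents over, fill the rest with their offspring.
    population = parents + [breed(random.choice(parents), random.choice(parents))
                            for _ in range(POP_SIZE - 20)]
final_best = max(map(fitness, population))
```

The mutation rate, selection pressure, and crossover scheme are exactly the kinds of knobs the talk was about – changing any of them visibly changes how (and whether) the population converges.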

As it turns out, the idea of combining EC and space travel is a very real one. NASA uses evolutionary computing techniques to solve all kinds of problems, ranging from radio telescope scheduling to satellite antenna design to (you guessed it) the construction of rocket firing patterns. In a future of unmanned space probes, the long-term goal will be to have machines that can evolve to adapt to unexpected conditions. Such a machine might use genetic algorithms to evaluate a huge number of possible strategies, and to choose the one that is most likely to be a success.

I’ll admit, that previous paragraph places this little Flash toy in undeservedly grand company. But smart.rockets does provide a simple example to get you thinking about how evolutionary computing works. This project was built in ActionScript 2 (opening the file left me with a strange mix of nostalgia and terror). It would be fairly trivial to convert this project to AS3 or to Processing – if anyone does end up doing this, let me know and I will post a link.

Getting Started:

You can see a working example of this file here:

You’ll need Flash or some kind of .SWF compiler to get the files working.


In 2006 I was working as a freelance Flash developer and designer. As you might suspect, there wasn’t a whole lot of demand for commercial projects which used evolutionary computation, or Lindenmayer systems, or Particle Swarm Optimization, or any number of other strange things that I was exploring in my spare time. I owe a great deal of thanks to the organizers of conferences like FlashBelt, who gave me a venue to talk about things like evolving rockets and in turn gave me an excuse to keep working in these interesting (and completely non-lucrative) areas.

Download: (48k)


This software is licensed under the CC-GNU GPL version 2.0 or later.

A Better Idea? Some Thoughts on Evolution, Fitness & Creativity

Over the next little while, I will be publishing a series of posts that document some ideas and discussions that I have presented in talks and lectures over the last few years. This somewhat quickly-written post is a discussion of fitness landscapes and evolutionary computing, which expands on a part of a presentation titled ‘Emergence’ that I gave several times in 2008 & 2009.

Navigating Fitness Landscapes

Many of us, in one way or another, do things that require us to come up with good ideas. Indeed, most of us are looking not only for good ideas, but the best ones – within whichever framework or context we are operating in. How often, if ever, do we find that best idea? Are there strategies that we can use to find the best ideas faster, or more often?

Scientists are often faced with problems that have many solutions. They say that the ‘better’ solutions are more fit, and that the best solution is the most fit, or the fittest solution. A set of possible solutions for a problem, then, forms a fitness landscape of possible solutions. When we are facing a creative challenge of any kind, we are heading out into a similar landscape – high peaks of brilliant innovation and low valleys of mediocrity. Every point on this landscape represents a possible idea or solution; the ones higher up are better solutions than those which are lower down. While we hope to find a conceptual Mount Everest, we often find our upward limits on the tops of convenient grassy knolls along with perhaps the occasional noteworthy peak.

It would be nice if these landscapes were simple, with one obvious peak representing a single, best solution. We could set out into our landscape, and simply follow the upward path. When we got to the highest point, we’d be done. Best idea found! Lay out the picnic blanket, have a beer, and celebrate a job well done.

Got a Better Idea? (Supporting images)

The problem, of course, is that our landscapes of ideas are populated with many summits – peaks that could be (and probably are) higher than the ones on which we are enjoying our congratulatory lunch. There might be one higher peak, or more likely, many. While our peak may indeed be a local maximum, it may not remain a maximum as we expand our search to include all of the available options. Our travels become a combined problem of finding peaks, and deciding whether they are high enough to settle on.

Did I mention we are walking these landscapes blind? While our helpful overhead views show us that there are bigger & better ideas out there, from our spot on the top of our hill we can’t necessarily see those peaks – particularly the ones that are far away. Complete maps of fitness landscapes are rarely available – and those that are, are usually made well after they would have been useful.

Got a Better Idea? (Supporting images)

In practice, these fitness landscapes are often rugged and varied, with many sets of peaks, valleys, and plateaus. And, the bigger and more complicated the problem that we are working on is, the more vast these landscapes become. The more rugged the landscape is, the more difficult it is to successfully find the fittest point. Navigating through these solution terrains (blind) can be a tricky business. Luckily, science offers us a few strategies that are very useful in finding these elusive fitness peaks, and their associated good ideas.

Got a Better Idea? (Supporting images)

The Power Law of Innovation

I believe that when most of us are working creatively, we follow a ‘Keep Going Up’ (KGU) approach to finding higher ground. We start with an idea, and gradually improve it – we change colours, adjust composition, re-work wording, or add and remove notes, and keep the changes that make our idea better. We move upwards, usually in small steps. This approach is an excellent one for finding local maxima – those peaks nearest to us that are the highest ones. But if we only ever go up, it is very easy to get caught on a small local summit, and not find a much taller one that might be near by.

A solution might be to take bigger steps. This way, we can cover a lot of ground, and have a better chance of getting out of local ruts, or onto higher peaks. However, with big steps, we also risk jumping off of a good area entirely.

A very good solution, as it turns out, seems to be to take big steps in the beginning of a search, and smaller steps as you get closer to a local maximum. The best transition from big steps to small steps appears to obey a power law distribution, which looks something like this:

Power Law of Innovation
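As a toy illustration of this strategy (my own construction, not an example from the talk), here is a hill-climber on a rugged one-dimensional fitness function whose jump size decays as a power law – big, exploratory jumps early, small refinements later, and every move still obeying the ‘Keep Going Up’ rule:

```python
import math
import random

# Hill-climb a rugged 1-D fitness landscape with power-law-decaying jumps.
def fitness(x):
    # Several local peaks plus a gentle global trend: a rugged landscape.
    return math.sin(x) + 0.4 * math.sin(3.1 * x) - 0.01 * (x - 5.0) ** 2

def power_law_climb(steps=200, initial_jump=10.0, exponent=1.0):
    start = x = random.uniform(-10.0, 20.0)
    for t in range(1, steps + 1):
        jump = initial_jump / t ** exponent        # big steps early, small late
        candidate = x + random.uniform(-jump, jump)
        if fitness(candidate) > fitness(x):        # 'Keep Going Up'
            x = candidate
    return start, x

start, best = power_law_climb()
```

With `exponent` set to 0 (constant big jumps) the search never settles; with very large `exponent` it collapses immediately into pure local climbing. The power-law middle ground is what lets it escape small hills early and still refine a good peak at the end.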

Large, exploratory jumps around the landscape quite quickly settle into smaller, less-risky steps. There is some pretty convincing evidence that this kind of strategy is at work in technology innovation. Stuart Kauffman uses the example of the bicycle – in the beginning of the bicycle’s development, there was wild variation in structure, shape and size. But once the optimum form was found, bikes have changed very little from generation to generation.

Kauffman calls this effect The Power Law of Innovation, and it certainly seems to be in effect in the web world. We have seen several waves of innovation on the web (conveniently numbered!) and in each wave we have seen a huge variety of innovation at the beginning of the cycle, progressing to periods of very (very) little variability by the end. Companies like Google, Facebook & Twitter find high peaks in the web fitness landscape, and subsequent companies are often content to stake their claim close by this already proven ground. Similar patterns can be found in a lot of diverse fields, and go a long way to explaining trends in music, fashion, and art.

We are still faced with the problem of being stuck on a local maximum when there might be bigger, better hills somewhere on the landscape. This conundrum is compounded when we throw in a really large and particularly troublesome reality – these real-life fitness landscapes are constantly changing. Not only are we wandering through unmarked terrain blindfolded, but that terrain is changing with every step we take. Even if we have found a really great, really high peak on which to pitch our tent, it may be that that peak is moments away from becoming a valley (some particularly large web companies & publishers find themselves in this situation today). How do we avoid getting stranded on these local maxima? A good answer might lie in the application of evolutionary strategies.

Again, with the Darwinism

When asked to summarize evolution in one phrase, most people would answer with this: Survival of the Fittest. However, this admittedly catchy phrase only describes one small (albeit necessary) part of the puzzle: competition.

In a survival of the fittest model, we’d take a population of possible ideas (this population makes up the landscape that we have been talking about so far), and choose the best ones. We could then take the best of those, and theoretically move towards an optimum solution. I think this is how a lot of us work on a project – start with some initial ideas, choose the best two, battle with those until we settle on one idea (or until a client chooses one), then tweak it until we get a result we’re happy with. A model of that kind of process looks something like this:

Got a Better Idea? (Supporting images)

What we are really doing here is employing our KGU strategy – we will find local maxima very effectively, but we’re gambling that those local maxima are going to be good ones. Most often, we’re going to end up on the tops of hills and not mountains.

To avoid this, we can include a few other aspects of an evolutionary approach. First, we can create a new population of solutions at every step. Rather than continually narrowing our pool of possible solutions, we are keeping our available selection broad, and in doing so are offering more possibilities to find the best ideas. The other thing that we do in an Evolutionary Creative Process (ECP) is to allow for mutation in every new generation of ideas. This allows us to occasionally take those big jumps across the fitness landscape that were successful for us in the beginning – but with the added safety net of a larger population of possible choices in every generation.

A model of an ECP looks like this:

Got a Better Idea? (Supporting images)

Keep making ideas. Combine the best results. Take risks. These somewhat common-sense approaches gain credence when placed together into an evolutionary framework. This model is interesting because it can be applied and tested both conceptually and practically. We can heed this advice when we’re thinking and creating (evolving ideas) – but we can also put the model into direct application through the use of Genetic Algorithms (GAs).

Genetic Algorithms take a lot of the concepts that we have discussed so far in this post and put them into action on a computer. They use some of the principles behind evolution to ‘breed’ solutions to problems. GAs have been used in the past to assist in the design of NASA’s satellite antennae, to solve complex math equations, to fit the Mona Lisa into 140 characters, to design buildings, and to solve a pile of other tricky problems. They are particularly successful at solving problems with many, many possible solutions, and for which there aren’t any known answers (which seems to fairly accurately describe pretty much every creative problem, ever). A GA can be applied to any problem, provided that fitness can be measured (i.e. we can tell that one solution is better than another) and that individual solutions can be encoded by a hybridizable genome (one solution needs to be able to be bred with another to give a set of results).
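Those two requirements – measurable fitness and a hybridizable genome – are easy to see in a minimal GA. The classic ‘weasel’-style toy below is my own example, not one of the projects mentioned in this post: fitness is the number of characters matching a target phrase, and two genomes hybridize by taking each character from either parent:

```python
import random
import string

# A minimal GA: fitness is measurable (matching characters) and the
# genome is hybridizable (each character can come from either parent).
TARGET = "a better idea"
CHARS = string.ascii_lowercase + " "
POP_SIZE = 200

def fitness(genome):
    return sum(a == b for a, b in zip(genome, TARGET))

def breed(a, b, mutation_rate=0.02):
    child = [random.choice(pair) for pair in zip(a, b)]       # hybridize
    child = [random.choice(CHARS) if random.random() < mutation_rate else c
             for c in child]                                  # mutate
    return "".join(child)

population = ["".join(random.choice(CHARS) for _ in TARGET)
              for _ in range(POP_SIZE)]
initial_best = max(map(fitness, population))
for generation in range(1000):
    parents = sorted(population, key=fitness, reverse=True)[:40]
    if fitness(parents[0]) == len(TARGET):
        break                                # a perfect solution was bred
    # Elitism: keep the parents, breed the rest of the new population.
    population = parents + [breed(random.choice(parents), random.choice(parents))
                            for _ in range(POP_SIZE - 40)]
final_best = max(map(fitness, population))
```

The target phrase here is stated up front, which no real creative problem allows – but the mechanics of selection, hybridization, and mutation are the same ones that apply when fitness is judged rather than computed.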

Fast computers running Genetic Algorithms can move through huge fitness landscapes incredibly quickly. However, assessing the fitness of solutions can be difficult for a computer, particularly where aesthetic judgements are concerned. A good solution may be to allow the computer to do the ‘grunt’ work, assessing solutions by machine-judgeable criteria, then having our brains (which are far better suited to these kinds of tasks, for now) continue the process for other criteria. In this case, computers can be used to make a vast fitness landscape smaller and more navigable.

Into the Real World

For the last few months, I have been working on a project which quite literally applies the concepts that I have talked about in this post. Conceived by Alex Beim of Tangible Interaction, the project involves the design of a 60m x 10m landscaped accessible outdoor playspace which will be built in Richmond, BC. We have built a tool (using Processing) which allows us to generate a near-infinite number of possible playspace layouts (a landscape of landscapes!) – and then use a Genetic Algorithm to evolve the population, selecting for interesting and accessible results. Alex and I can evolve through thousands of generations of playgrounds in the course of an hour, saving favourites, then evolve these together to get new results.

This is a unique approach to designing a playground. It allows us to explore many more possibilities than we could with a conventional approach, while considering and managing a number of very important constraints. It’s also a first chance to take some of the ideas that I have been developing over the last few years and apply them to an incredibly interesting and dynamic problem.

The park will be built along the Fraser River later this year.


Links and resources

Stuart Kauffman’s At Home in the Universe: The Search for the Laws of Self-Organization and Complexity is a great resource for learning more about fitness landscapes as well as many other fascinating (and controversial) topics.

The Wikipedia entry on Genetic Algorithms is a good place to start to explore Evolutionary Computing.

Darwin Rocks! is a cool Danish site designed to explain evolution in a fun way – great for younger readers.

A few years ago I built a prototype of a composition tool that employs an interactive genetic algorithm to assist in making simple graphical compositions like logotypes. That project, Variance, didn’t get very far (though I’d love to see it resuscitated), but I think it’s an interesting model for how simple applications of evolutionary computing techniques could assist in the design process, with the designer acting as the judge of fitness.

I’ve made a couple of other projects over the years that involve GAs and evolution – Darwinstruments, Smart Rockets

The fitness landscape diagrams were made with a quick sketch I built in Processing.

Breeding Images

Glocal Pool - Imagined Phylogeny #4

The Glocal Project is a massive contributive artwork. Two months before the launch of the project, we already have upwards of 8,000 submissions from more than 2,000 participants around the world. 

One of the most challenging questions has been: how can we make sense of such a large collection of images?

Obviously the first place to start is to catalogue as much information as we can about each image. Some of this information is easy to gather: place, date, tags, and other basic information are readily available through Flickr. We’ve also written some simple scripts to record luminosity and to put together a colour palette for each image. Perhaps most interestingly, we’ve also integrated compositional analysis software, which looks at each image and assigns it a ‘signature’. This signature can then be compared against others in the database to find similar images. This is a very useful tool, since it allows us to find relationships between images that may not have been obvious to a human observer.

I began thinking about these image signatures as a kind of genotype – genetic information that describes each unique image. With that in mind, I wondered whether it would be possible to breed images! The process starts off simply – the image signatures are spliced together at two insertion points:

Sig 1: 1111|111111111111|111111

Sig 2: 2222|222222222222|222222

Child: 1111|222222222222|111111
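In code, that two-point splice is only a few lines. This is a sketch under the assumption that signatures are equal-length strings (the real similarity engine’s format may differ):

```python
import random

# Two-point crossover of image signatures, as in the diagram above.
def splice(sig_a, sig_b, cut1=None, cut2=None):
    """The child keeps the outer segments of the first parent and takes
    its middle segment from the second."""
    if cut1 is None or cut2 is None:
        cut1, cut2 = sorted(random.sample(range(1, len(sig_a)), 2))
    return sig_a[:cut1] + sig_b[cut1:cut2] + sig_a[cut2:]

print(splice("1" * 22, "2" * 22, 4, 16))
# → 1111222222222222111111
```

When the cut points are omitted, they are chosen at random – which is how a pool of signatures can yield a different child on every breeding.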

We then take the child signature and run it through the similarity engine, looking for images in the Glocal pool that match the child most closely. Happily, this process worked. Below, you can see the three images that result from ‘breeding’ the initial two images. In the offspring, we see the circular element from the parent image on the left in all three images. The most successful child here is the middle one, where we see both the light circular shape from the ‘egg’ and the colour abstraction from the image on the right.

Phylogeny #4 - detail 1

This process can be repeated over generations. In the next image below, I’ve selected the two outside images and asked for images that could be their offspring. In almost all of the child images, we see the consistent circular image in the middle of the frame. There are a few outliers, which may have been imperfect matches – or, more interestingly, which may have picked up on ‘dormant’ portions of the image genotype from previous generations. 

Phylogeny #4 - detail 2

We can proceed through these ‘trees’ in a generational fashion, or we can diverge and back-breed. If you take a close look at the image at the top of this post (click to get a larger view), you will see that there is a fair amount of inter-generational mixing. 

As this process continues, we can explore the relational landscape that exists in the Glocal pool, and in the process we construct ‘family trees’ which present a possible way in which the images could be related. I imagine an anthropologist, stumbling onto a box containing 8,000 images, might apply similar techniques to make some sense of what stories and histories lie within. These ‘imagined phylogenies’ could be constructed from any database of images, and of course with a larger database the relationships would be clearer. Given a large enough database, we could see fairly seamless trees constructed in which the offspring very strongly resemble each of their parents. It may also be possible to apply these techniques to historical databases of images, perhaps providing some useful information about image relationships.

Phylogeny #4 - detail 3

We will be posting the ‘live’ version of this tool very soon. In the meantime, you can see more images in our Glocal Visualizations Flickr set, along with other visualizations that have been produced as part of the Glocal project so far. As always, questions and feedback are welcome and appreciated!