Living Image Project

Notice: This project has ended. The page is no longer actively updated.
Images are generated by a combination of neural networks and genetic algorithms, using the DelphiNEAT library created by Mattias Fagerlund. I have written a Delphi application that takes user votes as input and uses them as a fitness measure for individual genotypes. Each genotype defines the internal structure of a neural network, and each image is generated by such a network, which has 7 inputs and 1 output.
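As a rough illustration of the vote-to-fitness step (the DelphiNEAT API is not shown on this page, so the record and field names below are my own invention, and the +1 offset is just one possible choice), the idea amounts to something like this:

    type
      TVotedGenotype = record
        Genotype: TObject;   // the evolved DelphiNEAT genotype (type simplified here)
        Votes: Integer;      // number of user votes this individual received
        Fitness: Double;     // fitness value fed back to the genetic algorithm
      end;

    procedure AssignFitness(var Population: array of TVotedGenotype);
    var
      i: Integer;
    begin
      // The fitness of each genotype is derived directly from its vote count;
      // adding 1 keeps unvoted individuals from collapsing to zero fitness.
      for i := Low(Population) to High(Population) do
        Population[i].Fitness := Population[i].Votes + 1;
    end;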
The inputs are (see the sketch after this list):
  1. The constant 1.
  2. The x-coordinate of the pixel, scaled to the -1..1 range.
  3. The y-coordinate of the pixel, scaled to the -1..1 range.
  4. The distance from the center of the image, also scaled.
  5. 1 if generating the intensity of red, 0 otherwise.
  6. 1 if generating the intensity of green, 0 otherwise.
  7. 1 if generating the intensity of blue, 0 otherwise.
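For a single pixel and color channel, those seven input values can be assembled roughly as follows (a sketch only: the array layout and the exact scaling of the distance input are my own choices, not taken from the original application):

    procedure BuildInputs(px, py, Width, Height, Channel: Integer;
      var Inputs: array of Double);
    var
      x, y: Double;
    begin
      // Scale the pixel coordinates to the -1..1 range.
      x := 2 * px / (Width - 1) - 1;
      y := 2 * py / (Height - 1) - 1;

      Inputs[0] := 1;                              // the constant input
      Inputs[1] := x;                              // scaled x-coordinate
      Inputs[2] := y;                              // scaled y-coordinate
      Inputs[3] := Sqrt(x * x + y * y) / Sqrt(2);  // distance from the center, here scaled to 0..1
      Inputs[4] := Ord(Channel = 0);               // 1 when generating red
      Inputs[5] := Ord(Channel = 1);               // 1 when generating green
      Inputs[6] := Ord(Channel = 2);               // 1 when generating blue
    end;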
The output is the intensity of a color component, in the -1..1 range. It is converted as trunc(abs(output)*255) (simplified) to get one actual byte-sized color component of an RGB pixel. The network is run three times per pixel to get the red, green and blue components, and this is repeated for every pixel of the image, which is originally 800x600.

All the images are then batch-processed in Photoshop to bring them to the desired size (and to remove some artifacts), and further compressed with PNGOUT. The results are uploaded to the site and entered into the database. All this takes about an hour, at about 100% CPU usage ;)
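Putting this together, the per-pixel loop looks roughly like the sketch below. RunNetwork is only a placeholder for the actual DelphiNEAT evaluation call, BuildInputs is the helper from the earlier sketch, and the byte conversion uses the simplified trunc(abs(output)*255) formula mentioned above.

    const
      ImgWidth  = 800;
      ImgHeight = 600;

    // Stand-in for the DelphiNEAT forward pass (7 inputs -> 1 output in -1..1).
    function RunNetwork(const Inputs: array of Double): Double;
    begin
      Result := 0.0;  // the real application queries the evolved network here
    end;

    procedure RenderImage;
    var
      px, py, Channel: Integer;
      Inputs: array[0..6] of Double;
      Output: Double;
      Rgb: array[0..2] of Byte;
    begin
      for py := 0 to ImgHeight - 1 do
        for px := 0 to ImgWidth - 1 do
        begin
          // The network is evaluated once per color component of the pixel.
          for Channel := 0 to 2 do
          begin
            BuildInputs(px, py, ImgWidth, ImgHeight, Channel, Inputs);
            Output := RunNetwork(Inputs);               // output in the -1..1 range
            Rgb[Channel] := Trunc(Abs(Output) * 255);   // simplified byte mapping
          end;
          // The Rgb triple would be written into the bitmap at (px, py) here (omitted).
        end;
    end;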