The Deep Dreams of Google's Neural Networks

Imagine a horse. Aside from the colour and the size, chances are you and I are picturing the same thing. That’s because, through experience, we’ve seen horses and may even have ridden them. Now try to imagine that you’d never seen a horse, and that you instead had to assemble one from other objects you had seen.

It’s in much the same way that Google have devised their Deep Dream neural networks. I’ll break this down a little, because it’s difficult to fully comprehend, but the results are fascinating. Google developed an artificial neural network with the ability to recognise images. This network was then fed images of animals, birds, reptiles and abstract shapes, in the hope that it would extract the essence of repeated imagery (for example, the patterns of feathers, or the distance between eyes and snout).

Each layer of the network deals with features at a different level of abstraction, so instead of analysing the image as a whole, the neural network picks out edges, shapes and patterns at various scales. Deep Dream turns this into a feedback loop: the image is adjusted to amplify whatever features a chosen layer detects, and the result is fed back in, so each pass pushes the picture closer to what the network has “seen” in training than to what the source image once was. To help illustrate the point, I’ve posted a few sample images below, alongside the same photos after being repeatedly fed through the analyser.
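To make that feedback loop concrete, here is a minimal sketch in Python. It assumes PyTorch and a recent torchvision with pretrained VGG16 weights; it is not Google’s original implementation (which used a GoogLeNet-style network in Caffe), and the layer_index, steps and lr values are purely illustrative.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
# Pretrained convolutional layers of VGG16, used as the "eye" of the dream.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.to(device).eval()

def deep_dream(image_path, layer_index=28, steps=20, lr=0.05):
    """Run a simple Deep Dream feedback loop on one image; returns a PIL image."""
    img = Image.open(image_path).convert("RGB")
    x = T.Compose([T.Resize(512), T.ToTensor()])(img).unsqueeze(0).to(device)
    x.requires_grad_(True)

    for _ in range(steps):
        # Forward pass, stopping at the layer whose features we want to amplify.
        out = x
        for i, layer in enumerate(model):
            out = layer(out)
            if i == layer_index:
                break
        # Gradient ascent: nudge the image to excite this layer more strongly,
        # so the picture drifts toward what the layer has "seen" in training.
        loss = out.norm()
        loss.backward()
        with torch.no_grad():
            x += lr * x.grad / (x.grad.abs().mean() + 1e-8)
            x.grad.zero_()
            x.clamp_(0, 1)

    return T.ToPILImage()(x.squeeze(0).detach().cpu())
```

Choosing a shallower layer tends to amplify simple textures and strokes, while a deeper one hallucinates whole objects, which is exactly the difference in abstraction described above.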

The current version of the neural network was trained mostly on images of animals, which explains why most of the images it conjures are animal-based. If you’d like to try this for yourself, all you’ll need is a JPEG image; then follow this link: http://deepdreamgenerator.com/ . I’d recommend saving and reprocessing your imagery to make the elements more pronounced, and if you’re not too horrified by the results, share them with us on Google+, Facebook, or Twitter. Be sure to tag them with #deepdream so other researchers can check them out too.
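If you’d rather do the save-and-reprocess step in code, the same idea can be chained. This hypothetical snippet reuses the deep_dream() sketch above, feeding each output back in so the dreamed-in elements grow bolder with each round.

```python
# Hypothetical chaining of the deep_dream() sketch above: each round
# re-dreams the previous output, making the elements more pronounced.
path = "horse.jpg"  # any JPEG of your own
for i in range(3):
    result = deep_dream(path)
    path = f"horse_dream_{i}.jpg"
    result.save(path)
```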