Google Research has published details of its research into neural networks. The researchers experiment with images in various ways: feeding pictures into a network, running the process in reverse so that a network learns to create shapes out of random noise, and everything in between.
The underlying techniques produce remarkable portraits. They suggest that, in the foreseeable future, art lovers may have to start wondering whether they are looking at a man-made work or at one created by a few sets of artificial neurons.
A name for the new movement already exists: inceptionism. The approach consists of training neural networks by having them analyze millions of images, after which the network parameters are gradually adjusted until the classification the researchers want comes out. Each image first enters the input layer, which then 'talks' to the next layer, and so on until the output layer is reached. The network's 'answer' comes from the output layer.
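To make the layer-to-layer 'talking' concrete, here is a minimal sketch of such a forward pass. The layer sizes and random weights are illustrative assumptions, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Three layers: input (4 units) -> hidden (8 units) -> output (3 classes).
# In a real network these weights would be learned from millions of images.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def forward(image_vector):
    hidden = relu(image_vector @ W1)   # input layer 'talks' to the hidden layer
    logits = hidden @ W2               # hidden layer 'talks' to the output layer
    # Softmax turns the output layer into class probabilities: the 'answer'.
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

answer = forward(rng.normal(size=4))
print(answer.argmax())  # index of the class the network picks
```

Training would adjust `W1` and `W2` until the answer matches the desired classification; the forward pass itself stays the same.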
The challenge for the researchers is to understand exactly what takes place in each layer; at present, what each layer detects is not known. It may be, for example, that the first layer looks for corners and edges, that the following layers interpret basic shapes, such as a door or an animal, and that the last layers become active for more complex forms, such as entire buildings. To find out what happens inside the network, the process can be reversed, for example by asking the network to produce a picture of a banana from random noise, with genuinely interesting results.
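The idea of running the process in reverse can be sketched as gradient ascent on the input: start from random noise and nudge the image so that one chosen class score rises. The tiny linear 'network' below and its 'banana' class index are assumptions for illustration only, not Google's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_classes = 16, 3
W = rng.normal(size=(n_pixels, n_classes))  # placeholder learned weights
target = 2                                  # pretend this is the "banana" class

x = rng.normal(size=n_pixels)               # random noise as the starting image
start_score = x @ W[:, target]

for _ in range(100):
    grad = W[:, target]                     # d(score)/dx for this linear layer
    x += 0.1 * grad                         # nudge the image toward "banana"

final_score = x @ W[:, target]
print(final_score > start_score)            # the image now scores much higher
```

With a deep network the gradient would come from backpropagation rather than a closed form, but the loop is the same: change the image, not the weights.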
The team has also conducted tests in which it is not specified in advance what is depicted; instead, the network itself must decide what it sees. A random picture is loaded, and one layer of the neural network is asked to report what it detects. Because each layer of the neural network handles abstraction in a different way, the results range from simple strokes and characters to whole images. Through a feedback loop, increasingly recognizable images emerge.
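The feedback loop can be sketched as follows: whatever a layer already detects in the image is amplified by adjusting the image to increase that layer's activations, so faint patterns grow into recognizable ones. The single hidden layer and random weights here are assumptions for the demo:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(16, 8))           # image (16 "pixels") -> layer (8 units)
image = rng.normal(size=16) * 0.1      # start from a faint random image

def layer(x):
    return np.maximum(0.0, x @ W)      # ReLU activations: what the layer "sees"

start_energy = layer(image).sum()

for _ in range(50):
    h = layer(image)
    grad = h @ W.T                     # gradient of 0.5*||h||^2 w.r.t. the image
    image += 0.01 * grad / (np.abs(grad).max() + 1e-8)  # amplify what it sees

print(layer(image).sum() > start_energy)  # the layer's response has grown
```

Feeding the adjusted image back in and repeating is exactly the loop described above; which features get amplified depends on which layer is asked.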
Source: Inception Image Gallery
Ultimately, of course, Google is not doing this work just for fun. The goal is to understand and visualize how neural networks learn to perform difficult classification tasks, how network architecture can be improved, and what the network has learned during training.
Update, June 22, 11:38: Tweaker H!GHGuY rightly notes that the article greatly simplifies reality. The article has been adjusted accordingly.