This is a visualisation of the three-layer feedforward neural network described in “Neural Networks That Learn”, chapter 36 of The New Turing Omnibus. It was made for the 2016-05-31 meeting of London Computation Club.
The network can be trained to convert polar coordinates into rectangular coordinates. The training points are chosen at random from the unit disk, and their polar coordinates are fed into the network. The network’s current output for each training point is displayed as a dot, with a line connecting it to the correct rectangular coordinates for that point; the length of the line therefore corresponds to the network’s current error on that input. You can explore other inputs by moving the mouse around.
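The training-set construction and the target relationship can be sketched as follows (this is an illustrative reconstruction in Python, not the visualisation’s actual code; the function name is my own):

```python
import math
import random

def make_training_set(n, seed=0):
    """Sample n points uniformly from the unit disk.

    Returns (input, target) pairs: polar coordinates (r, theta)
    as the network's input, rectangular coordinates (x, y) as the
    correct output it is trained towards.
    """
    rng = random.Random(seed)
    pairs = []
    while len(pairs) < n:
        # Rejection-sample from the enclosing square so the
        # points are uniformly distributed over the disk.
        x = rng.uniform(-1, 1)
        y = rng.uniform(-1, 1)
        if x * x + y * y <= 1:
            r = math.hypot(x, y)
            theta = math.atan2(y, x)
            pairs.append(((r, theta), (x, y)))
    return pairs

training_set = make_training_set(100)
(r, theta), (x, y) = training_set[0]
# The target is exactly the rectangular form of the polar input,
# so a perfectly trained network would draw zero-length error lines.
assert abs(r * math.cos(theta) - x) < 1e-12
assert abs(r * math.sin(theta) - y) < 1e-12
```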
When you press “Start training”, the network is iteratively trained with backpropagation to reduce the error for all points in the training set. It’s interesting to watch how the outputs cluster and separate over time. A larger training set reveals more detail but makes the animation slower.
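The training step can be sketched as a full-batch gradient-descent loop over a three-layer network (input, one sigmoid hidden layer, linear output) minimising mean squared error. This is a minimal NumPy sketch under assumed choices of layer sizes, activation, and learning rate, not the chapter’s or the visualisation’s exact implementation:

```python
import math
import random
import numpy as np

def polar_training_data(n, seed=0):
    """Random unit-disk points: polar inputs, rectangular targets."""
    rng = random.Random(seed)
    inputs, targets = [], []
    while len(inputs) < n:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y <= 1:
            inputs.append((math.hypot(x, y), math.atan2(y, x)))
            targets.append((x, y))
    return np.array(inputs), np.array(targets)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X, Y = polar_training_data(100)
# Assumed architecture: 2 inputs -> 8 hidden units -> 2 outputs.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 2)); b2 = np.zeros(2)
lr = 0.1  # assumed learning rate

def loss():
    h = sigmoid(X @ W1 + b1)
    return np.mean((h @ W2 + b2 - Y) ** 2)

initial = loss()
for _ in range(2000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    pred = h @ W2 + b2
    # Backward pass: gradients of mean squared error.
    d_pred = 2 * (pred - Y) / len(X)
    dW2 = h.T @ d_pred; db2 = d_pred.sum(axis=0)
    dh = d_pred @ W2.T
    dz = dh * h * (1 - h)        # derivative of the sigmoid
    dW1 = X.T @ dz; db1 = dz.sum(axis=0)
    # Gradient-descent update shrinks the error lines over time.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

final = loss()
assert final < initial  # training reduced the total error
```

Watching `loss()` fall across iterations corresponds to watching the error lines shorten in the animation; a larger training set makes each iteration proportionally more expensive, which is why the animation slows down.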
Here’s the code.
— @tomstuart / firstname.lastname@example.org