How does a back-propagation training algorithm work?

Back-propagation works with logic very similar to that of feed-forward; the difference is the direction of data flow. In the feed-forward step, you have the inputs and the output produced from them: you propagate the values forward, layer by layer, to compute the activations of the neurons ahead.
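
As a rough sketch of that forward flow (the layer structure, the sigmoid activation, and names like `weights` and `biases` are my own illustrative assumptions, not something fixed by the question):

```python
import numpy as np

def sigmoid(z):
    # Logistic activation; any differentiable activation function would work.
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Propagate the input x forward through each layer, keeping every
    intermediate activation so the backward pass can reuse it."""
    activations = [x]
    for W, b in zip(weights, biases):
        x = sigmoid(W @ x + b)   # weighted sum of inputs, then nonlinearity
        activations.append(x)
    return activations
```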

In the back-propagation step, you cannot know the error made by every neuron, only the errors of those in the output layer. Calculating the errors of the output nodes is straightforward: you take the difference between the output of each neuron and the target output for that instance in the training set. The neurons in the hidden layers must correct their errors based on this, so you have to pass the error values back to them. From these values, each hidden neuron can update its weights and other parameters using the weighted sum of the errors from the layer ahead of it.
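
Continuing the sketch above (again assuming sigmoid units and a squared-error loss, which are my choices for illustration), the backward pass computes the output-layer error first and then each hidden layer's error from the weighted errors of the layer ahead of it:

```python
def backward(activations, weights, biases, target, learning_rate=0.1):
    """Push the error back from the output layer and update the parameters."""
    # Output layer: difference between the network's output and the target,
    # scaled by the derivative of the sigmoid at that output.
    a_out = activations[-1]
    delta = (a_out - target) * a_out * (1.0 - a_out)

    grads_W, grads_b = [], []
    for layer in reversed(range(len(weights))):
        a_prev = activations[layer]
        grads_W.append(np.outer(delta, a_prev))
        grads_b.append(delta)
        if layer > 0:
            # Hidden-layer error: weighted sum of the errors from the layer
            # ahead, times the local activation derivative.
            a = activations[layer]
            delta = (weights[layer].T @ delta) * a * (1.0 - a)

    # Apply a plain gradient-descent step with the accumulated gradients.
    for layer, (dW, db) in enumerate(zip(reversed(grads_W), reversed(grads_b))):
        weights[layer] -= learning_rate * dW
        biases[layer] -= learning_rate * db
```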

A step-by-step demo of the feed-forward and back-propagation steps can be found here.


If you're new to neural networks, you can start by learning about the Perceptron, then advance to NN, which is actually a multilayer perceptron.


High-level description of the backpropagation algorithm

Backpropagation is trying to do a gradient descent on the error surface of the neural network, adjusting the weights with dynamic programming techniques to keep the computations tractable.

I will try to explain all of these concepts in high-level terms.

Error surface

If you have a neural network with, say, N neurons in the output layer, that means your output is really an N-dimensional vector, and that vector lives in an N-dimensional space (or on an N-dimensional surface). So does the "correct" output that you're training against. So does the difference between your "correct" answer and the actual output.

That difference, with suitable conditioning (in particular, some handling of signs, such as taking absolute values or squares), is the error vector, living on the error surface.
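
For concreteness, one common choice (my assumption here; the answer only says "suitable conditioning") is to collapse that difference into a single non-negative number, for example half the sum of squared components:

$$E = \tfrac{1}{2}\sum_{i=1}^{N}\left(y_i - t_i\right)^2$$

where $y_i$ is the network's i-th output and $t_i$ is the corresponding target value.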

Gradient descent

With that concept, you can think of training the neural network as the process of adjusting the weights of your neurons so that the error function is small, ideally zero. Conceptually, you do this with calculus. If you only had one output and one weight, this would be simple -- take a few derivatives, which would tell you which "direction" to move, and make an adjustment in that direction.
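
In that one-output, one-weight case, the whole adjustment reduces to a single update rule (writing $\eta$ for a small step size, a symbol I'm introducing here):

$$w \leftarrow w - \eta\,\frac{dE}{dw}$$

The sign of the derivative tells you which direction to move, and $\eta$ controls how far you move.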

But you don't have one neuron, you have N of them, and substantially more input weights.

The principle is the same, except instead of using calculus on lines looking for slopes that you can picture in your head, the equations become vector algebra expressions that you can't easily picture. The term gradient is the multi-dimensional analogue to slope on a line, and descent means you want to move down that error surface until the errors are small.
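
A minimal numerical sketch of gradient descent on that multi-dimensional error surface, assuming a single linear neuron and a squared-error function (the toy data and the learning rate are purely illustrative):

```python
import numpy as np

# Toy data for a single linear neuron: prediction = x . w
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
targets = np.array([1.0, 2.0, 3.0])

w = np.zeros(2)          # the weights we want to learn
learning_rate = 0.1

for step in range(200):
    predictions = X @ w
    errors = predictions - targets
    # Gradient of 0.5 * sum(errors^2) with respect to w.
    gradient = X.T @ errors
    # Step "downhill" on the error surface.
    w -= learning_rate * gradient

print(w)  # converges toward [2, 1], which fits the toy data exactly
```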

Dynamic programming

There's another problem, though -- if you have more than one layer, you can't easily see how changing a weight in some non-output layer changes the actual output, and hence the error.

Dynamic programming is a bookkeeping method to help track what's going on. At the very highest level, if you naively try to do all this vector calculus, you end up calculating some derivatives over and over again. The modern backpropagation algorithm avoids some of that, and it so happens that you update the output layer first, then the second-to-last layer, etc. Updates propagate backwards from the output, hence the name.
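
To make the reuse concrete, the quantity that gets cached is usually written as a per-layer error term $\delta$ (notation I'm assuming here, not taken from the answer). Each layer's $\delta$ is built from the already-computed $\delta$ of the layer ahead instead of re-deriving everything from the output:

$$\delta^{(l)} = \left((W^{(l+1)})^{\top}\,\delta^{(l+1)}\right)\odot f'\!\left(z^{(l)}\right),
\qquad
\frac{\partial E}{\partial W^{(l)}} = \delta^{(l)}\left(a^{(l-1)}\right)^{\top}$$

Here $z^{(l)}$ is the weighted input to layer $l$, $a^{(l)}$ its activation, $f'$ the derivative of the activation function, and $\odot$ an element-wise product.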

So, if you're lucky enough to have been exposed to gradient descent or vector calculus before, then hopefully that clicked.

The full derivation of backpropagation can be condensed into about a page of tight symbolic math (it's downright intimidating, in my opinion), but it's hard to get the sense of the algorithm without a high-level description. If you haven't got a good handle on vector calculus, then, sorry, the above probably wasn't helpful. But to get backpropagation to actually work, it's not necessary to understand the full derivation.


I found the following paper (by Rojas) very helpful when I was trying to understand this material, even though it's a big PDF of one chapter of his book.

http://page.mi.fu-berlin.de/rojas/neural/chapter/K7.pdf