Back-propagation

Terry Benzschawel

Error back-propagation is used in artificial neural networks to adjust the weights so as to reduce the error. Back-propagation is short for the “backward propagation of errors”: an error is computed at the output of the network and distributed backwards through the network’s layers. The motivation for back-propagation is to train a multilayered neural network so that it can learn appropriate internal representations of arbitrary mappings from inputs to outputs. The challenge of using back-propagation is the potential failure of the network to converge to the global error minimum (ie, the local minimum problem). A related problem is a potential lack of generalisation: that is, the failure to optimise out-of-sample model performance, usually owing to overfitting.

6.1 ERROR BACK-PROPAGATION

Error back-propagation is designed to control the characteristics of the gradient descent algorithm: the objective is to minimise the error at the network’s output efficiently, typically through adaptive adjustment of the learning rate. Back-propagation is performed by applying the “delta rule” in one of several ways. We consider first the generalised delta rule, which forms the basis for
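The generalised delta rule can be illustrated with a short sketch. The Python/NumPy example below is not taken from the chapter; the 2-2-1 architecture, sigmoid activations, XOR training data and fixed learning rate are illustrative assumptions. It shows the core mechanics: the output error is propagated backwards through the layers, and each weight is adjusted in proportion to the learning rate, the local delta and the incoming activation.

```python
# Minimal sketch of error back-propagation with the generalised delta rule.
# Assumed setup: a 2-2-1 sigmoid network trained on the XOR problem with a
# fixed learning rate (the chapter discusses adaptive learning rates).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy training data (assumed): 4 examples, 2 inputs, 1 target output
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random initial weights for a small 2-2-1 network
W1 = rng.normal(scale=0.5, size=(2, 2))   # input -> hidden weights
b1 = np.zeros((1, 2))
W2 = rng.normal(scale=0.5, size=(2, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

eta = 0.5   # learning rate (fixed here for simplicity)

for epoch in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)              # hidden activations
    o = sigmoid(h @ W2 + b2)              # network output

    # Output-layer delta: (target - output) times the sigmoid derivative
    delta_o = (y - o) * o * (1.0 - o)

    # Hidden-layer delta: output error propagated backwards through W2
    delta_h = (delta_o @ W2.T) * h * (1.0 - h)

    # Generalised delta rule: weight change = eta * delta * incoming activation
    W2 += eta * h.T @ delta_o
    b2 += eta * delta_o.sum(axis=0, keepdims=True)
    W1 += eta * X.T @ delta_h
    b1 += eta * delta_h.sum(axis=0, keepdims=True)

print(np.round(o, 3))   # outputs approach [0, 1, 1, 0] as the error is minimised
```

In practice the fixed learning rate above would be replaced by an adaptive scheme of the kind referred to in the text, and the error could still settle in a local rather than the global minimum, which is the convergence problem noted earlier.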
