Backpropagation algorithm in Neural Network
Initial inputs, weights and biases:
i1 = 0.05, i2 = 0.10
w1 = 0.15, w2 = 0.20, w3 = 0.25, w4 = 0.30, b1 = 0.35
w5 = 0.40, w6 = 0.45, w7 = 0.50, w8 = 0.55, b2 = 0.60
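For reference, these starting values can be written down as a short Python sketch; variable names mirror the labels above, and b1 = 0.35 is taken so the numbers line up with the error totals quoted at the end of the deck.

```python
# Initial values used throughout the worked example.
i1, i2 = 0.05, 0.10                       # inputs
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30   # input -> hidden weights
b1 = 0.35                                 # hidden-layer bias (consistent with the quoted errors)
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55   # hidden -> output weights
b2 = 0.60                                 # output-layer bias
```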
Forward Pass
Squash the net input using the logistic function to get the output.
Forward Pass
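Continuing from the values above, a minimal sketch of the forward pass (assuming the 2-input, 2-hidden, 2-output layout implied by the weight labels, with one shared bias per layer):

```python
import math

def sigmoid(x):
    """Logistic squashing function."""
    return 1.0 / (1.0 + math.exp(-x))

# Hidden layer: weighted sum of the inputs plus the shared bias, then squash.
net_h1 = w1 * i1 + w2 * i2 + b1
net_h2 = w3 * i1 + w4 * i2 + b1
out_h1 = sigmoid(net_h1)
out_h2 = sigmoid(net_h2)

# Output layer: weighted sum of the hidden outputs plus the shared bias, then squash.
net_o1 = w5 * out_h1 + w6 * out_h2 + b2
net_o2 = w7 * out_h1 + w8 * out_h2 + b2
out_o1 = sigmoid(net_o1)
out_o2 = sigmoid(net_o2)
```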
Calculating the Total Error
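The total error is the sum of the squared errors of the two output neurons. The target values are not shown on these slides, so the ones below are assumptions; this sketch continues the code above.

```python
# Assumed targets; substitute the actual targets from the example.
target_o1, target_o2 = 0.01, 0.99

# Squared error per output neuron; the 1/2 factor just simplifies the derivative.
E_o1 = 0.5 * (target_o1 - out_o1) ** 2
E_o2 = 0.5 * (target_o2 - out_o2) ** 2
E_total = E_o1 + E_o2   # ~0.2984 with the assumed targets
```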
The Backwards Pass
Update the weights using gradient descent, so that the actual output moves closer to the target output, thereby minimizing the error for each output neuron and for the network as a whole.
Consider w5. We want to know how much a change in w5 affects the total error, i.e. the partial derivative of E_total with respect to w5. By the chain rule:
∂E_total/∂w5 = ∂E_total/∂out_o1 · ∂out_o1/∂net_o1 · ∂net_o1/∂w5
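Writing that chain rule out in code (continuing the sketch above; the derivative of the logistic function is out · (1 − out)):

```python
# Chain rule: dE_total/dw5 = dE_total/dout_o1 * dout_o1/dnet_o1 * dnet_o1/dw5
dE_dout_o1   = -(target_o1 - out_o1)     # from E_o1 = 0.5 * (target_o1 - out_o1)^2
dout_dnet_o1 = out_o1 * (1.0 - out_o1)   # derivative of the logistic function
dnet_o1_dw5  = out_h1                    # net_o1 = w5*out_h1 + w6*out_h2 + b2
dE_dw5 = dE_dout_o1 * dout_dnet_o1 * dnet_o1_dw5
```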
Output Layer:
Update the weight.
To decrease the error, we then subtract this value, scaled by the learning rate η (eta), from the current weight.
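The update step itself, with a hypothetical learning rate (its value is not given on these slides):

```python
eta = 0.5                    # learning rate (assumed; not shown on the slides)
w5_new = w5 - eta * dE_dw5   # step w5 against its gradient
# w6, w7 and w8 are updated the same way with their own partial derivatives.
```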
We perform the actual updates in the neural network after we have the new weights leading into the hidden layer neurons (i.e., we use the original weights, not the updated weights, when we continue the backpropagation algorithm below).
Hidden Layer:
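For a hidden-layer weight such as w1, the same chain rule applies, except that out_h1 feeds both output neurons, so their error contributions are summed; and, as noted above, the original output-layer weights are used. A sketch continuing the code above:

```python
# Error signal (delta) for each output neuron: dE_o/dnet_o.
delta_o1 = -(target_o1 - out_o1) * out_o1 * (1.0 - out_o1)
delta_o2 = -(target_o2 - out_o2) * out_o2 * (1.0 - out_o2)

# out_h1 feeds both output neurons, so sum both error contributions,
# using the ORIGINAL w5 and w7 (not the freshly updated ones).
dE_dout_h1 = delta_o1 * w5 + delta_o2 * w7

# Chain through the hidden neuron's logistic and the incoming weight w1.
dE_dw1 = dE_dout_h1 * out_h1 * (1.0 - out_h1) * i1
w1_new = w1 - eta * dE_dw1
# w2, w3 and w4 follow the same pattern.
```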
Finally, we’ve updated all of our weights!
When we fed forward the 0.05 and 0.1 inputs originally, the error on the
network was 0.298371109.
After this first round of backpropagation, the total error is now down to
0.291027924.
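Putting the whole round together, those figures can be checked end to end with a short script. This is only a sketch: the targets, the learning rate, and b1 = 0.35 are assumptions chosen to match the quoted error totals, and the biases are left unchanged during the update.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, b, i1, i2):
    """One forward pass; returns the hidden and output activations."""
    out_h1 = sigmoid(w[1] * i1 + w[2] * i2 + b[1])
    out_h2 = sigmoid(w[3] * i1 + w[4] * i2 + b[1])
    out_o1 = sigmoid(w[5] * out_h1 + w[6] * out_h2 + b[2])
    out_o2 = sigmoid(w[7] * out_h1 + w[8] * out_h2 + b[2])
    return out_h1, out_h2, out_o1, out_o2

i1, i2 = 0.05, 0.10
t1, t2 = 0.01, 0.99            # assumed targets (not shown on the slides)
eta = 0.5                      # assumed learning rate
w = {1: 0.15, 2: 0.20, 3: 0.25, 4: 0.30, 5: 0.40, 6: 0.45, 7: 0.50, 8: 0.55}
b = {1: 0.35, 2: 0.60}

out_h1, out_h2, out_o1, out_o2 = forward(w, b, i1, i2)
print("initial total error:", 0.5 * (t1 - out_o1) ** 2 + 0.5 * (t2 - out_o2) ** 2)

# Output-layer deltas and weight updates (w5..w8).
d_o1 = -(t1 - out_o1) * out_o1 * (1 - out_o1)
d_o2 = -(t2 - out_o2) * out_o2 * (1 - out_o2)
new_w = dict(w)
new_w[5] = w[5] - eta * d_o1 * out_h1
new_w[6] = w[6] - eta * d_o1 * out_h2
new_w[7] = w[7] - eta * d_o2 * out_h1
new_w[8] = w[8] - eta * d_o2 * out_h2

# Hidden-layer deltas (using the ORIGINAL w5..w8) and weight updates (w1..w4).
d_h1 = (d_o1 * w[5] + d_o2 * w[7]) * out_h1 * (1 - out_h1)
d_h2 = (d_o1 * w[6] + d_o2 * w[8]) * out_h2 * (1 - out_h2)
new_w[1] = w[1] - eta * d_h1 * i1
new_w[2] = w[2] - eta * d_h1 * i2
new_w[3] = w[3] - eta * d_h2 * i1
new_w[4] = w[4] - eta * d_h2 * i2

# Re-run the forward pass with the updated weights; biases are left unchanged.
_, _, o1, o2 = forward(new_w, b, i1, i2)
print("total error after one update:", 0.5 * (t1 - o1) ** 2 + 0.5 * (t2 - o2) ** 2)
```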
