Neural network basic
What is a Neural Network?
The term ‘neural’ originates from the neuron, or nerve cell, the basic functional unit of the human (and animal) nervous system, found in the brain and other parts of the body. A neural network is a group of algorithms that identifies the underlying relationships in a set of data, in a way loosely modelled on the human brain. A neural network can adapt to changing input, so the network produces the best possible result without redesigning the output procedure.
Components of an Artificial Neural Network:
1- Input Values
2- Output Value (Desired/Expected)
3- Initial Weights
4- Initial Bias (Default = 1) (Feed Forward)
5- Pre-Activation Function (∑_{i=1}^{n} x_i w_i + b)
6- Activation Function (sigmoid)
7- Current Output
8- Update Weights (Backpropagation)
Process:
The gradient descent algorithm is used, with the chain rule applied for backpropagation. First, take the input features and their corresponding desired output values. Each input is multiplied by its (initial) weight, the bias (default = 1) is added, and the result is fed forward to the pre-activation function, which computes the weighted sum of all the feeds. The sigmoid activation function is then applied to give the current output. For the weights to converge, backpropagation is used to update the weights of each input (and of the hidden layers).
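As a small worked example of one forward pass (with assumed values, not taken from the slides): for input x = [0, 1], initial weights w = [0.5, 0.5] and bias b = 1, the pre-activation is z = (0 × 0.5) + (1 × 0.5) + 1 = 1.5, and the sigmoid output is 1 / (1 + e^(−1.5)) ≈ 0.82. Backpropagation then adjusts w and b so that this output moves closer to the desired value of 1.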
Explained in code:
a) Input Layer (NumPy array)
b) Desired Outputs
c) Weights/Bias Initialization
Input Layer: use four input vectors in matrix format.
Desired Output: each input row has a desired output. It is not injected into the neural network as an input; it is used only for the error/loss calculation.
Initial Weights: associated with the inputs.
Bias: default value = 1.
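A minimal NumPy sketch of steps a) to c), assuming the four two-value inputs and the desired outputs listed in the prediction section ([0,0]→0, [0,1]→1, [1,0]→0, [1,1]→1); the random seed and the exact initialization are illustrative assumptions, not the slides' exact values:

import numpy as np

# a) Input layer: four input vectors in matrix format
inputs = np.array([[0, 0],
                   [0, 1],
                   [1, 0],
                   [1, 1]], dtype=float)

# b) Desired outputs: one target per input row; used only for the error/loss
#    calculation, never fed into the network as an input
desired_output = np.array([[0], [1], [0], [1]], dtype=float)

# c) Initial weights (one per input feature) and bias (default = 1)
np.random.seed(42)                # assumed seed, for reproducibility only
weights = np.random.rand(2, 1)    # small random initial weights
bias = 1.0                        # default bias value from the slides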
d) Activation Function
HIDDEN LAYERS CALCULATIONS
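The activation used throughout the slides is the sigmoid; below is a minimal sketch of the function and of the derivative needed later by the chain rule (the derivative is written in terms of the sigmoid's own output, a standard identity):

import numpy as np

def sigmoid(z):
    # Squashes the pre-activation z into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_derivative(a):
    # Derivative of the sigmoid expressed via its output a = sigmoid(z): a * (1 - a)
    return a * (1.0 - a)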
e) Chain Rule in Gradient Descent, Update of Weights & Calculation of the Pre-Activation Function
Steep activation function.
Derivative calculation of the activation function.
Epochs is a term in machine learning that indicates the number of passes over the entire training dataset (the number of iterations).
Calculation of the pre-activation function (∑_{i=1}^{n} x_i w_i + b) over all weights.
Calculation of the error and of the total loss.
Current output.
Update of weights (backpropagation).
Update of bias (backpropagation).
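A compact training-loop sketch tying the pieces of e) together: pre-activation, sigmoid, error/loss, and the chain-rule updates of the weights and the bias over a number of epochs. The learning rate, the number of epochs, and the mean-squared-error loss are assumptions made for illustration; they are not specified in the slides.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_derivative(a):
    return a * (1.0 - a)

# Assumed data and initial parameters (same shapes as in the sketch above)
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
desired_output = np.array([[0], [1], [0], [1]], dtype=float)
np.random.seed(42)
weights = np.random.rand(2, 1)
bias = 1.0
learning_rate = 0.1               # assumed value

for epoch in range(10000):        # epochs = passes over the entire training set
    # Pre-activation: z = sum_i(x_i * w_i) + b for every input row
    z = np.dot(inputs, weights) + bias
    # Activation: the current output of the network
    current_output = sigmoid(z)

    # Error and total loss (mean squared error, an assumed choice)
    error = current_output - desired_output
    loss = np.mean(error ** 2)

    # Chain rule: dLoss/dw = dLoss/da * da/dz * dz/dw
    delta = error * sigmoid_derivative(current_output)
    weights -= learning_rate * np.dot(inputs.T, delta)   # backpropagation: update weights
    bias -= learning_rate * np.sum(delta)                # backpropagation: update bias

    if epoch % 2000 == 0:
        print(f"epoch {epoch}: loss {loss:.4f}")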
f) Prediction
Prediction is based on a comparison between the calculated output and the desired output.
The desired output of input [0, 0] is 0; the network output is approximately 0.
The desired output of input [0, 1] is 1; the network output is approximately 1.
The desired output of input [1, 1] is 1; the network output is approximately 1.
The desired output of input [1, 0] is 0; the network output is approximately 0.
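A short prediction sketch matching the comparison above; the weight and bias values are purely illustrative stand-ins for what training might produce, not figures from the original document:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative post-training parameters (assumed values)
weights = np.array([[-0.5], [6.0]])
bias = -3.0

inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
desired_output = np.array([[0], [1], [0], [1]], dtype=float)

predictions = sigmoid(np.dot(inputs, weights) + bias)
for x, d, p in zip(inputs, desired_output, predictions):
    # Each prediction should be approximately equal to its desired output
    print(f"input {x}: desired {int(d[0])}, predicted {p[0]:.3f}")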
