SOFM Network - Operates in 2 Phases
1- Training Phase.
• Input data is fed to the neurons in the output layer, each of which is assigned a weight vector.
• Each output unit competes with the others by computing a similarity score against the input using the Euclidean distance (ED) measure.
• The output unit closest to the input sample by similarity is chosen as the winning unit, and its connection weights are adjusted by a learning factor.
• Thus the best-matching output unit, whose weights are adjusted, moves closer to the input sample, and a topological feature map is formed.
• This process is repeated until the map no longer changes.
2- Mapping Phase - test samples are classified.
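In symbols (a standard formulation of the above, not shown on the slide): for input vector x and output unit j with weights w_j, the similarity score is the distance d_j = Σ_i (x_i − w_ij)²; the unit with the smallest d_j wins, and its weights are updated as w_ij ← w_ij + α(x_i − w_ij), where α is the learning factor.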
Module -5
Artificial Neural Networks
Introduction
• ANNs imitate the human brain (which constitutes a mass of neurons).
• Neurons are processing units which receive information, process it, and then transmit data to other neurons.
• ANN is a learning mechanism that models the human brain.
ANN - Model
Each neuron is modelled as a computing unit called a Node.
● A node performs complex calculations.
● Nodes operate in parallel and learn from observations.
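As an illustration (a minimal sketch, not from the slides), a single node computes a weighted sum of its inputs plus a bias and passes it through an activation function:

import math

def node(inputs, weights, bias):
    # Weighted sum of inputs plus bias (the net input)
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Sigmoid activation squashes the net input into (0, 1)
    return 1 / (1 + math.exp(-net))

print(node([1.0, 0.5], [0.4, -0.2], 0.1))  # -> approximately 0.599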
Applications of ANN
● NLP; Pattern, Face, Speech, and Character Recognition
● Stock Prediction
● Text Processing, Computer Vision, etc.
PERCEPTRON AND LEARNING
THEORY
Linear Binary Classifier used for Supervised Learning
Steps
1. Inputs from other neurons.
2. Weights and Bias.
3. Net Sum.
4. Activation Function.
Perceptron Algorithm
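In its standard form, the perceptron learning algorithm proceeds as follows (a summary of the usual formulation): 1. initialize the weights and bias; 2. for each training sample, compute the net sum Σ_i w_i x_i plus the bias and apply the step activation to obtain the output y; 3. if y differs from the target t, update each weight as w_i ← w_i + α(t − y) x_i, and the bias likewise; 4. repeat over the training set until all samples are classified correctly.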
Problem
Consider a perceptron to represent the Boolean function AND with the initial weights w1 = 0.3, w2 = -0.2, learning rate α = 0.2 and bias θ = 0.4.
The activation function used here is the step function f(x), which gives the output value as binary, i.e., 0 or 1. If the value of f(x) is greater than or equal to 0, it outputs 1, or else it outputs 0.
Design a perceptron that performs the Boolean function AND and update the weights until the Boolean function gives the desired output.
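A runnable sketch of this problem (assuming, as is conventional, that the bias θ is added to the weighted sum and updated with the same rule as the weights):

# Perceptron learning for Boolean AND.
# Initial values from the problem: w1 = 0.3, w2 = -0.2, alpha = 0.2, theta = 0.4.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.3, -0.2]
theta = 0.4
alpha = 0.2

def predict(x):
    net = w[0] * x[0] + w[1] * x[1] + theta
    return 1 if net >= 0 else 0           # step activation: f(net) >= 0 -> 1

epoch = 0
while any(predict(x) != t for x, t in samples):
    epoch += 1
    for x, t in samples:
        y = predict(x)
        if y != t:                        # update only on misclassification
            w[0] += alpha * (t - y) * x[0]
            w[1] += alpha * (t - y) * x[1]
            theta += alpha * (t - y)      # bias treated as a weight with input 1

print(epoch, w, theta)                    # final weights realizing AND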
XOR Problem
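XOR is not linearly separable, so a single perceptron cannot represent it; a hidden layer is needed. One hand-crafted two-layer solution (the weights below are chosen for illustration, not taken from the slides) composes XOR as AND(OR(x1, x2), NAND(x1, x2)):

def step(net):
    return 1 if net >= 0 else 0

def xor(x1, x2):
    h1 = step(x1 + x2 - 0.5)        # hidden unit computing OR
    h2 = step(-x1 - x2 + 1.5)       # hidden unit computing NAND
    return step(h1 + h2 - 1.5)      # output unit computing AND of h1, h2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))      # prints 0, 1, 1, 0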
Gradient Descent
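Gradient descent updates each weight in the direction that most reduces the error: w ← w − η ∂E/∂w, where η is the learning rate. A minimal sketch (the error function and values here are illustrative only):

# Minimize E(w) = (w - 3)^2 by gradient descent; dE/dw = 2(w - 3).
w = 0.0
eta = 0.1                           # learning rate
for _ in range(50):
    grad = 2 * (w - 3)              # gradient of the error at the current w
    w -= eta * grad                 # step against the gradient
print(w)                            # converges toward the minimum at w = 3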
Types of ANN
● Fully connected neural network
● Multilayer Perceptron
● Feedback Neural Network
Multilayer Perceptron
Consider learning in a Multilayer Perceptron. The given MLP consists of an input layer, one hidden layer, and an output layer. Train the MLP by updating the weights and biases in the network.
X1 X2 X3 X4 O(Desired)
1 1 0 1 1
Input: Input vector (x1, x2, ..., xn)
Output: Y
Learning rate: 0.8
Assign random weights and biases for every connection in the network in the range [-0.5, +0.5].
Step 1: Forward Propagation [Epoch - 1]
1. Calculate Input and Output in the Input Layer.
Output at each input node
2. Calculate Net Input & Output in the Hidden Layer and Output Layer (x0 {Bias} = 1)
Net input and activation function at x5 & x6
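With the usual sigmoid activation, the standard computations at each hidden or output node j are (stated here in symbols): net input I_j = Σ_i w_ij O_i + θ_j, where O_i is the output of node i in the previous layer and θ_j is the bias; output O_j = 1 / (1 + e^(−I_j)). For input nodes, the output simply equals the input value.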
3. Estimate the error at the node of the output layer.
Error at output layer: Error = Desired output - Actual output
● Backpropagation is required to reduce the error of 0.581 by updating the weights and biases in the hidden layers.
Step 2: Backward Propagation
1. Calculate the error at each node.
Output layer error calculation
Hidden layer error calculation
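In symbols (the standard backpropagation error terms for sigmoid units, consistent with this example): for output node k, Error_k = O_k (1 − O_k)(T_k − O_k), where T_k is the desired output; for hidden node j, Error_j = O_j (1 − O_j) Σ_k w_jk Error_k, summing over the nodes k in the next layer.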
2. Update the weights.
Update the biases.
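The updates are w_ij ← w_ij + η · Error_j · O_i and θ_j ← θ_j + η · Error_j. A runnable sketch of one full epoch for this MLP follows (the slides' actual random weights are not reproduced, so the illustrative weights below are hypothetical and the intermediate numbers will differ from 0.581/0.526):

import math, random

def sigmoid(net):
    return 1 / (1 + math.exp(-net))

x = [1, 1, 0, 1]                          # input vector, desired output T = 1
T, eta = 1, 0.8                           # target and learning rate from the problem

random.seed(0)
rnd = lambda: random.uniform(-0.5, 0.5)   # weights/biases in [-0.5, +0.5]
w_ih = [[rnd() for _ in range(2)] for _ in range(4)]  # input -> 2 hidden nodes
b_h = [rnd(), rnd()]
w_ho = [rnd(), rnd()]                     # hidden -> single output node
b_o = rnd()

# Forward propagation: I_j = sum_i w_ij * O_i + theta_j, O_j = sigmoid(I_j)
O_h = [sigmoid(sum(x[i] * w_ih[i][j] for i in range(4)) + b_h[j]) for j in range(2)]
O_out = sigmoid(sum(O_h[j] * w_ho[j] for j in range(2)) + b_o)
print("output:", O_out, "error:", T - O_out)

# Backward propagation: error terms at the output and hidden nodes
d_out = O_out * (1 - O_out) * (T - O_out)
d_h = [O_h[j] * (1 - O_h[j]) * w_ho[j] * d_out for j in range(2)]

# Update weights and biases: w += eta * delta * input_to_that_weight
for j in range(2):
    w_ho[j] += eta * d_out * O_h[j]
    b_h[j] += eta * d_h[j]
    for i in range(4):
        w_ih[i][j] += eta * d_h[j] * x[i]
b_o += eta * d_out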
[Epoch - 2] Step 1: Forward Propagation
Error = 1 - 0.474 = 0.526
The error is reduced by 0.055 (0.581 - 0.526).
Radial Basis Function NN (RBFNN)
• Is a multilayer perceptron
• Has one input, one hidden, and one output layer
● The hidden layer uses a non-linear radial basis function as its activation function:
❖ it converts the input parameters into a high-dimensional space (deriving a feature vector),
❖ which makes the problem linearly separable.
● Useful for interpolation, function approximation, time-series prediction, classification, and system control
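The most common radial basis function is the Gaussian (a standard choice, not specified on this slide): φ_j(x) = exp(−‖x − c_j‖² / (2σ_j²)), where c_j is the centre and σ_j the radius of hidden neuron j; the network output is the weighted sum y(x) = Σ_j w_j φ_j(x).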
RBFNN Algorithms
RBFNN - generally trained to determine the following parameters
• The number of neurons in the hidden layer.
• The center of each hidden layer RBF neuron.
• The radius or variance of each RBF neuron.
• The weights assigned from the hidden layer to the output layer for the summation functions.
RBFNN - approaches to determine the centres of the Hidden Layer
• Random selection of fixed cluster centers
• Self-organised selection of centres using k-means clustering.
• Supervised selection of centers.
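A minimal training sketch along these lines (assuming Gaussian basis functions, randomly selected fixed centres, and a least-squares fit for the output weights; the data are illustrative):

import numpy as np

def rbf_train(X, y, n_centers=3, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Random selection of fixed cluster centres from the training data
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    # Hidden-layer activations: Gaussian of the distance to each centre
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    Phi = np.exp(-dists**2 / (2 * sigma**2))
    # Output weights solved by least squares (the summation layer is linear)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, w

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 0.0])   # XOR becomes learnable in RBF space
centers, w = rbf_train(X, y, n_centers=3)
print(w)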
Self Organizing Feature Map
➔ Feed-forward neural network by Dr. Teuvo Kohonen (1982)
➔ Adaptive learning network
➔ Unsupervised learning model - clusters data by mapping high-dimensional input into a two-dimensional map.
➔ The model learns to cluster or self-organize high-dimensional data without knowing the class membership of the input data - self-organizing nodes (feature map).
Network Architecture and
Operations
SOLVED PROBLEM
Classification result - SOFM
1. (1,0,1,0) → Unit 1
2. (1,0,0,0) → Unit 1
3. (1,1,1,1) → Unit 2
4. (0,1,1,0) → Unit 2
SOFM Algorithm
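A minimal sketch of the competitive training loop described earlier (the initial weights and the learning-factor schedule are not shown on the slides, so the values here are illustrative and the cluster assignments may differ from the solved problem):

import random

X = [(1, 0, 1, 0), (1, 0, 0, 0), (1, 1, 1, 1), (0, 1, 1, 0)]  # input samples
random.seed(1)
W = [[random.random() for _ in range(4)] for _ in range(2)]   # 2 output units
alpha = 0.5                                                   # learning factor

for epoch in range(10):                   # training phase
    for x in X:
        # Similarity by squared Euclidean distance to each unit's weights
        d = [sum((xi - wi) ** 2 for xi, wi in zip(x, w)) for w in W]
        winner = d.index(min(d))
        # Move the winning unit's weights toward the input sample
        W[winner] = [wi + alpha * (xi - wi) for xi, wi in zip(x, W[winner])]
    alpha *= 0.9                          # decay the learning factor

for x in X:                               # mapping phase: nearest unit wins
    d = [sum((xi - wi) ** 2 for xi, wi in zip(x, w)) for w in W]
    print(x, "-> Unit", d.index(min(d)) + 1)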
THANK YOU