Introduction to Neural Networks
Md. Apu Hosen
Lecturer
Dept. of CSE, NUBTK
Contents
• Human Brain
• Artificial Neuron
• Artificial Neuron vs Biological Neuron
• The architecture of an Artificial Neural Network
• Types of Neural Network
Human Brain
• The human brain consists of neurons, or nerve cells, that transmit and process the information received from our senses.
• Neurons send messages all over the body, allowing us to do everything from breathing to talking, eating, walking, and thinking.
• Many such nerve cells are arranged together in our brain to form a network of nerves that passes these signals along.
How do our brains work?
• A neuron is connected to other neurons through about 10,000 synapses.
• Simply put, a neuron collects inputs from other neurons using dendrites.
• Dendrites carry the impulse to the nucleus of the nerve cell, which is also called the soma. Here, the electrical impulse is processed and then passed on to the axon.
• The axon is a longer branch than the dendrites and carries the impulse from the soma to the synapse.
• The synapse then passes the impulse on to the dendrites of the next neuron.
• Thus, a complex network of neurons is created in the human brain.

Dendrites: Input | Cell body: Processor | Synapse: Link | Axon: Output
How do our brains work?
Figure: Multiple Connected Biological Network
Artificial Neural Network
• An artificial neural network (ANN) is a machine learning approach that mimics the human brain and consists of a number of artificial neurons.
• Neurons in ANNs tend to have fewer connections than biological neurons.
• Each neuron in an ANN receives a number of inputs.
• An activation function is applied to these inputs, which gives the activation level of the neuron (the neuron's output value).
• Knowledge about the learning task is given in the form of
examples called training examples.
Artificial Neural Network
Node
Figure: Artificial Neuron
Weight: A weight is a numerical value associated with a connection between two neurons, representing the strength of the connection, i.e. the influence of one neuron's output on another neuron's input.
Activation Function: An activation function is a mathematical function that transforms the weighted sum of a neuron's input values into an output signal, determining whether the neuron is activated based on a specific threshold or rule.
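To make these two definitions concrete, here is a minimal Python sketch of a single artificial neuron; the input values, weights, and threshold are made-up examples, not values from the slides.

def step_activation(z, threshold=0.0):
    # Threshold rule: the neuron "fires" (outputs 1) only if the weighted
    # sum reaches the threshold.
    return 1 if z >= threshold else 0

def neuron_output(inputs, weights):
    # Weighted sum: each input is scaled by the weight of its connection.
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return step_activation(weighted_sum)

# Made-up example: two inputs and the weights of their connections.
print(neuron_output(inputs=[0.5, 1.0], weights=[0.8, -0.2]))   # prints 1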
Artificial Neuron vs Biological Neuron
Biological Neural Network → Artificial Neural Network
Dendrites → Inputs
Cell nucleus → Nodes
Synapse → Weights/Interconnection
Axon → Output
The architecture of an Artificial Neural Network
• A neural network consists of a large number of artificial neurons, termed units, arranged in a sequence of layers.
• An Artificial Neural Network primarily consists of three layers:
  1. Input Layer
  2. Hidden Layer
  3. Output Layer
The architecture of an Artificial Neural Network
The architecture of an Artificial Neural Network
1. Input Layer:
• This layer will accept the data and pass it to the rest of the network.
• The nodes of the input layer are passive, meaning they do not change the
data.
• They receive a single value on their input and duplicate the value to their
many outputs.
• From the input layer, each value is duplicated and sent to all the hidden nodes.
The architecture of an Artificial Neural Network
2. Hidden Layer:
• It performs all the calculations to find hidden features and patterns.
• In a hidden layer, the actual processing is done via a system of weighted
‘connections’.
• There may be one or more hidden layers.
• The values entering a hidden node are multiplied by weights, a set of numbers stored in the network and adjusted during training.
• The weighted inputs are then added to produce a single number.
The architecture of an Artificial Neural Network
3. Output Layer
• The output layer receives connections from the hidden layers or from the input layer.
• It returns an output value that corresponds to the prediction of the
response variable.
• The output layer holds the result or the output of the problem.
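Putting the three layers together, the following is a small illustrative sketch in Python with NumPy of one forward pass through such an architecture; the layer sizes, random weights, and sigmoid activation are assumptions made only for the example.

import numpy as np

def sigmoid(z):
    # One common choice of activation function.
    return 1.0 / (1.0 + np.exp(-z))

def forward_pass(x, W_hidden, W_output):
    # Input layer: passive, the values are passed on unchanged.
    # Hidden layer: weighted sum of all inputs, then the activation function.
    hidden = sigmoid(W_hidden @ x)
    # Output layer: weighted sum of the hidden activations gives the prediction.
    return sigmoid(W_output @ hidden)

rng = np.random.default_rng(0)
x = np.array([0.2, 0.7, 0.1])        # 3 input values
W_hidden = rng.normal(size=(4, 3))   # 4 hidden nodes, each connected to all 3 inputs
W_output = rng.normal(size=(1, 4))   # 1 output node connected to all 4 hidden nodes
print(forward_pass(x, W_hidden, W_output))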
Types of Artificial Neural Network
• The depth, the number of hidden layers, and the I/O capabilities of each node are a few of the criteria used to distinguish types of neural networks. Common types are:
1. Perceptron and Multilayer Perceptron neural networks.
2. Feedforward artificial neural networks.
3. Recurrent neural networks.
4. Convolutional neural networks.
5. Radial basis function artificial neural networks.
6. Modular neural networks.
Perceptron and Multilayer Perceptron
neural networks
• Perceptron:
• Perceptron is a building block of an Artificial Neural Network.
• A perceptron is a simple binary classification algorithm, proposed by
Cornell scientist Frank Rosenblatt.
• It divides a set of input signals into two classes: "yes" and "no".
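As an illustration, here is a minimal Python sketch of a perceptron with a threshold activation, trained with the classic perceptron learning rule (the rule itself is not covered on the slides); the AND-gate data, learning rate, and epoch count are made-up choices for the example.

def perceptron_predict(inputs, weights, bias):
    # Threshold activation: the perceptron answers "yes" (1) or "no" (0).
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if weighted_sum >= 0 else 0

def perceptron_train(samples, labels, lr=0.1, epochs=50):
    # Perceptron learning rule: nudge the weights whenever the prediction
    # disagrees with the true label.
    weights, bias = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = y - perceptron_predict(x, weights, bias)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Made-up example: learn the AND function, which is linearly separable.
w, b = perceptron_train([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
print([perceptron_predict(x, w, b) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])   # expected: [0, 0, 0, 1]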
Perceptron and Multilayer Perceptron
neural networks
• Difference between a Neuron and a Perceptron
  - Neuron: a complex computational unit. Perceptron: a simpler model of a neuron.
  - Neuron: the output is not necessarily a binary number. Perceptron: the output is always a binary number.
  - Neuron: employs a non-linear activation function. Perceptron: employs only a threshold activation function.
Perceptron and Multilayer Perceptron
neural networks
Multilayer Perceptron:
• A multilayer perceptron (MLP) is a perceptron that teams up with additional perceptrons, stacked in several layers, to solve complex problems.
• Each perceptron sends multiple signals, one signal going to each perceptron in the next layer.
• For each signal, the perceptron uses a different weight: in the diagram, every line going from a perceptron in one layer to the next layer represents a different connection (see the sketch below).
• Each layer can have a large number of perceptrons, and there can be multiple layers, so a multilayer perceptron can quickly become a very complex system.
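To illustrate the point that every line between two layers carries its own weight, here is a tiny Python sketch of that bookkeeping; the layer sizes and weight values are made up for the example.

# W[i][j] is the weight on the connection from perceptron i in one layer to
# perceptron j in the next layer: every "line" in the diagram is one entry.
W = [
    [0.4, -0.6, 0.1],    # the three signals leaving perceptron 0
    [0.9,  0.3, -0.5],   # the three signals leaving perceptron 1
]

def signals_into_next_layer(outputs, W):
    # Each perceptron in the next layer receives one weighted signal from
    # every perceptron in the previous layer and sums them up.
    return [sum(outputs[i] * W[i][j] for i in range(len(outputs)))
            for j in range(len(W[0]))]

print(signals_into_next_layer([1, 0], W))   # prints [0.4, -0.6, 0.1]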
Perceptron and Multilayer Perceptron
neural networks
Feedforward Artificial Neural Networks
• An FNN is a type of artificial neural network where data flows in one direction,
from the input layer through one or more hidden layers to the output layer.
• It's called "feedforward" because there are no recurrent connections or loops in
the network.
• Neurons (also known as nodes) in each layer perform calculations. They take
input values, multiply them by weights, sum these products, and then apply an
activation function.
• Activation functions introduce non-linearity into the network. They determine the
output of each neuron based on the weighted sum of inputs.
• FNNs learn to make predictions or classifications through training.
• During training, they adjust the weights using optimization techniques like
backpropagation and gradient descent.
• An FNN is commonly referred to simply as an ANN (Artificial Neural Network).
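A hedged sketch of that training process for a tiny feedforward network, written in Python with NumPy; the toy OR-gate data, network size, learning rate, and iteration count are illustrative assumptions, not part of the slides.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Made-up toy task: learn the logical OR of two inputs.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [1.]])

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)   # input -> hidden
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)   # hidden -> output
lr = 0.5                                        # learning rate

for _ in range(2000):
    # Forward pass: data flows input -> hidden -> output.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    # Backpropagation: push the prediction error back through each layer.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    # Gradient descent: adjust every weight against its gradient.
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

print(output.round(2))   # predictions move toward [0, 1, 1, 1]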
Feedforward Artificial Neural Networks
Feedforward Artificial Neural Networks
Application:
1. Classification (e.g., image and text classification)
2. Regression (e.g., predictive modeling)
3. Natural Language Processing (e.g., sentiment analysis, named entity
recognition)
4. Recommendation Systems (e.g., collaborative filtering)
5. Anomaly Detection (e.g., fraud detection, network security)
6. Healthcare (e.g., disease diagnosis, drug discovery)
7. Financial Analysis (e.g., stock market prediction, credit scoring)
8. Speech Recognition (e.g., speech-to-text)
9. Image and Video Processing (e.g., object detection, video analysis)
Recurrent Neural Networks (RNN)
• Recurrent Neural Networks (RNNs) are a type of artificial neural network
designed to process sequences of data.
• An RNN works on the principle of saving the output of a particular layer and feeding it back to the input in order to predict the output of the layer.
• The nodes in the different layers of the network can be viewed as compressed into a single recurrent layer.
Recurrent Neural Networks (RNN)
Recurrent Neural Networks (RNN)
Why Recurrent Neural Networks (RNN)?
RNNs were created because the feed-forward neural network has a few limitations:
• It cannot handle sequential data.
• It considers only the current input.
• It cannot memorize previous inputs.
The solution to these issues is the RNN. An RNN can handle sequential data, accepting the current input along with previously received inputs, and it can memorize previous inputs thanks to its internal memory.
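To illustrate the internal-memory idea, here is a minimal Python/NumPy sketch of a single recurrent step, where the hidden state from the previous time step is fed back in alongside the current input; the weight shapes and the tanh activation are assumptions made for the example.

import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    # The new hidden state depends on the current input AND the previous
    # hidden state, which is how the network "memorizes" earlier inputs.
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

rng = np.random.default_rng(0)
W_x = rng.normal(size=(3, 2))   # input -> hidden weights
W_h = rng.normal(size=(3, 3))   # hidden -> hidden (recurrent) weights
b = np.zeros(3)

h = np.zeros(3)                 # initial memory is empty
for x_t in [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]:
    h = rnn_step(x_t, h, W_x, W_h, b)   # the same weights are reused at every time step
print(h)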
Convolutional Neural Networks (CNN)
• A Convolutional Neural Network (CNN) is a type of Deep Learning
neural network architecture commonly used in Computer Vision.
• It uses a special technique called Convolution.
• In mathematics, convolution is an operation on two functions that produces a third function expressing how the shape of one is modified by the other.
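As an illustration of the convolution operation itself, here is a small Python/NumPy sketch that slides a kernel over a 2D array; the input and kernel values are made up, and in a real CNN the kernel values are learned during training.

import numpy as np

def convolve2d(image, kernel):
    # Slide the kernel over the image: each output value is the sum of the
    # element-wise products between the kernel and the patch it covers.
    # (As in most CNN layers, the kernel is not flipped here.)
    kh, kw = kernel.shape
    out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.array([[1, 2, 0, 1],
                  [0, 1, 3, 1],
                  [2, 1, 0, 0],
                  [1, 0, 1, 2]], dtype=float)
kernel = np.array([[1, -1],
                   [1, -1]], dtype=float)   # a simple vertical-edge detector
print(convolve2d(image, kernel))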
Convolutional Neural Networks (CNN)
Radial Basis Function (RBF) Networks
• Radial Basis Function (RBF) Networks are a particular type of Artificial
Neural Network used for function approximation problems.
• RBF Networks differ from other neural networks in their three-layer
architecture, universal approximation, and faster learning speed.
• Radial basis function networks are a special class of feed-forward neural networks.
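A hedged Python/NumPy sketch of the RBF idea: each hidden unit responds according to how close the input is to its centre, and the output is a weighted sum of those responses. The centres, width, and output weights below are made-up examples; in practice they are chosen or trained from data.

import numpy as np

def rbf_network(x, centres, width, out_weights):
    # Hidden layer: one Gaussian "bump" per centre; the activation is largest
    # when the input lies exactly on the centre and decays with distance.
    distances = np.linalg.norm(centres - x, axis=1)
    hidden = np.exp(-(distances ** 2) / (2 * width ** 2))
    # Output layer: a plain weighted sum of the hidden activations.
    return hidden @ out_weights

centres = np.array([[0.0, 0.0], [1.0, 1.0]])   # two RBF centres
out_weights = np.array([0.5, -0.3])
print(rbf_network(np.array([0.9, 1.1]), centres, width=0.5, out_weights=out_weights))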
Modular Neural Networks
• A modular neural network is made up of several neural network models that are linked together via an intermediary.
• In this case, the multiple neural networks act as modules, each solving
a portion of the issue.
• An integrator is responsible for dividing the problem into multiple
modules as well as integrating the answers of the modules to create
the system's final output.
Modular Neural Networks
Any Questions?