Teaching Scheme: Theory: 03 hrs / week
Total Credits: 03
Examination Scheme:
In-Sem (Theory): 30 Marks
End Sem (Theory): 70 Marks
Course Objectives:
1. To comprehend the theoretical foundations, algorithms, and methodologies of Neural Networks.
2. To design and develop applications using specific deep learning models, and to understand the complexity of deep learning algorithms and their limitations.
3. To examine case studies of deep learning techniques.
Course Outcomes
Analyze foundational machine learning algorithms such as linear regression, logistic regression, decision trees, and support vector machines, along with their learning types and application areas. (C405.1)
Apply the fundamentals of deep learning architectures to implement simple neural network models using frameworks such as TensorFlow, Keras, and PyTorch. (C405.2)
Analyze deep learning architectures and techniques for their effectiveness in real-world applications. (C405.3)
Analyze various deep Convolutional Neural Network (CNN) architectures and their applications in computer vision. (C405.4)
Analyze the functionality of various Natural Language Processing (NLP) architectures. (C405.5)
Analyze computer vision and Natural Language Processing (NLP) techniques and their applications. (C405.6)
DL Course Syllabus
UNIT I: INTRODUCTION
Introduction to Machine Learning, Types of Machine Learning, Linear Regression, Classification and Logistic Regression, Decision Tree and Random Forest, Naïve Bayes and Support Vector Machine, Applications of Machine Learning.
What are we going to explore?
AI and ML
What is Machine Learning?
Traditional Programming vs Machine Learning
Phases of Machine learning
Timeline of Machine Learning Algorithms
Types of ML
What is Machine Learning?
"Machine Learning is a field of study that gives computers the ability to learn without being explicitly programmed." – Arthur Samuel
This quote, by Arthur Samuel, who pioneered machine learning in the 1950s,
captures the core idea of what machine learning is.
A branch of artificial intelligence, concerned with the design and development of algorithms that allow computers to evolve behaviors based on empirical data (experience).
Explores algorithms that can
• learn from data / build a model from data
• use the model for prediction, decision making or solving some tasks
Traditional Programming: In traditional programming, a programmer writes explicit rules or instructions for the computer to follow. These rules dictate exactly how the computer should process input data to produce the desired output. It requires a deep understanding of the problem and a clear way to encode the solution in a programming language. Traditional programming is a manual process: a person (the programmer) creates the program, and all of the logic and rules must be formulated and coded by hand.
Machine Learning: In machine learning, instead of writing explicit rules, a programmer trains a model using a large dataset. The model learns patterns and relationships from the data, enabling it to make predictions or decisions without being explicitly programmed for each possibility.
Example: Detection of a Spam Email:
•Traditional Programming:
You might write:
IF "win money" IN subject OR "lottery" IN body THEN spam
•Machine Learning:
You feed the model:
•Thousands of emails labeled “spam” or “not spam”
•It then predicts new emails without being told exact rules
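The contrast can be sketched in code. The snippet below is a minimal, illustrative example, assuming scikit-learn is available; the tiny email list, the word-count features, and the Naive Bayes classifier are assumptions chosen for brevity, not part of the slide.

```python
# Minimal sketch (hypothetical emails): hand-written rule vs. learned spam filter.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Traditional programming: the rule itself is written by the programmer.
def rule_based_is_spam(email: str) -> bool:
    text = email.lower()
    return "win money" in text or "lottery" in text

# Machine learning: the model infers its own rules from labelled examples.
emails = ["Win money now in our lottery", "Meeting agenda for Monday",
          "Claim your lottery prize today", "Lunch at 1 pm?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()              # turn raw text into word-count features
X = vectorizer.fit_transform(emails)
model = MultinomialNB().fit(X, labels)      # learn from the labelled data

new_email = ["You could win money today"]
print(rule_based_is_spam(new_email[0]))                 # hand-coded rule
print(model.predict(vectorizer.transform(new_email)))   # learned prediction
```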
1. Training Data
•This is the raw input data collected from the real world.
•It contains:
• Features (input variables like age, salary, pixel values, etc.)
• Labels (output values like "spam"/"not spam", "disease"/"no disease", etc.)
Example:
In an image classification task, the training data would be labelled images (e.g., images of
cats and dogs with correct tags).
2. Feature Vector
•Features are measurable properties or characteristics of the data.
•Feature extraction or selection transforms raw data into a numerical format that the
algorithm can understand.
Example:
From a photo, features could be color histogram, texture, or edge count.
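As a concrete (and entirely hypothetical) illustration of feature extraction, the sketch below reduces a tiny made-up grayscale image to three numbers; the specific features are chosen only to show the raw-data-to-vector step, not as a recommended feature set.

```python
# Minimal sketch (made-up image): turning raw data into a numerical feature vector.
import numpy as np

# A hypothetical 4x4 grayscale image (pixel intensities 0-255).
image = np.array([[10, 20, 200, 210],
                  [15, 25, 205, 215],
                  [12, 22, 198, 208],
                  [11, 21, 202, 212]], dtype=float)

# Hand-picked numeric features the learning algorithm can consume.
feature_vector = np.array([
    image.mean(),                            # average brightness
    image.std(),                             # contrast
    np.abs(np.diff(image, axis=1)).mean(),   # crude horizontal edge measure
])
print(feature_vector)   # one row of the training feature matrix
```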
3. Algorithm
•The learning algorithm is the core logic that analyzes the feature vectors and finds
patterns.
•Examples include:
• Decision Trees
• SVM (Support Vector Machine)
• Neural Networks
• KNN, etc.
It adjusts its internal parameters (called weights, rules, or structure) to minimize errors
between predicted and actual outputs during training.
4. Model
•Once the algorithm has learned from the training data, it produces a model.
•This model can now make predictions on unseen (new) data.
•It acts like a function:
Input → Model → Output
Example:
If trained on weather data, the model might predict tomorrow’s temperature or chance of
rain.
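The whole Input → Model → Output pipeline can be written in a few lines. The sketch below is illustrative only: the weather numbers are made up, and plain linear regression from scikit-learn stands in for whatever learning algorithm a real system would use.

```python
# Minimal sketch (made-up weather data): algorithm -> trained model -> prediction.
import numpy as np
from sklearn.linear_model import LinearRegression

# Training data: feature vectors [humidity %, pressure hPa] with labels (temp in C).
X_train = np.array([[70, 1012], [65, 1015], [80, 1005], [55, 1020], [90, 1000]])
y_train = np.array([24.0, 26.0, 21.0, 28.0, 19.0])

algorithm = LinearRegression()              # the learning algorithm
model = algorithm.fit(X_train, y_train)     # training produces the model

# Input -> Model -> Output on a day the model has never seen.
tomorrow = np.array([[75, 1008]])
print(model.predict(tomorrow))              # predicted temperature for tomorrow
```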
Step – Description
Training Data – Real-world examples, often labeled
Feature Vector – Extracted attributes in numerical form
Algorithm – Learns patterns from features using mathematical logic
Model – Final output that can predict or classify new data
1. Test Data (📄)
•This is new, unseen data that was not used during training.
•It’s used to evaluate how well the trained model performs in real-world scenarios.
•This data has the same format as the training data but typically does not include labels
(if used for actual prediction).
Example:
In a disease diagnosis app, a patient’s symptoms and test results would be the test data.
2. Feature Vector (🔢)
•Just like in the training phase, the test data is processed to extract a feature vector.
•This vector contains numerical representations of important characteristics.
Example:
From a patient record, features might be age, blood pressure, cholesterol levels, etc.
3. Model (🧠)
•This is the pre-trained model generated during the learning phase.
•It already "knows" how to map feature vectors to outcomes, based on patterns learned
from the training data.
•It takes the new feature vector as input.
4. Prediction (🤖)
•The model applies its learned parameters to the input vector to produce a predicted
output.
•The result could be:
• A class (e.g., spam/not spam)
• A numeric value (e.g., house price)
• A probability (e.g., 90% chance of rain)
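The testing phase can be sketched the same way. The numbers below are hypothetical patient-style records; logistic regression is used only as a stand-in model, with `predict` giving the class and `predict_proba` the probability.

```python
# Minimal sketch (hypothetical patient data): a trained model applied to test data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Learning phase (labels known): [age, blood pressure, cholesterol] -> disease 1/0.
X_train = np.array([[50, 130, 220], [35, 110, 180], [62, 150, 260], [28, 105, 170]])
y_train = np.array([1, 0, 1, 0])
model = LogisticRegression().fit(X_train, y_train)

# Testing phase: a new, unseen patient record arrives without a label.
X_test = np.array([[45, 125, 210]])
print(model.predict(X_test))        # predicted class (disease / no disease)
print(model.predict_proba(X_test))  # probability for each class
```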
Summary
Step – Description
Test Data – New inputs the model hasn’t seen before
Features – Key characteristics extracted from the input
Model – Pre-trained system used to make decisions
Prediction – Final output or decision generated by the model
Why all the interest, and why now?
Why now, when artificial intelligence has been around for more than 50 years?
The reason is an extraordinary convergence of:
• Large volumes of Big Data
• Exponential growth of computing power
• Sophisticated self-learning algorithms
Types of Machine Learning
Supervised Learning - a computer algorithm is trained on input data that has been labeled for a particular output. In supervised learning, the machine is trained on a set of labeled data, which means that the input data is paired with the desired output. The machine then learns to predict the output for new input data. Supervised learning is often used for tasks such as classification, regression, and object detection.
Unsupervised Learning - algorithms identify patterns in data sets containing data points that are neither classified nor labeled. In unsupervised learning, the machine is trained on a set of unlabeled data, which means that the input data is not paired with a desired output. The machine then learns to find patterns and relationships in the data. Unsupervised learning is often used for tasks such as clustering, dimensionality reduction, and anomaly detection.
Reinforcement Learning - an agent learns by interacting with an environment, receiving rewards or penalties for its actions and adjusting its behavior to maximize cumulative reward.
•The “Supervisor” here is metaphorical. It refers to the fact that the learning process is guided
by known outputs.
•The supervisor provides:
•A training dataset (inputs with labels)
•The desired output (what the algorithm should learn to predict)
Algorithm
•The labeled data is fed into a machine learning algorithm (e.g., decision tree, SVM,
neural network).
•The algorithm learns to map inputs to outputs by adjusting its internal parameters
(weights, splits, etc.).
Processing / Model Training
•The algorithm processes the training examples multiple times, comparing predictions with
actual labels, and improving accuracy through optimization (e.g., reducing prediction error).
•The result of this process is a trained model that can generalize to new data.
Output / Prediction
•Once the model is trained, it is capable of taking new unseen data (like images of
animals it hasn't seen before) and predicting their labels.
•In the diagram, the system successfully identifies and classifies:
• Elephant
• Camel
• Cow
Key Points:
•Supervised learning involves training a machine from labelled data.
•Labelled data consists of examples with the correct answer or
classification.
•The machine learns the relationship between inputs (fruit images) and
outputs (fruit labels).
•The trained machine can then make predictions on new, unlabelled data.
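A minimal sketch of the fruit example, with made-up weight and color measurements and a decision tree standing in for the classifier: the model is trained on labelled fruits and then labels a new, unlabelled one.

```python
# Minimal sketch (made-up fruit measurements): learning from labelled data.
from sklearn.tree import DecisionTreeClassifier

# Labelled training data: [weight in grams, color score 0=green .. 1=red].
X_train = [[150, 0.90], [170, 0.80], [160, 0.85],   # apples
           [120, 0.20], [110, 0.30], [115, 0.25]]   # guavas
y_train = ["apple", "apple", "apple", "guava", "guava", "guava"]

model = DecisionTreeClassifier().fit(X_train, y_train)

# New, unlabelled fruit: the trained model predicts the label it learned.
print(model.predict([[130, 0.70]]))
```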
Unsupervised learning
- Unsupervised learning is a type of machine learning that learns from unlabelled data. This means that the data does not have any pre-existing labels or categories. The goal of unsupervised learning is to discover patterns and relationships in the data without any explicit guidance.
- Here the task of the machine is to group unsorted information according to similarities, patterns, and differences, without any prior training on the data.
Key Points
•Unsupervised learning allows the model to discover patterns and relationships in unlabelled data.
•Clustering algorithms group similar data points together based on their inherent
characteristics.
•Feature extraction captures essential information from the data, enabling the
model to make meaningful distinctions.
•Label association assigns categories to the clusters based on the extracted
patterns and characteristics.
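Clustering, the most common unsupervised task mentioned above, can be sketched in a few lines of code. The points below are made up, and k-means (via scikit-learn) is only one possible clustering algorithm; note that no labels are supplied at any point.

```python
# Minimal sketch (made-up points): unsupervised learning groups data with no labels.
import numpy as np
from sklearn.cluster import KMeans

# Unlabelled data: two loose groups of 2-D points.
X = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
              [8.0, 8.2], [8.3, 7.9], [7.9, 8.1]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)           # cluster assignment discovered for each point
print(kmeans.cluster_centers_)  # center of each discovered group
```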
Supervised Learning
[Diagram: labelled training pairs (Input-1, Output-1) … (Input-n, Output-n) are fed to a learning algorithm, which produces a model; the model then maps a new input x to a predicted output y.]
Training data includes labelled data, i.e., the desired outputs.
The model makes predictions based on the labels assigned to the data.
Categories:
• Classification – output is categorical data
• Regression – output is continuous data (both are illustrated in the sketch below)
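A minimal sketch of the two categories, with made-up one-feature data: the classifier’s output is a category, while the regressor’s output is a continuous number.

```python
# Minimal sketch (made-up data): classification vs. regression output types.
from sklearn.linear_model import LinearRegression, LogisticRegression

X = [[1], [2], [3], [4], [5], [6]]                                # one input feature

clf = LogisticRegression().fit(X, [0, 0, 0, 1, 1, 1])             # categorical labels
reg = LinearRegression().fit(X, [1.9, 4.1, 6.0, 8.2, 9.9, 12.1])  # continuous targets

print(clf.predict([[3.5]]))   # classification -> a class label (0 or 1)
print(reg.predict([[3.5]]))   # regression -> a number (about 7.0 here)
```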
Applications
• Recognizing objects in images
• Medical diagnosis
• Predicting financial markets and sales
• Weather forecast
Typical business uses of supervised learning include recognizing objects in images, predicting financial results, detecting fraud, and evaluating risk.
Given a bank customer’s profile, should I sanction him/her a loan? – Supervised Learning.
Given a patient’s X-ray image, diagnose whether the patient has cancer. – Supervised Learning.