The document provides an introduction to machine learning, covering its definitions, types (supervised and unsupervised learning), models, and applications across various fields such as healthcare, finance, and education. It also delves into specific algorithms like linear regression, decision trees, and clustering methods, highlighting their use cases, advantages, and challenges, along with performance metrics for evaluating models. Additionally, it discusses techniques for feature engineering and optimization methods that enhance model accuracy and efficiency.
What is Machine Learning?
• “Learning is any process by which a system improves performance from experience.” --- Herbert Simon
• Machine learning is training computers to effectively achieve a performance criterion using examples or historical data.
Why?
Machine learning is used when:
§ Human expertise is unavailable (space expeditions).
§ Human expertise is not explicable (speech translation).
§ Information needs to be personalized (education, medicine).
§ The domain has huge amounts of data.
Applications
• Education --- developing learning paths for students
• Healthcare --- personalized medicine
• Retail --- product recommendations
• Web --- search
• Manufacturing --- robotics, control
• Finance --- fraud detection, asset management
• HR --- people analytics
• Medical --- drug discovery, automated diagnosis
• …
Types of Learning --- Learning Objective

Supervised
§ Examples or training data are available
o Human annotations, user interactions
§ Data contains features correlated with the desired outcome
§ A model is learned from the examples
§ The goal of the model is to predict future behavior

Unsupervised
§ Direct examples are not available
§ Data contains correlated features, but the outcome may not be defined
§ It is possible to create clusters correlated with the learning objective, based on patterns in the data
Types of Learning --- Models

Supervised
§ Regression
o Linear
o Decision Trees
§ Classification
o Logistic Regression
o Naïve Bayes
o SVM
o Decision Trees --- RF, GBDT

Unsupervised
§ Clustering --- k-means
§ Similarity-based results
§ Transfer Learning
Linear Regression
Linear regression was developed in the field of statistics and is studied as a model for understanding the relationship between input and output numerical variables, but it has been borrowed by machine learning. It is both a statistical algorithm and a machine learning algorithm.
Variants: simple linear regression | ordinary least squares | multiple linear regression
Assumptions:
• The dependent variable Y has a linear relationship to the independent variable X.
• For each value of X, the probability distribution of Y has the same standard deviation σ.
• For any given value of X, the Y values are independent, as indicated by a random pattern on the residual plot.
• The Y values are roughly normally distributed (i.e., symmetric and unimodal).
Example: sales prediction from a company’s advertising spend on radio, TV, and newspapers.
Cost Function
Mean Squared Error (MSE)
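In standard notation (a reconstruction; m and b are the slope and intercept of the linear model, matching the ml-cheatsheet link below):

$$\mathrm{MSE}(m, b) = \frac{1}{N}\sum_{i=1}^{N}\bigl(y_i - (m x_i + b)\bigr)^{2}$$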
To minimize MSE we use Gradient Descent to compute the gradient of the cost function. Gradient Descent is an algorithm that minimizes a function by iteratively moving in the direction of steepest descent, as defined by the negative of the gradient. In linear regression, gradient descent is used to update the parameters (weights) of the linear model.
https://ml-cheatsheet.readthedocs.io/en/latest/gradient_descent.html
Learning Rate: The size of these steps is called the learning rate. With a high learning rate we can cover more ground each step, but we risk overshooting the lowest point, since the slope of the hill is constantly changing.
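A minimal NumPy sketch of this update rule for a one-variable linear model (a sketch, not the cheat sheet's exact code; the lr and epochs values are assumed hyperparameters):

```python
import numpy as np

def gradient_descent(x, y, lr=0.01, epochs=1000):
    """Fit y ~ m*x + b by gradient descent on the MSE cost."""
    m, b = 0.0, 0.0
    n = len(x)
    for _ in range(epochs):
        y_pred = m * x + b
        # Gradients of MSE = (1/n) * sum((y - y_pred)^2) w.r.t. m and b
        dm = (-2.0 / n) * np.sum(x * (y - y_pred))
        db = (-2.0 / n) * np.sum(y - y_pred)
        m -= lr * dm  # step in the direction of steepest descent
        b -= lr * db
    return m, b
```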
Linear Regression using scikit-learn
https://stackabuse.com/linear-regression-in-python-with-scikit-learn/
Data set : https://drive.google.com/file/d/1oakZCv7g3mlmCSdv9J8kdSaqO5_6dIOw/view
In the linked example, the RMSE comes out to about 4.64, which is less than 10% of the mean of the target variable.
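A minimal sketch of the scikit-learn workflow from the linked tutorial (the file name and column names below are placeholders, not the actual dataset's):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("dataset.csv")   # dataset from the link above
X = df[["feature"]]               # predictor column(s) (assumed name)
y = df["target"]                  # response column (assumed name)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LinearRegression().fit(X_train, y_train)
y_pred = model.predict(X_test)

rmse = np.sqrt(mean_squared_error(y_test, y_pred))
print(f"RMSE: {rmse:.2f}  (compare against the mean: {y.mean():.2f})")
```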
Applications
• Trendline --- A trend line represents a trend, the long-term movement in time series data after other components have been accounted for. It tells whether a particular data set (say GDP, oil prices, or stock prices) has increased or decreased over a period of time.
• Epidemiology --- Early evidence relating tobacco smoking to mortality and morbidity came from observational
studies employing regression analysis. In order to reduce spurious correlations when analyzing observational data,
researchers usually include several variables in their regression models in addition to the variable of primary interest.
• Finance --- The capital asset pricing model uses linear regression as well as the concept of beta for analyzing and
quantifying the systematic risk of an investment. This comes directly from the beta coefficient of the linear regression
model that relates the return on the investment to the return on all risky assets.
• Economics --- Linear regression is the predominant empirical tool in economics. For example, it is used to
predict consumption spending,[20] fixed investment spending, inventory investment, purchases of a
country's exports,[21] spending on imports,[21] the demand to hold liquid assets,[22] labor demand,[23] and labor
supply.[23]
https://en.wikipedia.org/wiki/Linear_regression
Classification
http://cs229.stanford.edu/notes2020spring/cs229-notes1.pdf
Values of Y, i.e. the response, can take discrete values.
o Binary classification --- the response can belong to one of two classes (0, 1)
- Rating --- thumbs up, thumbs down
o Multi-class classification --- there can be n classes, [1, …, n]
- A movie/restaurant rating can range over [1, 2, 3, 4, 5]
o Multi-class, multi-label classification
- Example: the concept space in Probability has many concepts (classes) --- [Probability, Bayes Theorem, Discrete PDF --- Binomial, Poisson; Continuous PDF --- normal, exponential, …]
- A question will typically belong to multiple classes.
Logistic Regression --- Binary Classification
Logistic Function or Sigmoid function
https://sebastianraschka.com/faq/docs/logistic-why-sigmoid.html -- discussion on the logit function
Interpretation: Output is the probability that y=1 given x.
Output > 0.5 represents y = 1 and output < 0.5 represents y = 0.
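In standard notation (a reconstruction, not copied from the slide):

$$\sigma(z) = \frac{1}{1 + e^{-z}}, \qquad h_\theta(x) = \sigma(\theta^{T}x) = P(y = 1 \mid x)$$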
Multinomial Logistic / Softmax
The sigmoid gets replaced by the softmax function. This applies when there are k classes, 1, …, k.
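In standard form (a reconstruction consistent with the CS229 notes linked earlier), the softmax probability of class j is:

$$P(y = j \mid x) = \frac{e^{\theta_j^{T} x}}{\sum_{i=1}^{k} e^{\theta_i^{T} x}}$$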
Regularization --- Linear / Logistic
• A penalty against complexity, so that the model does not pick up "peculiarities" or "noise," or "imagine a pattern where there is none."
• Helps with generalizing to new data sets.
• Adds bias to the model when it suffers from high variance or overfitting.
• L2 regularization (ridge)
• L1 regularization (lasso)
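In standard notation (a reconstruction, with λ controlling the penalty strength):

$$J_{L2}(\theta) = \mathrm{Cost}(\theta) + \lambda \sum_{j} \theta_j^{2} \quad \text{(ridge)} \qquad J_{L1}(\theta) = \mathrm{Cost}(\theta) + \lambda \sum_{j} \lvert \theta_j \rvert \quad \text{(lasso)}$$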
Data --- Train, Test and Validation
Train --- 90%, Test --- 5%, Validation --- 5%. The percentages can vary depending on the total size of the dataset.
• Train --- data used to train the model.
• Validation --- data that the model has not seen but is used for parameter tuning, i.e. the model is optimized based on its performance on this set.
• Test --- the model has not seen this data, and it is not used in any part of the computation. Final performance metrics are reported on this data.
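A minimal sketch of producing the 90/5/5 split above with scikit-learn (placeholder data; two chained splits):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.arange(1000).reshape(-1, 1), np.arange(1000)  # placeholder data

# Carve off 10% of the data, then split that half-and-half into
# validation and test sets, giving 90/5/5 overall.
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.10, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_hold, y_hold, test_size=0.50, random_state=42)
```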
Bias-Variance Tradeoff
BIAS --- When we model in a very simple, naive way, for example fitting a single linear equation to an actually complex relationship, the model becomes underfit and misses important insights and relationships between variables.
VARIANCE --- On the other hand, when we fit an overly complicated model to simple data, the result is overfitting: every noise point and outlier is treated as a valid data point and modeled accordingly.
Decision Tree
Advantages
• Results are interpretable.
• Works for both numerical and categorical data.
• Does not require feature transformations (e.g., normalization, scaling).
• Robust to multicollinearity (correlated features).

Disadvantages --- single decision trees are rarely used in practice
• Unstable --- small changes in data can lead to large structural changes in the decision tree.
• Prone to overfitting.
• Easily becomes complex.
Ensemble Techniques --- Random Forest
Many trees are better than one! A random forest trains N slightly different decision trees and merges them together to get more accurate and stable predictions.
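A minimal scikit-learn sketch of the idea, on synthetic data (parameter values are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# n_estimators = N slightly different trees; predictions merged by voting
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)
print("accuracy:", rf.score(X_test, y_test))
```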
Regularization
• Limit tree depth.
• Pruning.
• Penalize selection of new features over features that have similar gain.
• Set a stricter stopping criterion on when to split a node further (e.g., minimum gain, number of samples, etc.).
Feature Importance
A feature’s importance score measures the contribution of that feature. It is based on the reduction in class impurity due to the feature.
• Biased towards features with more categories.
• It is important to compute the correlation with accuracy.
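In scikit-learn, these impurity-based scores are exposed as feature_importances_; a minimal sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Impurity-based importance per feature (scores sum to 1.0)
for i, score in enumerate(rf.feature_importances_):
    print(f"feature {i}: importance {score:.3f}")
```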
Clustering
Clustering is a technique that finds groups (clusters) in the data that have similar patterns.
• Hierarchical clustering
• K-means clustering
• K-NN (k nearest neighbors) --- strictly a supervised method, but it relies on the same distance computations.
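A minimal k-means sketch with scikit-learn on synthetic blob data:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])       # cluster assignment per point
print(kmeans.cluster_centers_)   # the 4 learned centroids
```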
Similarity-based Recommendations
• Text data --- news articles, study material, books, …
• For every piece of content, compute the distance to every other piece of content in the cluster, and save the top-n in a database (see the sketch below).
• When a user views any content, surface the top-n (typically 5) other similar items.
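A minimal sketch of precomputing top-n similar items with TF-IDF and cosine similarity (toy documents; a real system would persist the neighbor lists in a database):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["intro to probability", "bayes theorem explained",
        "cooking with cast iron", "poisson processes primer"]

tfidf = TfidfVectorizer().fit_transform(docs)
sims = cosine_similarity(tfidf)           # pairwise similarity matrix
np.fill_diagonal(sims, -1.0)              # exclude self-matches

top_n = np.argsort(-sims, axis=1)[:, :2]  # top-2 similar items per doc
print(top_n)                              # store per-item neighbor lists
```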
Applications
• Recommendations based on text similarity
• Customer segmentation
• Content categorization
• As a pre-analysis for supervised learning
Performance Metrics --- Regression
R² Error: This metric compares our current model with a constant baseline and tells us how much better our model is. The constant baseline is chosen by taking the mean of the data and drawing a line at that mean.
Adjusted R²: Adjusted R² carries the same meaning as R² but improves on it. R² suffers from the problem that its score improves as terms are added, even when the model is not actually improving, which can misguide the researcher. Adjusted R² is always lower than R², as it adjusts for the growing number of predictors and only shows improvement if there is a real improvement.
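The standard formulas (a reconstruction; n is the number of observations, p the number of predictors):

$$R^{2} = 1 - \frac{\sum_{i}(y_i - \hat{y}_i)^{2}}{\sum_{i}(y_i - \bar{y})^{2}} \qquad R^{2}_{\text{adj}} = 1 - \frac{(1 - R^{2})(n - 1)}{n - p - 1}$$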
Performance Metrics --- Classification
True Positive (TP) --- number of observations where the model correctly predicts the positive class.
False Positive (FP) --- number of observations where the model incorrectly predicts the positive class.
False Negative (FN) --- number of observations where the model incorrectly predicts the negative class.
True Negative (TN) --- number of observations where the model correctly predicts the negative class.
https://en.wikipedia.org/wiki/Precision_and_recall
Thresholding --- Coverage
In a binary classification, choosing randomly gives a probability of 0.5 of belonging to a class. By acting only on confident predictions (for example, scores below 0.3 or above 0.7), it is possible to improve the percentage of correct results at the cost of coverage.
ROC & AUC
ROC --- Receiver Operating Characteristic
An ROC curve (receiver operating characteristic curve) is a graph showing the performance of a classification model at all classification thresholds.
AUC – Area Under the Curve.
• AUC is scale-invariant. It measures how well predictions
are ranked, rather than their absolute values.
• AUC is classification-threshold-invariant. It measures the
quality of the model's predictions irrespective of what
classification threshold is chosen.
https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc
• TPR = TP / (TP + FN)
• FPR = FP / (FP + TN)
A random classifier falls on the diagonal of the ROC plot, with AUC = 0.5.
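A minimal sketch of computing the ROC curve and AUC with scikit-learn on synthetic data:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]  # P(y=1) per observation

fpr, tpr, thresholds = roc_curve(y_test, scores)
print("AUC:", roc_auc_score(y_test, scores))

plt.plot(fpr, tpr)
plt.plot([0, 1], [0, 1], "--")  # the random-classifier diagonal
plt.xlabel("FPR"); plt.ylabel("TPR"); plt.show()
```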
Imputation of missing values
https://towardsdatascience.com/feature-engineering-for-machine-learning-3a5e293a5114#1c08
Drop rows with missing values.
Cons: reduces training data; with multiple features, values may be missing in only a subset of features.
Replace with a reasonable value --- the median is common.
Cons: filling values rests on assumptions that may not be correct.
Categorical imputation --- replace with the most common value.
Cons: the same assumption risk.
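A minimal pandas sketch of the three strategies (toy data):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, np.nan, 40, 31],
                   "city": ["NY", "SF", None, "NY"]})

df_drop = df.dropna()                                # drop rows with missing values
df["age"] = df["age"].fillna(df["age"].median())     # numeric: fill with the median
df["city"] = df["city"].fillna(df["city"].mode()[0]) # categorical: most common value
```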
Log Transform
• It helps to handle skewed data; after the transformation, the distribution becomes closer to normal.
• In many cases, the order of magnitude of the data varies within the range of the data. For instance, the difference between ages 15 and 20 is not the same as the difference between ages 65 and 70. In terms of years they are identical, but in most other respects a 5-year difference at young ages means a much larger relative difference. This type of data comes from a multiplicative process, and the log transform normalizes such magnitude differences.
• It also decreases the effect of outliers, due to this normalization of magnitude differences, and the model becomes more robust.
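A minimal sketch (toy, skewed data; log1p is used so zeros are handled safely):

```python
import numpy as np
import pandas as pd

incomes = pd.Series([20_000, 35_000, 50_000, 1_200_000])  # skewed, with an outlier
log_incomes = np.log1p(incomes)  # log(1 + x)
print(log_incomes)               # the outlier's influence is compressed
```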
Scaling --- all features have the same range
Continuous features become identical in terms of range after a scaling process. This step is not mandatory for many algorithms, but it can still be worth applying. However, algorithms based on distance calculations, such as k-NN or k-means, need scaled continuous features as model input.
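A minimal scikit-learn sketch of two common scalers (toy data):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 800.0]])

X_minmax = MinMaxScaler().fit_transform(X)  # every column mapped to [0, 1]
X_std = StandardScaler().fit_transform(X)   # zero mean, unit variance per column
```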
Use Case --- Predict: will a user buy a book?
Online Book Store
Generating Features

Book Features
• Tags --- genre, subject, …
• Level --- Beginner, Intermediate, Advanced
• Popularity score (see the decay sketch after this list):
1. Exponential decay on clicks
2. Time-bound scores, such as number of views in the last 7 days, last 14 days
• Price
• Length of the book
• …

User Features
• Tags --- genre, subject, …
• Level --- Beginner, Intermediate, Advanced
Derived from user interactions on the site:
• View score:
1. Exponential decay on clicks on books
2. Time-bound --- number of views in the past 7/14 days
• Price category
• Time of day --- categorical
• …

Book-User Features
• Number of views in the past 14/30 days
• Already bought
• Number of views from the same author
• …
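One hypothetical way to implement the exponentially decayed click score referenced in the lists above (the 7-day half-life is an assumption, not from the slides):

```python
import math

def decayed_click_score(click_ages_days, half_life=7.0):
    """Sum of clicks, each down-weighted by its age with a 7-day half-life."""
    lam = math.log(2) / half_life
    return sum(math.exp(-lam * age) for age in click_ages_days)

# Three clicks: today, a week ago, a month ago
print(decayed_click_score([0, 7, 30]))  # recent clicks dominate the score
```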
Response Variable & Modeling
• If the site is not super active, you might not have enough data on purchases.
• Multi-stage model:
• Stage 1: response variable is views --- i.e. will the user view/click on this book in the next 3 days?
• Stage 2: response variable is purchase --- will the user purchase this book in the next 3 days? The probability of a view will be a feature in the Stage 2 model.
Modeling: we have a mix of categorical and numerical features --- Random Forest.
K-Nearest Neighbors
K-Nearest Neighbors is a classification algorithm that leverages observations close to the target point to decide which class it belongs to.
There are two parts to the algorithm: first, how to measure “close”; second, how many close observations (K) we need.
https://github.com/spotify/annoy
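Annoy (linked above) performs approximate nearest-neighbor search for exactly this kind of similarity lookup; a minimal sketch of its documented API:

```python
import random
from annoy import AnnoyIndex

f = 40                            # dimensionality of the item vectors
index = AnnoyIndex(f, "angular")  # cosine-like distance
for i in range(1000):
    index.add_item(i, [random.gauss(0, 1) for _ in range(f)])
index.build(10)                   # 10 trees: more trees, higher accuracy

print(index.get_nns_by_item(0, 5))  # 5 nearest neighbors of item 0
```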
Decision Tree Based Regression
Pros:
Decision trees can handle both categorical and numerical data.
Cons:
Does not handle feature interaction very well.
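A minimal scikit-learn sketch of a regression tree, with max_depth as the kind of regularization discussed earlier (synthetic data):

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# max_depth limits tree complexity to reduce overfitting
reg = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_train, y_train)
print("R^2 on test data:", reg.score(X_test, y_test))
```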