Machine Learning Lab
Semester: V
Course Code:
L-T-P-C: 0-0-3-1.5
Course Outcomes: At the end of the course, the student will be able to:
CO1: Develop a program for computing central tendency measures and apply data
preprocessing techniques.
CO2: Build classifiers using KNN, Decision Tree, and Random Forest algorithms.
CO3: Implement classification algorithms such as Naïve Bayes, SVM, and Multi-Layer
Perceptron.
CO4: Apply clustering algorithms such as K-Means and Fuzzy C-Means.
CO5: Discuss the use of the Expectation Maximization based clustering algorithm.
Mapping of Course Outcomes with Program Outcomes:
CO/PO PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12
CO1 1 2 2 2 2 - - - 1 1 - 2
CO2 2 2 3 - 2 - - - 1 1 - 2
CO3 1 2 3 2 2 - - - 1 1 - 2
CO4 1 2 3 2 2 - - - 1 1 - 2
CO5 1 2 3 2 2 - - - 1 1 - 2
Mapping of Course Outcomes with Program Specific Outcomes:
CO/PSO PSO1 PSO2
CO1 - 2
CO2 - 3
CO3 - 1
CO4 - 2
CO5 - 2
Students need to implement the following experiments.
List of Experiments:
Experiment 1:
Compute Central Tendency Measures: Mean, Median, Mode; Measures of Dispersion:
Variance, Standard Deviation.
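A minimal sketch of one possible solution, assuming Python with numpy and the standard statistics module; the sample list below is only illustrative data.

import numpy as np
from statistics import mode

data = [12, 15, 12, 18, 20, 22, 12, 15]   # illustrative sample

print("Mean:", np.mean(data))
print("Median:", np.median(data))
print("Mode:", mode(data))
print("Variance:", np.var(data, ddof=1))             # sample variance
print("Standard deviation:", np.std(data, ddof=1))   # sample standard deviation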
Experiment 2:
Apply the following pre-processing techniques to a given dataset (see the sketch after this list).
1. Attribute selection
2. Handling Missing Values
3. Discretization
4. Elimination of Outliers
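A possible sketch for Experiment 2, assuming Python with pandas and scikit-learn; the file name "dataset.csv" and the column name "target" are hypothetical placeholders for whatever dataset is chosen.

import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import KBinsDiscretizer

df = pd.read_csv("dataset.csv")                      # hypothetical file name
X, y = df.drop(columns=["target"]), df["target"]     # hypothetical target column

# Handling missing values: replace NaNs with the column mean
X = pd.DataFrame(SimpleImputer(strategy="mean").fit_transform(X), columns=X.columns)

# Attribute selection: keep the 3 features most associated with the target
X_sel = SelectKBest(score_func=f_classif, k=3).fit_transform(X, y)

# Discretization: bin each selected feature into 4 equal-width intervals
X_disc = KBinsDiscretizer(n_bins=4, encode="ordinal", strategy="uniform").fit_transform(X_sel)

# Elimination of outliers on one column using the 1.5 * IQR rule
col = X.columns[0]
q1, q3 = X[col].quantile([0.25, 0.75])
iqr = q3 - q1
X_clean = X[(X[col] >= q1 - 1.5 * iqr) & (X[col] <= q3 + 1.5 * iqr)]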
Experiment 3:
Apply the KNN algorithm for classification and regression.
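A minimal sketch, assuming scikit-learn with its built-in Iris (classification) and Diabetes (regression) datasets; any dataset selected for the lab would work the same way.

from sklearn.datasets import load_iris, load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

# KNN for classification
X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=42)
clf = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
print("KNN classification accuracy:", clf.score(Xte, yte))

# KNN for regression
X, y = load_diabetes(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=42)
reg = KNeighborsRegressor(n_neighbors=5).fit(Xtr, ytr)
print("KNN regression R^2:", reg.score(Xte, yte))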
Experiment 4:
Demonstrate the decision tree algorithm for a classification problem and perform parameter
tuning for better results.
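One possible sketch, assuming scikit-learn's DecisionTreeClassifier on the Iris dataset with GridSearchCV; the parameter grid shown is only an example.

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=42)

# Parameter tuning with a 5-fold cross-validated grid search
grid = {"max_depth": [2, 3, 4, None],
        "min_samples_split": [2, 5, 10],
        "criterion": ["gini", "entropy"]}
search = GridSearchCV(DecisionTreeClassifier(random_state=42), grid, cv=5)
search.fit(Xtr, ytr)

print("Best parameters:", search.best_params_)
print("Test accuracy:", search.score(Xte, yte))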
Experiment 5:
Demonstrate the decision tree algorithm for a regression problem.
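A minimal sketch, assuming scikit-learn's DecisionTreeRegressor and the built-in Diabetes dataset.

from sklearn.datasets import load_diabetes
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = load_diabetes(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=42)
reg = DecisionTreeRegressor(max_depth=4, random_state=42).fit(Xtr, ytr)
print("Test MSE:", mean_squared_error(yte, reg.predict(Xte)))
print("Test R^2:", reg.score(Xte, yte))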
Experiment 6:
Apply the Random Forest algorithm for classification and regression.
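A minimal sketch, assuming scikit-learn with the Iris dataset for classification and the Diabetes dataset for regression.

from sklearn.datasets import load_diabetes, load_iris
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split

# Random Forest for classification
X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=42)
clf = RandomForestClassifier(n_estimators=100, random_state=42).fit(Xtr, ytr)
print("Random Forest classification accuracy:", clf.score(Xte, yte))

# Random Forest for regression
X, y = load_diabetes(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=42)
reg = RandomForestRegressor(n_estimators=100, random_state=42).fit(Xtr, ytr)
print("Random Forest regression R^2:", reg.score(Xte, yte))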
Experiment 7:
Demonstrate the Naïve Bayes classification algorithm.
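A minimal sketch, assuming scikit-learn's GaussianNB (the Gaussian variant of Naïve Bayes) on the Iris dataset.

from sklearn.datasets import load_iris
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=42)
nb = GaussianNB().fit(Xtr, ytr)
print(classification_report(yte, nb.predict(Xte)))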
Experiment 8:
Apply the Support Vector Machine (SVM) algorithm for classification.
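A minimal sketch, assuming scikit-learn's SVC with an RBF kernel on the built-in Breast Cancer dataset; features are standardized first, which SVMs generally require for good results.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=42)
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)).fit(Xtr, ytr)
print("SVM accuracy:", svm.score(Xte, yte))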
Experiment 9:
Demonstrate the simple linear regression algorithm for a regression problem.
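A minimal sketch, assuming scikit-learn's LinearRegression on a small synthetic one-feature dataset generated with numpy.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=(100, 1))                        # single feature
y = 3.0 * x.ravel() + 2.0 + rng.normal(scale=1.0, size=100)  # noisy straight line

model = LinearRegression().fit(x, y)
print("Slope:", model.coef_[0], "Intercept:", model.intercept_)
print("R^2:", model.score(x, y))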
Experiment 10:
Apply the logistic regression algorithm for a classification problem.
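A minimal sketch, assuming scikit-learn's LogisticRegression on the built-in Breast Cancer dataset.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=42)
lr = LogisticRegression(max_iter=5000).fit(Xtr, ytr)   # larger max_iter to ensure convergence
print("Accuracy:", lr.score(Xte, yte))
print("Confusion matrix:\n", confusion_matrix(yte, lr.predict(Xte)))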
Experiment 11:
Demonstrate the Multi-Layer Perceptron algorithm for a classification problem.
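A minimal sketch, assuming scikit-learn's MLPClassifier on the built-in Digits dataset, with feature scaling in a pipeline; the hidden-layer sizes are only an example.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=42)
mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 32),
                                  max_iter=500, random_state=42))
mlp.fit(Xtr, ytr)
print("MLP accuracy:", mlp.score(Xte, yte))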
Experiment 12:
Implement the K-means algorithm and apply it to a dataset of your choice. Evaluate
performance by measuring the sum of the Euclidean distances of each example from its cluster
center. Test the performance of the algorithm as a function of the parameter K.
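A from-scratch sketch in numpy, using the Iris features as a stand-in for the student-selected data; the reported cost is the sum of Euclidean distances of the examples from their cluster centers, printed for several values of K.

import numpy as np
from sklearn.datasets import load_iris

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]   # random initial centers
    for _ in range(n_iter):
        # assign each point to its nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute centers; keep the old center if a cluster becomes empty
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    labels = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2).argmin(axis=1)
    cost = np.linalg.norm(X - centers[labels], axis=1).sum()   # total distance to centers
    return labels, centers, cost

X, _ = load_iris(return_X_y=True)
for k in range(2, 8):
    _, _, cost = kmeans(X, k)
    print(f"K={k}  total distance to cluster centers = {cost:.2f}")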
Experiment 13:
Demonstrate the use of Fuzzy C-Means clustering.
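A from-scratch sketch of Fuzzy C-Means in numpy (the scikit-fuzzy package could be used instead); the Iris features and c = 3 clusters are only an example.

import numpy as np
from sklearn.datasets import load_iris

def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    # random membership matrix U, each row summing to 1
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        # cluster centers as membership-weighted means
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        # standard FCM membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        U_new = 1.0 / ((dist[:, :, None] / dist[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

X, _ = load_iris(return_X_y=True)
centers, U = fuzzy_c_means(X, c=3)
print("Cluster centers:\n", centers)
print("Hard labels of the first 10 points:", U.argmax(axis=1)[:10])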
Experiment 14:
Demonstrate the use of the Expectation Maximization based clustering algorithm.
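A minimal sketch, assuming scikit-learn's GaussianMixture, which fits a Gaussian mixture model by Expectation Maximization; the Iris features and 3 components are only an example.

from sklearn.datasets import load_iris
from sklearn.mixture import GaussianMixture

X, _ = load_iris(return_X_y=True)
gmm = GaussianMixture(n_components=3, covariance_type="full",
                      random_state=42).fit(X)       # EM runs inside fit()
labels = gmm.predict(X)
print("Average log-likelihood:", gmm.score(X))
print("Cluster sizes:", [int((labels == k).sum()) for k in range(3)])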
REFERENCE BOOKS:
1. Kevin P. Murphy, “Machine Learning: A Probabilistic Perspective”, MIT Press, 2012.
2. Stephen Marsland, “Machine Learning: An Algorithmic Perspective”, Second Edition,
Chapman and Hall/CRC Machine Learning and Pattern Recognition Series, 2014.
3. Andreas C. Müller and Sarah Guido, “Introduction to Machine Learning with Python:
A Guide for Data Scientists”, O'Reilly Media.
Web Links:
1. https://www.deeplearning.ai/machine-learningyearning/
2. https://www.cse.huji.ac.il/~shais/UnderstandingMachineLearning/index.html
3. https://onlinecourses.nptel.ac.in/noc21_cs24/preview
4. https://www.udemy.com/course/machinelearning/