
Course Title: Applied Machine Learning

Course Code: CoSc 6112


Credit Hours: 3 (Lab 2hrs)
Description of Course
This course is about learning to extract statistical structure from data, for making decisions and predictions as well as for visualization. The course will cover many of the most important mathematical and computational tools for probabilistic modeling, examine specific models from the literature, and show how they can be used for particular types of data. There will be a heavy emphasis on implementation. You may use Matlab, Python, or R. Each of the five assignments will involve some amount of coding, and the final project will almost certainly require running computer experiments.

Learning Outcomes
Upon completing this course, students should be able to:
 Explain and differentiate modern machine learning techniques;
 Identify potential application areas where machine learning techniques can be useful;
 Frame the problem in terms of machine learning;
 Select appropriate techniques based on the particular characteristics of the domains and
applications under consideration;
 Implement the solution and evaluate the results.

Course Content

Chapter One: Machine Learning Overview


 Introduction to machine learning
 Definition of learning systems
 Goals and applications of machine learning
 AI vs. ML
 Essential Math for ML and AI
Chapter Two: Supervised Learning
 Regression, such as linear regression, ridge regression, and logistic regression
 Classification and learning algorithms: decision tree learning; instance-based learning (k-nearest neighbors, collaborative filtering, case-based learning); support vector machines and kernel methods (kernels for learning non-linear functions); bootstrap; Bayesian learning (naive Bayes classifier and Bayesian networks); Expectation Maximization (EM) algorithms; rule learning; artificial neural networks (perceptron, multilayer networks and backpropagation, hidden layers and constructing intermediate distributed representations, overfitting, learning network structure, recurrent networks)
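As an illustration of the instance-based learning listed above, a k-nearest-neighbors classifier can be sketched in a few lines of plain Python. The dataset and the choice of k below are made-up illustrative values, not course material:

```python
from collections import Counter
import math

def knn_predict(train_points, train_labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Sort (distance, label) pairs by Euclidean distance to the query.
    dists = sorted(
        (math.dist(p, query), label)
        for p, label in zip(train_points, train_labels)
    )
    # Majority vote over the k closest labels.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D dataset: two well-separated clusters.
points = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(points, labels, (0.5, 0.5)))  # → a
print(knn_predict(points, labels, (5.5, 5.5)))  # → b
```

Note that k-NN has no training phase at all; the "model" is the stored data, which is what distinguishes instance-based learning from the parametric methods in this chapter.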
Chapter Three: Unsupervised Learning

 Learning from unclassified data
 Dimensionality reduction and density estimation
 Clustering
 Hierarchical Agglomerative clustering
 K-means partitioned clustering
 Expectation maximization (EM) for soft clustering
 Semi-supervised learning with EM using labeled and unlabeled data
 Graph-Based Algorithms and Multi-view Algorithms
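The K-means item above can be sketched as Lloyd's algorithm, alternating an assignment step and a centroid-update step. This is a plain-Python sketch; the data and the choice of k are illustrative assumptions:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm: alternate assigning points to the nearest
    centroid and moving each centroid to the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[idx].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:  # leave a centroid in place if its cluster went empty
                centroids[i] = tuple(sum(c) / len(cl) for c in zip(*cl))
    return centroids, clusters

# Two well-separated toy clusters.
data = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centroids, clusters = kmeans(data, k=2)
```

On this data the algorithm recovers the two cluster means regardless of which points seed the centroids; in general K-means only finds a local optimum and is sensitive to initialization.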
Chapter Four: Reinforcement Learning

 Introduction to reinforcement learning
 Bandit problems and online learning
 Dynamic programming, Monte Carlo methods, temporal-difference learning (Q-learning and R-learning), frontiers of RL, etc.
 Policy search (such as gradient-based search via backpropagation and gradient-free search methods)
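The Q-learning item above can be sketched as a tabular temporal-difference update. The environment here, a hypothetical chain where the agent moves left or right and earns reward 1 for reaching the last state, and all hyperparameters are illustrative assumptions, not from the course:

```python
import random

def q_learning(n_states=4, episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a chain: actions 0 (left) and 1 (right),
    reward 1 for stepping onto the last state, which is terminal."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] >= q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # TD update toward the target r + gamma * max_a' Q(s', a').
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
# Greedy policy per non-terminal state: learns to go right everywhere.
policy = [0 if q[s][0] >= q[s][1] else 1 for s in range(3)]
```

The learned values approach gamma raised to the distance-to-goal, e.g. Q(2, right) ≈ 1, which is the discounted-return structure that dynamic programming computes exactly when the model is known.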
Chapter Five: Deep Learning

 Modern Practical Deep Networks
 Deep feedforward networks
 Neural networks
 Bayesian neural nets
 Deep Boltzmann Machines (DBM)
 Deep Belief Networks (DBN)
 Convolutional neural networks
 Sequence modelling: recurrent and recursive nets
 Deep learning platforms (such as scikit-learn, TensorFlow, DeepLearning4J, Theano, Keras, PyTorch, etc.)
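The layered computation behind the deep feedforward networks listed above can be sketched in plain Python, without any of the platforms named; the 2-2-1 architecture and the weight values below are arbitrary illustrative choices:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One dense layer: sigmoid(W @ x + b), written out with plain loops."""
    return [
        sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

def forward(x, network):
    """Feed the input through each (weights, biases) layer in turn."""
    for weights, biases in network:
        x = layer(x, weights, biases)
    return x

# A 2-2-1 network with arbitrary illustrative weights.
net = [
    ([[2.0, -1.0], [-1.5, 2.5]], [0.5, -0.5]),  # hidden layer: 2 units
    ([[1.0, 1.0]], [-1.0]),                     # output layer: 1 unit
]
out = forward([1.0, 0.0], net)  # a single value in (0, 1)
```

Training would add backpropagation of a loss gradient through these same layers; the platforms listed above automate exactly that, plus GPU execution and larger architectures.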
Chapter Six: Computational Learning Theory

 Models of learnability: learning in the limit; probably approximately correct (PAC) learning
 Sample complexity: quantifying the number of examples needed to PAC-learn
 Inductive learning and analytical learning; parametric vs. non-parametric learning
 Dimensionality reduction techniques (such as Principal Component Analysis (PCA), Factor Analysis (FA), and Linear Discriminant Analysis (LDA))
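The sample-complexity item above has a standard closed form for a finite, realizable hypothesis class: m ≥ (1/ε)(ln|H| + ln(1/δ)) examples suffice to PAC-learn with error at most ε with probability at least 1 − δ. It can be evaluated directly:

```python
import math

def pac_sample_size(h_size, epsilon, delta):
    """Sample-complexity bound for a finite, realizable hypothesis class:
    m >= (1/epsilon) * (ln|H| + ln(1/delta)) examples guarantee that any
    consistent learner has true error <= epsilon with prob. >= 1 - delta."""
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / epsilon)

# e.g. |H| = 2**10 hypotheses, 5% error tolerance, 95% confidence:
m = pac_sample_size(2**10, epsilon=0.05, delta=0.05)  # → 199
```

Note the bound grows only logarithmically in |H| and 1/δ but linearly in 1/ε, which is why tightening the error tolerance is the expensive part.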
Chapter Seven: Experimental Evaluation of Learning Algorithms

 Measuring the accuracy of learned hypotheses
 Comparing learning algorithms
 Model selection, cross-validation, AUC, precision, recall, specificity, mean absolute percentage error, root mean square error, learning curves, and statistical hypothesis testing
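The precision and recall metrics above can be computed directly from label lists; this plain-Python sketch uses made-up predictions for illustration:

```python
def precision_recall(y_true, y_pred, positive=1):
    """Precision = TP / (TP + FP): of the predicted positives, how many are right.
    Recall    = TP / (TP + FN): of the actual positives, how many were found."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Made-up ground truth and predictions: 2 TP, 1 FP, 1 FN.
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
p, r = precision_recall(y_true, y_pred)  # → (2/3, 2/3)
```

Cross-validation then repeats such a computation over held-out folds and averages the results, which is what turns a single score into a comparison between learning algorithms.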

Teaching Strategy
This course will be offered through lectures, presentations, class discussions, laboratory work, and group project work. The exact contents of this course may vary; new materials may be added as new technologies emerge.

Assessment Methods:

 Project 35%
 Presentations 15%
 Written Examination 50%
Teaching Support and Inputs
There is no single textbook for the module. Students are therefore advised to read the appropriate chapters from the listed reading materials, in addition to materials of their own.
Module Requirements

 Every student should attend all lectures.
 Students should form groups of three for the project work and article review, and identify their project titles.
 Students should submit every assignment according to the deadline.
 Students should present/demonstrate their assignments.
 Students should sit for the written examination.
Reading Materials

 Andrea Giussani. Applied Machine Learning with Python. Bocconi University Press, 2020.
 Andreas C. Mueller and Sarah Guido. Introduction to Machine Learning with Python: A Guide for Data Scientists. O'Reilly Media, 2016.
 F. V. Jensen. Bayesian Networks and Decision Graphs. Springer, 2001.
 Michael Nielsen. Neural Networks and Deep Learning.
 Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer.
 Kevin P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
 Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
 David J. C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.
