A Course in Machine Learning
by Hal Daumé III
Machine learning is the study of algorithms that learn from
data and experience. It is applied in a vast variety of
areas, from medicine to advertising, from military to
pedestrian. Any area in which you need to make
sense of data is a potential consumer of machine learning.
CIML is a set of introductory materials that covers most
major aspects of modern machine learning (supervised
learning, unsupervised learning, large margin methods,
probabilistic modeling, learning theory, etc.). Its focus is
on broad applications with a rigorous backbone. A subset
can be used for an undergraduate course; a graduate course
could probably cover all of the material and then some.
You may obtain the written materials by purchasing a
print copy ($55), by downloading the entire book, or by
downloading individual chapters below. If you find the
electronic version of the book useful and would like to
donate a small amount to support further development,
that's always appreciated! You can get the source code for
the book, labs and other teaching materials on GitHub. The
current version is 0.99 (the "beta" pre-release). You can
view v0.9 if you prefer.
Support and Mailing Lists:
If you would like to be informed when new versions of
CIML materials are released, please join the CIML mailing
list. If you find errors in the book, please fill out a bug
report. If you're the first to submit an error, you'll get listed
in the acknowledgments!
Individual Chapters:
0. Front Matter
1. Decision Trees
2. Limits of Learning
3. Geometry and Nearest Neighbors
4. The Perceptron
5. Practical Issues
6. Beyond Binary Classification
7. Linear Models
8. Bias and Fairness
9. Probabilistic Modeling
10. Neural Networks
11. Kernel Methods
12. Learning Theory
13. Ensemble Methods
14. Efficient Learning
15. Unsupervised Learning
16. Expectation Maximization
17. Structured Prediction
18. Imitation Learning
19. Back Matter
Acknowledgments
Thanks to everyone who was ever a teacher or student of
mine, to those who provided feedback on drafts, and to
colleagues for encouragement to get this done! Special
thanks to: TODO...