This course will give a detailed introduction to learning theory with a focus on the classification problem. It will be shown how to obtain (probabilistic) bounds on the generalization error for certain types of algorithms. The main themes will be:

* probabilistic inequalities and concentration inequalities
* union bounds and chaining
* measuring the size of a function class: Vapnik-Chervonenkis dimension, shattering dimension, and Rademacher averages
* classification with real-valued functions

Some knowledge of probability theory would be helpful but is not required, since the main tools will be introduced.
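As a small illustration of the concentration inequalities mentioned above (a minimal sketch, not part of the course materials; the function names are hypothetical), the following Python snippet compares the empirical deviation probability of a Bernoulli sample mean with the bound given by Hoeffding's inequality:

```python
import math
import random

# Hoeffding's inequality for n i.i.d. random variables bounded in [0, 1]:
#   P(|mean - E[X]| >= t) <= 2 * exp(-2 * n * t^2)

def hoeffding_bound(n, t):
    """Two-sided Hoeffding upper bound on the deviation probability."""
    return 2 * math.exp(-2 * n * t * t)

def empirical_deviation_prob(n, t, trials=20000, p=0.5, seed=0):
    """Fraction of trials where a Bernoulli(p) sample mean deviates by >= t."""
    rng = random.Random(seed)
    count = 0
    for _ in range(trials):
        mean = sum(rng.random() < p for _ in range(n)) / n
        if abs(mean - p) >= t:
            count += 1
    return count / trials

n, t = 100, 0.1
print(f"empirical: {empirical_deviation_prob(n, t):.4f}")
print(f"Hoeffding: {hoeffding_bound(n, t):.4f}")
```

The empirical frequency stays below the bound, as the inequality guarantees; the bound is loose here (Hoeffding holds for any bounded distribution, not just Bernoulli), which is typical of the worst-case bounds studied in the course.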
K. Kawaguchi, L. Kaelbling, and Y. Bengio. (2017). arXiv:1710.05468. Comment: To appear in Mathematics of Deep Learning, Cambridge University Press. All previous results remain unchanged.
S. Kamath, A. Orlitsky, D. Pichapati, and A. Suresh. Proceedings of the 28th Conference on Learning Theory, volume 40 of Proceedings of Machine Learning Research, pages 1066--1100, Paris, France. PMLR, 03--06 Jul 2015.
C. Canonne. (2020). arXiv:2002.11457. Comment: This is a review article; its intent is not to provide new results, but instead to gather known (and useful) ones, along with their proofs, in a single convenient location.