This course gives a detailed introduction to learning theory, with a focus on the classification problem. It will be shown how to obtain (probabilistic) bounds on the generalization error for certain types of algorithms. The main themes will be:

* probabilistic inequalities and concentration inequalities
* union bounds and chaining
* measuring the size of a function class: Vapnik–Chervonenkis dimension, shattering dimension, and Rademacher averages
* classification with real-valued functions

Some knowledge of probability theory is helpful but not required, since the main tools will be introduced.
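As a minimal illustration of the probabilistic bounds the course covers, the sketch below uses Hoeffding's inequality: for a fixed classifier with 0/1 loss on n i.i.d. samples, with probability at least 1 − δ the empirical error deviates from the true error by at most √(ln(2/δ)/(2n)). The simulated `true_error`, `n`, and `delta` values are illustrative assumptions, not taken from the course.

```python
import math
import random

def hoeffding_bound(n, delta):
    """Two-sided Hoeffding deviation bound for the mean of n i.i.d. [0,1] losses:
    with probability >= 1 - delta, |empirical error - true error| <= this value."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

random.seed(0)
true_error = 0.3   # assumed true misclassification probability (illustrative)
n = 1000           # sample size (illustrative)
delta = 0.05       # confidence parameter

# Empirical error: average of n i.i.d. Bernoulli(true_error) losses
losses = [1 if random.random() < true_error else 0 for _ in range(n)]
empirical_error = sum(losses) / n

bound = hoeffding_bound(n, delta)
print(f"empirical error: {empirical_error:.3f}")
print(f"|deviation|: {abs(empirical_error - true_error):.3f}  vs  bound: {bound:.3f}")
```

Note that this bound holds for a single fixed classifier; extending it uniformly over a whole function class is exactly where the union bound, chaining, and the complexity measures listed above (VC dimension, Rademacher averages) enter.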
P. Bartlett, D. Foster, and M. Telgarsky (2017). arXiv:1706.08498. Comment: compared to arXiv v1, the 1-norm in the main bound is refined to a (2,1)-group-norm; compared to the NIPS camera-ready, typo fixes.
P. Bartlett, N. Harvey, C. Liaw, and A. Mehrabian (2017). arXiv:1703.02930. Comment: an extended abstract appeared in COLT 2017; the upper bound was presented at the 2016 ACM Conference on Data Science. This version includes all the proofs and a refinement of the upper bound (Theorem 6). 16 pages, 2 figures.