Data-dependent Sample Complexity of Deep Neural Networks via Lipschitz Augmentation. NeurIPS, pages 9722-9733 (2019)

Other publications of authors with the same name

Theoretical insights on generalization in supervised and self-supervised deep learning. Stanford University, USA (2022)
Shape Matters: Understanding the Implicit Bias of the Noise Covariance. CoRR (2020)
Max-Margin Works while Large Margin Fails: Generalization without Uniform Convergence. CoRR (2022)
Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel. NeurIPS, pages 9709-9721 (2019)
Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss. NeurIPS, pages 1565-1576 (2019)
Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data. ICLR, OpenReview.net (2021)
Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss. NeurIPS, pages 5000-5011 (2021)
The Implicit and Explicit Regularization Effects of Dropout. ICML, volume 119 of Proceedings of Machine Learning Research, pages 10181-10192. PMLR (2020)
Markov Chain Truncation for Doubly-Intractable Inference. AISTATS, volume 54 of Proceedings of Machine Learning Research, pages 776-784. PMLR (2017)
Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel (2018). arXiv:1810.05369. Comment: version 2: title changed from the original "On the Margin Theory of Feedforward Neural Networks"; substantial changes from the old version of the paper, including a new lower bound on NTK sample complexity. Version 3: reorganized the NTK lower bound proof.