Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks

Ziwei Ji and Matus Telgarsky (2019). arXiv:1909.12292.

Other publications of authors with the same name

Social Welfare and Profit Maximization from Revealed Preferences. WINE, volume 11316 of Lecture Notes in Computer Science, pages 264-281. Springer, 2018.
Gradient descent follows the regularization path for general losses. COLT, volume 125 of Proceedings of Machine Learning Research, pages 2109-2136. PMLR, 2020.
Actor-critic is implicitly biased towards high entropy optimal policies. CoRR, 2021.
VScript: Controllable Script Generation with Visual Presentation. AACL/IJCNLP (System Demonstrations), pages 1-8. Association for Computational Linguistics, 2022.
Plausible May Not Be Faithful: Probing Object Hallucination in Vision-Language Pre-training. EACL, pages 2128-2140. Association for Computational Linguistics, 2023.
Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks. ICLR, OpenReview.net, 2020.
Risk and parameter convergence of logistic regression. CoRR, 2018.
Gradient descent aligns the layers of deep linear networks. CoRR, 2018.
A refined primal-dual analysis of the implicit bias. CoRR, 2019.
Negative Object Presence Evaluation (NOPE) to Measure Object Hallucination in Vision-Language Models. CoRR, 2023.