
Other publications of authors with the same name

Large Catapults in Momentum Gradient Descent with Warmup: An Empirical Study. CoRR (2023)
Are Transformers universal approximators of sequence-to-sequence functions? ICLR, OpenReview.net (2020)
Minimax Bounds on Stochastic Batched Convex Optimization. COLT, volume 75 of Proceedings of Machine Learning Research, pages 3065-3162. PMLR (2018)
Does SGD really happen in tiny subspaces? CoRR (2024)
Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity. NeurIPS, pages 15532-15543 (2019)
Provable Memorization via Deep Neural Networks using Sub-linear Parameters. COLT, volume 134 of Proceedings of Machine Learning Research, pages 3627-3661. PMLR (2021)
Minimum Width for Universal Approximation. ICLR, OpenReview.net (2021)
On the Training Instability of Shuffling SGD with Batch Normalization. ICML, volume 202 of Proceedings of Machine Learning Research, pages 37787-37845. PMLR (2023)
Linear attention is (maybe) all you need (to understand Transformer optimization). ICLR, OpenReview.net (2024)
Are Transformers universal approximators of sequence-to-sequence functions? CoRR (2019)