
A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions.

CoRR (2021)


Other publications of authors with the same name

Deep neural network approximations for Monte Carlo algorithms. arXiv:1908.10828 (2019). Comment: 45 pages.
Algorithms for Solving High Dimensional PDEs: From Nonlinear Monte Carlo to Machine Learning. CoRR (2020).
Convergence Rates for the Stochastic Gradient Descent Method for Non-Convex Objective Functions. J. Mach. Learn. Res. (2020).
Galerkin Approximations for the Stochastic Burgers Equation. SIAM J. Numerical Analysis, 51 (1): 694-715 (2013).
Overcoming the curse of dimensionality in the numerical approximation of Allen-Cahn partial differential equations via truncated full-history recursive multilevel Picard approximations. J. Num. Math., 28 (4): 197-222 (2020).
Algorithmically Designed Artificial Neural Networks (ADANNs): Higher order deep operator learning for parametric partial differential equations. CoRR (2023).
Full history recursive multilevel Picard approximations for ordinary differential equations with expectations. CoRR (2021).
Gradient descent provably escapes saddle points in the training of shallow ReLU networks. CoRR (2022).
Deep learning approximations for non-local nonlinear PDEs with Neumann boundary conditions. CoRR (2022).
Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation. CoRR (2021).