Author of the publication

Understanding Gradient Regularization in Deep Learning: Efficient Finite-Difference Computation and Implicit Bias.

Takase, Tomoumi, et al. CoRR, (2022)


Author: Takase, Tomoumi

Other publications of authors with the same name

Self-paced Data Augmentation for Training Neural Networks. CoRR, (2020)
Understanding Gradient Regularization in Deep Learning: Efficient Finite-Difference Computation and Implicit Bias. ICML, volume 202 of Proceedings of Machine Learning Research, pages 15809-15827. PMLR, (2023)
Dynamic batch size tuning based on stopping criterion for neural network training. Neurocomputing, (2021)
Effective neural network training with adaptive learning rate based on training loss. Neural Networks, (2018)
Time-domain Mixup Source Data Augmentation of sEMGs for Motion Recognition towards Efficient Style Transfer Mapping. EMBC, pages 35-38. IEEE, (2021)
Evaluation of Stratified Validation in Neural Network Training with Imbalanced Data. BigComp, pages 1-4. IEEE, (2019)
Longer Distance Weight Prediction for Faster Training of Neural Networks. SMC, pages 2194-2199. IEEE, (2018)
Feature combination mixup: novel mixup method using feature combination for neural networks. Neural Comput. Appl., 35 (17): 12763-12774 (June 2023)
Self-paced data augmentation for training neural networks. Neurocomputing, (2021)
Difficulty-weighted learning: A novel curriculum-like approach based on difficult examples for neural network training. Expert Syst. Appl., (2019)