
Subdominant Dense Clusters Allow for Simple Learning and High Computational Performance in Neural Networks with Discrete Synapses

, , , , and . (2015). arXiv:1509.05753. Comment: 11 pages, 4 figures (main text: 5 pages, 3 figures; Supplemental Material: 6 pages, 1 figure).
DOI: 10.1103/PhysRevLett.115.128101


Other publications of authors with the same name

Neural networks trained with SGD learn distributions of increasing complexity., , and . ICML, volume 202 of Proceedings of Machine Learning Research, page 28843-28863. PMLR, (2023).

Network reconstruction from infection cascades., and . CoRR, (2016).

Unreasonable Effectiveness of Learning Neural Networks: From Accessible States and Robust Ensembles to Basic Algorithmic Schemes, , , , , , and . (2016). arXiv:1605.06444. Comment: 31 pages (14 main text, 18 appendix), 12 figures (6 main text, 6 appendix).

Local entropy as a measure for sampling solutions in Constraint Satisfaction Problems, , , , and . (2015). arXiv:1511.05634. Comment: 46 pages (main text: 22), 7 figures. This is an author-created, un-copyedited version of an article published in Journal of Statistical Mechanics: Theory and Experiment. IOP Publishing Ltd is not responsible for any errors or omissions in this version of the manuscript or any version derived from it. The Version of Record is available online at http://dx.doi.org/10.1088/1742-5468/2016/02/023301.

Subdominant Dense Clusters Allow for Simple Learning and High Computational Performance in Neural Networks with Discrete Synapses, , , , and . (2015). arXiv:1509.05753. Comment: 11 pages, 4 figures (main text: 5 pages, 3 figures; Supplemental Material: 6 pages, 1 figure).

Feature learning in finite-width Bayesian deep linear networks with multiple outputs and convolutional layers., , , , and . CoRR, (2024).