J. Huggins, M. Kasprzak, T. Campbell, and T. Broderick. (2019). arXiv:1910.04102. Comment: A Python package for carrying out our validated variational inference workflow -- including performing black-box variational inference and computing the bounds we develop in this paper -- is available at https://github.com/jhuggins/viabel. The same repository also contains code for reproducing all of our experiments.
M. Vadera, A. Cobb, B. Jalaian, and B. Marlin. (2020). arXiv:2007.04466. Comment: Presented at the ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning.
J. Hron, A. Matthews, and Z. Ghahramani. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2019--2028, Stockholmsmässan, Stockholm, Sweden. PMLR, 10--15 Jul 2018.
C. Chu, K. Minami, and K. Fukumizu. (2020). arXiv:2004.01822. Comment: ICLR 2020 Workshop on Integration of Deep Neural Models and Differential Equations.
S. Sinha, H. Bharadhwaj, A. Goyal, H. Larochelle, A. Garg, and F. Shkurti. (2020). arXiv:2003.04514. Comment: Samarth Sinha* and Homanga Bharadhwaj* contributed equally to this work. Code will be released at https://github.com/rvl-lab-utoronto/dibs.
B. Axelrod, I. Diakonikolas, A. Sidiropoulos, A. Stewart, and G. Valiant. (2019). arXiv:1907.08306. Comment: The present paper is a merger of two independent works, arXiv:1811.03204 and arXiv:1812.05524, proposing essentially the same algorithm to compute the log-concave MLE.
S. Chatzis. In Proceedings of the 30th International Conference on Machine Learning, volume 28 of Proceedings of Machine Learning Research, pages 729--737, Atlanta, Georgia, USA. PMLR, 17--19 Jun 2013.
D. Simpson, H. Rue, T. Martins, A. Riebler, and S. Sørbye. (2014). arXiv:1403.4630. Comment: Major revision of the previous version. Includes an expanded literature review and new desiderata for hierarchical priors. Removes (for space) the Cox proportional hazards model and the section on hyperparameters for Gaussian random fields.