S. Greydanus, M. Dzamba, and J. Yosinski. (2019). Hamiltonian Neural Networks. arXiv:1906.01563. Comment: Main paper has 8 pages and 5 figures. Under review for NeurIPS 2019.
Abstract
Even though neural networks enjoy widespread use, they still struggle to
learn the basic laws of physics. How might we endow them with better inductive
biases? In this paper, we draw inspiration from Hamiltonian mechanics to train
models that learn and respect exact conservation laws in an unsupervised
manner. We evaluate our models on problems where conservation of energy is
important, including the two-body problem and pixel observations of a pendulum.
Our model trains faster and generalizes better than a regular neural network.
An interesting side effect is that our model is perfectly reversible in time.
@article{greydanus2019hamiltonian,
abstract = {Even though neural networks enjoy widespread use, they still struggle to
learn the basic laws of physics. How might we endow them with better inductive
biases? In this paper, we draw inspiration from Hamiltonian mechanics to train
models that learn and respect exact conservation laws in an unsupervised
manner. We evaluate our models on problems where conservation of energy is
important, including the two-body problem and pixel observations of a pendulum.
Our model trains faster and generalizes better than a regular neural network.
An interesting side effect is that our model is perfectly reversible in time.},
author = {Greydanus, Sam and Dzamba, Misko and Yosinski, Jason},
keywords = {deep-learning dynamic learning machine-learning theory},
eprint = {1906.01563},
archiveprefix = {arXiv},
note = {Main paper has 8 pages and 5 figures. Under review for NeurIPS 2019},
title = {Hamiltonian Neural Networks},
url = {http://arxiv.org/abs/1906.01563},
year = 2019
}