Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit
Constraints
M. Finzi, K. Wang, and A. Wilson. (2020). arXiv:2010.13581. NeurIPS 2020. Code available at https://github.com/mfinzi/constrained-hamiltonian-neural-networks.
Abstract
Reasoning about the physical world requires models that are endowed with the
right inductive biases to learn the underlying dynamics. Recent works improve
generalization for predicting trajectories by learning the Hamiltonian or
Lagrangian of a system rather than the differential equations directly. While
these methods encode the constraints of the systems using generalized
coordinates, we show that embedding the system into Cartesian coordinates and
enforcing the constraints explicitly with Lagrange multipliers dramatically
simplifies the learning problem. We introduce a series of challenging chaotic
and extended-body systems, including systems with N-pendulums, spring coupling,
magnetic fields, rigid rotors, and gyroscopes, to push the limits of current
approaches. Our experiments show that Cartesian coordinates with explicit
constraints lead to a 100x improvement in accuracy and data efficiency.
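The explicit-constraint formulation the abstract refers to can be sketched in standard constrained-mechanics notation (a generic summary, not equations taken from the paper): the state is embedded in Cartesian coordinates, and a holonomic constraint $\phi(x) = 0$ is enforced with Lagrange multipliers $\lambda$ rather than eliminated via generalized coordinates.

```latex
% Constrained Hamiltonian dynamics in Cartesian coordinates (generic form):
%   H(x, p)  - learned Hamiltonian,  phi(x) = 0  - holonomic constraints
\begin{align}
  \dot{x} &= \frac{\partial H}{\partial p}, \\
  \dot{p} &= -\frac{\partial H}{\partial x} + \big(D\phi(x)\big)^{\top}\lambda, \\
  0 &= \phi(x),
\end{align}
% lambda is determined at each step so that trajectories remain on the
% constraint manifold {x : phi(x) = 0}.
```

In this form the multipliers $\lambda$ can be solved for in closed form from the constraint and its time derivatives, which is what makes the learning problem simpler than parameterizing the dynamics in generalized coordinates.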
%0 Generic
%1 finzi2020simplifying
%A Finzi, Marc
%A Wang, Ke Alexander
%A Wilson, Andrew Gordon
%D 2020
%K virgile
%T Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit
Constraints
%U http://arxiv.org/abs/2010.13581
%X Reasoning about the physical world requires models that are endowed with the
right inductive biases to learn the underlying dynamics. Recent works improve
generalization for predicting trajectories by learning the Hamiltonian or
Lagrangian of a system rather than the differential equations directly. While
these methods encode the constraints of the systems using generalized
coordinates, we show that embedding the system into Cartesian coordinates and
enforcing the constraints explicitly with Lagrange multipliers dramatically
simplifies the learning problem. We introduce a series of challenging chaotic
and extended-body systems, including systems with N-pendulums, spring coupling,
magnetic fields, rigid rotors, and gyroscopes, to push the limits of current
approaches. Our experiments show that Cartesian coordinates with explicit
constraints lead to a 100x improvement in accuracy and data efficiency.
@misc{finzi2020simplifying,
abstract = {Reasoning about the physical world requires models that are endowed with the
right inductive biases to learn the underlying dynamics. Recent works improve
generalization for predicting trajectories by learning the Hamiltonian or
Lagrangian of a system rather than the differential equations directly. While
these methods encode the constraints of the systems using generalized
coordinates, we show that embedding the system into Cartesian coordinates and
enforcing the constraints explicitly with Lagrange multipliers dramatically
simplifies the learning problem. We introduce a series of challenging chaotic
and extended-body systems, including systems with N-pendulums, spring coupling,
magnetic fields, rigid rotors, and gyroscopes, to push the limits of current
approaches. Our experiments show that Cartesian coordinates with explicit
constraints lead to a 100x improvement in accuracy and data efficiency.},
added-at = {2020-10-28T14:18:57.000+0100},
author = {Finzi, Marc and Wang, Ke Alexander and Wilson, Andrew Gordon},
biburl = {https://www.bibsonomy.org/bibtex/21511ca6d056c5102afac0bef86ba05f4/topel},
interhash = {bd3f7929052cd6fba2f87933a9476bd3},
intrahash = {1511ca6d056c5102afac0bef86ba05f4},
keywords = {virgile},
  note = {arXiv:2010.13581. NeurIPS 2020. Code available at https://github.com/mfinzi/constrained-hamiltonian-neural-networks},
timestamp = {2020-10-28T14:18:57.000+0100},
title = {Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit
Constraints},
url = {http://arxiv.org/abs/2010.13581},
year = 2020
}