Article

Strong convex relaxations and mixed-integer programming formulations for trained neural networks

(2018). arXiv:1811.01988.

Abstract

We present strong convex relaxations for high-dimensional piecewise linear functions that correspond to trained neural networks. These convex relaxations can be used for a number of important tasks, such as verifying that an image classification network is robust to adversarial inputs, or providing optimality guarantees for decision problems with machine learning models embedded inside (i.e., the "predict, then optimize" paradigm). Our convex relaxations arise from mixed-integer programming (MIP) formulations, and so they can be paired with existing MIP technology to produce provably optimal primal solutions, or to further strengthen the relaxations via cutting planes. We provide convex relaxations for networks with many of the most popular nonlinear operations (e.g., ReLU and max pooling) that are strictly stronger than other approaches from the literature. We corroborate this computationally on image classification verification tasks on the MNIST digit data set, where we show that our relaxations are able to match the bound improvement provided by state-of-the-art MIP solvers, in orders of magnitude less time.
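For context, a minimal sketch of the standard big-M MIP encoding of a single ReLU unit y = max(0, w^T x + b) is given below; this is the common baseline that stronger formulations, such as those proposed in the paper, aim to tighten. The bounds L and U are assumed known pre-activation bounds and are our notation, not taken from the paper itself.

% Standard big-M MIP encoding of one ReLU unit y = max(0, w^T x + b),
% assuming known pre-activation bounds L <= w^T x + b <= U with L < 0 < U.
% This is the usual baseline; the paper derives strictly stronger formulations.
\begin{align*}
  y &\ge w^\top x + b, \\
  y &\ge 0, \\
  y &\le w^\top x + b - L(1 - z), \\
  y &\le U z, \\
  z &\in \{0, 1\}.
\end{align*}
% z = 1 forces y = w^T x + b (unit active); z = 0 forces y = 0 (unit inactive).
% Relaxing z to the interval [0, 1] yields the convex (LP) relaxation whose
% tightness is the central concern of the paper.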


Users

  • @kirk86
