Abstract
Generative neural networks can be used to specify continuous transformations
between images via latent-space interpolation. However, certifying that all
images captured by the resulting path in the image manifold satisfy a given
property is challenging: the set of such images is highly
non-convex, thwarting existing scalable robustness analysis methods, which are
often based on convex relaxations. We present ApproxLine, a scalable
certification method that successfully verifies non-trivial specifications
involving generative models and classifiers. ApproxLine can provide both sound
deterministic and probabilistic guarantees, by capturing either infinite
non-convex sets of neural network activation vectors or distributions over such
sets. We show that ApproxLine is practically useful and can verify interesting
interpolations in the network's latent space.
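
To make the specification concrete, here is a minimal sketch (not the paper's implementation) of the property in question: every image on the latent-space line segment between two encodings should keep a given classification. The names `decoder`, `classifier`, `z0`, and `z1` are assumed inputs, and the loop only samples the segment, whereas ApproxLine reasons about the entire (non-convex) image set at once.

```python
import torch

def interpolation_satisfies_property(decoder, classifier, z0, z1,
                                     target_class, steps=100):
    """Empirically check (not certify) that every sampled point on the
    latent line segment (1 - t) * z0 + t * z1 decodes to an image of
    `target_class`. A certifier like ApproxLine would instead bound the
    classifier's behavior over the whole segment."""
    for t in torch.linspace(0.0, 1.0, steps):
        z = (1.0 - t) * z0 + t * z1      # point on the latent-space line
        image = decoder(z)               # corresponding image on the path
        logits = classifier(image)
        if logits.argmax(dim=-1).item() != target_class:
            return False                 # found a sampled counterexample
    return True
```

Sampling as above can miss violations between grid points, which is exactly why a sound certification method over the full segment is needed.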