Preventing Failures Due to Dataset Shift: Learning Predictive Models That Transport
A. Subbaswamy, P. Schulam, and S. Saria (2018). arXiv:1812.04597. Comment: In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 2019. Previously presented at the NeurIPS 2018 Causal Learning Workshop.
Abstract
Classical supervised learning produces unreliable models when training and
target distributions differ, with most existing solutions requiring samples
from the target domain. We propose a proactive approach which learns a
relationship in the training domain that will generalize to the target domain
by incorporating prior knowledge of aspects of the data generating process that
are expected to differ as expressed in a causal selection diagram.
Specifically, we remove variables generated by unstable mechanisms from the
joint factorization to yield the Surgery Estimator---an interventional
distribution that is invariant to the differences across environments. We prove
that the surgery estimator finds stable relationships in strictly more
scenarios than previous approaches which only consider conditional
relationships, and demonstrate this in simulated experiments. We also evaluate
on real world data for which the true causal diagram is unknown, performing
competitively against entirely data-driven approaches.
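The truncated-factorization idea behind the Surgery Estimator can be illustrated on a toy system. This is a minimal sketch, not the paper's implementation: the variables (A, T, Y), mechanisms, and probabilities below are invented for illustration. The unstable factor P(T | A) is deleted from the joint factorization, leaving the interventional distribution P(Y | do(T)) = Σ_a P(Y | T, A=a) P(A=a), which does not depend on how T is generated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy causal model: A -> T -> Y and A -> Y.
# Assume the mechanism P(T | A) is unstable (it differs across
# environments), while P(A) and P(Y | T, A) are stable.
def sample(n, t_flip):
    a = rng.binomial(1, 0.5, n)                          # stable P(A)
    t = rng.binomial(1, np.where(a == 1, 0.8, t_flip))   # unstable P(T | A)
    y = rng.binomial(1, 0.2 + 0.5 * t * a + 0.2 * a)     # stable P(Y | T, A)
    return a, t, y

a, t, y = sample(200_000, t_flip=0.3)  # training environment

# Surgery-style estimate: drop the unstable factor P(T | A) and use the
# truncated (interventional) factorization
#   P(Y=1 | do(T=t)) = sum_a P(Y=1 | T=t, A=a) P(A=a)
def p_y_do_t(t_val):
    total = 0.0
    for a_val in (0, 1):
        mask = (t == t_val) & (a == a_val)
        total += y[mask].mean() * (a == a_val).mean()
    return total

print(p_y_do_t(1))
```

Unlike the naive conditional P(Y=1 | T=1) (which here depends on `t_flip` through P(A | T)), the interventional estimate is invariant to how P(T | A) shifts across environments, which is the stability property the surgery estimator targets.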
Description
[1812.04597] Preventing Failures Due to Dataset Shift: Learning Predictive Models That Transport
%0 Journal Article
%1 subbaswamy2018preventing
%A Subbaswamy, Adarsh
%A Schulam, Peter
%A Saria, Suchi
%D 2018
%K causal-analysis invariance
%T Preventing Failures Due to Dataset Shift: Learning Predictive Models That Transport
%U http://arxiv.org/abs/1812.04597
%X Classical supervised learning produces unreliable models when training and
target distributions differ, with most existing solutions requiring samples
from the target domain. We propose a proactive approach which learns a
relationship in the training domain that will generalize to the target domain
by incorporating prior knowledge of aspects of the data generating process that
are expected to differ as expressed in a causal selection diagram.
Specifically, we remove variables generated by unstable mechanisms from the
joint factorization to yield the Surgery Estimator---an interventional
distribution that is invariant to the differences across environments. We prove
that the surgery estimator finds stable relationships in strictly more
scenarios than previous approaches which only consider conditional
relationships, and demonstrate this in simulated experiments. We also evaluate
on real world data for which the true causal diagram is unknown, performing
competitively against entirely data-driven approaches.
@article{subbaswamy2018preventing,
abstract = {Classical supervised learning produces unreliable models when training and
target distributions differ, with most existing solutions requiring samples
from the target domain. We propose a proactive approach which learns a
relationship in the training domain that will generalize to the target domain
by incorporating prior knowledge of aspects of the data generating process that
are expected to differ as expressed in a causal selection diagram.
Specifically, we remove variables generated by unstable mechanisms from the
joint factorization to yield the Surgery Estimator---an interventional
distribution that is invariant to the differences across environments. We prove
that the surgery estimator finds stable relationships in strictly more
scenarios than previous approaches which only consider conditional
relationships, and demonstrate this in simulated experiments. We also evaluate
on real world data for which the true causal diagram is unknown, performing
competitively against entirely data-driven approaches.},
added-at = {2019-08-06T11:37:13.000+0200},
author = {Subbaswamy, Adarsh and Schulam, Peter and Saria, Suchi},
biburl = {https://www.bibsonomy.org/bibtex/28d1ba8d2aac1d0182d7305b50f17ff7f/kirk86},
description = {[1812.04597] Preventing Failures Due to Dataset Shift: Learning Predictive Models That Transport},
interhash = {f9d1da2538555d5c534c8792249370b7},
intrahash = {8d1ba8d2aac1d0182d7305b50f17ff7f},
keywords = {causal-analysis invariance},
note = {arXiv:1812.04597. Comment: In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 2019. Previously presented at the NeurIPS 2018 Causal Learning Workshop},
timestamp = {2019-08-06T11:37:13.000+0200},
title = {Preventing Failures Due to Dataset Shift: Learning Predictive Models
That Transport},
url = {http://arxiv.org/abs/1812.04597},
year = 2018
}