@article{alaa2017counterfactual,
abstract = {We propose a novel approach for inferring the individualized causal effects
of a treatment (intervention) from observational data. Our approach
conceptualizes causal inference as a multitask learning problem; we model a
subject's potential outcomes using a deep multitask network with a set of
shared layers among the factual and counterfactual outcomes, and a set of
outcome-specific layers. The impact of selection bias in the observational data
is alleviated via a propensity-dropout regularization scheme, in which the
network is thinned for every training example via a dropout probability that
depends on the associated propensity score. The network is trained in
alternating phases, where in each phase we use the training examples of one of
the two potential outcomes (treated and control populations) to update the
weights of the shared layers and the respective outcome-specific layers.
Experiments conducted on data based on a real-world observational study show
that our algorithm outperforms the state-of-the-art.},
added-at = {2019-05-23T04:13:39.000+0200},
author = {Alaa, Ahmed M. and Weisz, Michael and van der Schaar, Mihaela},
biburl = {https://www.bibsonomy.org/bibtex/249feca14c67a4302aac8fd160f98f2bd/kirk86},
description = {[1706.05966] Deep Counterfactual Networks with Propensity-Dropout},
interhash = {ee6716bf31f639403bd4ff46dfa37d85},
intrahash = {49feca14c67a4302aac8fd160f98f2bd},
keywords = {causal-analysis deep-learning},
note = {arXiv:1706.05966},
timestamp = {2019-05-23T04:13:55.000+0200},
title = {Deep Counterfactual Networks with Propensity-Dropout},
url = {http://arxiv.org/abs/1706.05966},
year = 2017
}