@conference{DomainSeperation,
abstract = {The cost of large-scale data collection and annotation often makes the
application of machine learning algorithms to new tasks or datasets
prohibitively expensive. One approach to circumventing this cost is to train
models on synthetic data, where annotations are provided automatically. Despite
their appeal, such models often fail to generalize from synthetic to real
images, necessitating domain adaptation algorithms to manipulate these models
before they can be successfully applied. Existing approaches focus either on
mapping representations from one domain to the other, or on learning to extract
features that are invariant to the domain from which they were extracted.
However, by focusing only on creating a mapping or shared representation
between the two domains, they ignore the individual characteristics of each
domain. We suggest that explicitly modeling what is unique to each domain can
improve a model's ability to extract domain-invariant features. Inspired by
work on private-shared component analysis, we explicitly learn to extract image
representations that are partitioned into two subspaces: one component that is
private to each domain and one that is shared across domains. Our model is
trained not only to perform the task we care about in the source domain, but
also to use the partitioned representation to reconstruct the images from both
domains. Our novel architecture results in a model that outperforms the state
of the art on a range of unsupervised domain adaptation scenarios and
additionally produces visualizations of the private and shared representations,
enabling interpretation of the domain adaptation process.},
author = {Bousmalis, Konstantinos and Trigeorgis, George and Silberman, Nathan and Krishnan, Dilip and Erhan, Dumitru},
booktitle = {Proceedings of the 30th International Conference on Neural Information Processing Systems},
pages = {343--351},
title = {Domain Separation Networks},
url = {http://arxiv.org/abs/1608.06019},
year = 2016
}