Generalization Bounds for Representative Domain Adaptation
C. Zhang, L. Zhang, W. Fan, and J. Ye. (2014). arXiv:1401.0376. Comment: arXiv admin note: substantial text overlap with arXiv:1304.1574.
Abstract
In this paper, we propose a novel framework for analyzing the theoretical
properties of the learning process for a representative type of domain
adaptation, which combines data from multiple sources and one target
(briefly, representative domain adaptation). In particular, we use the
integral probability metric to measure the difference between the
distributions of two domains, and we compare it with the H-divergence and
the discrepancy distance. We develop Hoeffding-type, Bennett-type, and
McDiarmid-type deviation inequalities for multiple domains, and then present
a symmetrization inequality for representative domain adaptation. Next, we
use the derived inequalities to obtain Hoeffding-type and Bennett-type
generalization bounds, both of which are based on the uniform entropy
number. Moreover, we present generalization bounds based on the Rademacher
complexity. Finally, we analyze the asymptotic convergence and the rate of
convergence of the learning process for representative domain adaptation.
We discuss the factors that affect the asymptotic behavior of the learning
process, and numerical experiments support our theoretical findings. We
also compare our results with existing results on domain adaptation and
with classical results under the same-distribution assumption.
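
For reference, the integral probability metric (IPM) at the core of the analysis has the following standard definition for a function class F and two distributions P and Q; this is the textbook form consistent with the abstract, and the paper's exact notation may differ:

\[
D_{\mathcal{F}}(P, Q) \;=\; \sup_{f \in \mathcal{F}} \Big| \mathbb{E}_{z \sim P}\, f(z) \;-\; \mathbb{E}_{z \sim Q}\, f(z) \Big|.
\]

The choice of F determines the metric: taking F to be the 1-Lipschitz functions recovers the Wasserstein-1 distance, while restricting F to functions bounded by 1 in sup norm yields (up to a constant factor) the total variation distance. This flexibility is what allows the IPM to be compared with the H-divergence and the discrepancy distance mentioned above.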
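The generalization bounds described in the abstract typically take the following schematic shape: a source-target divergence term plus a capacity term that shrinks with the sample size. This is a hedged sketch of the usual structure of such results, not the paper's exact statement; the symbols (target expectation E^{(T)}, pooled multi-source sample z_i^{(S)} of total size N, uniform entropy number N_1 at scale xi) are illustrative:

\[
\sup_{f \in \mathcal{F}} \left| \mathbb{E}^{(T)} f \;-\; \frac{1}{N} \sum_{i=1}^{N} f\big(z_i^{(S)}\big) \right|
\;\le\; D_{\mathcal{F}}(S, T) \;+\; O\!\left( \sqrt{ \frac{\ln \mathcal{N}_1(\mathcal{F}, \xi, N)}{N} } \right)
\quad \text{with high probability.}
\]

The divergence term D_F(S,T) does not vanish as N grows, which is why the asymptotic behavior of domain-adaptation learning differs from the classical same-distribution setting the paper compares against.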
@article{zhang2014generalization,
author = {Zhang, Chao and Zhang, Lei and Fan, Wei and Ye, Jieping},
keywords = {bounds generalization learning theory},
note = {arXiv:1401.0376. Comment: arXiv admin note: substantial text overlap with arXiv:1304.1574},
title = {Generalization Bounds for Representative Domain Adaptation},
url = {http://arxiv.org/abs/1401.0376},
year = 2014
}