Author of the publication

A surprisingly simple technique to control the pretraining bias for better transfer: Expand or Narrow your representation.

, , , , and . CoRR, (2023)


Other publications of authors with the same name

Parametric Adversarial Divergences are Good Task Losses for Generative Modeling., , , , , and . ICLR (Workshop), OpenReview.net, (2018)

Understanding Dimensional Collapse in Contrastive Self-supervised Learning., , , and . CoRR, (2021)

A high-order feature synthesis and selection algorithm applied to insurance risk modelling., , , , and . Int. J. Bus. Intell. Data Min., 6 (3): 237-258 (2011)

Quickly Generating Representative Samples from an RBM-Derived Process., , and . Neural Comput., 23 (8): 2058-2073 (2011)

Online Adversarial Attacks., , , , , , and . CoRR, (2021)

Adding noise to the input of a model trained with a regularized objective., , , and . CoRR, (2011)

Clustering is Efficient for Approximate Maximum Inner Product Search., and . CoRR, (2015)

The Difficulty of Training Deep Architectures and the Effect of Unsupervised Pre-Training., , , , and . AISTATS, volume 5 of JMLR Proceedings, pages 153-160. JMLR.org, (2009)

Deep Learning using Robust Interdependent Codes., , and . AISTATS, volume 5 of JMLR Proceedings, pages 312-319. JMLR.org, (2009)

Why Does Unsupervised Pre-training Help Deep Learning?, , , and . AISTATS, volume 9 of JMLR Proceedings, pages 201-208. JMLR.org, (2010)