@misc{chang2019seven,
abstract = {We present seven myths commonly believed to be true in machine learning
research, circa Feb 2019. This is an archival copy of the blog post at
https://crazyoscarchang.github.io/2019/02/16/seven-myths-in-machine-learning-research/
Myth 1: TensorFlow is a Tensor manipulation library
Myth 2: Image datasets are representative of real images found in the wild
Myth 3: Machine Learning researchers do not use the test set for validation
Myth 4: Every datapoint is used in training a neural network
Myth 5: We need (batch) normalization to train very deep residual networks
Myth 6: Attention $>$ Convolution
Myth 7: Saliency maps are robust ways to interpret neural networks},
added-at = {2019-02-28T16:05:04.000+0100},
author = {Chang, Oscar and Lipson, Hod},
biburl = {https://www.bibsonomy.org/bibtex/2504c33646dd71adb0d431810ce6ad816/analyst},
description = {[1902.06789] Seven Myths in Machine Learning Research},
interhash = {be9feef66ce4799635dc6c2938fe3460},
intrahash = {504c33646dd71adb0d431810ce6ad816},
keywords = {2019 arxiv machine-learning paper research},
note = {cite arxiv:1902.06789},
timestamp = {2019-02-28T16:05:04.000+0100},
title = {Seven Myths in Machine Learning Research},
url = {http://arxiv.org/abs/1902.06789},
year = 2019
}