Abstract
Deep neural networks have outperformed humans in some tasks, such as image
recognition and image classification. However, as novel categories continually
emerge, extending the learning capability of such networks from limited samples
remains a challenge. Techniques such as meta-learning and few-shot learning
have shown promising results, as they can learn or generalize to a novel
category or task based on prior knowledge. In this paper, we study existing
few-shot meta-learning techniques in the computer vision domain, organized by
method and evaluation metrics. We provide a taxonomy of these techniques,
categorizing them as data-augmentation-, embedding-, optimization-, and
semantics-based learning for few-shot, one-shot, and zero-shot settings. We
then describe the seminal work in each category and discuss how each approach
addresses the problem of learning from few samples. Lastly, we compare these
techniques on the commonly used benchmark datasets, Omniglot and MiniImageNet,
and discuss future directions for improving their performance toward the
ultimate goal of outperforming humans.