Abstract
Despite the success of recent Neural Architecture Search (NAS) methods, which have been shown to output networks that largely outperform human-designed ones on various tasks, conventional NAS methods mostly search for a network architecture optimized for a single task (dataset), and the result does not generalize well across multiple tasks (datasets).
Moreover, since such task-specific methods search for a neural architecture
from scratch for every given task, they incur a large computational cost, which
is problematic when time and monetary budgets are limited. In this paper, we
propose an efficient NAS framework that is trained once on a database
consisting of datasets and pretrained networks and can rapidly search for a
neural architecture for a novel dataset. The proposed MetaD2A (Meta Dataset-to-Architecture) model stochastically generates graphs (architectures) from a given set (dataset) via a cross-modal latent space learned with amortized meta-learning.
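To make the set-to-graph generation concrete, the following is a minimal PyTorch-style sketch of the idea: a permutation-invariant set encoder amortizes inference of a latent code from dataset samples, and a decoder maps that code to per-edge operation logits for a NAS-Bench-201-style cell. All names and dimensions here (SetEncoder, GraphDecoder, in_dim=512, z_dim=64) are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    # Encodes a set of dataset samples into an amortized posterior q(z | D).
    def __init__(self, in_dim=512, hid=256, z_dim=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.mu = nn.Linear(hid, z_dim)
        self.logvar = nn.Linear(hid, z_dim)

    def forward(self, x):            # x: (num_samples, in_dim) image features
        h = self.phi(x).mean(dim=0)  # permutation-invariant pooling over the set
        return self.mu(h), self.logvar(h)

class GraphDecoder(nn.Module):
    # Decodes a latent code into operation logits for each edge of a cell
    # (a NAS-Bench-201 cell has 6 edges, each choosing one of 5 operations).
    def __init__(self, z_dim=64, num_edges=6, num_ops=5):
        super().__init__()
        self.out = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, num_edges * num_ops))
        self.num_edges, self.num_ops = num_edges, num_ops

    def forward(self, z):
        return self.out(z).view(self.num_edges, self.num_ops)

def sample_architecture(encoder, decoder, dataset_feats):
    # Stochastic generation: sample z from q(z | D), then decode it to a graph.
    mu, logvar = encoder(dataset_feats)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
    op_logits = decoder(z)
    return torch.distributions.Categorical(logits=op_logits).sample()  # one op id per edge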
We also propose a meta-performance predictor that estimates and selects the best architecture for a target dataset without directly training candidate architectures on it.
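A hypothetical sketch of how such a predictor could be used for selection: it scores each candidate architecture embedding jointly with the dataset embedding, and the highest-scoring candidate is returned, with no training on the target data. MetaPredictor, select_best, and arch_dim=30 (e.g., a flattened 6x5 one-hot cell encoding) are assumed names and sizes for illustration only.

import torch
import torch.nn as nn

class MetaPredictor(nn.Module):
    # Scores a (dataset embedding, architecture embedding) pair with a
    # predicted-performance value; higher is better.
    def __init__(self, z_dim=64, arch_dim=30, hid=256):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(z_dim + arch_dim, hid),
                                   nn.ReLU(), nn.Linear(hid, 1))

    def forward(self, dataset_emb, arch_emb):
        return self.score(torch.cat([dataset_emb, arch_emb], dim=-1)).squeeze(-1)

def select_best(predictor, dataset_emb, candidate_archs):
    # Rank generated candidates and pick the top one, so no candidate has to
    # be trained on the target dataset before selection.
    scores = torch.stack([predictor(dataset_emb, a) for a in candidate_archs])
    return int(scores.argmax())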
The experimental results demonstrate that our model, meta-learned on subsets of ImageNet-1K and on architectures from the NAS-Bench-201 search space, successfully generalizes to multiple unseen datasets, including CIFAR-10 and CIFAR-100, with an average search time of 33 GPU seconds. Even in the MobileNetV3 search space, MetaD2A is 5.5K times faster than NSGANetV2, a transferable NAS method, with comparable performance. We believe that MetaD2A opens a new research direction for rapid NAS, as well as a way to utilize the knowledge from rich databases of datasets and architectures accumulated over the years. Code is available at
https://github.com/HayeonLee/MetaD2A.
Description
Rapid Neural Architecture Search by Learning to Generate Graphs from Datasets