TY - JOUR
T1 - Automatic Design of Deep Networks with Neural Blocks
AU - Zhong, Guoqiang
AU - Jiao, Wencong
AU - Gao, Wei
AU - Huang, Kaizhu
N1 - Publisher Copyright:
© 2019, Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2020/1/1
Y1 - 2020/1/1
N2 - In recent years, deep neural networks (DNNs) have achieved great success in many areas, such as cognitive computation, pattern recognition, and computer vision. Although many hand-crafted deep networks have been proposed in the literature, designing a well-behaved neural network for a specific application still requires high-level expertise. Hence, the automatic architecture design of DNNs has become a challenging and important problem. In this paper, we propose a new reinforcement learning method, whose action policy is to select neural blocks and construct deep networks. We define the action search space with three types of neural blocks, i.e., the dense block, the residual block, and the inception-like block. Additionally, we have designed several variants of the residual and inception-like blocks. The optimal network is automatically learned by a Q-learning agent, which is iteratively trained to generate well-performing deep networks. To evaluate the proposed method, we have conducted experiments on three datasets, MNIST, SVHN, and CIFAR-10, for image classification applications. Compared with existing hand-crafted and auto-generated neural networks, our auto-designed neural network delivers promising results. Moreover, the proposed reinforcement learning algorithm for deep network design runs on only one GPU, demonstrating much higher efficiency than most previous deep network search approaches.
AB - In recent years, deep neural networks (DNNs) have achieved great success in many areas, such as cognitive computation, pattern recognition, and computer vision. Although many hand-crafted deep networks have been proposed in the literature, designing a well-behaved neural network for a specific application still requires high-level expertise. Hence, the automatic architecture design of DNNs has become a challenging and important problem. In this paper, we propose a new reinforcement learning method, whose action policy is to select neural blocks and construct deep networks. We define the action search space with three types of neural blocks, i.e., the dense block, the residual block, and the inception-like block. Additionally, we have designed several variants of the residual and inception-like blocks. The optimal network is automatically learned by a Q-learning agent, which is iteratively trained to generate well-performing deep networks. To evaluate the proposed method, we have conducted experiments on three datasets, MNIST, SVHN, and CIFAR-10, for image classification applications. Compared with existing hand-crafted and auto-generated neural networks, our auto-designed neural network delivers promising results. Moreover, the proposed reinforcement learning algorithm for deep network design runs on only one GPU, demonstrating much higher efficiency than most previous deep network search approaches.
KW - Automatic deep networks design
KW - Deep convolutional neural networks
KW - Image classification
KW - Neural blocks
KW - Reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85071477866&partnerID=8YFLogxK
U2 - 10.1007/s12559-019-09677-5
DO - 10.1007/s12559-019-09677-5
M3 - Article
AN - SCOPUS:85071477866
SN - 1866-9956
VL - 12
SP - 1
EP - 12
JO - Cognitive Computation
JF - Cognitive Computation
IS - 1
ER -