TY - GEN
T1 - Learning from few samples with memory network
AU - Zhang, Shufei
AU - Huang, Kaizhu
N1 - Publisher Copyright:
© Springer International Publishing AG 2016.
PY - 2016
Y1 - 2016
N2 - Neural Networks (NNs) have achieved great success in pattern recognition and machine learning. However, this success usually relies on a sufficiently large number of samples; when fed with limited data, an NN’s performance may degrade significantly. In this paper, we introduce a novel neural network called the Memory Network, which can learn better from limited data. Taking advantage of the memory of previous samples, the new model achieves a remarkable performance improvement on limited data. We demonstrate the memory network with the Multi-Layer Perceptron (MLP); however, it remains straightforward to extend our idea to other neural networks, e.g., Convolutional Neural Networks (CNNs). We detail the network structure, present the training algorithm, and conduct a series of experiments to validate the proposed framework. Experimental results show that our model outperforms the traditional MLP and other competitive algorithms on two real data sets.
AB - Neural Networks (NNs) have achieved great success in pattern recognition and machine learning. However, this success usually relies on a sufficiently large number of samples; when fed with limited data, an NN’s performance may degrade significantly. In this paper, we introduce a novel neural network called the Memory Network, which can learn better from limited data. Taking advantage of the memory of previous samples, the new model achieves a remarkable performance improvement on limited data. We demonstrate the memory network with the Multi-Layer Perceptron (MLP); however, it remains straightforward to extend our idea to other neural networks, e.g., Convolutional Neural Networks (CNNs). We detail the network structure, present the training algorithm, and conduct a series of experiments to validate the proposed framework. Experimental results show that our model outperforms the traditional MLP and other competitive algorithms on two real data sets.
KW - Memory
KW - Multi-layer perceptron
UR - http://www.scopus.com/inward/record.url?scp=84992671939&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-46687-3_67
DO - 10.1007/978-3-319-46687-3_67
M3 - Conference Proceeding
AN - SCOPUS:84992671939
SN - 9783319466866
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 606
EP - 614
BT - Neural Information Processing - 23rd International Conference, ICONIP 2016, Proceedings
A2 - Doya, Kenji
A2 - Ikeda, Kazushi
A2 - Lee, Minho
A2 - Hirose, Akira
A2 - Ozawa, Seiichi
A2 - Liu, Derong
PB - Springer Verlag
T2 - 23rd International Conference on Neural Information Processing, ICONIP 2016
Y2 - 16 October 2016 through 21 October 2016
ER -