TY - JOUR
T1 - Novel deep neural network based pattern field classification architectures
AU - Huang, Kaizhu
AU - Zhang, Shufei
AU - Zhang, Rui
AU - Hussain, Amir
N1 - Funding Information:
The paper was partly supported by the National Science Foundation of China (NSFC 61876155), Natural Science Foundation of Jiangsu Province (BK20181189), Key Program Special Fund in XJTLU under no. KSF-A-01, KSF-T-06, KSF-E-26, KSF-P-02 and KSF-A-10, and the UK Engineering and Physical Sciences Research Council (EPSRC) Grant Ref: EP/I009310/1 (AV-COGHEAR).
Publisher Copyright:
© 2020
PY - 2020/7
Y1 - 2020/7
N2 - Field classification is a new extension of traditional classification frameworks that attempts to utilize consistent information from a group of samples (termed fields). By forgoing the independent identically distributed (i.i.d.) assumption, field classification can achieve remarkably improved accuracy compared to traditional classification methods. Most studies of field classification have been conducted on traditional machine learning methods. In this paper, we propose integration with a Bayesian framework, for the first time, in order to extend field classification to deep learning and propose two novel deep neural network architectures: the Field Deep Perceptron (FDP) and the Field Deep Convolutional Neural Network (FDCNN). Specifically, we exploit a deep perceptron structure, typically a 6-layer structure, where the first 3 layers remove (learn) a ‘style’ from a group of samples to map them into a more discriminative space and the last 3 layers are trained to perform classification. For the FDCNN, we modify the AlexNet framework by adding style transformation layers within the hidden layers. We derive a novel learning scheme from a Bayesian framework and design a novel and efficient learning algorithm with guaranteed convergence for training the deep networks. The whole framework is interpreted with visualization features showing that the field deep neural network can better learn the style of a group of samples. Our developed models are also able to achieve transfer learning and learn transformations for newly introduced fields. We conduct extensive comparative experiments on benchmark data (including face, speech, and handwriting data) to validate our learning approach. Experimental results demonstrate that our proposed deep frameworks achieve significant improvements over other state-of-the-art algorithms, attaining new benchmark performance.
AB - Field classification is a new extension of traditional classification frameworks that attempts to utilize consistent information from a group of samples (termed fields). By forgoing the independent identically distributed (i.i.d.) assumption, field classification can achieve remarkably improved accuracy compared to traditional classification methods. Most studies of field classification have been conducted on traditional machine learning methods. In this paper, we propose integration with a Bayesian framework, for the first time, in order to extend field classification to deep learning and propose two novel deep neural network architectures: the Field Deep Perceptron (FDP) and the Field Deep Convolutional Neural Network (FDCNN). Specifically, we exploit a deep perceptron structure, typically a 6-layer structure, where the first 3 layers remove (learn) a ‘style’ from a group of samples to map them into a more discriminative space and the last 3 layers are trained to perform classification. For the FDCNN, we modify the AlexNet framework by adding style transformation layers within the hidden layers. We derive a novel learning scheme from a Bayesian framework and design a novel and efficient learning algorithm with guaranteed convergence for training the deep networks. The whole framework is interpreted with visualization features showing that the field deep neural network can better learn the style of a group of samples. Our developed models are also able to achieve transfer learning and learn transformations for newly introduced fields. We conduct extensive comparative experiments on benchmark data (including face, speech, and handwriting data) to validate our learning approach. Experimental results demonstrate that our proposed deep frameworks achieve significant improvements over other state-of-the-art algorithms, attaining new benchmark performance.
KW - Deep learning
KW - Field classification
KW - Neural network
UR - http://www.scopus.com/inward/record.url?scp=85083741129&partnerID=8YFLogxK
U2 - 10.1016/j.neunet.2020.03.011
DO - 10.1016/j.neunet.2020.03.011
M3 - Article
C2 - 32344155
AN - SCOPUS:85083741129
SN - 0893-6080
VL - 127
SP - 82
EP - 95
JO - Neural Networks
JF - Neural Networks
ER -