Phonetic Temporal Neural Model for Language Identification

Zhiyuan Tang, Dong Wang*, Yixiang Chen, Lantian Li, Andrew Abel

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

53 Citations (Scopus)


Deep neural models, particularly the long short-term memory recurrent neural network (LSTM-RNN) model, have shown great potential for language identification (LID). However, the use of phonetic information has been largely overlooked by most existing neural LID methods, although this information has been used very successfully in conventional phonetic LID systems. We present a phonetic temporal neural model for LID, which is an LSTM-RNN LID system that accepts phonetic features produced by a phone-discriminative DNN as the input, rather than raw acoustic features. This new model is similar to traditional phonetic LID methods, but the phonetic knowledge here is much richer: it is at the frame level and involves compacted information of all phones. Our experiments conducted on the Babel database and the AP16-OLR database demonstrate that the temporal phonetic neural approach is very effective, and significantly outperforms existing acoustic neural models. It also outperforms the conventional i-vector approach on short utterances and in noisy conditions.
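The pipeline the abstract describes can be summarized in a minimal numpy sketch: acoustic frames pass through a phone-discriminative front-end that produces frame-level phonetic features, and an LSTM consumes that phonetic sequence and emits a language posterior. All dimensions, weights, and the single-layer stand-in for the DNN are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not the paper's configuration)
n_frames, acoustic_dim = 50, 40      # e.g. 40-dim filterbank features per frame
phonetic_dim = 48                    # compact frame-level phonetic representation
hidden_dim, n_langs = 64, 10

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 1) Phone-discriminative front-end (a single affine layer standing in for a
#    deep net): maps each acoustic frame to phone posteriors, i.e. frame-level
#    phonetic features that replace the raw acoustic input.
W_dnn = rng.standard_normal((acoustic_dim, phonetic_dim)) * 0.1
acoustic = rng.standard_normal((n_frames, acoustic_dim))
phonetic = softmax(acoustic @ W_dnn)

# 2) LSTM over the phonetic feature sequence (one cell, minimal forward pass).
Wx = rng.standard_normal((phonetic_dim, 4 * hidden_dim)) * 0.1
Wh = rng.standard_normal((hidden_dim, 4 * hidden_dim)) * 0.1
b = np.zeros(4 * hidden_dim)

h = np.zeros(hidden_dim)
c = np.zeros(hidden_dim)
for x_t in phonetic:
    gates = x_t @ Wx + h @ Wh + b
    i, f, g, o = np.split(gates, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c + i * np.tanh(g)       # cell state update
    h = o * np.tanh(c)               # hidden state

# 3) Language posterior from the final LSTM state.
W_out = rng.standard_normal((hidden_dim, n_langs)) * 0.1
lang_post = softmax(h @ W_out)
print(lang_post.shape)               # one posterior per candidate language
```

In the actual system the front-end would be a trained multi-layer phone classifier and the recognizer a trained multi-layer LSTM; the sketch only shows how the phonetic features sit between the two stages.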

Original language: English
Pages (from-to): 134-144
Number of pages: 11
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Issue number: 1
Publication status: Published - Jan 2018


  • Language identification
  • Multi-task learning
  • Deep neural networks


