Digital Library

Title:      SPARSE NEURAL NETWORK LANGUAGE MODEL
Author(s):      Hidekazu Yanagimoto
ISBN:      978-989-8704-04-7
Editors:      Philip Powell, Miguel Baptista Nunes and Pedro Isaías
Year:      2014
Edition:      Single
Keywords:      Natural language processing, Neural network
Type:      Full Paper
First Page:      87
Last Page:      94
Language:      English
Paper Abstract:      Recently, neural network language models have been proposed that represent words as continuous feature vectors. Using these word vectors, grammatically or semantically related words can be extracted from a text corpus automatically. In deep learning, the sparse autoencoder has been proposed and achieves good performance by reducing the activation of the hidden layer in a three-layer neural network. Since pairs of words co-occur frequently, a language model can be constructed in a lower-dimensional space. Hence, by introducing sparseness into a neural network language model, I aim to build a language model that captures the structure of texts. The proposed method applies an L1-norm penalty to the outputs of the hidden layer of the neural network to achieve sparse activation. Evaluation experiments were carried out on a real stock market news corpus to confirm the performance of the neural network language model. From the results of the experiments, I found that the sparse neural network language model activates its hidden layer less frequently than the conventional neural network language model while keeping the same performance as the network without sparse activation.
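The abstract describes penalizing the L1-norm of the hidden-layer outputs of a neural network language model to encourage sparse activation. The following is a minimal sketch of that idea, not the paper's actual code: the PyTorch framework, the model name SparseNNLM, the layer sizes, and the penalty weight l1_weight are all assumptions made for illustration.

```python
# Sketch only: a small feed-forward neural network language model with an
# L1 sparsity penalty on the hidden-layer activations (hyperparameters and
# architecture details are assumptions, not taken from the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseNNLM(nn.Module):
    def __init__(self, vocab_size, embed_dim=50, context=3, hidden_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)       # continuous word feature vectors
        self.hidden = nn.Linear(context * embed_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)            # next-word scores

    def forward(self, context_ids):
        # context_ids: (batch, context) indices of the preceding words
        e = self.embed(context_ids).flatten(start_dim=1)
        h = torch.tanh(self.hidden(e))                           # hidden-layer activations
        return self.out(h), h

def loss_fn(logits, h, targets, l1_weight=1e-3):
    # Cross-entropy for next-word prediction plus an L1 term on the hidden
    # outputs, which pushes hidden units toward being rarely active.
    return F.cross_entropy(logits, targets) + l1_weight * h.abs().mean()

# Toy usage with random data (vocabulary and batch sizes are arbitrary).
model = SparseNNLM(vocab_size=1000)
ctx = torch.randint(0, 1000, (8, 3))
tgt = torch.randint(0, 1000, (8,))
logits, h = model(ctx)
print(loss_fn(logits, h, tgt))
```

Setting l1_weight to zero recovers a conventional neural network language model, which is the baseline the abstract compares against.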
   
