This article mainly covers material used in NLP (Natural Language Processing). Deep Learning algorithms have already achieved astonishing results on images and audio, but nothing quite as exciting has appeared in NLP yet; even so, current Deep Learning research in NLP has already pulled back a corner of the veil over the mysteries of human language. The layers introduced here are used for mapping words to vectors and training those word vectors.
1. Embedding
keras.layers.embeddings.Embedding(input_dim, output_dim, init='uniform', input_length=None, weights=None, W_regularizer=None, W_constraint=None, mask_zero=False)
Turns positive integers (indices) into dense vectors of fixed size, e.g. [[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]].
**Input shape**: 2D tensor with shape (nb_samples, sequence_length).
**Output shape**: 3D tensor with shape (nb_samples, sequence_length, output_dim).
**Arguments**:
input_dim: int >= 0. Size of the vocabulary, i.e. 1 + maximum integer index occurring in the input data.
output_dim: int >= 0. Dimension of the dense embedding.
init: name of an initialization function for the weights, or alternatively a Theano function. You can use one of Keras's built-in initializations, or pass your own Theano function. This parameter is only required when no weights argument is given.
weights: list of numpy arrays to set as initial weights. The list should have 1 element, of shape (input_dim, output_dim).
W_regularizer: regularization for the weights; must be an instance of WeightRegularizer (e.g. an L1 or L2 regularizer; see Keras's built-in regularizers for details).
mask_zero: whether or not the input value 0 is a special "padding" value that should be masked out. This is useful for recurrent layers which may take variable-length input. If this is True, then all subsequent layers in the model need to support masking, or an exception will be raised.
input_length: length of input sequences, when it is constant. This argument is required if you are going to connect Flatten then Dense layers upstream (without it, the shape of the dense outputs cannot be computed).
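A minimal usage sketch, assuming the old Keras 0.x API documented above (the vocabulary size, dimensions, and random input below are made up for illustration):

```python
import numpy as np
from keras.models import Sequential
from keras.layers.embeddings import Embedding

model = Sequential()
# vocabulary of 1000 words, each index mapped to a 64-dim dense vector;
# every input sample is a sequence of 10 word indices
model.add(Embedding(input_dim=1000, output_dim=64, input_length=10))
model.compile(loss='mse', optimizer='sgd')  # compile just to build the predict function

x = np.random.randint(1000, size=(32, 10))  # 2D input: (nb_samples, sequence_length)
print(model.predict(x).shape)               # 3D output: (32, 10, 64)
```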
2. WordContextProduct
keras.layers.embeddings.WordContextProduct(input_dim, proj_dim=128, init='uniform', activation='sigmoid', weights=None)
This layer turns a pair of words (a pivot word + a context word, i.e. a word from the same context as the pivot, or a random, out-of-context word), identified by their indices in a vocabulary, into two dense representations (a word representation and a context representation).
Then it returns activation(dot(pivot_embedding, context_embedding)), which can be trained to encode the probability of finding the context word in the context of the pivot word (or reciprocally, depending on your training procedure).
For more details, see: Efficient Estimation of Word Representations in Vector Space.
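To make the returned quantity concrete, here is a tiny numpy sketch of that computation (the 4-dimensional embedding values are invented, not anything the layer would actually learn):

```python
import numpy as np

def sigmoid(x):
    return 1. / (1. + np.exp(-x))

# hypothetical learned embeddings for one (pivot, context) pair
pivot_embedding   = np.array([0.2, -0.1, 0.4,  0.05])
context_embedding = np.array([0.3,  0.0, 0.1, -0.2])

# the layer's output for this pair: activation(dot(pivot, context))
score = sigmoid(np.dot(pivot_embedding, context_embedding))
print(score)  # in (0, 1); training pushes it toward 1 for true contexts, 0 for negatives
```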
**Input shape**: 2D tensor with shape (nb_samples, 2).
**Output shape**: 2D tensor with shape (nb_samples, 1).
**Arguments**:
input_dim: int >= 0. Size of the vocabulary, i.e. 1 + maximum integer index occurring in the input data.
proj_dim: int >= 0. Dimension of the dense embedding used internally.
init: name of an initialization function for the weights, or alternatively a Theano function. You can use one of Keras's built-in initializations, or pass your own Theano function. This parameter is only required when no weights argument is given.
activation: name of an activation function, or alternatively a Theano function. You can use one of Keras's built-in activations, or pass your own Theano function. If nothing is specified, no activation is applied.
weights: list of numpy arrays to set as initial weights. The list should have 2 elements, both of shape (input_dim, proj_dim). The first element is the word embedding weights, the second one is the context embedding weights.
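Putting it together, a minimal training sketch, again assuming the old Keras 0.x API (the toy word-index sequence and the hyperparameters are invented for illustration; keras.preprocessing.sequence.skipgrams is Keras's helper for generating labeled (pivot, context) couples):

```python
import numpy as np
from keras.models import Sequential
from keras.layers.embeddings import WordContextProduct
from keras.preprocessing.sequence import skipgrams

vocab_size = 5000

model = Sequential()
model.add(WordContextProduct(vocab_size, proj_dim=128, init='uniform'))
model.compile(loss='mse', optimizer='rmsprop')

# toy sequence of word indices; skipgrams pairs each pivot with real context
# words (label 1) and randomly sampled out-of-context words (label 0)
sequence = [1, 2, 3, 4, 5, 6, 7, 8]
couples, labels = skipgrams(sequence, vocab_size, window_size=2, negative_samples=1.)

X = np.array(couples, dtype='int32')  # shape (nb_samples, 2)
y = np.array(labels, dtype='int32')   # one binary label per couple
loss = model.train_on_batch(X, y)

# after training, the word vectors are the first of the layer's two
# weight matrices (the second holds the context embeddings)
word_embeddings = model.layers[0].get_weights()[0]
```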