What is an Embedding in Keras?

Question

The Keras documentation isn't clear about what this actually is. I understand we can use it to compress the input feature space into a smaller one. But how is this done from a neural design perspective? Is it an autoencoder, an RBM?

Answer

As far as I know, the Embedding layer is a simple matrix multiplication that transforms words into their corresponding word embeddings.

The weights of the Embedding layer have the shape (vocabulary_size, embedding_dimension). For each training sample, the input is a set of integers that represent certain words, each in the range of the vocabulary size. The Embedding layer transforms each integer i into the i-th row of the embedding weights matrix.
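As a quick illustration of that lookup, here is a minimal Keras sketch (the vocabulary size, embedding dimension, and word indices below are made-up values chosen only for the example):

    import numpy as np
    from tensorflow.keras.layers import Embedding

    # Made-up sizes, for illustration only.
    vocabulary_size = 10       # valid word indices are 0..9
    embedding_dimension = 4    # each word is mapped to a 4-dimensional vector

    layer = Embedding(input_dim=vocabulary_size, output_dim=embedding_dimension)

    # One sample of 3 word indices, shape (batch, nb_words) = (1, 3).
    words = np.array([[2, 5, 2]])
    vectors = layer(words)               # shape (1, 3, 4)

    # Each output vector is simply row i of the layer's weight matrix.
    weights = layer.get_weights()[0]     # shape (vocabulary_size, embedding_dimension)
    print(np.allclose(vectors[0, 0], weights[2]))  # True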

In order to do this quickly as a matrix multiplication, the input integers are not stored as a list of integers but as a one-hot matrix. Therefore the input shape is (nb_words, vocabulary_size), with one non-zero value per row. If you multiply this by the embedding weights, you get the output in the shape

(nb_words, vocab_size) x (vocab_size, embedding_dim) = (nb_words, embedding_dim)

So with a simple matrix multiplication you transform all the words in a sample into the corresponding word embeddings.
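A small NumPy sketch of that equivalence, again with made-up sizes and indices (W stands in for the layer's weight matrix):

    import numpy as np

    nb_words, vocab_size, embedding_dim = 3, 10, 4
    W = np.random.rand(vocab_size, embedding_dim)   # embedding weights
    words = np.array([2, 5, 2])                     # integer word indices

    # One-hot encode the indices: shape (nb_words, vocab_size), one non-zero per row.
    one_hot = np.eye(vocab_size)[words]

    # (nb_words, vocab_size) x (vocab_size, embedding_dim) = (nb_words, embedding_dim)
    by_matmul = one_hot @ W
    by_lookup = W[words]                            # the equivalent row lookup

    print(np.allclose(by_matmul, by_lookup))        # True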
