How to get Tensorflow seq2seq embedding output


Question

I am attempting to train a sequence to sequence model using tensorflow and have been looking at their example code.

I want to be able to access the vector embeddings created by the encoder as they seem to have some interesting properties.

However, it really isn't clear to me how this can be done.

In the vector representations of words example, they talk a lot about what these embeddings can be used for, but don't appear to provide a simple way of accessing them, unless I am mistaken.

Any help figuring out how to access them would be greatly appreciated.

Answer

As with all Tensorflow operations, most variables are created dynamically. There are different ways to access these variables (and their values). Here, the variable you are interested in belongs to the set of trainable variables, so we can use the tf.trainable_variables() function:

import tensorflow as tf

for var in tf.trainable_variables():
    print(var.name)

which, for a GRU seq2seq model, gives the following list:

embedding_rnn_seq2seq/RNN/EmbeddingWrapper/embedding:0
embedding_rnn_seq2seq/RNN/GRUCell/Gates/Linear/Matrix:0
embedding_rnn_seq2seq/RNN/GRUCell/Gates/Linear/Bias:0
embedding_rnn_seq2seq/RNN/GRUCell/Candidate/Linear/Matrix:0
embedding_rnn_seq2seq/RNN/GRUCell/Candidate/Linear/Bias:0
embedding_rnn_seq2seq/embedding_rnn_decoder/embedding:0
embedding_rnn_seq2seq/embedding_rnn_decoder/rnn_decoder/GRUCell/Gates/Linear/Matrix:0
embedding_rnn_seq2seq/embedding_rnn_decoder/rnn_decoder/GRUCell/Gates/Linear/Bias:0
embedding_rnn_seq2seq/embedding_rnn_decoder/rnn_decoder/GRUCell/Candidate/Linear/Matrix:0
embedding_rnn_seq2seq/embedding_rnn_decoder/rnn_decoder/GRUCell/Candidate/Linear/Bias:0
embedding_rnn_seq2seq/embedding_rnn_decoder/rnn_decoder/OutputProjectionWrapper/Linear/Matrix:0
embedding_rnn_seq2seq/embedding_rnn_decoder/rnn_decoder/OutputProjectionWrapper/Linear/Bias:0

This tells us that the embedding is called embedding_rnn_seq2seq/RNN/EmbeddingWrapper/embedding:0, a name we can then use to retrieve a reference to that variable in the iterator above:

embedding_op = None
for var in tf.trainable_variables():
    if var.name == 'embedding_rnn_seq2seq/RNN/EmbeddingWrapper/embedding:0':
        embedding_op = var
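If you need several of the listed variables at once (for example both the encoder and decoder embeddings), the loop above generalizes to a name-to-variable index. A minimal sketch of that lookup pattern, using stand-in objects in place of real tf.Variable instances so it runs without a graph:

```python
from collections import namedtuple

# Stand-in for tf.Variable, only to illustrate the lookup pattern;
# in the real code you would iterate tf.trainable_variables() instead.
Var = namedtuple('Var', ['name'])

def vars_by_name(variables):
    """Index variables by their full graph name for direct lookup."""
    return {v.name: v for v in variables}

variables = [
    Var('embedding_rnn_seq2seq/RNN/EmbeddingWrapper/embedding:0'),
    Var('embedding_rnn_seq2seq/embedding_rnn_decoder/embedding:0'),
]
index = vars_by_name(variables)
embedding_op = index['embedding_rnn_seq2seq/RNN/EmbeddingWrapper/embedding:0']
```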

We can then pass it to our session run alongside the other ops:

_, loss_t, summary, embedding = sess.run([train_op, loss, summary_op, embedding_op], feed_dict)

and we get back the (batch of) embeddings.
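Once fetched, the embedding is an ordinary NumPy array of shape [vocab_size, embedding_dim], so the "interesting properties" from the question can be probed directly, e.g. finding the nearest tokens by cosine similarity. A sketch using a tiny toy matrix in place of the `embedding` array returned by sess.run:

```python
import numpy as np

# Toy stand-in for the fetched embedding matrix of shape
# [vocab_size, embedding_dim]; in practice, use the `embedding`
# array returned by sess.run above.
embedding = np.array([
    [1.0, 0.0],
    [0.9, 0.1],
    [0.0, 1.0],
])

def nearest(embedding, token_id, k=1):
    """Return the ids of the k nearest tokens by cosine similarity."""
    normed = embedding / np.linalg.norm(embedding, axis=1, keepdims=True)
    sims = normed @ normed[token_id]
    sims[token_id] = -np.inf  # exclude the query token itself
    return np.argsort(sims)[::-1][:k]

print(nearest(embedding, 0))  # → [1]
```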
