How do I use a very large (>2M) word embedding in TensorFlow?


Problem description

I am running a model with a very large word embedding (>2M words). When I use tf.nn.embedding_lookup, it expects the embedding matrix as a single, very large tensor, and when I run the model I get a GPU out-of-memory error. If I reduce the size of the embedding, everything works fine.

Is there a way to deal with a larger embedding?

Recommended answer

The recommended way is to use a partitioner to shard this large tensor across several parts:

import tensorflow as tf

# Shard the variable into 3 pieces along its 0th axis.
embedding = tf.get_variable("embedding", [1000000000, 20],
                            partitioner=tf.fixed_size_partitioner(3))
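
For context, tf.nn.embedding_lookup (the call the question refers to) works transparently with such a partitioned variable, gathering each row from whichever shard holds it. A minimal sketch, where ids is a hypothetical placeholder for a batch of word indices:

ids = tf.placeholder(tf.int64, shape=[None])     # hypothetical batch of word ids
lookup = tf.nn.embedding_lookup(embedding, ids)  # routes each id to its shard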

This will split the tensor into 3 shards along axis 0, but the rest of the program will see it as an ordinary tensor. The biggest benefit comes from using the partitioner together with parameter server replication, like this:

# Place each of the 3 shards on a different parameter server task.
with tf.device(tf.train.replica_device_setter(ps_tasks=3)):
  embedding = tf.get_variable("embedding", [1000000000, 20],
                              partitioner=tf.fixed_size_partitioner(3))

The key function here is tf.train.replica_device_setter. It allows you to run 3 separate processes, called parameter servers, that store all of the model's variables. The large embedding tensor will be split across these servers, one shard per parameter server.
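
For completeness, parameter server replication needs a cluster definition and one tf.train.Server per task. A minimal sketch, assuming three ps tasks and one worker; the host names are hypothetical placeholders:

cluster = tf.train.ClusterSpec({
    "ps": ["ps0:2222", "ps1:2222", "ps2:2222"],  # hypothetical hosts
    "worker": ["worker0:2222"],
})

# In each parameter server process (task_index 0, 1, or 2):
server = tf.train.Server(cluster, job_name="ps", task_index=0)
server.join()  # block forever; this process only serves variables

# In the worker process, build the graph under the device setter:
with tf.device(tf.train.replica_device_setter(cluster=cluster)):
  embedding = tf.get_variable("embedding", [1000000000, 20],
                              partitioner=tf.fixed_size_partitioner(3))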

