tensorflow shape of a tiled tensor


Problem Description

I have a variable a of dimension (1, 5) which I want to 'tile' as many times as the size of my mini-batch. For example, if the mini-batch size is 32 then I want to construct a tensor c of dimension (32, 5) where each row has values the same as the original (1, 5) variable a.
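(As an illustration of the intended semantics, NumPy's np.tile behaves the same way outside the graph; the batch size of 32 below is just the example value from the question.)

```python
import numpy as np

a = np.random.uniform(size=(1, 5))   # the (1, 5) variable
batch_size = 32                      # example value; in the graph this is only known at run time
c = np.tile(a, (batch_size, 1))      # repeat along axis 0, leave axis 1 unchanged

print(c.shape)  # (32, 5); every row equals the original (1, 5) row
```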

But I only know the mini-batch size at run time: it's the size of dimension 0 of a placeholder b: tf.shape(b)[0]

Here's my code to construct c:

import numpy as np
import tensorflow as tf

a = tf.Variable(np.random.uniform(size=(1, 5)))
b = tf.placeholder(shape=[None, 12], dtype=tf.float32)
batch_size = tf.shape(b)[0]  # known only at run time
c = tf.tile(a, tf.pack([batch_size, 1]))  # tf.pack was later renamed tf.stack

This runs fine. However, c.get_shape() returns (?, ?). I don't understand why this doesn't return (?, 5) instead.

This is causing an issue later in my code when I construct a matrix variable W whose number of columns is c.get_shape()[1], which I expect to return 5 rather than ?.

Any help would be appreciated. Thanks.

Recommended Answer

[This was fixed in a commit to TensorFlow on August 10, 2016.]

This is a known limitation of TensorFlow's shape inference: when the multiples argument to tf.tile() is a computed value (such as the result of tf.pack() here), and its value is not trivially computable at graph construction time (in this case, because it depends on a tf.placeholder(), which has no value until it is fed), the current shape inference will throw its hands up and declare that the shape is unknown (but with the same rank as the input, a).

The current workaround is to use Tensor.set_shape(), which allows you as the programmer to provide additional shape information when you know more than the shape inference does. For example, you could do:

import numpy as np
import tensorflow as tf

a = tf.Variable(np.random.uniform(size=(1, 5)))
b = tf.placeholder(shape=[None, 12], dtype=tf.float32)
batch_size = tf.shape(b)[0]
c = tf.tile(a, tf.pack([batch_size, 1]))
c.set_shape([None, a.get_shape()[1]])  # or `c.set_shape([None, 5])`

However, we recently added some features that make it possible to propagate partially computed values that may be used as shapes, and this can be adapted to aid the shape function for tf.tile(). I have created a GitHub issue to track this, and I have a fix being tested right now.
