How to do slice assignment in Tensorflow
Question
I found that Tensorflow provides `scatter_update()` to assign values to the slice of a tensor in the 0 dimension. For example, if the tensor `T` is three-dimensional, I can assign value `v[1, :, :]` to `T[i, :, :]`.
```python
import numpy as np
import tensorflow as tf

a = tf.Variable(tf.zeros([10, 36, 36]))
value = np.ones([1, 36, 36])
d = tf.scatter_update(a, [0], value)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(a.eval())
    sess.run(d)
    print(a.eval())
```
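(Editor's note, not part of the original question: the 0-dimension update above has the same effect as a plain NumPy row assignment; this sketch only illustrates the semantics, it is not TensorFlow code.)

```python
import numpy as np

# NumPy equivalent of tf.scatter_update(a, [0], value):
# replace the slice a[0, :, :] with `value`
a = np.zeros([10, 36, 36])
value = np.ones([1, 36, 36])
a[[0]] = value
```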
But how to assign values `v[1,1,:]` to `T[i,j,:]`?
```python
import numpy as np
import tensorflow as tf

a = tf.Variable(tf.zeros([10, 36, 36]))
value1 = np.random.randn(1, 1, 36)
e = tf.scatter_update(a, [0], value1)  # Error

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(a.eval())
    sess.run(e)
    print(a.eval())
```
Is there any other function that TF provides, or a simple way to do this?
Answer
I believe what you need is the `assign_slice_update` discussed in ticket #206. It is not yet available, though.
Update: it has now been implemented. See jdehesa's answer: https://stackoverflow.com/a/43139565/6531137
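(Editor's note: the linked answer covers the actual modern API; as a semantics-only sketch, the intended effect of a full-index `[i, j]` update can be written in plain NumPy. This is an illustration, not the TensorFlow call itself.)

```python
import numpy as np

a = np.zeros([10, 36, 36])
v = np.ones(36)
i, j = 3, 5
# The effect a scatter_nd-style / sliced-assign update achieves:
a[i, j, :] = v
```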
Until `assign_slice_update` (or `scatter_nd()`) is available, you could build a block of the desired row containing the values you don't want to modify along with the desired values to update, like so:
```python
import tensorflow as tf

a = tf.Variable(tf.ones([10, 36, 36]))
i = 3
j = 5

# Gather values inside the a[i,...] block that are not on column j
idx_before = tf.concat(1, [tf.reshape(tf.tile(tf.Variable([i]), [j]), [-1, 1]),
                           tf.reshape(tf.range(j), [-1, 1])])
values_before = tf.gather_nd(a, idx_before)
idx_after = tf.concat(1, [tf.reshape(tf.tile(tf.Variable([i]), [36 - j - 1]), [-1, 1]),
                          tf.reshape(tf.range(j + 1, 36), [-1, 1])])
values_after = tf.gather_nd(a, idx_after)

# Build a subset of tensor `a` with the values that should not be touched
# and the values to update
block = tf.concat(0, [values_before, 5 * tf.ones([1, 36]), values_after])
d = tf.scatter_update(a, i, block)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    sess.run(d)
    print(a.eval()[3, 4:7, :])  # Print a subset of the tensor to verify
```
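(Editor's note: to check what the gather/concat/scatter construction computes, here is the same `a[i,j,:] = 5` effect mirrored in NumPy; a verification sketch, not part of the original answer.)

```python
import numpy as np

a = np.ones([10, 36, 36])
i, j = 3, 5
# Mirror the TF code: rows of a[i] before column j, the new values, rows after
values_before = a[i, :j, :]      # like tf.gather_nd(a, idx_before)
values_after = a[i, j + 1:, :]   # like tf.gather_nd(a, idx_after)
block = np.concatenate([values_before, 5 * np.ones([1, 36]), values_after], axis=0)
a[i] = block                     # like tf.scatter_update(a, i, block)
```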
The example generates a tensor of ones and performs `a[i,j,:] = 5`. Most of the complexity lies in getting the values that we don't want to modify, `a[i,~j,:]` (otherwise `scatter_update()` would replace those values).
If you want to perform `T[i,k,:] = a[1,1,:]` as you asked, you need to replace `5*tf.ones([1, 36])` in the previous example with `tf.gather_nd(a, [[1, 1]])`.
Another approach would be to create a mask to `tf.select()` the desired elements from it and assign the result back to the variable, as such:
```python
import tensorflow as tf

a = tf.Variable(tf.zeros([10, 36, 36]))
i = tf.Variable([3])
j = tf.Variable([5])

# Build a mask using indices to perform [i,j,:]
atleast_2d = lambda x: tf.reshape(x, [-1, 1])
indices = tf.concat(1, [atleast_2d(tf.tile(i, [36])),
                        atleast_2d(tf.tile(j, [36])),
                        atleast_2d(tf.range(36))])
mask = tf.cast(tf.sparse_to_dense(indices, [10, 36, 36], 1), tf.bool)

to_update = 5 * tf.ones_like(a)
out = a.assign(tf.select(mask, to_update, a))

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    sess.run(out)
    print(a.eval()[2:5, 5, :])
```
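(Editor's note: the mask-and-select idea can likewise be sketched with NumPy's `np.where`, which plays the role of `tf.select` here; illustration only.)

```python
import numpy as np

a = np.zeros([10, 36, 36])
i, j = 3, 5
mask = np.zeros(a.shape, dtype=bool)
mask[i, j, :] = True              # select exactly the a[i, j, :] elements
to_update = 5 * np.ones_like(a)
a = np.where(mask, to_update, a)  # like tf.select(mask, to_update, a)
```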
It is potentially less efficient in terms of memory, since it requires twice the memory to handle the `a`-like `to_update` variable, but you could easily modify this last example to get a gradient-preserving operation from the `tf.select(...)` node. You might also be interested in looking at this other StackOverflow question: Conditional assignment of tensor values in TensorFlow.
Those inelegant contortions should be replaced with a call to the proper TensorFlow function as it becomes available.