Function gradients with gather operations in TensorFlow


Question


I am trying to write a complicated computational graph using tensorflow and compute symbolic gradients with respect to the function parameters. However, I am struggling with this when my function/graph involves gather operations on some of the parameters. The problem is that the gradient returned by Session.run is not just a tensor but an IndexedSlices object, and I don't know how to properly convert it to a tensor.


Here is a toy example that illustrates the issue:

import tensorflow as tf
import numpy as np
from tensorflow.python.ops import gradients_impl as GI

T_W = tf.placeholder(tf.float32, [2], 'W') # parameter vector
T_data = tf.placeholder(tf.float32, [10], 'data') # data vector
T_Di = tf.placeholder(tf.int32, [10], 'Di') # indices vector
T_pred = tf.gather(T_W,T_Di)
T_loss = tf.reduce_sum(tf.square(T_data-T_pred)) # loss function

T_grad = tf.gradients(T_loss,[T_W])
#T_grad=GI._IndexedSlicesToTensor(T_grad)                                       

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    feed_dict = {T_W: [1., 2.],
                 T_data: np.arange(10)**2,
                 T_Di: np.arange(10) % 2}
    dl, dgrad = sess.run(
        [T_loss, T_grad], feed_dict=feed_dict)
    grad = np.array(dgrad)
    print(grad)

which outputs:

[[array([   4.,    4.,   -4.,  -12.,  -28.,  -44.,  -68.,  -92., -124.,
   -156.], dtype=float32)
array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1], dtype=int32)
array([2], dtype=int32)]]


Here, instead of getting a gradient, which should be a vector of two elements, I get this IndexedSlices object.


I see that the internal module tensorflow.python.ops.gradients_impl has some kind of internal converter _IndexedSlicesToTensor, but I find it weird that there is no 'official' way to get the gradient as a tensor. In Theano, for example, there was no such issue.
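An IndexedSlices object is just a sparse encoding of the gradient: a values array, an indices array, and a dense_shape, which are exactly the three arrays in the output above. Densifying it amounts to scatter-adding the values into a zero tensor of the dense shape. A minimal NumPy sketch of that conversion, using the arrays printed above (the variable names here are mine, not TensorFlow's):

```python
import numpy as np

# The three components of the IndexedSlices gradient printed above
values = np.array([4., 4., -4., -12., -28., -44., -68., -92., -124., -156.],
                  dtype=np.float32)
indices = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1], dtype=np.int32)
dense_shape = (2,)

# Scatter-add each slice into a zero tensor of the dense shape.
# np.add.at accumulates contributions for repeated indices, which is
# how the gradients of a gather with duplicate indices are summed.
dense_grad = np.zeros(dense_shape, dtype=np.float32)
np.add.at(dense_grad, indices, values)
print(dense_grad)
```

This is the same summation that TensorFlow performs when it densifies an IndexedSlices gradient: every slice lands on its parameter row, and rows gathered more than once accumulate.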

Answer


The answer was very simple. I just needed to use the tf.convert_to_tensor() function:

T_grad = tf.gradients(T_loss,[T_W])
T_grad = tf.convert_to_tensor(T_grad[0])
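As a sanity check that the densified result really is the gradient of the loss, the same toy loss can be differentiated by hand in NumPy and compared against a central finite difference (a standalone sketch, independent of TensorFlow):

```python
import numpy as np

W = np.array([1., 2.])
data = np.arange(10.0)**2
Di = np.arange(10) % 2

def loss(w):
    # Same toy loss as the graph above: sum((data - w[Di])**2)
    return np.sum((data - w[Di])**2)

# Analytic gradient: per-element terms 2*(w[Di] - data_i),
# scatter-added back onto the two parameters -- exactly what
# densifying the IndexedSlices gradient produces
grad = np.zeros_like(W)
np.add.at(grad, Di, 2.0 * (W[Di] - data))

# Central finite-difference check
eps = 1e-5
num = np.array([(loss(W + eps * e) - loss(W - eps * e)) / (2 * eps)
                for e in np.eye(len(W))])
print(grad, num)
```

The two vectors agree to finite-difference precision, so converting the IndexedSlices result with tf.convert_to_tensor loses nothing: the dense tensor is the true gradient.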
