Tensorflow gradients: without automatic implicit sum


Problem description

In TensorFlow, suppose one has two tensors x and y and wants the gradients of y with respect to x using tf.gradients(y,x). What one actually gets is:

gradient[n,m] = sum_{i,j} dy[i,j] / dx[n,m]

There is a sum over the indices of y. Is there a way to avoid this implicit sum and get the whole gradient tensor gradient[i,j,n,m]?
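A minimal sketch of this summing behaviour (a hypothetical example using the old graph-mode API of the same TF era as the answer below; the choice y = 3*x is purely illustrative):

import tensorflow as tf

# y depends elementwise on x, so the full Jacobian dy[i,j]/dx[n,m] is rank 4,
# but tf.gradients collapses the y indices by summing over them.
x = tf.placeholder(tf.float32, shape=(2, 2))
y = 3.0 * x                     # dy[i,j]/dx[n,m] = 3 if (i,j) == (n,m), else 0
g = tf.gradients(y, x)[0]       # shape (2, 2): g[n,m] = sum_{i,j} dy[i,j]/dx[n,m]

with tf.Session() as sess:
    print(sess.run(g, feed_dict={x: [[1.0, 2.0], [3.0, 4.0]]}))
    # [[3. 3.]
    #  [3. 3.]]   -- only the summed rank-2 gradient, not the rank-4 Jacobian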

Recommended answer

Here is my workaround, simply taking the derivative of each component (as also mentioned by @Yaroslav) and then packing them all together again, for the case of rank-2 tensors (matrices):

import tensorflow as tf

def twodtensor2list(tensor, m, n):
    # Slice an m-by-n tensor into a flat list of m*n scalar (1x1) tensors,
    # in row-major order.
    s = [[tf.slice(tensor, [j, i], [1, 1]) for i in range(n)] for j in range(m)]
    fs = []
    for l in s:
        fs.extend(l)
    return fs

def grads_all_comp(y, shapey, x, shapex):
    # Differentiate each component of y with respect to x separately, then
    # stack and reshape the results into the full rank-4 tensor
    # gradst[i, j, n, m] = d y[i, j] / d x[n, m].
    yl = twodtensor2list(y, shapey[0], shapey[1])
    grads = [tf.gradients(yle, x)[0] for yle in yl]
    gradsp = tf.pack(grads)  # tf.pack was renamed tf.stack in later TF releases
    gradst = tf.reshape(gradsp, shape=(shapey[0], shapey[1], shapex[0], shapex[1]))
    return gradst

Now grads_all_comp(y, shapey, x, shapex) will output the rank-4 tensor in the desired format. It is a very inefficient approach because everything needs to be sliced up and repacked together, so if someone finds a better way I would be very interested to see it.
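For illustration, a hypothetical usage sketch of grads_all_comp (the shapes and the y = x·w example are assumptions, not part of the original answer):

# Reuses twodtensor2list and grads_all_comp defined above.
x = tf.placeholder(tf.float32, shape=(2, 3))
w = tf.constant([[1.0, 0.0],
                 [0.0, 2.0],
                 [3.0, 1.0]])          # fixed (3, 2) weight matrix
y = tf.matmul(x, w)                    # y has shape (2, 2)

full_grad = grads_all_comp(y, [2, 2], x, [2, 3])   # rank 4, shape (2, 2, 2, 3)

with tf.Session() as sess:
    jac = sess.run(full_grad,
                   feed_dict={x: [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]})
    print(jac.shape)                   # (2, 2, 2, 3)
    print(jac[0, 1])                   # d y[0,1] / d x[n,m] for all n, m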
