How does tf.gradients manage complex functions?


Problem description

I am working with complex-valued neural networks.

For complex-valued neural networks, Wirtinger calculus is normally used. The definition of the derivative is then the pair of Wirtinger derivatives (taking into account that, by Liouville's theorem, the functions involved are non-holomorphic):

$$\frac{\partial f}{\partial z} = \frac{1}{2}\left(\frac{\partial f}{\partial x} - i\frac{\partial f}{\partial y}\right), \qquad \frac{\partial f}{\partial \bar{z}} = \frac{1}{2}\left(\frac{\partial f}{\partial x} + i\frac{\partial f}{\partial y}\right)$$
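The Wirtinger derivatives (df/dz = ½(df/dx − i df/dy) and df/dz* = ½(df/dx + i df/dy)) are easy to check numerically with plain Python complex numbers; `wirtinger_derivatives` below is just an illustrative finite-difference helper, not part of any library:

```python
# Numerical check of the Wirtinger derivatives, using central finite
# differences along the real and imaginary axes:
#   df/dz  = 1/2 (df/dx - i df/dy)
#   df/dz* = 1/2 (df/dx + i df/dy)

def wirtinger_derivatives(f, z, h=1e-6):
    """Approximate (df/dz, df/dz*) of f at the point z."""
    dfdx = (f(z + h) - f(z - h)) / (2 * h)            # partial along the real axis
    dfdy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)  # partial along the imaginary axis
    return 0.5 * (dfdx - 1j * dfdy), 0.5 * (dfdx + 1j * dfdy)

# f(z) = |z|^2 = z * conj(z): real-valued and non-holomorphic.
# Analytically, df/dz = conj(z) and df/dz* = z.
f = lambda z: z * z.conjugate()
z0 = 1.0 + 2.0j
dfdz, dfdzc = wirtinger_derivatives(f, z0)
print(dfdz)   # ≈ conj(z0) = 1 - 2j
print(dfdzc)  # ≈ z0       = 1 + 2j
```

Note that for a holomorphic function the second derivative, df/dz*, would vanish; for the real-valued |z|² it does not, which is exactly why Wirtinger calculus is needed.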

If you take Akira Hirose's book "Complex-Valued Neural Networks: Advances and Applications", Chapter 4, equation 4.9 gives this definition.

There, the partial derivatives are of course also calculated using Wirtinger calculus.

Is this the case for TensorFlow, or is it defined in some other way? I cannot find any good reference on the topic.

Answer

OK, so I discussed this in an existing thread on github/tensorflow, and @charmasaur found the answer: the equation used by TensorFlow for the gradient is

$$\overline{\frac{\partial f}{\partial z} + \frac{\partial \bar{f}}{\partial z}}$$
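This formula can be sketched numerically in plain Python, without TensorFlow; `wirtinger_dz` and `tf_style_grad` below are illustrative helpers (not TensorFlow API) that simply evaluate the quoted expression with finite differences. For a holomorphic function such as z² the second term vanishes, and the formula reduces to the conjugate of the usual analytic derivative:

```python
# Plain-Python sketch of the gradient formula quoted above,
#   grad f = conj( df/dz + d(conj f)/dz ),
# evaluated with central finite differences. `wirtinger_dz` and
# `tf_style_grad` are illustrative helpers, not TensorFlow API.

def wirtinger_dz(f, z, h=1e-6):
    """df/dz = 1/2 (df/dx - i df/dy), approximated numerically."""
    dfdx = (f(z + h) - f(z - h)) / (2 * h)
    dfdy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)
    return 0.5 * (dfdx - 1j * dfdy)

def tf_style_grad(f, z):
    fbar = lambda w: f(w).conjugate()  # conj(f), as in the second term
    return (wirtinger_dz(f, z) + wirtinger_dz(fbar, z)).conjugate()

z0 = 1.0 + 2.0j
g_holo = tf_style_grad(lambda z: z * z, z0)        # holomorphic z^2: conj(2 z0) = 2 - 4j
g_real = tf_style_grad(lambda z: abs(z) ** 2, z0)  # real-valued |z|^2: 2 z0 = 2 + 4j
```

The holomorphic case matching the conjugate of the analytic derivative is consistent with the convention discussed in that thread.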

When the partial derivatives with respect to z and z* are taken in the sense defined above, this is exactly Wirtinger calculus.

For a real-valued scalar function of one or several complex variables (so that f̄ = f), this definition becomes:

$$\overline{2\frac{\partial f}{\partial z}} = 2\frac{\partial f}{\partial \bar{z}}$$

Which is indeed the definition used in complex-valued neural network (CVNN) applications, where the function in question is the loss/error function, which is indeed real-valued.
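As a sketch of why this convention is convenient for CVNNs: stepping a parameter against 2 ∂L/∂z̄ decreases a real-valued loss, exactly as ordinary gradient descent does in the real case. A minimal plain-Python example (the target c, learning rate, and iteration count are arbitrary illustrative values):

```python
# Gradient descent on the real-valued loss L(z) = |z - c|^2, using the
# CVNN convention grad L = 2 dL/dz* = 2 (z - c). The target c, the
# learning rate, and the step count are arbitrary example values.

c = 0.5 - 1.5j   # hypothetical target
z = 0.0 + 0.0j   # initial parameter
lr = 0.1

for _ in range(100):
    grad = 2 * (z - c)   # 2 dL/dz*, the steepest-ascent direction of L
    z = z - lr * grad    # step against the gradient

print(abs(z - c))  # ≈ 0: z has converged to c
```

Each step contracts the error by a factor (1 − 2·lr), so z converges geometrically to c.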

