Gradients of Logical Operators in Tensorflow

Problem description

I'm trying to create a very simple binary classifier in Tensorflow on generated data.

I'm generating random data from two separate normal distributions. Then I classify each resulting data point into one of two classes according to whether it is less than or greater than a number, A.

Ideally, A will be a cutoff in the middle of both normals. E.g. if my data is generated by N(1,1) + N(-1,1), then A should be approximately 0.
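
For concreteness, here is a minimal sketch of how such data and labels might be generated (plain NumPy; the per-cluster sample size of 50 and the variable names are illustrative, not taken from the gist):

import numpy as np

np.random.seed(0)

# 50 points from N(-1, 1) labelled 0, and 50 points from N(1, 1) labelled 1.
x_vals = np.concatenate((np.random.normal(-1, 1, 50),
                         np.random.normal(1, 1, 50)))
y_vals = np.concatenate((np.zeros(50), np.ones(50)))

# An ideal cutoff A would sit near 0, halfway between the two means.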

I'm running into a "No gradients provided for any variable..." error. Specifically:

No gradients provided for any variable: ((None, <tensorflow.python.ops.variables.Variable object at 0x7fd9e3fae710>),)

I think it may have to do with the fact that Tensorflow cannot calculate gradients for logical operators. My classification for any given A value is supposed to be something like:

Given a data point x and an A value:

[1,0] : if x < A

[0,1] : if x >= A

Given that idea, here is my calculation in Tensorflow for the output:

my_output = tf.concat(0,[tf.to_float(tf.less(x_data, A)), tf.to_float(tf.greater_equal(x_data, A))])

Is this the wrong way to implement this output? Is there a non-logical functional equivalent?

Thanks. If you want to see my whole code, here is a gist: https://gist.github.com/nfmcclure/46c323f0a55ae1628808f7a58b5d437f
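
For what it's worth, that suspicion about logical operators can be checked directly. The snippet below is a minimal sketch using the same 1.x-style API; A and x here are stand-in names, not the variables from the gist:

import tensorflow as tf  # 1.x-style API, as in the question

A = tf.Variable(0.0)
x = tf.constant([1.0, -1.0])
output = tf.to_float(tf.less(x, A))  # hard threshold: not differentiable w.r.t. A

print(tf.gradients(output, A))  # [None] -- no gradient flows through the comparison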

Edit: full stack trace:

Traceback (most recent call last):

  File "<ipython-input-182-f8837927493d>", line 1, in <module>
    runfile('/.../back_propagation.py', wdir='/')

  File "/usr/local/lib/python3.4/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 699, in runfile
    execfile(filename, namespace)

  File "/usr/local/lib/python3.4/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 88, in execfile
    exec(compile(open(filename, 'rb').read(), filename, 'exec'), namespace)

  File "/.../back_propagation.py", line 94, in <module>
    train_step = my_opt.minimize(xentropy)

  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/training/optimizer.py", line 192, in minimize
    name=name)

  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/training/optimizer.py", line 286, in apply_gradients
    (grads_and_vars,))

ValueError: No gradients provided for any variable: ((None, <tensorflow.python.ops.variables.Variable object at 0x7fd9e3fae710>),)

Answer

Typically you would use a sigmoid function to pin the output of your function to the range of 0 to 1. You want to train the following function:

y = a*x_input + b, where a and b are trainable variables.

The loss function you would then use is tf.nn.sigmoid_cross_entropy_with_logits.

And to evaluate the class you would evaluate sigmoid(y) > 0.5. The greater-than logical operator has no gradient, so it cannot be part of the function being optimized.
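
Putting that together, a minimal end-to-end sketch might look like the following. It is written against the 1.x-style API used in the question (under TensorFlow 2.x it would need tf.compat.v1), and the learning rate, sample sizes, and variable names are illustrative rather than taken from the gist:

import numpy as np
import tensorflow as tf  # 1.x-style API

# Two clusters: N(-1, 1) labelled 0 and N(1, 1) labelled 1.
x_vals = np.concatenate((np.random.normal(-1, 1, 50),
                         np.random.normal(1, 1, 50))).astype(np.float32)
y_vals = np.concatenate((np.zeros(50), np.ones(50))).astype(np.float32)

x_data = tf.placeholder(dtype=tf.float32, shape=[1])
y_target = tf.placeholder(dtype=tf.float32, shape=[1])

# Trainable parameters of the logit y = a*x + b; the learned boundary
# sigmoid(y) = 0.5 plays the role of the cutoff A.
a = tf.Variable(tf.random_normal(shape=[1]))
b = tf.Variable(tf.random_normal(shape=[1]))
logits = a * x_data + b

# Smooth, differentiable surrogate for the hard comparison.
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y_target, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(loss)

# The hard threshold is applied only at prediction time, outside the loss,
# so no gradient is ever needed through the comparison.
prediction = tf.cast(tf.sigmoid(logits) > 0.5, tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        i = np.random.choice(len(x_vals))
        sess.run(train_step, feed_dict={x_data: [x_vals[i]],
                                        y_target: [y_vals[i]]})
    print(sess.run(prediction, feed_dict={x_data: [0.5]}))  # e.g. [1.]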

