tf.constant and tf.placeholder behave differently

Question

I want to wrap tf.metrics in a Sonnet module to measure the performance of each batch, and the following is the work I have done:

import tensorflow as tf
import sonnet as snt

class Metrics(snt.AbstractModule):
    def __init__(self, indicator, summaries=None, name="metrics"):
        super(Metrics, self).__init__(name=name)
        self._indicator = indicator
        self._summaries = summaries

    def _build(self, labels, logits):
        # Each tf.metrics.* call returns (value, update_op). Gating the value
        # read on the update op lets a single fetch both update the running
        # totals and report the new value.
        if self._indicator == "accuracy":
            metric, metric_update = tf.metrics.accuracy(labels, logits)
            with tf.control_dependencies([metric_update]):
                outputs = tf.identity(metric)
        elif self._indicator == "precision":
            metric, metric_update = tf.metrics.precision(labels, logits)
            with tf.control_dependencies([metric_update]):
                outputs = tf.identity(metric)
        elif self._indicator == "recall":
            metric, metric_update = tf.metrics.recall(labels, logits)
            with tf.control_dependencies([metric_update]):
                outputs = tf.identity(metric)
        elif self._indicator == "f1_score":
            metric_recall, metric_update_recall = tf.metrics.recall(labels, logits)
            metric_precision, metric_update_precision = tf.metrics.precision(labels, logits)
            with tf.control_dependencies([metric_update_recall, metric_update_precision]):
                # F1 is the harmonic mean of precision and recall.
                outputs = 2.0 / (1.0 / metric_recall + 1.0 / metric_precision)
        else:
            raise ValueError("unsupported metrics")

        if isinstance(self._summaries, list):
            self._summaries.append(tf.summary.scalar(self._indicator, outputs))

        return outputs

However, when I want to test the module, the following code works:

def test3():
    labels = tf.constant([1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], tf.int32)
    logits = tf.constant([1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], tf.int32)

    metrics = Metrics("accuracy")
    accuracy = metrics(labels, logits)

    metrics2 = Metrics("f1_score")
    f1_score = metrics2(labels, logits)

    writer = tf.summary.FileWriter("utils-const", tf.get_default_graph())
    with tf.Session() as sess:
        sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])

        accu, f1 = sess.run([accuracy, f1_score])
        print(accu)
        print(f1)

    writer.close()

However, the following code does NOT work:

def test4():
    from tensorflow.python import debug as tf_debug
    import numpy as np

    tf_labels = tf.placeholder(dtype=tf.int32, shape=[None])
    tf_logits = tf.placeholder(dtype=tf.int32, shape=[None])

    labels = np.array([1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], np.int32)
    logits = np.array([1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], np.int32)

    metrics = Metrics("accuracy")
    accuracy = metrics(tf_labels, tf_logits)

    metrics2 = Metrics("f1_score")
    f1_score = metrics2(tf_labels, tf_logits)

    writer = tf.summary.FileWriter("utils-feed", tf.get_default_graph())
    with tf.Session() as sess:
        sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])

        sess = tf_debug.LocalCLIDebugWrapperSession(sess)

        accu, f1 = sess.run([accuracy, f1_score], feed_dict={tf_labels: labels, tf_logits: logits})
        print(accu)
        print(f1)

    writer.close()

The output of test3() is correct: labels and logits disagree at 3 of the 25 positions, so the accuracy is 22/25 = 0.88. The output of test4() is wrong, 0.0, even though the two tests should be equivalent.

Does anyone have any idea?

Answer

Are you sure it is not the tf.constant version that fails? I find that tf.metrics behaves strangely in combination with tf.constant:

import tensorflow as tf

a = tf.constant(1.)
mean_a, mean_a_uop = tf.metrics.mean(a)
with tf.control_dependencies([mean_a_uop]):
  mean_a = tf.identity(mean_a)

sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
tf.local_variables_initializer().run()

for _ in range(10):
  print(sess.run(mean_a))

which, when run on a GPU, returns

0.0
2.0
1.5
1.3333334
1.25
1.2
1.1666666
1.1428572
1.125
1.1111112

instead of all 1s. It looks as if the count is lagging by one. (I assume the first value would be inf, but it is zero due to some condition on count.) A placeholder version of this code, on the other hand, runs as expected.
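For reference, here is a minimal placeholder version of the same experiment (my sketch; feeding the value 1. in place of the constant above), which prints 1.0 on every iteration:

import tensorflow as tf

a = tf.placeholder(tf.float32, shape=[])
mean_a, mean_a_uop = tf.metrics.mean(a)
with tf.control_dependencies([mean_a_uop]):
  mean_a = tf.identity(mean_a)

sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
tf.local_variables_initializer().run()

for _ in range(10):
  # The same value is fed every step, so the running mean stays 1.0.
  print(sess.run(mean_a, feed_dict={a: 1.}))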

On the CPU, the behavior is even weirder: the output is non-deterministic. Example output:

0.0
1.0
1.0
0.75
1.0
1.0
0.85714287
0.875
1.0
0.9

This looks like a bug that you could file on TensorFlow's GitHub repo. (Note that computing running metrics on constants is less than useful -- but it is still a bug.)
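If what you actually want is a per-batch value rather than a running one, one workaround (a sketch of mine, not part of the original code) is to re-initialize the metric accumulators, which tf.metrics creates as local variables, before each batch:

# Sketch: the counters behind tf.metrics live in the local-variables
# collection, so re-running their initializer zeroes the running totals.
reset_op = tf.variables_initializer(tf.local_variables())

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  # 'batches' is a hypothetical iterable of (labels, logits) numpy arrays;
  # accuracy, tf_labels, and tf_logits are the tensors built in test4().
  for batch_labels, batch_logits in batches:
    sess.run(reset_op)  # start this batch from a clean state
    print(sess.run(accuracy, feed_dict={tf_labels: batch_labels,
                                        tf_logits: batch_logits}))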

EDIT: I have now also stumbled on weird examples with tf.placeholder; it seems that tf.metrics has a bug that is unfortunately not limited to its use with tf.constant.
