What is the difference between np.mean and tf.reduce_mean?

Question

In the MNIST beginner tutorial, there is the statement

accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

tf.cast basically changes the type of tensor the object is, but what is the difference between tf.reduce_mean and np.mean?

Here is the documentation for tf.reduce_mean:

reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None)

input_tensor: The tensor to reduce. Should have numeric type.

reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions.

# 'x' is [[1., 1.],
#         [2., 2.]]
tf.reduce_mean(x) ==> 1.5
tf.reduce_mean(x, 0) ==> [1.5, 1.5]
tf.reduce_mean(x, 1) ==> [1.,  2.]

For a 1D vector, it looks like np.mean == tf.reduce_mean, but I don't understand what's happening in tf.reduce_mean(x, 1) ==> [1., 2.]. tf.reduce_mean(x, 0) ==> [1.5, 1.5] makes sense, since the mean of [1, 2] and [1, 2] is [1.5, 1.5], but what's going on with tf.reduce_mean(x, 1)?
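As a side check, the same reductions can be reproduced with np.mean alone (plain NumPy, no TensorFlow needed), which makes the axis behavior easier to compare:

```python
import numpy as np

x = np.array([[1., 1.],
              [2., 2.]])

print(np.mean(x))           # 1.5       -> mean of all elements
print(np.mean(x, axis=0))   # [1.5 1.5] -> mean down each column
print(np.mean(x, axis=1))   # [1. 2.]   -> mean within each row
```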

Answer

The functionality of numpy.mean and tensorflow.reduce_mean is the same: they do the same thing, as you can see from the documentation for numpy and tensorflow. Let's look at an example:

import numpy as np
import tensorflow as tf  # TensorFlow 1.x (graph/Session API)

c = np.array([[3., 4.], [5., 6.], [6., 7.]])
print(np.mean(c, 1))           # numpy: mean along axis 1

Mean = tf.reduce_mean(c, 1)    # tensorflow: mean along axis 1
with tf.Session() as sess:
    result = sess.run(Mean)
    print(result)

Output

[ 3.5  5.5  6.5]
[ 3.5  5.5  6.5]

Here you can see that when axis (numpy) or reduction_indices (tensorflow) is 1, it computes the mean across (3,4), (5,6), and (6,7), so 1 defines the axis across which the mean is computed. When it is 0, the mean is computed across (3,5,6) and (4,6,7), and so on. I hope you get the idea.
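Both axes can be checked quickly in NumPy (using the same c array as above; the axis argument is NumPy's counterpart of reduction_indices):

```python
import numpy as np

c = np.array([[3., 4.], [5., 6.], [6., 7.]])

# axis=1: mean within each row -> (3,4), (5,6), (6,7)
print(np.mean(c, axis=1))   # [3.5 5.5 6.5]

# axis=0: mean down each column -> (3,5,6) and (4,6,7)
print(np.mean(c, axis=0))
```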

Now, what are the differences between them?

You can compute the numpy operation anywhere in Python. But in order to execute a tensorflow operation, it must be done inside a tensorflow Session. So whenever you need to perform any computation on your tensorflow graph (or structure, if you will), it has to happen inside a tensorflow Session.

Let's look at another example.

npMean = np.mean(c)
print(npMean + 1)           # numpy evaluates immediately

tfMean = tf.reduce_mean(c)
Add = tfMean + 1            # only builds a graph node; nothing runs yet
with tf.Session() as sess:
    result = sess.run(Add)  # the computation happens here
    print(result)

We can add 1 to the mean in numpy directly, as you naturally would, but to do the same in tensorflow you have to perform it inside a Session; without a Session you can't. In other words, when you write tfMean = tf.reduce_mean(c), tensorflow doesn't compute it at that point. It only computes it inside a Session. numpy, by contrast, computes np.mean() immediately.

I hope this makes sense.
