Why does the floatX flag impact whether GPU is used in Theano?


Question

I am testing Theano on a GPU, using the script provided for that purpose in the tutorial:

# Start gpu_test.py
# From http://deeplearning.net/software/theano/tutorial/using_gpu.html#using-gpu
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in xrange(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')
# End gpu_test.py
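The CPU/GPU check at the end of the script inspects the optimized graph: on CPU the exp stays a plain Elemwise op, while on GPU it is rewritten to GpuElemwise (plus a HostFromGpu transfer, as the transcripts below show). A minimal self-contained sketch of that membership test, with op-name strings standing in for the real Theano op objects returned by f.maker.fgraph.toposort():

```python
# Hypothetical op-name lists standing in for f.maker.fgraph.toposort().
cpu_graph = ["Elemwise{exp,no_inplace}"]
gpu_graph = ["GpuElemwise{exp,no_inplace}", "HostFromGpu"]

def used_gpu(graph):
    # Mirrors the tutorial's check: any host-side Elemwise means the CPU did the work.
    return not any(name.startswith("Elemwise") for name in graph)

print(used_gpu(cpu_graph))  # False: the graph still contains a host Elemwise
print(used_gpu(gpu_graph))  # True: the elementwise op was moved to the GPU
```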

If I specify floatX=float32, it runs on the GPU:

francky@here:/fun$ THEANO_FLAGS='mode=FAST_RUN,device=gpu2,floatX=float32' python gpu_test.py
Using gpu device 2: GeForce GTX TITAN X (CNMeM is disabled)
[GpuElemwise{exp,no_inplace}(<CudaNdarrayType(float32, vector)>), HostFromGpu(Gp
Looping 1000 times took 1.458473 seconds
Result is [ 1.23178029  1.61879349  1.52278066 ...,  2.20771813  2.29967761
  1.62323296]
Used the gpu

If I do not specify floatX=float32, it runs on the CPU:

francky@here:/fun$ THEANO_FLAGS='mode=FAST_RUN,device=gpu2' python gpu_test.py
Using gpu device 2: GeForce GTX TITAN X (CNMeM is disabled)
[Elemwise{exp,no_inplace}(<TensorType(float64, vector)>)]
Looping 1000 times took 3.086261 seconds
Result is [ 1.23178032  1.61879341  1.52278065 ...,  2.20771815  2.29967753
  1.62323285]
Used the cpu

If I specify floatX=float64, it runs on the CPU:

francky@here:/fun$ THEANO_FLAGS='mode=FAST_RUN,device=gpu2,floatX=float64' python gpu_test.py
Using gpu device 2: GeForce GTX TITAN X (CNMeM is disabled)
[Elemwise{exp,no_inplace}(<TensorType(float64, vector)>)]
Looping 1000 times took 3.148040 seconds
Result is [ 1.23178032  1.61879341  1.52278065 ...,  2.20771815  2.29967753
  1.62323285]
Used the cpu

Why does the floatX flag impact whether GPU is used in Theano?

I am using:

  • Theano 0.7.0 (according to pip freeze),
  • Python 2.7.6 64 bits (according to import platform; platform.architecture()),
  • Nvidia-smi 361.28 (according to nvidia-smi),
  • CUDA 7.5.17 (according to nvcc --version),
  • GeForce GTX Titan X (according to nvidia-smi),
  • Ubuntu 14.04.4 LTS x64 (according to lsb_release -a and uname -i).

I read the documentation on floatX, but it didn't help. It simply says:

config.floatX
String value: either ‘float64’ or ‘float32’
Default: ‘float64’

This sets the default dtype returned by tensor.matrix(), tensor.vector(), and similar functions. It also sets the default Theano bit width for arguments passed as Python floating-point numbers.
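This default matches plain NumPy's behavior, which the script above relies on: rng.rand() produces a float64 array unless you cast it, so without floatX=float32 the shared variable is created as float64. A small NumPy-only sketch (the array length here is arbitrary):

```python
import numpy as np

rng = np.random.RandomState(22)
a = np.asarray(rng.rand(4))              # no dtype given: defaults to float64
b = np.asarray(rng.rand(4), np.float32)  # explicit cast, as floatX=float32 would apply
print(a.dtype)  # float64
print(b.dtype)  # float32
```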

Answer

As far as I know, it's because they haven't yet implemented float64 for GPUs.

http://deeplearning.net/software/theano/tutorial/using_gpu.html :

Only computations with float32 data-type can be accelerated. Better support for float64 is expected in upcoming hardware but float64 computations are still relatively slow (Jan 2010).
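Given that limitation, the practical workaround is to make sure data is in float32 before it reaches a shared variable, either via floatX=float32 in THEANO_FLAGS as in the transcripts above, or by casting explicitly. A minimal NumPy-only sketch of the explicit cast (vlen matches the script above):

```python
import numpy as np

vlen = 10 * 30 * 768
rng = np.random.RandomState(22)
data = rng.rand(vlen)             # float64 by default: would keep the graph on the CPU
data32 = data.astype(np.float32)  # single-precision copy, eligible for the GPU path
print(data32.dtype, data32.shape)  # float32 (230400,)
```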

