How to let TensorFlow XLA know the CUDA path


Problem description

I installed the TensorFlow nightly build via the command pip install tf-nightly-gpu --prefix=/tf/install/path

When I try to run any XLA example, TensorFlow reports the error: "Unable to find libdevice dir. Using '.' Failed to compile ptx to cubin. Will attempt to let GPU driver compile the ptx. Not found: /usr/local/cuda-10.0/bin/ptxas not found".

So apparently TensorFlow cannot find my CUDA path. On my system, CUDA is installed in /cm/shared/apps/cuda/toolkit/10.0.130. Since I didn't build TensorFlow from source, XLA searches /usr/local/cuda-* by default. Because that folder doesn't exist on my system, it raises the error.
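A quick diagnostic sketch of the mismatch described above (the paths are the ones from the question; substitute your own):

```shell
# What XLA searches by default when TensorFlow is not built from source:
ls -d /usr/local/cuda-* 2>/dev/null || echo "no /usr/local/cuda-* found"
# Where the CUDA toolkit actually lives on this system:
ls /cm/shared/apps/cuda/toolkit/10.0.130/bin/ptxas 2>/dev/null || echo "ptxas not at the assumed path"
```

If the first command finds nothing while the second finds ptxas, XLA's default search path is the problem.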

Currently my workaround is to create a symbolic link. I checked the TensorFlow source code in tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc. There is a comment in the file: "// CUDA location explicitly specified by user via --xla_gpu_cuda_data_dir has highest priority." So how do I pass a value to this flag? I tried the following two environment variables, but neither of them works:

export XLA_FLAGS="--xla_gpu_cuda_data_dir=/cm/shared/apps/cuda10.0/toolkit/10.0.130/"
export TF_XLA_FLAGS="--xla_gpu_cuda_data_dir=/cm/shared/apps/cuda10.0/toolkit/10.0.130/"

So how do I use the flag "--xla_gpu_cuda_data_dir"? Thanks.

Recommended answer

You can run export XLA_FLAGS=--xla_gpu_cuda_data_dir=/path/to/cuda in the terminal.
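The same flag can be set from Python instead of the shell. A minimal sketch, assuming the install path from the question (substitute your own toolkit directory); note the variable must be set before TensorFlow is first imported, since XLA reads it during initialization:

```python
import os

# Set XLA_FLAGS before importing tensorflow, so XLA picks it up at
# initialization. The path is the one from the question; replace it
# with your own CUDA toolkit location.
os.environ["XLA_FLAGS"] = (
    "--xla_gpu_cuda_data_dir=/cm/shared/apps/cuda/toolkit/10.0.130"
)

# import tensorflow as tf  # import only *after* the variable is set
```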

