How can I get the LLVM IR dump from XLA in TensorFlow?


Question


I am trying to get the LLVM IR generated by the XLA compiler in TensorFlow. I know that the entire LLVM context is contained in the llvm_module object, which is then converted to a string with the utility function llvm_ir::DumpModuleToString(*llvm_module) inside Compile() in the file //tensorflow/compiler/xla/service/cpu/cpu_compiler.cc.


But when I try to log it using VLOG(2) from tensorflow/core/logging.h, no logs are shown, even though VLOG(2) statements in other files do appear in my Python run.

>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print(sess.run(hello))
2017-03-10 22:36:43.226843: I tensorflow/compiler/xla/service/platform_util.cc:58] platform Host present with 8 visible devices
2017-03-10 22:36:43.227931: I tensorflow/compiler/xla/service/service.cc:183] XLA service 0x2821510 executing computations on platform Host. Devices:
2017-03-10 22:36:43.227951: I tensorflow/compiler/xla/service/service.cc:191]   StreamExecutor device (0): <undefined>, <undefined>
b'Hello, TensorFlow!'


Answer


[FYI I can't leave comments, since I just joined and apparently don't have a reputation yet.]


First off, make sure to read this, including the starred blue boxes. In particular note that turning on XLA for your whole session only performs JIT for GPU, and not CPU at the moment. https://www.tensorflow.org/performance/xla/jit
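As a sketch of what "turning on XLA for your whole session" means in code (this assumes the TF 1.x API that was current at the time), session-wide JIT is enabled through the session's ConfigProto:

```python
# Sketch, TF 1.x API: enable session-wide JIT compilation.
# Per the note above, this only affects GPU-placed ops at the moment.
import tensorflow as tf

config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = (
    tf.OptimizerOptions.ON_1)
sess = tf.Session(config=config)
```

This is a configuration fragment, not a complete program; the session still needs a graph worth compiling (see below).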


Now let's assume you've got everything set up correctly. The program in your example won't use XLA to compile for 2 reasons:


  1. As @mrry has noted, XLA doesn't handle strings.
  2. Even if you replaced the string with a number, you still wouldn't see any IR dump, because it's just a single constant, and XLA will have constant-folded it away.
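To give XLA something it can't fold away, the constant can be replaced with a computation on fed inputs — a minimal sketch, again assuming the TF 1.x contrib API of the era (`experimental_jit_scope` lived in `tensorflow.contrib.compiler.jit` at the time):

```python
# Sketch, TF 1.x: a matmul on placeholder inputs cannot be
# constant-folded, so XLA actually has a computation to compile.
import numpy as np
import tensorflow as tf
from tensorflow.contrib.compiler import jit

with jit.experimental_jit_scope():
    x = tf.placeholder(tf.float32, [2, 2])
    y = tf.matmul(x, x)

with tf.Session() as sess:
    result = sess.run(y, feed_dict={x: np.eye(2, dtype=np.float32)})
```

The explicit jit_scope also sidesteps the GPU-only caveat of session-wide JIT mentioned above.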


In the comments you mentioned running on mnist_softmax, presumably following the instructions on the link above. If you're indeed compiling and running on CPU, the only remaining issue is using VLOG(2). VLOG is only enabled if you set command-line flags to turn it on.
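Concretely, VLOG in TensorFlow's C++ code is gated by environment variables read at process startup; the usual knobs look like this (`TF_CPP_MIN_VLOG_LEVEL` is the standard global switch, and `TF_CPP_VMODULE` scopes it to one source file):

```shell
# Raise the global VLOG threshold so VLOG(2) statements are emitted.
export TF_CPP_MIN_VLOG_LEVEL=2

# Or target just the file containing the dump (pattern: file=level).
export TF_CPP_VMODULE=cpu_compiler=2

# Then rerun the script whose graph XLA compiles, e.g.:
# python mnist_softmax.py
```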

So try replacing the VLOG(2) with LOG(INFO), and you should then see the IR dump in your logs.
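Separately — and this assumes a later TensorFlow release than the one in the question — XLA grew a dedicated dump flag that sidesteps logging entirely:

```shell
# Later TF versions: ask XLA itself to write its intermediate
# representations (HLO text and, for the CPU/GPU backends, LLVM IR
# as .ll files) into a directory instead of relying on VLOG.
export XLA_FLAGS="--xla_dump_to=/tmp/xla_dump"
# python mnist_softmax.py   # then inspect /tmp/xla_dump
```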

