Should I use @tf.function for all functions?


Question

The official tutorial on @tf.function says:


To get peak performance and to make your model deployable anywhere, use tf.function to make graphs out of your programs. Thanks to AutoGraph, a surprising amount of Python code just works with tf.function, but there are still pitfalls to be wary of.

The main takeaways and recommendations are:

  • Don't rely on Python side effects like object mutation or list appends.
  • tf.function works best with TensorFlow ops, rather than NumPy ops or Python primitives.
  • When in doubt, use the for x in y idiom.
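As a minimal sketch of the "for x in y" idiom mentioned above (the function name and values are illustrative), AutoGraph converts a Python loop over a tensor into a graph loop:

```python
import tensorflow as tf

@tf.function
def sum_elements(v):
    total = tf.constant(0.0)
    for x in v:  # "for x in y" over a tensor: AutoGraph turns this into a graph loop
        total += x
    return total

print(sum_elements(tf.constant([1.0, 2.0, 3.0])))  # tf.Tensor(6.0, shape=(), dtype=float32)
```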


It only mentions how to implement @tf.function annotated functions but not when to use it.


Is there a heuristic on how to decide whether I should at least try to annotate a function with tf.function? It seems that there are no reasons not to do it, unless I am too lazy to remove side effects or change some things like range() -> tf.range(). But if I am willing to do this...


Is there any reason not to use @tf.function for all functions?

Answer


TLDR: It depends on your function and whether you are in production or development. Don't use tf.function if you want to be able to debug your function easily, or if it falls under the limitations of AutoGraph or tf.v1 code compatibility. I would highly recommend watching the Inside TensorFlow talks about AutoGraph and Functions, not Sessions.


In the following I'll break down the reasons, which are all taken from information made available online by Google.


In general, the tf.function decorator causes a function to be compiled as a callable that executes a TensorFlow graph. This entails:

  • Conversion of the code via AutoGraph if required (including any functions called from the annotated function)
  • Tracing and executing the generated graph code

Detailed information on the design ideas behind this is available.

  • Faster execution, especially if the function consists of many small ops (Source)
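A rough way to observe this effect is to time eager versus graph execution of a function made of many cheap ops (a sketch; the helper name many_small_ops and the timings are illustrative, not from the original answer):

```python
import timeit

import tensorflow as tf

def many_small_ops(x):
    for _ in range(100):
        x = x + 1.0  # many small ops: graph mode amortizes per-op Python overhead
    return x

graph_fn = tf.function(many_small_ops)
x = tf.constant(1.0)
graph_fn(x)  # first call traces and builds the graph; exclude it from timing

eager_time = timeit.timeit(lambda: many_small_ops(x), number=100)
graph_time = timeit.timeit(lambda: graph_fn(x), number=100)
print(f"eager: {eager_time:.3f}s, graph: {graph_time:.3f}s")
```

On typical setups the graph version is noticeably faster here; the gap shrinks when the function consists of only a few expensive ops.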


If you want to use AutoGraph, using tf.function is highly recommended over calling AutoGraph directly. Reasons for this include: Automatic control dependencies, it is required for some APIs, more caching, and exception helpers (Source).

There are also drawbacks to decorating a function with tf.function:

  • No exception capturing (exceptions should be caught in eager mode, outside of the decorated function) (Source)
  • Debugging is much harder
  • Limitations due to hidden side effects and TF control flow
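One mitigation for the debugging drawback (a sketch; assumes TF 2.x, where tf.config.run_functions_eagerly is available) is to temporarily disable graph execution for decorated functions:

```python
import tensorflow as tf

@tf.function
def double(x):
    # Breakpoints and plain print() here fire only during tracing,
    # not on every call, once the function runs as a graph.
    return x * 2

tf.config.run_functions_eagerly(True)   # run decorated functions eagerly again
print(double(tf.constant(3)))           # now pdb/print behave as in eager mode
tf.config.run_functions_eagerly(False)  # restore graph execution
```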

Detailed information on AutoGraph limitations is available.

  • It is not allowed to create variables more than once in tf.function, but this is subject to change as tf.v1 code is phased out (Source)
  • For functions that contain only tf.v2 code, there are no specific drawbacks


It is not allowed to create variables more than once, such as v in the following example:

import tensorflow as tf

@tf.function
def f(x):
    v = tf.Variable(1)
    return tf.add(x, v)

f(tf.constant(2))

# => ValueError: tf.function-decorated function tried to create variables on non-first call.


In the following code, this is mitigated by making sure that self.v is only created once:

import tensorflow as tf

class C(object):
    def __init__(self):
        self.v = None
    @tf.function
    def f(self, x):
        if self.v is None:
            self.v = tf.Variable(1)
        return tf.add(x, self.v)

c = C()
print(c.f(tf.constant(2)))

# => tf.Tensor(3, shape=(), dtype=int32)
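Another common workaround (a sketch, not from the original answer) is to create the variable once, outside the decorated function, and let the function close over it:

```python
import tensorflow as tf

v = tf.Variable(1)  # created exactly once, outside the traced function

@tf.function
def f(x):
    return tf.add(x, v)  # the function closes over the existing variable

print(f(tf.constant(2)))  # tf.Tensor(3, shape=(), dtype=int32)
```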


Hidden side effects not captured by AutoGraph

Changes such as to self.a in this example can't be hidden, which leads to an error since cross-function analysis is not done (yet) (Source):

import tensorflow as tf

class C(object):
    def change_state(self):
        self.a += 1

    @tf.function
    def f(self):
        self.a = tf.constant(0)
        if tf.constant(True):
            self.change_state() # Mutation of self.a is hidden
        tf.print(self.a)

x = C()
x.f()

# => InaccessibleTensorError: The tensor 'Tensor("add:0", shape=(), dtype=int32)' cannot be accessed here: it is defined in another function or code block. Use return values, explicit Python locals or TensorFlow collections to access it. Defined in: FuncGraph(name=cond_true_5, id=5477800528); accessed from: FuncGraph(name=f, id=5476093776).


Changes in plain sight are no problem:

import tensorflow as tf

class C(object):
    @tf.function
    def f(self):
        self.a = tf.constant(0)
        if tf.constant(True):
            self.a += 1 # Mutation of self.a is in plain sight
        tf.print(self.a)

x = C()
x.f()

# => 1


Example of limitation due to TF control flow

This if statement leads to an error because the value for else needs to be defined for TF control flow:

import tensorflow as tf

@tf.function
def f(a, b):
    if tf.greater(a, b):
        return tf.constant(1)

# If a <= b, the function would return None
x = f(tf.constant(3), tf.constant(2))   

# => ValueError: A value must also be returned from the else branch. If a value is returned from one branch of a conditional a value must be returned from all branches.
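A fixed version (a sketch) simply returns a value from every branch, so TF control flow can build the conditional:

```python
import tensorflow as tf

@tf.function
def f(a, b):
    if tf.greater(a, b):
        return tf.constant(1)
    else:
        return tf.constant(0)  # a value is now returned from the else branch too

print(f(tf.constant(3), tf.constant(2)))  # tf.Tensor(1, shape=(), dtype=int32)
```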

