What's the difference of name scope and a variable scope in tensorflow?


Question


What's the difference between these functions?

tf.variable_op_scope(values, name, default_name, initializer=None)

Returns a context manager for defining an op that creates variables. This context manager validates that the given values are from the same graph, ensures that that graph is the default graph, and pushes a name scope and a variable scope.


tf.op_scope(values, name, default_name=None)

Returns a context manager for use when defining a Python op. This context manager validates that the given values are from the same graph, ensures that that graph is the default graph, and pushes a name scope.


tf.name_scope(name)

Wrapper for Graph.name_scope() using the default graph. See Graph.name_scope() for more details.


tf.variable_scope(name_or_scope, reuse=None, initializer=None)

Returns a context for variable scope. Variable scope allows you to create new variables and to share already created ones, while providing checks so that variables are not created or shared by accident. For details, see the Variable Scope How To; here we present only a few basic examples.

Solution

Let's begin by a short introduction to variable sharing. It is a mechanism in TensorFlow that allows for sharing variables accessed in different parts of the code without passing references to the variable around.

The method tf.get_variable can be used with the name of the variable as the argument to either create a new variable with such name or retrieve the one that was created before. This is different from using the tf.Variable constructor which will create a new variable every time it is called (and potentially add a suffix to the variable name if a variable with such name already exists).
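The get-or-create behavior of tf.get_variable versus the always-create behavior of tf.Variable can be illustrated with a plain-Python analogy. This registry is a hypothetical sketch of the naming semantics, not TensorFlow's actual implementation:

```python
# Hypothetical sketch of the two naming behaviors (not real TensorFlow code).

class Registry:
    def __init__(self):
        self._vars = {}

    def get_variable(self, name, value=0):
        # tf.get_variable-style: create the variable once, then retrieve it.
        if name not in self._vars:
            self._vars[name] = value
        return name

    def variable(self, name, value=0):
        # tf.Variable-style: always create a new variable, uniquifying
        # the name with a numeric suffix if it is already taken.
        candidate, i = name, 1
        while candidate in self._vars:
            candidate = f"{name}_{i}"
            i += 1
        self._vars[candidate] = value
        return candidate

reg = Registry()
print(reg.get_variable("w"))  # w
print(reg.get_variable("w"))  # w    (same variable retrieved)
print(reg.variable("w"))      # w_1  (new variable, suffixed)
print(reg.variable("w"))      # w_2
```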

It is for the purpose of the variable sharing mechanism that a separate type of scope (variable scope) was introduced.

As a result, we end up having two different types of scopes:

- name scope, created using tf.name_scope or tf.op_scope;
- variable scope, created using tf.variable_scope or tf.variable_op_scope.

Both scopes have the same effect on all operations as well as variables created using tf.Variable, i.e., the scope will be added as a prefix to the operation or variable name.

However, name scope is ignored by tf.get_variable. We can see that in the following example:

with tf.name_scope("my_scope"):
    v1 = tf.get_variable("var1", [1], dtype=tf.float32)
    v2 = tf.Variable(1, name="var2", dtype=tf.float32)
    a = tf.add(v1, v2)

print(v1.name)  # var1:0
print(v2.name)  # my_scope/var2:0
print(a.name)   # my_scope/Add:0

The only way to place a variable accessed using tf.get_variable in a scope is to use a variable scope, as in the following example:

with tf.variable_scope("my_scope"):
    v1 = tf.get_variable("var1", [1], dtype=tf.float32)
    v2 = tf.Variable(1, name="var2", dtype=tf.float32)
    a = tf.add(v1, v2)

print(v1.name)  # my_scope/var1:0
print(v2.name)  # my_scope/var2:0
print(a.name)   # my_scope/Add:0

This allows us to easily share variables across different parts of the program, even within different name scopes:

with tf.name_scope("foo"):
    with tf.variable_scope("var_scope"):
        v = tf.get_variable("var", [1])
with tf.name_scope("bar"):
    with tf.variable_scope("var_scope", reuse=True):
        v1 = tf.get_variable("var", [1])
assert v1 == v
print(v.name)   # var_scope/var:0
print(v1.name)  # var_scope/var:0
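Conceptually, sharing via variable scope amounts to a prefixed lookup gated by the reuse flag. The following is a hypothetical plain-Python sketch of that mechanism, not TensorFlow's real code:

```python
# Hypothetical sketch of variable-scope prefixing and reuse (not real TensorFlow code).

_store = {}

def get_variable(scope, name, reuse=False):
    # The variable scope acts as a prefix on the variable name.
    full_name = f"{scope}/{name}" if scope else name
    if reuse:
        # reuse=True: the variable must already exist; sharing by lookup.
        if full_name not in _store:
            raise ValueError(f"Variable {full_name} does not exist")
    else:
        # Without reuse, re-creating an existing variable is an error,
        # which guards against sharing by accident.
        if full_name in _store:
            raise ValueError(f"Variable {full_name} already exists")
        _store[full_name] = object()
    return _store[full_name]

v = get_variable("var_scope", "var")
v1 = get_variable("var_scope", "var", reuse=True)
assert v1 is v
```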


UPDATE

As of version r0.11, op_scope and variable_op_scope are both deprecated and replaced by name_scope and variable_scope.
