At what stage is a TensorFlow graph set up?


Question



An optimizer typically runs the same computation graph for many steps until convergence. Does TensorFlow set up the graph at the beginning and reuse it for every step? What if I change the batch size during training? What if I make a minor change to the graph, like changing the loss function? What if I make a major change to the graph? Does TensorFlow pre-generate all possible graphs? Does TensorFlow know how to optimize the entire computation when the graph changes?

Solution

As keveman says, from the client's perspective there is a single TensorFlow graph. At runtime, there can be multiple pruned subgraphs, each containing just the nodes necessary to compute the values t1, t2, etc. that you fetch when calling sess.run([t1, t2, ...]).

Calling sess.run([t1, t2]) will prune the overall graph (sess.graph) down to the subgraph required to compute those values: i.e. the operations that produce t1 and t2 and all of their antecedents. If you subsequently call sess.run([t3, t4]), the runtime will prune the graph down to the subgraph required to compute t3 and t4. Each time you pass a new combination of values to fetch, TensorFlow will compute a new pruned graph and cache it; this is why the first sess.run() with a given fetch combination can be somewhat slower than subsequent ones.
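The pruning-and-caching behavior can be sketched in plain Python. This is an illustration of the idea, not TensorFlow's actual implementation: given a dependency graph, keep only the fetched nodes and their antecedents, and cache the result per fetch combination.

```python
def prune(graph, fetches, _cache={}):
    """Return the set of nodes needed to compute `fetches`.

    `graph` maps each node name to the list of nodes it depends on.
    Results are cached per fetch combination, mirroring how the first
    sess.run() for a new fetch set is slower than later ones.
    """
    key = tuple(sorted(fetches))
    if key in _cache:
        return _cache[key]
    needed = set()
    stack = list(fetches)
    while stack:  # walk backwards through the dependency edges
        node = stack.pop()
        if node not in needed:
            needed.add(node)
            stack.extend(graph.get(node, []))
    _cache[key] = needed
    return needed

# A toy graph: each entry lists a node's direct inputs.
graph = {
    "t1": ["x", "w"],
    "t2": ["t1", "b"],
    "t3": ["y"],
    "t4": ["t3", "t1"],
}
print(sorted(prune(graph, ["t1", "t2"])))  # ['b', 't1', 't2', 'w', 'x']
print(sorted(prune(graph, ["t3", "t4"])))  # ['t1', 't3', 't4', 'w', 'x', 'y']
```

Note that the two pruned subgraphs overlap (both include t1 and its inputs), which is exactly the situation the next paragraph addresses.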

If the pruned graphs overlap, TensorFlow will reuse the "kernel" for the ops that are shared. This matters because some ops (e.g. tf.Variable and tf.FIFOQueue) are stateful, and their contents can be used in both pruned graphs. This allows you, for example, to initialize your variables with one subgraph (e.g. sess.run(tf.initialize_all_variables())), train them with another (e.g. sess.run(train_op)), and evaluate your model with a third (e.g. sess.run(loss, feed_dict={x: ...})). It also lets you enqueue elements to a queue with one subgraph and dequeue them with another, which is the foundation of input pipelines.
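The shared-kernel idea can likewise be sketched outside TensorFlow (the function names below are illustrative, not real TF API): one mutable object backs the variable node in every pruned subgraph that references it, so an update made through one subgraph is visible to the others.

```python
class Variable:
    """A toy stateful kernel: a single instance is shared by every
    pruned subgraph that references this node."""
    def __init__(self):
        self.value = None

w = Variable()

# Three "pruned subgraphs" over the same shared state, mimicking
# sess.run(init_op), sess.run(train_op), and sess.run(loss).
def run_init():
    w.value = 0.0            # like sess.run(tf.initialize_all_variables())

def run_train(grad):
    w.value -= 0.1 * grad    # like one optimizer step via sess.run(train_op)

def run_eval(x):
    return w.value * x       # like sess.run(loss, feed_dict={x: ...})

run_init()
run_train(grad=-2.0)         # w.value becomes 0.2
print(run_eval(10.0))        # 2.0
```

The point of the sketch is that run_train and run_eval never exchange data directly: they communicate only through the shared kernel, just as training and evaluation subgraphs share a variable's state.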

