How to remove nodes from TensorFlow graph?


Question

I need to write a program where part of the TensorFlow nodes need to stay in place, storing some global information (mainly variables and summaries), while the other part needs to be changed/reorganized as the program runs.

The way I do it now is to reconstruct the whole graph in every iteration. But then I have to store and load that information manually from/to checkpoint files or numpy arrays in every iteration, which makes my code really messy and error-prone.

I wonder if there is a way to remove/modify part of my computation graph instead of resetting the whole graph?

Answer

Changing the structure of TensorFlow graphs isn't really possible. Specifically, there isn't a clean way to remove nodes from a graph, so removing a subgraph and adding another isn't practical. (I've tried this, and it involves surgery on the internals. Ultimately, it's way more effort than it's worth, and you're asking for maintenance headaches.)

There are a few workarounds.

Your reconstruction is one of them. You seem to have a pretty good handle on this method, so I won't harp on it, but for the benefit of anyone else who stumbles upon this, a very similar method is a filtered deep copy of the graph. That is, you iterate over the elements and add them in, predicated on some condition. This is most viable if the graph was given to you (i.e., you don't have the functions that built it in the first place) or if the changes are fairly minor. You still pay the price of rebuilding the graph, but sometimes loading and storing can be transparent. Given your scenario, though, this probably isn't a good match.
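For anyone who wants to see the filtered-copy idea concretely: in TF1 you would iterate over `graph_def.node` and re-import the kept subset with `tf.import_graph_def`, but the pattern itself is API-independent. Here is a minimal plain-Python sketch, with simple dicts (hypothetical `name`/`inputs` fields) standing in for TF's NodeDef protos:

```python
# Filtered deep copy: walk the old graph's node list, keep only the
# nodes that satisfy a predicate, and prune edges into removed nodes.

def filtered_copy(nodes, keep):
    """Return a new node list containing only nodes where keep(node)
    is True, with inputs referencing removed nodes dropped as well."""
    kept_names = {n["name"] for n in nodes if keep(n)}
    new_nodes = []
    for n in nodes:
        if n["name"] not in kept_names:
            continue
        new_nodes.append({
            "name": n["name"],
            # Drop edges that pointed at nodes we removed.
            "inputs": [i for i in n["inputs"] if i in kept_names],
        })
    return new_nodes

graph = [
    {"name": "x",     "inputs": []},
    {"name": "w",     "inputs": []},
    {"name": "mul",   "inputs": ["x", "w"]},
    {"name": "debug", "inputs": ["mul"]},   # node we want to drop
]

pruned = filtered_copy(graph, lambda n: n["name"] != "debug")
print([n["name"] for n in pruned])  # → ['x', 'w', 'mul']
```

With a real `GraphDef` the shape is the same: copy the surviving `NodeDef`s into a fresh proto and hand it to `tf.import_graph_def` in a new graph.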

Another option is to recast the problem as a superset of all possible graphs you're trying to evaluate and rely on dataflow behavior. In other words, build a graph which includes every type of input you're feeding it and only ask for the outputs you need. Good signs this might work are: your network is parametric (perhaps you're just increasing/decreasing widths or layers), the changes are minor (maybe including/excluding inputs), and your operations can handle variable inputs (reductions across a dimension, for instance). In your case, if you have only a small, finite number of tree structures, this could work well. You'll probably just need to add some aggregation or renormalization for your global information.
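The dataflow point is worth making concrete: TensorFlow only executes the ancestors of the fetches you ask for, so the unused branches of a supergraph cost nothing at run time. A small sketch of that behavior (plain Python with hypothetical node names; in TF this selection is what `sess.run(fetches)` does for you):

```python
# Supergraph sketch: one graph holds every variant, but evaluating an
# output only runs the ops it actually depends on (dataflow semantics).

executed = []  # record which ops ran, to show unused branches are skipped

def evaluate(graph, name, cache=None):
    """Demand-driven evaluation: compute `name` and its ancestors only."""
    if cache is None:
        cache = {}
    if name in cache:
        return cache[name]
    fn, deps = graph[name]
    args = [evaluate(graph, d, cache) for d in deps]
    executed.append(name)
    cache[name] = fn(*args)
    return cache[name]

# Superset graph: a "wide" head and a "narrow" head over shared features.
supergraph = {
    "x":        (lambda: 3.0, []),
    "features": (lambda x: x * 2, ["x"]),
    "wide":     (lambda f: f + 10, ["features"]),
    "narrow":   (lambda f: f - 1, ["features"]),
}

# Asking only for "narrow" never touches the "wide" branch.
print(evaluate(supergraph, "narrow"))  # → 5.0
print(sorted(executed))                # → ['features', 'narrow', 'x']
```

This is why the supergraph approach works when changes are minor: you pay graph-construction cost once, and per-iteration you just vary which outputs you fetch and which inputs you feed.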

A third option is to treat the networks as physically split. So instead of thinking of one network with mutable components, treat the boundaries between fixed and changing pieces as inputs and outputs of two separate networks. This does make some things harder: for instance, backprop across both is now ugly (which it sounds like might be a problem for you). But if you can avoid that, then two networks can work pretty well. It ends up feeling a lot like dealing with a separate pretraining phase, which you may already be comfortable with.
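The split can be sketched as two independent stages, where the fixed network's activations are materialized as plain arrays and fed to whichever mutable network exists this iteration (hypothetical functions standing in for the two subgraphs; in TF each stage would be its own graph/session, with the boundary crossed via `feed_dict`):

```python
# Two physically separate "networks": a fixed feature extractor whose
# output is materialized once, and a mutable head rebuilt every
# iteration. Gradients cannot flow across the boundary -- that is the
# stated cost of this approach.

def fixed_stage(x):
    """Stands in for the frozen subgraph; run once, cache the result."""
    return [v * 2 for v in x]

def build_mutable_head(scale):
    """Rebuilt freely each iteration without touching the fixed stage."""
    def head(features):
        return sum(features) * scale
    return head

inputs = [1.0, 2.0, 3.0]
boundary = fixed_stage(inputs)        # materialized once: [2.0, 4.0, 6.0]

for scale in (1, 10):                 # restructure the head per iteration
    head = build_mutable_head(scale)
    print(head(boundary))             # → 12.0 then 120.0
```

Note that only the mutable half is ever reconstructed, so the global variables living in the fixed half never need the per-iteration save/restore dance from the question.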

Most of these workarounds have a fairly narrow range of problems that they work for, so they might not help in your case. That said, you don't have to go all-or-nothing. If partially splitting the network or creating a supergraph for just some changes works, then it might be that you only have to worry about save/restore for a few cases, which may ease your troubles.

Hope this helps!

