Tensorflow: applying an imported graph operation to each element of a 2d tensor


Problem description


There are questions answering parts of my question but I can't connect the pieces together. Suppose I have a graph that operates on a 1d array of just 2 elements

input = tf.placeholder(tf.float32, [2], name="input")


I want to build a graph that can receive an arbitrarily long 2d array of such elements and run the first graph on it:
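To pin down the intended semantics, here is a plain-numpy sketch (not TensorFlow; `small_graph` is a hypothetical stand-in for the 2-element graph) of what "run the first graph on each element of the 2d array" means:

```python
import numpy as np

def small_graph(pair):
    # hypothetical stand-in for the 2-element graph: output = input[0] + input[1]
    return pair[0] + pair[1]

x = np.array([[1., 2.], [3., 4.], [5., 6.]])
# tf.map_fn applies the function to each element along axis 0 (each row)
result = np.array([small_graph(row) for row in x])
print(result)  # [ 3.  7. 11.]
```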

 x = tf.placeholder(tf.float32, [None, 2], name = 'x')


I know how to import the first graph (tf.import_graph_def) and how to run some operation on an array using tf.map_fn. But how can I combine the two? For each run of the network I need to pass it a different input. But the mapping is done inside tf.import_graph_def. Should I do the import each time in the function called in the loop? Sounds wrong ...


The code below works, but I believe there is a better way:

with tf.Graph().as_default() as g_1:
    input = tf.placeholder(tf.float32, [2], name="input")
    y = tf.add(input[0], input[1])
    output = tf.identity(y, name="output")

gdef_1 = g_1.as_graph_def()

tf.reset_default_graph()
with tf.Graph().as_default() as g_combined:
    x = tf.placeholder(tf.float32, [None, 2], name = 'x')

    def calc_z(el):
        y, = tf.import_graph_def(gdef_1, input_map={"input:0": el},
                               return_elements=["output:0"])
        return y

    final_result = tf.map_fn(calc_z, x)

    init = tf.global_variables_initializer()

with tf.Session(graph=g_combined) as sess:
    # For tensorboard
    # run it as tensorboard --logdir=graphs
    writer = tf.summary.FileWriter('./graphs', sess.graph)
    # Run the initializer
    sess.run(init)
    print(sess.run([final_result], feed_dict = {x:[[1,2],[3,4],[5,6]]}))
    writer.close()


Update: I tried to achieve the same result while keeping the imported graph trainable, but failed to do so. The return_elements argument to import_meta_graph seems to be ignored and only the saver is returned. A subsequent call to restore then fails with the error


Tensor Tensor("map/while/save/Const:0", shape=(), dtype=string) may not be fed

I'm using the code below:

tf.reset_default_graph()
xx = tf.placeholder(tf.float32, [2], name="xx")
yy = tf.add(xx[0], xx[1])
yy = tf.identity(yy, name = 'yy')
# need at least one variable to be able to save the graph
_ = tf.Variable(initial_value='fake_variable')

config = tf.ConfigProto(log_device_placement=False)
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:    
    saver = tf.train.Saver()
    sess.run(tf.initialize_all_variables())
    saver.save(sess, "./model_ex2")

tf.reset_default_graph()
with tf.Session() as sess:
    x = tf.placeholder(tf.float32, [None, 2], name = 'x')

    def calc_z(el):
#         saver, yy  = tf.train.import_meta_graph("./model_ex2.meta", 
#                                            input_map={"xx:0": el}, return_elements=["yy:0"])
#         saver.restore(sess, "./model_ex2")
#         return yy
        # return_elements argument seems to be ignored and only the saver is returned.
        saver = tf.train.import_meta_graph("./model_ex2.meta", 
                                           input_map={"xx:0": el})
        saver.restore(sess, "./model_ex2")
        return yy

    final_result = tf.map_fn(calc_z, x)

init = tf.global_variables_initializer()
with tf.Session(config=config) as sess:
    sess.run(init)
    print(sess.run([final_result], feed_dict = {x:[[1,2],[3,4],[5,6]]}))

Recommended answer


Your current solution is actually already good. The graph is imported only once, when g_combined is constructed, not once per element of x, so it does what you want.


If you have a metagraph instead, it should work similarly with tf.train.import_meta_graph, since input_map and return_elements should also be usable with it (note, however, that this function returns the imported saver too). However, you can also import the metagraph in a different graph, freeze it (e.g. using tf.graph_util.convert_variables_to_constants) and then import that graph def into the final graph.

import tensorflow as tf

meta_graph_path = ...
meta_graph_save_path = ...
with tf.Graph().as_default() as g_meta_import, tf.Session() as sess:
    saver = tf.train.import_meta_graph(meta_graph_path)
    saver.restore(sess, meta_graph_save_path)
    frozen_graph = tf.graph_util.convert_variables_to_constants(
        sess, tf.get_default_graph().as_graph_def(), ['output'])

with tf.Graph().as_default() as g_combined:
    x = tf.placeholder(tf.float32, [None, 2], name = 'x')
    def calc_z(el):
        y, = tf.import_graph_def(frozen_graph, input_map={'input:0': el},
                                 return_elements=['output:0'])
        return y
    final_result = tf.map_fn(calc_z, x)
    init = tf.global_variables_initializer()


The only catch of this solution is that the imported part will obviously be frozen and not trainable.
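As an aside not covered in the answer: when the per-element computation is as simple as this one, an alternative that stays trainable is to skip graph importing and map_fn entirely and rebuild the op vectorized over the batch dimension. A minimal sketch (using tf.compat.v1 so it also runs under TF 2.x; the weight w is a hypothetical trainable parameter added purely for illustration):

```python
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

x = tf.placeholder(tf.float32, [None, 2], name="x")
# hypothetical trainable weight, just to show the result stays trainable
w = tf.get_variable("w", shape=[], initializer=tf.ones_initializer())
# vectorized equivalent of mapping add(input[0], input[1]) over each row
y = w * (x[:, 0] + x[:, 1])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(y, feed_dict={x: [[1, 2], [3, 4], [5, 6]]})
print(out)
```

With w initialized to 1, this reproduces the per-row sums of the map_fn version, but gradients flow through w so the graph can be trained directly.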
