TensorFlow Graph to Keras Model?

Question

Is it possible to define a graph in native TensorFlow and then convert this graph to a Keras model?

My intention is simply to combine (for me) the best of both worlds.

I really like the Keras model API for prototyping and new experiments, e.g. the awesome multi_gpu_model(model, gpus=4) for training on multiple GPUs, saving/loading weights or whole models with one-liners, and all the convenience functions like .fit(), .predict(), and so on.
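
For illustration, a minimal sketch of those conveniences (assuming the keras.utils.multi_gpu_model API from Keras 2.x; the commented-out x_train/y_train are hypothetical placeholders):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import multi_gpu_model

# A toy model; any Sequential or functional Keras model works here.
model = Sequential([Dense(64, activation='relu', input_shape=(32,)),
                    Dense(10, activation='softmax')])

# Replicate the model on 4 GPUs (requires 4 visible GPUs at runtime).
parallel_model = multi_gpu_model(model, gpus=4)
parallel_model.compile(optimizer='adam', loss='categorical_crossentropy')
# parallel_model.fit(x_train, y_train, epochs=10)  # one-liner training
model.save_weights('weights.h5')                   # one-liner saving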

However, I prefer to define my model in native TensorFlow. Context managers in TF are awesome and, in my opinion, it is much easier to implement stuff like GANs with them:

with tf.variable_scope("Generator"):
    # define some layers
with tf.variable_scope("Discriminator"):
    # define some layers

# model losses
G_train_op = ...AdamOptimizer(...)
    .minimize(gloss,
    var_list=tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, 
                               scope="Generator")
D_train_op = ...AdamOptimizer(...)
    .minimize(dloss, 
    var_list=tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, 
                               scope="Discriminator")

Another bonus is structuring the graph this way. Debugging complicated native Keras models in TensorBoard is hell, since they are not structured at all. With heavy use of variable scopes in native TF, you can "disentangle" the graph and look at a very structured version of a complicated model for debugging.

By utilizing this I can directly set up custom loss functions and do not have to freeze anything in every training iteration, since TF will only update the weights in the correct scope. This is (at least in my opinion) far easier than the Keras solution of looping over all the existing layers and setting .trainable = False.
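
For comparison, a rough sketch of that Keras freezing idiom (discriminator is a hypothetical Keras model, not from the snippet above):

# Keras alternative: freeze the discriminator's layers before
# compiling the combined GAN model, so only the generator trains.
for layer in discriminator.layers:
    layer.trainable = False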

TL;DR:

Long story short: I like the direct access to everything in TF, but most of the time a simple Keras model is sufficient for training, inference, ... later on. The model API is much easier and more convenient in Keras.

Hence, I would prefer to set up a graph in native TF and convert it to Keras for training, evaluation, and so on. Is there any way to do this?

Answer

I don't think it is possible to create a generic automated converter for any TF graph that would come up with a meaningful set of layers, with proper naming etc., simply because graphs are more flexible than a sequence of Keras layers.

However, you can wrap your model with the Lambda layer. Build your model inside a function, wrap it with Lambda and you have it in Keras:

import tensorflow as tf
from keras.layers import Lambda
from keras.models import Sequential

num_classes = 10  # example value

def model_fn(x):
    layer_1 = tf.layers.dense(x, 100)
    layer_2 = tf.layers.dense(layer_1, 100)
    out_layer = tf.layers.dense(layer_2, num_classes)
    return out_layer

model = Sequential()
model.add(Lambda(model_fn))

That is what sometimes happens when you use multi_gpu_model: you end up with three layers: Input, model, and Output.

In defense of Keras

However, the integration between TensorFlow and Keras can be much tighter and more meaningful. See this tutorial for use cases.

For instance, variable scopes can be used pretty much like in TensorFlow:

import tensorflow as tf
from keras.layers import LSTM

x = tf.placeholder(tf.float32, shape=(None, 20, 64))
with tf.name_scope('block1'):
    y = LSTM(32, name='mylstm')(x)

The same goes for manual device placement:

with tf.device('/gpu:0'):
    x = tf.placeholder(tf.float32, shape=(None, 20, 64))
    y = LSTM(32)(x)  # all ops / variables in the LSTM layer will live on GPU:0

Custom losses are discussed here: Keras: clean implementation for multiple outputs and custom loss functions?
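
As a small illustration (my own sketch, not from the linked answer): a custom loss in Keras is just a function of (y_true, y_pred) built from backend ops, so TF-style expressions drop in directly. The 0.01 weight and the model variable here are hypothetical:

import keras.backend as K

def custom_loss(y_true, y_pred):
    # mean squared error plus a (hypothetical) L1 penalty on predictions
    return K.mean(K.square(y_pred - y_true)) + 0.01 * K.mean(K.abs(y_pred))

model.compile(optimizer='adam', loss=custom_loss)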

This is how my model defined in Keras looks in TensorBoard (screenshot in the original post):

So, Keras is indeed just a simplified frontend to TensorFlow, and you can mix them quite flexibly. I would recommend inspecting the source code of the Keras model zoo for clever solutions and patterns that allow you to build complex models using Keras's clean API.
