Tensorboard Graph with custom training loop does not include my Model


Problem Description

I have created my own loop as shown in the TF 2 migration guide here.
I am currently able to see the graph for only the --- VISIBLE --- section of the code below. How do I make my model (defined in the --- NOT VISIBLE --- section) visible in TensorBoard?

If I were not using a custom training loop, I could have used the documented model.fit approach:

model.fit(..., callbacks=[keras.callbacks.TensorBoard(log_dir=logdir)])
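For context, a minimal sketch of that path (assuming model, dataset, and LOG_DIR as defined in the snippet further down; the optimizer and loss choices here are placeholders):

import tensorflow as tf

# Sketch only: with fit(), the TensorBoard callback writes the model
# graph (and training scalars) without any manual tracing.
model.compile(optimizer='adam', loss=tf.keras.losses.Huber())
model.fit(
    dataset,
    epochs=5,
    callbacks=[tf.keras.callbacks.TensorBoard(log_dir=LOG_DIR)],
)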

In TF 1, the approach used to be quite straightforward:

tf.compat.v1.summary.FileWriter(LOGDIR, sess.graph)

The TensorBoard migration guide clearly states (here) that:

No direct writing of tf.compat.v1.Graph - instead use @tf.function and trace functions
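For reference, the canonical trace pattern from the TF 2 docs looks roughly like this (a sketch; LOG_DIR and the traced computation are placeholders), and my actual attempt follows below:

import tensorflow as tf

# Wrap the computation in a tf.function, enable tracing before the
# first call (which is the call that builds the concrete graph), then
# export the trace into a summary writer.
@tf.function
def traced_fn(x):
    return x * x  # stand-in computation; a model call would go here

writer = tf.summary.create_file_writer(LOG_DIR)
tf.summary.trace_on(graph=True)
traced_fn(tf.constant(2.0))  # the first call is the one that gets traced
with writer.as_default():
    tf.summary.trace_export(name="fn_trace", step=0)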

import tensorflow as tf

configure_default_gpus()
tf.summary.trace_on(graph=True)
K = tf.keras
dataset = sanity_dataset(BATCH_SIZE)

#-------------------------- NOT VISIBLE -----------------------------------------
model = K.models.Sequential([
    K.layers.Flatten(input_shape=(IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS)),
    K.layers.Dense(10, activation=K.layers.LeakyReLU()),
    K.layers.Dense(IMG_WIDTH * IMG_HEIGHT * IMG_CHANNELS, activation=K.layers.LeakyReLU()),
    K.layers.Reshape((IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS)),
])
#--------------------------------------------------------------------------------

optimizer = tf.keras.optimizers.Adam()
loss_fn = K.losses.Huber()


@tf.function
def train_step(inputs, targets):
    with tf.GradientTape() as tape:
        predictions = model(inputs, training=True)
#-------------------------- VISIBLE ---------------------------------------------
        pred_loss = loss_fn(targets, predictions)

    gradients = tape.gradient(pred_loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
#--------------------------------------------------------------------------------
    return pred_loss, predictions


with tf.summary.create_file_writer(LOG_DIR).as_default() as writer:
    for epoch in range(5):
        for step, (input_batch, target_batch) in enumerate(dataset):
            total_loss, predictions = train_step(input_batch, target_batch)

            if step == 0:
                tf.summary.trace_export(name="all", step=step, profiler_outdir=LOG_DIR)
            tf.summary.scalar('loss', total_loss, step=step)
            writer.flush()
writer.close()

There's a similar unanswered question where the OP was unable to view any graph.

Solution

I'm sure there's a better way, but I just realized that a simple workaround is to use the existing TensorBoard callback logic:

tb_callback = tf.keras.callbacks.TensorBoard(LOG_DIR)
tb_callback.set_model(model) # Writes the graph to TensorBoard summaries using an internal file writer

If you want, you could write your own summaries into the same directory it uses: tf.summary.create_file_writer(LOG_DIR + '/train').
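Putting the two together, a sketch of what the full loop might look like (reusing model, dataset, and train_step from the question; the global step counter is my addition so the scalars don't overwrite each other across epochs):

import tensorflow as tf

tb_callback = tf.keras.callbacks.TensorBoard(LOG_DIR)
tb_callback.set_model(model)  # writes the model graph once

# The '/train' suffix matches the subdirectory the callback itself
# writes to, so the graph and the custom scalars share one run.
writer = tf.summary.create_file_writer(LOG_DIR + '/train')
with writer.as_default():
    global_step = 0
    for epoch in range(5):
        for input_batch, target_batch in dataset:
            loss, _ = train_step(input_batch, target_batch)
            tf.summary.scalar('loss', loss, step=global_step)
            global_step += 1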
