keras tensorboard: plot train and validation scalars in the same figure


Problem Description

So I am using TensorBoard within Keras. In TensorFlow one could use two different summary writers for the train and validation scalars, so that TensorBoard could plot them in the same figure. Something like the figure in

> TensorBoard - in the same graph?

Is there a way to do this in Keras?

Thanks.

Recommended Answer

To handle the validation logs with a separate writer, you can write a custom callback that wraps around the original TensorBoard methods.

import os
import tensorflow as tf
from keras.callbacks import TensorBoard

class TrainValTensorBoard(TensorBoard):
    def __init__(self, log_dir='./logs', **kwargs):
        # Make the original `TensorBoard` log to a subdirectory 'training'
        training_log_dir = os.path.join(log_dir, 'training')
        super(TrainValTensorBoard, self).__init__(training_log_dir, **kwargs)

        # Log the validation metrics to a separate subdirectory
        self.val_log_dir = os.path.join(log_dir, 'validation')

    def set_model(self, model):
        # Setup writer for validation metrics
        self.val_writer = tf.summary.FileWriter(self.val_log_dir)
        super(TrainValTensorBoard, self).set_model(model)

    def on_epoch_end(self, epoch, logs=None):
        # Pop the validation logs and handle them separately with
        # `self.val_writer`. Also rename the keys so that they can
        # be plotted on the same figure with the training metrics
        logs = logs or {}
        val_logs = {k.replace('val_', ''): v for k, v in logs.items() if k.startswith('val_')}
        for name, value in val_logs.items():
            summary = tf.Summary()
            summary_value = summary.value.add()
            summary_value.simple_value = value.item()
            summary_value.tag = name
            self.val_writer.add_summary(summary, epoch)
        self.val_writer.flush()

        # Pass the remaining logs to `TensorBoard.on_epoch_end`
        logs = {k: v for k, v in logs.items() if not k.startswith('val_')}
        super(TrainValTensorBoard, self).on_epoch_end(epoch, logs)

    def on_train_end(self, logs=None):
        super(TrainValTensorBoard, self).on_train_end(logs)
        self.val_writer.close()

  • In __init__, two subdirectories are set up for the training and validation logs
  • In set_model, a writer self.val_writer is created for the validation logs
  • In on_epoch_end, the validation logs are separated from the training logs and written to file with self.val_writer
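The splitting and renaming done in on_epoch_end can be checked in isolation. The sketch below uses a hypothetical logs dict of the kind Keras passes to callbacks at the end of an epoch; renaming val_loss to loss is what lets TensorBoard overlay the two runs on one chart, since curves are only overlaid when their tags match:

```python
# Hypothetical epoch-end logs dict of the kind Keras passes to callbacks
logs = {'loss': 0.25, 'acc': 0.91, 'val_loss': 0.30, 'val_acc': 0.89}

# Validation entries are pulled out and renamed so their tags match the
# training tags ('val_loss' -> 'loss'), letting TensorBoard overlay them
val_logs = {k.replace('val_', ''): v for k, v in logs.items() if k.startswith('val_')}

# The remaining entries are passed on to the original TensorBoard writer
train_logs = {k: v for k, v in logs.items() if not k.startswith('val_')}

print(val_logs)    # {'loss': 0.3, 'acc': 0.89}
print(train_logs)  # {'loss': 0.25, 'acc': 0.91}
```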

Using the MNIST dataset as an example:

from keras.models import Sequential
from keras.layers import Dense
from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(784,)))
model.add(Dense(10, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

model.fit(x_train, y_train, epochs=10,
          validation_data=(x_test, y_test),
          callbacks=[TrainValTensorBoard(write_graph=False)])

You can then visualize the two curves on the same figure in TensorBoard.

Edit: I've modified the class a bit so that it can be used with eager execution.

The biggest change is that I use tf.keras in the following code. It seems that the TensorBoard callback in standalone Keras does not support eager mode yet.

import os
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.python.eager import context

class TrainValTensorBoard(TensorBoard):
    def __init__(self, log_dir='./logs', **kwargs):
        self.val_log_dir = os.path.join(log_dir, 'validation')
        training_log_dir = os.path.join(log_dir, 'training')
        super(TrainValTensorBoard, self).__init__(training_log_dir, **kwargs)

    def set_model(self, model):
        if context.executing_eagerly():
            self.val_writer = tf.contrib.summary.create_file_writer(self.val_log_dir)
        else:
            self.val_writer = tf.summary.FileWriter(self.val_log_dir)
        super(TrainValTensorBoard, self).set_model(model)

    def _write_custom_summaries(self, step, logs=None):
        logs = logs or {}
        val_logs = {k.replace('val_', ''): v for k, v in logs.items() if 'val_' in k}
        if context.executing_eagerly():
            with self.val_writer.as_default(), tf.contrib.summary.always_record_summaries():
                for name, value in val_logs.items():
                    tf.contrib.summary.scalar(name, value.item(), step=step)
        else:
            for name, value in val_logs.items():
                summary = tf.Summary()
                summary_value = summary.value.add()
                summary_value.simple_value = value.item()
                summary_value.tag = name
                self.val_writer.add_summary(summary, step)
        self.val_writer.flush()

        logs = {k: v for k, v in logs.items() if 'val_' not in k}
        super(TrainValTensorBoard, self)._write_custom_summaries(step, logs)

    def on_train_end(self, logs=None):
        super(TrainValTensorBoard, self).on_train_end(logs)
        self.val_writer.close()

The idea is the same:

  • Check the source code of the TensorBoard callback
  • See what it does to set up the writer
  • Do the same thing in this custom callback
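The wrap-and-delegate pattern behind both versions of the callback can be sketched without any TensorFlow dependency. BaseCallback, Wrapper, and the attribute names below are hypothetical stand-ins for the real Keras classes; they only show the shape of the override:

```python
# Hypothetical stand-in for the parent callback's epoch-end handling
class BaseCallback(object):
    def on_epoch_end(self, epoch, logs=None):
        # stands in for TensorBoard's own writer logic
        self.handled = (epoch, logs)

class Wrapper(BaseCallback):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # handle the validation entries with a custom writer here...
        self.val_handled = {k: v for k, v in logs.items() if k.startswith('val_')}
        # ...and delegate the remaining entries to the parent,
        # as TrainValTensorBoard does via super()
        rest = {k: v for k, v in logs.items() if not k.startswith('val_')}
        super(Wrapper, self).on_epoch_end(epoch, rest)

w = Wrapper()
w.on_epoch_end(3, {'loss': 0.5, 'val_loss': 0.6})
print(w.val_handled)  # {'val_loss': 0.6}
print(w.handled)      # (3, {'loss': 0.5})
```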

Again, you can use the MNIST data to test it:

from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.train import AdamOptimizer

tf.enable_eager_execution()

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
y_train = y_train.astype(int)
y_test = y_test.astype(int)

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(784,)))
model.add(Dense(10, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer=AdamOptimizer(), metrics=['accuracy'])

model.fit(x_train, y_train, epochs=10,
          validation_data=(x_test, y_test),
          callbacks=[TrainValTensorBoard(write_graph=False)])
