Displaying images on Tensorboard (through Keras)


Problem description

My X_test images are 128x128x3 and my Y_test images are 512x512x3. After each epoch, I want to show how the input (X_test) looked, how the expected output (Y_test) looked, and also how the actual output looked. So far, I've only figured out how to add the first two to TensorBoard. Here is the code that calls the callback:

model.fit(X_train,
          Y_train,
          epochs=epochs,
          verbose=2,
          shuffle=False,
          validation_data=(X_test, Y_test),
          batch_size=batch_size,
          callbacks=get_callbacks())

Here is the callback code:

import tensorflow as tf
from keras.callbacks import Callback
from keras.callbacks import TensorBoard

import io
from PIL import Image

from constants import batch_size


def get_callbacks():
    tbCallBack = TensorBoard(log_dir='./logs',
                             histogram_freq=1,
                             write_graph=True,
                             write_images=True,
                             write_grads=True,
                             batch_size=batch_size)

    tbi_callback = TensorBoardImage('Image test')

    return [tbCallBack, tbi_callback]


def make_image(tensor):
    """
    Convert an numpy representation image to Image protobuf.
    Copied from https://github.com/lanpa/tensorboard-pytorch/
    """
    height, width, channel = tensor.shape
    print(tensor)
    image = Image.fromarray(tensor.astype('uint8'))  # TODO: maybe float ?

    output = io.BytesIO()
    image.save(output, format='JPEG')
    image_string = output.getvalue()
    output.close()

    return tf.Summary.Image(height=height,
                            width=width,
                            colorspace=channel,
                            encoded_image_string=image_string)


class TensorBoardImage(Callback):
    def __init__(self, tag):
        super().__init__()
        self.tag = tag

    def on_epoch_end(self, epoch, logs={}):
        # Load image
        img_input = self.validation_data[0][0]  # first X_test sample
        img_valid = self.validation_data[1][0]  # first Y_test sample

        print(self.validation_data[0].shape)  # (8, 128, 128, 3)
        print(self.validation_data[1].shape)  # (8, 512, 512, 3)

        image = make_image(img_input)
        summary = tf.Summary(value=[tf.Summary.Value(tag=self.tag, image=image)])
        writer = tf.summary.FileWriter('./logs')
        writer.add_summary(summary, epoch)
        writer.close()

        image = make_image(img_valid)
        summary = tf.Summary(value=[tf.Summary.Value(tag=self.tag, image=image)])
        writer = tf.summary.FileWriter('./logs')
        writer.add_summary(summary, epoch)
        writer.close()

        return

I'm wondering where/how I can get the actual output of the network.
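
(For reference, inside a Keras Callback the model being trained is exposed as self.model, so one way to get the actual output is to run predict on the validation input. This is a minimal sketch of my own, not code from the question; it assumes numpy is imported as np and reuses the make_image helper above:)

import numpy as np

# Sketch: get the network's actual output for the first validation sample
# inside on_epoch_end, then log it the same way as the other two images.
img_input = self.validation_data[0][0]                        # (128, 128, 3)
img_pred = self.model.predict(img_input[np.newaxis, ...])[0]  # (512, 512, 3)
image = make_image(img_pred)
summary = tf.Summary(value=[tf.Summary.Value(tag=self.tag + '/prediction', image=image)])
writer = tf.summary.FileWriter('./logs')
writer.add_summary(summary, epoch)
writer.close()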

Another issue I'm having: here is a sample of one of the images that is being passed into TensorBoard:

[[[0.10909907 0.09341043 0.08224604]
  [0.11599099 0.09922747 0.09138277]
  [0.15596421 0.13087936 0.11472746]
  ...
  [0.87589591 0.72773653 0.69428956]
  [0.87006552 0.7218123  0.68836991]
  [0.87054225 0.72794635 0.6967475 ]]

 ...

 [[0.26142332 0.16216267 0.10314116]
  [0.31526875 0.18743924 0.12351286]
  [0.5499796  0.35461449 0.24772873]
  ...
  [0.80937942 0.62956016 0.53784871]
  [0.80906054 0.62843601 0.5368183 ]
  [0.81046278 0.62453899 0.53849678]]]

Is that the reason why my image = Image.fromarray(tensor.astype('uint8')) line might be generating images that do not look at all like the actual output? Here is a sample from TensorBoard:

I did try .astype('float64'), but it raised an error because that type is apparently not supported.
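
(The printed sample values all lie in [0, 1], so a straight cast to uint8 truncates nearly every pixel to 0, which would explain the garbage images. A minimal sketch of the usual fix, assuming the floats really are normalized to [0, 1] and numpy is available as np:)

import numpy as np

# Sketch: rescale normalized floats to [0, 255] before the uint8 cast.
tensor = np.clip(tensor * 255.0, 0, 255).astype('uint8')
image = Image.fromarray(tensor)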

Anyhow, I'm unsure this really is the problem, since the rest of the images I display in TensorBoard are all just white/gray/black squares (that one, conv2D_7, is actually the very last layer of my network, so it should display the actual output images, shouldn't it?):

Ultimately, I would like something like this, which I'm already displaying after training with matplotlib:

Finally, I would like to address the fact that this callback takes a long time to process. Is there a more efficient way to do it? It almost doubles my training time (probably because it needs to convert the numpy arrays into images before saving them to the TensorBoard log file).
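
(One likely cost, judging only from the posted callback, is that a new tf.summary.FileWriter is created and closed twice per epoch. A minimal sketch, my own assumption rather than a measured fix, of reusing a single writer and only logging the input image:)

# Sketch: open one FileWriter for the whole run and only flush per epoch.
class TensorBoardImage(Callback):
    def __init__(self, tag):
        super().__init__()
        self.tag = tag
        self.writer = tf.summary.FileWriter('./logs')  # created once

    def on_epoch_end(self, epoch, logs=None):
        image = make_image(self.validation_data[0][0])
        summary = tf.Summary(value=[tf.Summary.Value(tag=self.tag, image=image)])
        self.writer.add_summary(summary, epoch)
        self.writer.flush()  # keep the writer open between epochs

    def on_train_end(self, logs=None):
        self.writer.close()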

Recommended answer

The code below takes the input to the model, the output of the model and the ground truth, and saves them to Tensorboard. The model is a segmentation model, hence 3 images per sample.

The code is quite simple and straightforward, but still a few explanations:

make_image_tensor - The method converts a numpy image into a tensor to save in a Tensorboard summary.

TensorboardWriter - Not required, but it's good to keep the Tensorboard functionality separate from other modules. Allows reusability.

ModelDiagonoser - The class which takes a generator and predicts with self.model (set by Keras on every callback). The ModelDiagonoser takes the input, output and ground truth and passes them to Tensorboard to save the images.

import os

import io
import numpy as np
import tensorflow as tf
from PIL import Image
from keras.callbacks import Callback
# Depending on your keras version:-
from keras.engine.training import GeneratorEnqueuer, Sequence, OrderedEnqueuer
#from keras.utils import GeneratorEnqueuer, Sequence, OrderedEnqueuer


def make_image_tensor(tensor):
    """
    Convert an numpy representation image to Image protobuf.
    Adapted from https://github.com/lanpa/tensorboard-pytorch/
    """
    if len(tensor.shape) == 3:
        height, width, channel = tensor.shape
    else:
        height, width = tensor.shape
        channel = 1
    tensor = tensor.astype(np.uint8)
    image = Image.fromarray(tensor)
    output = io.BytesIO()
    image.save(output, format='PNG')
    image_string = output.getvalue()
    output.close()
    return tf.Summary.Image(height=height,
                            width=width,
                            colorspace=channel,
                            encoded_image_string=image_string)


class TensorboardWriter:

    def __init__(self, outdir):
        assert (os.path.isdir(outdir))
        self.outdir = outdir
        self.writer = tf.summary.FileWriter(self.outdir,
                                            flush_secs=10)

    def save_image(self, tag, image, global_step=None):
        image_tensor = make_image_tensor(image)
        self.writer.add_summary(tf.Summary(value=[tf.Summary.Value(tag=tag, image=image_tensor)]),
                                global_step)

    def close(self):
        """
        To be called in the end
        """
        self.writer.close()


class ModelDiagonoser(Callback):

    def __init__(self,
                 data_generator,
                 batch_size,
                 num_samples,
                 output_dir,
                 normalization_mean):
        super().__init__()
        self.data_generator = data_generator
        self.batch_size = batch_size
        self.num_samples = num_samples
        self.tensorboard_writer = TensorboardWriter(output_dir)
        self.normalization_mean = normalization_mean
        is_sequence = isinstance(self.data_generator, Sequence)
        if is_sequence:
            self.enqueuer = OrderedEnqueuer(self.data_generator,
                                            use_multiprocessing=True,
                                            shuffle=False)
        else:
            self.enqueuer = GeneratorEnqueuer(self.data_generator,
                                              use_multiprocessing=True,
                                              wait_time=0.01)
        self.enqueuer.start(workers=4, max_queue_size=4)

    def on_epoch_end(self, epoch, logs=None):
        output_generator = self.enqueuer.get()
        steps_done = 0
        total_steps = int(np.ceil(np.divide(self.num_samples, self.batch_size)))
        sample_index = 0
        while steps_done < total_steps:
            generator_output = next(output_generator)
            x, y = generator_output[:2]
            y_pred = self.model.predict(x)
            y_pred = np.argmax(y_pred, axis=-1)
            y_true = np.argmax(y, axis=-1)

            for i in range(0, len(y_pred)):
                n = steps_done * self.batch_size + i
                if n >= self.num_samples:
                    return
                img = np.squeeze(x[i, :, :, :])
                img = 255. * (img + self.normalization_mean)  # mean is the training images normalization mean
                img = img[:, :, [2, 1, 0]]  # reordering of channels

                pred = y_pred[i]
                pred = pred.reshape(img.shape[0:2])

                ground_truth = y_true[i]
                ground_truth = ground_truth.reshape(img.shape[0:2])

                self.tensorboard_writer.save_image("Epoch-{}/{}/x"
                                                   .format(self.epoch_index, sample_index), img)
                self.tensorboard_writer.save_image("Epoch-{}/{}/y"
                                                   .format(self.epoch_index, sample_index), ground_truth)
                self.tensorboard_writer.save_image("Epoch-{}/{}/y_pred"
                                                   .format(self.epoch_index, sample_index), pred)
                sample_index += 1

            steps_done += 1

    def on_train_end(self, logs=None):
        self.enqueuer.stop()
        self.tensorboard_writer.close()
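
A minimal usage sketch (the generator, sample count and mean names below are placeholders, not part of the answer):

# Hypothetical wiring: val_generator, num_val_samples and train_mean stand in
# for your own validation generator, its size and the normalization mean
# applied to the training images.
diagnoser = ModelDiagonoser(data_generator=val_generator,
                            batch_size=batch_size,
                            num_samples=num_val_samples,
                            output_dir='./logs',
                            normalization_mean=train_mean)

model.fit_generator(train_generator,
                    epochs=epochs,
                    validation_data=val_generator,
                    callbacks=[diagnoser])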
