Plot loss evolution during a single epoch in Keras

Question

Does Keras have a built-in method to output (and later plot) the loss evolution during the training of a single epoch?

The usual method of using the function keras.callbacks.History() can output loss for each epoch. However in my case the training set is fairly large, and therefore I am passing a single epoch to the NN. Since I would like to plot the evolution of the training (and dev) loss during training, is there a way to do this?

I am currently solving this by dividing the training set into different batches and then training on each sequentially with a single epoch and saving the model each time. But maybe there is a built-in way to do this?
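
For reference, a minimal sketch of that manual workaround, assuming model, x_train, y_train, and batch_size as in the question; the chunk size and file names are purely illustrative:

chunk_size = 10000  # illustrative value
chunk_losses = []
for start in range(0, len(x_train), chunk_size):
    end = start + chunk_size
    # train for one epoch on the current chunk only
    history = model.fit(x_train[start:end], y_train[start:end],
                        batch_size=batch_size, epochs=1, verbose=0)
    chunk_losses.append(history.history['loss'][0])
    # save a snapshot of the model after each chunk
    model.save('model_chunk_{}.h5'.format(start // chunk_size))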

I am using TensorFlow backend.

Answer

You can use a callback for this.

Using the Keras MNIST CNN example (not copying the whole code here), with the following changes/additions:

from keras.callbacks import Callback

class TestCallback(Callback):
    def __init__(self, test_data):
        self.test_data = test_data

    def on_batch_end(self, batch, logs={}):
        # evaluate on the held-out test data after every single batch
        x, y = self.test_data
        loss, acc = self.model.evaluate(x, y, verbose=0)
        print('\nTesting loss: {}, acc: {}\n'.format(loss, acc))

model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=1,
          verbose=1,
          validation_data=(x_test, y_test),
          callbacks=[TestCallback((x_test, y_test))]
         )

With this callback evaluating the test/validation set at the end of each batch, we get this:

Train on 60000 samples, validate on 10000 samples
Epoch 1/1

Testing loss: 0.0672039743446745, acc: 0.9781

  128/60000 [..............................] - ETA: 7484s - loss: 0.1450 - acc: 0.9531

/var/venv/DSTL/lib/python3.4/site-packages/keras/callbacks.py:120: UserWarning: Method on_batch_end() is slow compared to the batch update (15.416976). Check your callbacks.
  % delta_t_median)


Testing loss: 0.06644540682602673, acc: 0.9781

  256/60000 [..............................] - ETA: 7476s - loss: 0.1187 - acc: 0.9570

/var/venv/DSTL/lib/python3.4/site-packages/keras/callbacks.py:120: UserWarning: Method on_batch_end() is slow compared to the batch update (15.450395). Check your callbacks.
  % delta_t_median)


Testing loss: 0.06575664376271889, acc: 0.9782

However, as you will probably see for yourself, this has the severe drawback of slowing down the code significantly (and duly producing some relevant warnings). As a compromise, if you are OK with getting only the training performance at the end of each batch, you could use a slightly different callback:

class TestCallback2(Callback):
    def __init__(self, test_data):
        self.test_data = test_data

    def on_batch_end(self, batch, logs={}):
        print()  # just a dummy print command

The results now (replacing callbacks=[TestCallback2((x_test, y_test))] in model.fit()) come much faster, but give only the training metrics at the end of each batch:

Train on 60000 samples, validate on 10000 samples
Epoch 1/1

  128/60000 [..............................] - ETA: 346s - loss: 0.8503 - acc: 0.7188
  256/60000 [..............................] - ETA: 355s - loss: 0.8496 - acc: 0.7109
  384/60000 [..............................] - ETA: 339s - loss: 0.7718 - acc: 0.7396
  [...]

UPDATE

All the above may be fine, but the resulting losses & accuracies are not stored anywhere, and hence they cannot be plotted; so, here is another callback solution that actually stores the metrics on the training set:

from keras.callbacks import Callback

class Histories(Callback):

    def on_train_begin(self, logs={}):
        self.losses = []
        self.accuracies = []

    def on_batch_end(self, batch, logs={}):
        # logs holds the running training metrics for the batch just processed
        self.losses.append(logs.get('loss'))
        self.accuracies.append(logs.get('acc'))


histories = Histories()

model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=1,
          verbose=1,
          validation_data=(x_test, y_test),
          callbacks=[histories]
         )

which results in the metrics at the end of each batch during training being stored in histories.losses and histories.accuracies, respectively - here are the first 5 entries of each:

histories.losses[:5]
# [2.3115866, 2.3008101, 2.2479887, 2.1895032, 2.1491694]

histories.accuracies[:5]
# [0.0703125, 0.1484375, 0.1875, 0.296875, 0.359375]
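
Since the original goal was to plot the loss evolution, these lists can be passed straight to matplotlib; a minimal sketch (the plot styling is an assumption, not part of the answer above):

import matplotlib.pyplot as plt

# plot the per-batch training metrics collected by the Histories callback
plt.plot(histories.losses, label='training loss')
plt.plot(histories.accuracies, label='training accuracy')
plt.xlabel('batch')
plt.legend()
plt.show()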
