Unable to save weights while using pre-trained VGG16 model


Problem description

While using the pre-trained VGG16 model, I am unable to save the weights of the best model. I use the following code:

checkpointer = [
                # Stop if the validation loss has not improved for 3 consecutive epochs
                EarlyStopping(monitor='val_loss', patience=3, verbose=1),
                # Save the best model and re-use it for prediction
                ModelCheckpoint(filepath="C:/Users/skumarravindran/Documents/keras_save_model/vgg16_v1.hdf5", verbose=1, monitor='val_acc', save_best_only=True),
]

And I get the following error:

C:\Users\skumarravindran\AppData\Local\Continuum\Anaconda2\envs\py35gpu1\lib\site-packages\keras\callbacks.py:405: RuntimeWarning: Can save best model only with val_acc available, skipping. 'skipping.' % (self.monitor), RuntimeWarning)
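The warning comes from ModelCheckpoint itself: at the end of each epoch it looks up its monitored key in the logs dict and skips saving when that key is missing. The following is a simplified sketch of that guard, not the actual Keras source; `can_checkpoint` is a hypothetical helper used only to illustrate the shape of the logic:

```python
import warnings

# Hypothetical, simplified version of ModelCheckpoint's guard: saving is
# skipped (with the RuntimeWarning above) when the monitored key is absent.
def can_checkpoint(monitor, logs):
    current = logs.get(monitor)
    if current is None:
        warnings.warn('Can save best model only with %s available, '
                      'skipping.' % monitor, RuntimeWarning)
        return False
    return True

# With a custom metric or multiple outputs, logs lacks 'val_acc',
# while the overall loss and val_loss are still present:
print(can_checkpoint('val_acc', {'loss': 14.1437, 'val_loss': 8.7598}))   # False
print(can_checkpoint('val_loss', {'loss': 14.1437, 'val_loss': 8.7598}))  # True
```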

Solution

I experienced two situations where this error arises:

  1. introducing a custom metric
  2. using multiple outputs

In both cases, acc and val_acc are not computed. Strangely, Keras does compute an overall loss and val_loss.

You can remedy the first situation by adding accuracy to the metrics, but that may have side effects; I am not sure. In both cases, however, you can add acc and val_acc yourself in a callback. I have added an example for the multi-output case, where I created a custom callback that computes its own acc and val_acc by averaging the acc and val_acc values of all output layers.

I have a model with 5 dense output layers at the end, labeled D0..D4. The output of one epoch looks like this:

3540/3540 [==============================] - 21s 6ms/step - loss: 14.1437 - 
D0_loss: 3.0446 - D1_loss: 2.6544 - D2_loss: 3.0808 - D3_loss: 2.7751 -
D4_loss: 2.5889 - D0_acc: 0.2362 - D1_acc: 0.3681 - D2_acc: 0.1542 - D3_acc: 0.1161 - 
D4_acc: 0.3994 - val_loss: 8.7598 - val_D0_loss: 2.0797 - val_D1_loss: 1.4088 - 
val_D2_loss: 2.0711 - val_D3_loss: 1.9064 - val_D4_loss: 1.2938 - 
val_D0_acc: 0.2661 - val_D1_acc: 0.3924 - val_D2_acc: 0.1763 - 
val_D3_acc: 0.1695 - val_D4_acc: 0.4627

As you can see, it outputs an overall loss and val_loss, and for each output layer Di_loss, Di_acc, val_Di_loss and val_Di_acc, for i in 0..4. All of this is the content of the logs dictionary, which is passed as a parameter to a callback's on_epoch_begin and on_epoch_end. Callbacks have more event handlers, but for our purpose these two are the most relevant. With 5 outputs (as in my case), the dictionary holds 5 × 4 per-layer entries (acc, loss, val_acc, val_loss) plus 2 overall ones (loss and val_loss), i.e. 22 keys in total.
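The key count can be made concrete by rebuilding the set of logs keys for the 5-output case as a plain Python sketch (names assumed to match the epoch output above):

```python
# Reconstruct the keys Keras puts into `logs` for 5 output layers D0..D4:
# 4 per-layer entries plus the overall loss and val_loss.
n_outputs = 5
keys = ['loss', 'val_loss']
for i in range(n_outputs):
    for pattern in ('D{}_loss', 'D{}_acc', 'val_D{}_loss', 'val_D{}_acc'):
        keys.append(pattern.format(i))

print(len(keys))  # 5 * 4 + 2 = 22
```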

What I did is compute the average of all accuracies and validation accuracies and add two items to logs:

logs['acc'] = som_acc / n_accs
logs['val_acc'] = som_val_acc / n_accs

Be sure to add this callback before the checkpoint callback; otherwise the extra information you provide will not be 'seen'. If everything is implemented correctly, the error message no longer appears and the model checkpoints happily. The callback code for the multiple-output case is provided below.

    import time

    import keras

    class ExtraLogInfo(keras.callbacks.Callback):
        def on_epoch_begin(self, epoch, logs):
            # Remember when the epoch started so its duration can be logged
            self.timed = time.time()

        def on_epoch_end(self, epoch, logs):
            print(logs.keys())
            som_acc = 0.0
            som_val_acc = 0.0
            # logs holds 4 entries per output layer (acc, loss, val_acc,
            # val_loss) plus the overall loss and val_loss
            n_accs = (len(logs) - 2) // 4
            for i in range(n_accs):
                acc_ptn = 'D{:d}_acc'.format(i)
                val_acc_ptn = 'val_D{:d}_acc'.format(i)
                som_acc += logs[acc_ptn]
                som_val_acc += logs[val_acc_ptn]

            # Averaged values that ModelCheckpoint can monitor as acc / val_acc
            logs['acc'] = som_acc / n_accs
            logs['val_acc'] = som_val_acc / n_accs
            logs['time'] = time.time() - self.timed
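Since on_epoch_end only reads and writes the logs dict, the averaging can be sanity-checked without Keras by running the same arithmetic on a plain dict built from the epoch output above (a standalone re-statement of the logic, not the callback class itself):

```python
# Per-layer metrics copied from the epoch output shown earlier.
logs = {
    'loss': 14.1437, 'val_loss': 8.7598,
    'D0_loss': 3.0446, 'D1_loss': 2.6544, 'D2_loss': 3.0808,
    'D3_loss': 2.7751, 'D4_loss': 2.5889,
    'D0_acc': 0.2362, 'D1_acc': 0.3681, 'D2_acc': 0.1542,
    'D3_acc': 0.1161, 'D4_acc': 0.3994,
    'val_D0_loss': 2.0797, 'val_D1_loss': 1.4088, 'val_D2_loss': 2.0711,
    'val_D3_loss': 1.9064, 'val_D4_loss': 1.2938,
    'val_D0_acc': 0.2661, 'val_D1_acc': 0.3924, 'val_D2_acc': 0.1763,
    'val_D3_acc': 0.1695, 'val_D4_acc': 0.4627,
}

# Same formula as in on_epoch_end: (22 - 2) // 4 == 5 output layers.
n_accs = (len(logs) - 2) // 4
logs['acc'] = sum(logs['D{:d}_acc'.format(i)] for i in range(n_accs)) / n_accs
logs['val_acc'] = sum(logs['val_D{:d}_acc'.format(i)] for i in range(n_accs)) / n_accs

print(round(logs['acc'], 4), round(logs['val_acc'], 4))  # 0.2548 0.2934
```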

