Why is history storing auc and val_auc with incrementing integers (auc_2, auc_4, ...)?

Problem description

I am a beginner with Keras and today I bumped into an issue I don't know how to handle. The values for auc and val_auc are being stored in history under keys with incrementing integer suffixes, like auc, auc_2, auc_4, auc_6, and so on.

This is preventing me from managing and studying those values across my K-fold cross-validation, because I cannot reliably access history.history['auc']: a key named exactly 'auc' is not always there (the kind of access I want is sketched right after the code). Here is the code:

import numpy as np
from tensorflow.keras.models import Sequential # pylint: disable= import-error
from tensorflow.keras.layers import Dense # pylint: disable= import-error
from tensorflow.keras import Input # pylint: disable= import-error
from sklearn.model_selection import StratifiedKFold
from keras.utils.vis_utils import plot_model
from keras.metrics import AUC, Accuracy # pylint: disable= import-error
from keras.callbacks import ModelCheckpoint

BATCH_SIZE  = 32
EPOCHS      = 10
K           = 5
N_SAMPLE    = 1168
METRICS     = ['AUC', 'accuracy']
CALLBACK_MONITOR = 'val_auc'  # definition not shown in the original snippet; value inferred from the checkpoint log below

SAVE_PATH   = '../data/exp/final/submodels/'


def create_mlp(model_name, keyword, n_sample= N_SAMPLE, batch_size= BATCH_SIZE, epochs= EPOCHS):

    # readCSV, get_model and the columns mapping are project helpers not shown here.
    df = readCSV(n_sample)
    skf = StratifiedKFold(n_splits = K, random_state = 7, shuffle = True)

    for train_index, valid_index in skf.split(np.zeros(n_sample), df[['target']]):

        x_train, y_train, x_valid, y_valid = get_train_valid_dataset(keyword, df, train_index, valid_index)
        model = get_model(keyword)

        history = model.fit(
            x = x_train,
            y = y_train,
            validation_data = (x_valid, y_valid),
            epochs = epochs
        )

def get_train_valid_dataset(keyword, df, train_index, valid_index):
    aux = df[[c for c in columns[keyword]]]
    return aux.iloc[train_index].values, df['target'].iloc[train_index].values, aux.iloc[valid_index].values, df['target'].iloc[valid_index].values

def create_callbacks(model_name, save_path, fold_var):
    checkpoint = ModelCheckpoint(
        save_path + model_name + '_' +str(fold_var),
        monitor=CALLBACK_MONITOR, 
        verbose=1,
        save_best_only= True,
        save_weights_only= True,
        mode='max'
    )

    return [checkpoint]
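
To be concrete about what I am trying to do with each fold's history, this is the kind of lookup that breaks after the first fold; get_metric is only an illustrative workaround for this question, not part of the project:

def get_metric(history, name):
    # Tolerate Keras's auto-suffixed metric names: 'auc', 'auc_2', 'val_auc_4', ...
    for key in history.history:
        if key == name or key.startswith(name + '_'):
            return history.history[key]
    raise KeyError(name)

# What I would like to write per fold:
#   fold_auc     = history.history['auc'][-1]       # KeyError once the key becomes 'auc_2'
#   fold_val_auc = history.history['val_auc'][-1]   # same for 'val_auc_2', 'val_auc_4', ...
# What I currently have to write instead:
#   fold_auc     = get_metric(history, 'auc')[-1]
#   fold_val_auc = get_metric(history, 'val_auc')[-1]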

In main.py I call create_mlp('model0', 'euler', n_sample=100), and the log shows (only the relevant lines):

Epoch 9/10
32/80 [===========>..................] - ETA: 0s - loss: 0.6931 - auc: 0.5000 - acc: 0.5625
Epoch 00009: val_auc did not improve from 0.50000
80/80 [==============================] - 0s 1ms/sample - loss: 0.6931 - auc: 0.5000 - acc: 0.5000 - val_loss: 0.6931 - val_auc: 0.5000 - val_acc: 0.5000
Epoch 10/10
32/80 [===========>..................] - ETA: 0s - loss: 0.6932 - auc: 0.5000 - acc: 0.4375
Epoch 00010: val_auc did not improve from 0.50000
80/80 [==============================] - 0s 1ms/sample - loss: 0.6931 - auc: 0.5000 - acc: 0.5000 - val_loss: 0.6931 - val_auc: 0.5000 - val_acc: 0.5000
Train on 80 samples, validate on 20 samples
Epoch 1/10
32/80 [===========>..................] - ETA: 0s - loss: 0.7644 - auc_2: 0.3075 - acc: 0.5000WARNING:tensorflow:Can save best model only with val_auc available, skipping.
80/80 [==============================] - 1s 10ms/sample - loss: 0.7246 - auc_2: 0.4563 - acc: 0.5250 - val_loss: 0.6072 - val_auc_2: 0.8250 - val_acc: 0.6500
Epoch 2/10
32/80 [===========>..................] - ETA: 0s - loss: 0.7046 - auc_2: 0.4766 - acc: 0.5000WARNING:tensorflow:Can save best model only with val_auc available, skipping.
80/80 [==============================] - 0s 1ms/sample - loss: 0.6511 - auc_2: 0.6322 - acc: 0.5625 - val_loss: 0.5899 - val_auc_2: 0.8000 - val_acc: 0.6000

Any help will be appreciated. I am using:

keras==2.3.1
tensorflow==1.14.0


Recommended answer

Use tf.keras.backend.clear_session(). Each time a new model is compiled in the same TensorFlow graph, Keras gives its metric objects fresh, uniquified names (auc, auc_2, auc_4, ...), and the history keys follow those names; calling clear_session() before building each fold's model resets that state, so every fold reports plain 'auc' and 'val_auc'.

https://www.tensorflow.org/api_docs/python/tf/keras/backend/clear_session
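
A minimal sketch of where that call could go in the fold loop from the question (same helpers and imports as above; placing it at the top of each iteration, before get_model builds the new model, is an assumption about where the reset belongs):

import tensorflow as tf

def create_mlp(model_name, keyword, n_sample= N_SAMPLE, batch_size= BATCH_SIZE, epochs= EPOCHS):

    df = readCSV(n_sample)
    skf = StratifiedKFold(n_splits = K, random_state = 7, shuffle = True)

    for train_index, valid_index in skf.split(np.zeros(n_sample), df[['target']]):

        # Reset the graph and Keras's name counters before building this fold's model,
        # so its metric is registered as 'auc' again instead of 'auc_2', 'auc_4', ...
        tf.keras.backend.clear_session()

        x_train, y_train, x_valid, y_valid = get_train_valid_dataset(keyword, df, train_index, valid_index)
        model = get_model(keyword)

        history = model.fit(
            x = x_train,
            y = y_train,
            validation_data = (x_valid, y_valid),
            epochs = epochs
        )

        # With the per-fold reset, these keys are stable on every fold.
        fold_auc = history.history['auc']
        fold_val_auc = history.history['val_auc']

Note that clear_session() destroys the current graph, so anything still needed from the previous fold (for example saving its weights) has to happen before the loop reaches this call again.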
