Is it possible to get test scores for each iteration of MLPClassifier?


Question


I would like to look at the loss curves for training data and test data side by side. Currently it seems straightforward to get the loss on the training set for each iteration using clf.loss_curve_ (see below).

from sklearn.neural_network import MLPClassifier
clf = MLPClassifier()
clf.fit(X, y)
clf.loss_curve_ # this seems to have loss for the training set


However, I would also like to plot performance on a test data set. Is this available?

Answer


clf.loss_curve_ is not part of the documented API (although it is used in some examples). The only reason it exists is that it is used internally for early stopping.


As Tom mentions, there is also an approach based on validation_scores_; a minimal sketch follows.
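
Assuming a reasonably recent scikit-learn: setting early_stopping=True makes the classifier hold out validation_fraction of the training data and record its accuracy per iteration in validation_scores_. Note that this scores a held-out slice of the training data, not your own test set. A minimal sketch:

from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, random_state=0)
clf = MLPClassifier(early_stopping=True,      # hold out part of the training data
                    validation_fraction=0.1,  # 10% of training data for validation
                    max_iter=200, random_state=0)
clf.fit(X, y)
print(clf.loss_curve_)          # training loss per iteration
print(clf.validation_scores_)   # validation accuracy per iteration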


Apart from that, more complex setups might need a more manual way of training, where you control when, what, and how to measure.


After reading Tom's answer, it is worth noting: if only inter-epoch calculations are needed, his approach of combining warm_start and max_iter saves some code (and reuses more of sklearn's own training loop); a sketch is given below. The full code further down can also do intra-epoch calculations (if needed; compare with keras).
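
A minimal sketch of that warm_start/max_iter idea, assuming X_train, y_train, X_test, y_test as prepared in the full example below. Each fit() call with max_iter=1 runs one epoch and continues from the previous weights; sklearn will emit a ConvergenceWarning per call, which can be ignored here:

mlp = MLPClassifier(hidden_layer_sizes=(50,), max_iter=1, warm_start=True,
                    solver='adam', random_state=1)
scores_train, scores_test = [], []
for epoch in range(25):
    mlp.fit(X_train, y_train)  # warm_start=True: continue from previous weights
    scores_train.append(mlp.score(X_train, y_train))
    scores_test.append(mlp.score(X_test, y_test))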

Simple (prototype) example:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_openml  # fetch_mldata was removed from scikit-learn
from sklearn.neural_network import MLPClassifier
np.random.seed(1)

""" Example based on sklearn's docs """
mnist = fetch_openml('mnist_784', version=1, as_frame=False)  # replaces fetch_mldata("MNIST original")
# rescale the data, use the traditional train/test split
X, y = mnist.data / 255., mnist.target
X_train, X_test = X[:60000], X[60000:]
y_train, y_test = y[:60000], y[60000:]

mlp = MLPClassifier(hidden_layer_sizes=(50,), max_iter=10, alpha=1e-4,
                    solver='adam', verbose=0, tol=1e-8, random_state=1,
                    learning_rate_init=.01)

""" Home-made mini-batch learning
    -> not to be used in out-of-core setting!
"""
N_TRAIN_SAMPLES = X_train.shape[0]
N_EPOCHS = 25
N_BATCH = 128
N_CLASSES = np.unique(y_train)  # array of all class labels (partial_fit needs them up front)

scores_train = []
scores_test = []

# EPOCH
epoch = 0
while epoch < N_EPOCHS:
    print('epoch: ', epoch)
    # SHUFFLING
    random_perm = np.random.permutation(X_train.shape[0])
    mini_batch_index = 0
    while True:
        # MINI-BATCH
        indices = random_perm[mini_batch_index:mini_batch_index + N_BATCH]
        mlp.partial_fit(X_train[indices], y_train[indices], classes=N_CLASSES)
        mini_batch_index += N_BATCH

        if mini_batch_index >= N_TRAIN_SAMPLES:
            break

    # SCORE TRAIN
    scores_train.append(mlp.score(X_train, y_train))

    # SCORE TEST
    scores_test.append(mlp.score(X_test, y_test))

    epoch += 1

""" Plot """
fig, ax = plt.subplots(2, sharex=True, sharey=True)
ax[0].plot(scores_train)
ax[0].set_title('Train')
ax[1].plot(scores_test)
ax[1].set_title('Test')
fig.suptitle("Accuracy over epochs", fontsize=14)
plt.show()

Output: [figure: train and test accuracy over epochs, two stacked panels]

Or, more compact:

plt.plot(scores_train, color='green', alpha=0.8, label='Train')
plt.plot(scores_test, color='magenta', alpha=0.8, label='Test')
plt.title("Accuracy over epochs", fontsize=14)
plt.xlabel('Epochs')
plt.legend(loc='upper left')
plt.show()

Output: [figure: train and test accuracy in a single plot]
