Sklearn SGDClassifier partial fit

Problem description

I'm trying to use SGD to classify a large dataset. As the data is too large to fit into memory, I'd like to use the partial_fit method to train the classifier. I have selected a sample of the dataset (100,000 rows) that fits into memory to test fit vs. partial_fit:

import numpy
from sklearn.linear_model import SGDClassifier

def batches(l, n):
    # yield successive chunks of size n from the sequence l
    for i in range(0, len(l), n):  # xrange in the original Python 2 code
        yield l[i:i+n]

clf1 = SGDClassifier(shuffle=True, loss='log')
clf1.fit(X, Y)

clf2 = SGDClassifier(shuffle=True, loss='log')
n_iter = 60
for n in range(n_iter):
    for batch in batches(range(len(X)), 10000):
        clf2.partial_fit(X[batch[0]:batch[-1]+1], Y[batch[0]:batch[-1]+1],
                         classes=numpy.unique(Y))

I then test both classifiers with an identical test set. In the first case I get an accuracy of 100%. As I understand it, SGD by default passes 5 times over the training data (n_iter = 5).

In the second case, I have to pass 60 times over the data to reach the same accuracy.

Why this difference (5 vs. 60)? Or am I doing something wrong?
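
The evaluation step itself is not shown in the question; a minimal sketch of the comparison, assuming hold-out arrays X_test and Y_test (names not from the original post), could look like this:

from sklearn.metrics import accuracy_score

# X_test / Y_test are assumed hold-out data; they do not appear in the question
print("fit accuracy:        ", accuracy_score(Y_test, clf1.predict(X_test)))
print("partial_fit accuracy:", accuracy_score(Y_test, clf2.predict(X_test)))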

Answer

I have finally found the answer. You need to shuffle the training data between each iteration, as setting shuffle=True when instantiating the model will NOT shuffle the data when using partial_fit (it only applies to fit). Note: it would have been helpful to find this information on the sklearn.linear_model.SGDClassifier page.

Here is the modified code:

import random
import numpy
from sklearn.linear_model import SGDClassifier

clf2 = SGDClassifier(loss='log')     # shuffle=True is useless here
shuffledRange = list(range(len(X)))  # a mutable list of indices so random.shuffle can work in place
n_iter = 5
for n in range(n_iter):
    random.shuffle(shuffledRange)    # reshuffle the indices before every pass
    shuffledX = [X[i] for i in shuffledRange]
    shuffledY = [Y[i] for i in shuffledRange]
    for batch in batches(range(len(shuffledX)), 10000):
        clf2.partial_fit(shuffledX[batch[0]:batch[-1]+1],
                         shuffledY[batch[0]:batch[-1]+1],
                         classes=numpy.unique(Y))
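
If X and Y are NumPy arrays, the per-epoch reshuffling can also be done with sklearn.utils.shuffle instead of an index list; this is only a rough equivalent of the answer above, not part of the original post:

import numpy
from sklearn.utils import shuffle

for n in range(n_iter):
    # shuffle returns reshuffled copies of both arrays with rows kept aligned
    Xs, Ys = shuffle(X, Y, random_state=n)
    for start in range(0, len(Xs), 10000):
        clf2.partial_fit(Xs[start:start + 10000], Ys[start:start + 10000],
                         classes=numpy.unique(Y))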
