How does Keras handle multilabel classification?
Problem description
I am unsure how to interpret the default behavior of Keras in the following situation:
My Y (ground truth) was set up using scikit-learn's MultiLabelBinarizer().
Therefore, to give a random example, one row of my y column is one-hot encoded as such: [0,0,0,1,0,1,0,0,0,0,1].
So I have 11 classes that could be predicted, and more than one can be true; hence the multilabel nature of the problem. There are three labels for this particular sample.
I train the model as I would for a non-multilabel problem (business as usual) and I get no errors.
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD

model = Sequential()
model.add(Dense(5000, activation='relu', input_dim=X_train.shape[1]))
model.add(Dropout(0.1))
model.add(Dense(600, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(y_train.shape[1], activation='softmax'))

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])

model.fit(X_train, y_train, epochs=5, batch_size=2000)
score = model.evaluate(X_test, y_test, batch_size=2000)
score
What does Keras do when it encounters my y_train and sees that it is "multi" one-hot encoded, meaning there is more than one '1' present in each row of y_train? Basically, does Keras automatically perform multilabel classification? And does the interpretation of the scoring metrics differ?
Recommended answer
In short:
Don't use softmax.
Use sigmoid activation on the output layer.
Use binary_crossentropy for the loss function.
Use predict for evaluation.
In softmax, when the score for one label increases, all the others are lowered (it's a probability distribution). You don't want that when you have multiple labels.
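A small numerical sketch of this difference (arbitrary logits, NumPy only) shows that softmax outputs compete for a fixed probability mass of 1, while sigmoid outputs are independent and can all be high at once:

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.5])  # arbitrary example scores

# softmax: a probability distribution; the values always sum to 1,
# so raising one label's probability necessarily lowers the others
softmax = np.exp(logits) / np.exp(logits).sum()

# sigmoid: each value is an independent probability in (0, 1);
# several labels can be "likely" at the same time
sigmoid = 1 / (1 + np.exp(-logits))

print(softmax.sum())  # sums to 1 (up to float error)
print(sigmoid)        # each value independent; their sum can exceed 1
```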
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD

model = Sequential()
model.add(Dense(5000, activation='relu', input_dim=X_train.shape[1]))
model.add(Dropout(0.1))
model.add(Dense(600, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(y_train.shape[1], activation='sigmoid'))

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy',
              optimizer=sgd)

model.fit(X_train, y_train, epochs=5, batch_size=2000)

preds = model.predict(X_test)
preds[preds >= 0.5] = 1
preds[preds < 0.5] = 0
# score = compare preds and y_test
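One way to fill in that final comparison is with scikit-learn's multilabel metrics; this is a sketch with hypothetical stand-in arrays for preds and y_test (the metric choice, here Hamming loss and micro-averaged F1, is an assumption, not part of the original answer):

```python
import numpy as np
from sklearn.metrics import f1_score, hamming_loss

# Hypothetical stand-ins for y_test (binarized labels) and raw sigmoid outputs
y_test = np.array([[0, 1, 1],
                   [1, 0, 0]])
preds = np.array([[0.2, 0.9, 0.6],
                  [0.8, 0.3, 0.7]])

preds = (preds >= 0.5).astype(int)  # same 0.5 threshold as above

print(hamming_loss(y_test, preds))               # fraction of label slots predicted wrong
print(f1_score(y_test, preds, average='micro'))  # F1 pooled over all labels
```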