Returning probabilities in a classification prediction in Keras?

Problem description

I am trying to make a simple proof-of-concept where I can see the probabilities of different classes for a given prediction.

However, everything I try seems to only output the predicted class, even though I am using a softmax activation. I am new to machine learning, so I'm not sure if I am making a simple mistake or if this is a feature not available in Keras.

I'm using Keras + TensorFlow. I have adapted one of the basic examples given by Keras for classifying the MNIST dataset.

My code below is exactly the same as the example, except for a few (commented) extra lines that export the model to a local file.

'''Trains a simple deep NN on the MNIST dataset.
Gets to 98.40% test accuracy after 20 epochs
(there is *a lot* of margin for parameter tuning).
2 seconds per epoch on a K520 GPU.
'''

from __future__ import print_function

import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import RMSprop

import h5py # added import because it is required for model.save
model_filepath = 'test_model.h5' # added filepath config

batch_size = 128
num_classes = 10
epochs = 20

# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))

model.summary()

model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])

history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

model.save(model_filepath) # added saving model
print('Model saved') # added log

Then the second part of this is a simple script that should import the model, predict the class for some given data, and print out the probabilities for each class. (I am using the same MNIST loader included with Keras to keep the example as simple as possible.)

import keras
from keras.datasets import mnist
from keras.models import Sequential
import keras.backend as K

import numpy

# loading model saved locally in test_model.h5
model_filepath = 'test_model.h5'
prev_model = keras.models.load_model(model_filepath)

# these lines are copied from the example for loading MNIST data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784)

# for this example, I am taking ten images (indices 1 through 10)
x_slice = x_train[1:11]

# making the prediction
prediction = prev_model.predict(x_slice)

# logging each on a separate line
for single_prediction in prediction:
    print(single_prediction)

If I run the first script to export the model, then the second script to classify some examples, I get the following output:

[ 1.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
[ 0.  0.  0.  0.  1.  0.  0.  0.  0.  0.]
[ 0.  1.  0.  0.  0.  0.  0.  0.  0.  0.]
[ 0.  0.  0.  0.  0.  0.  0.  0.  0.  1.]
[ 0.  0.  1.  0.  0.  0.  0.  0.  0.  0.]
[ 0.  1.  0.  0.  0.  0.  0.  0.  0.  0.]
[ 0.  0.  0.  1.  0.  0.  0.  0.  0.  0.]
[ 0.  1.  0.  0.  0.  0.  0.  0.  0.  0.]
[ 0.  0.  0.  0.  1.  0.  0.  0.  0.  0.]
[ 0.  0.  0.  1.  0.  0.  0.  0.  0.  0.]

This is great for seeing which class each is predicted to be, but what if I want to see the relative probabilities of each class for each example? I am looking for something more like this:

[ 0.94 0.01 0.02 0. 0. 0.01 0. 0.01 0.01 0.]
[ 0. 0. 0. 0. 0.51 0. 0. 0. 0.49 0.]
...

In other words, I need to know how sure each prediction is, not just the prediction itself. I thought seeing the relative probabilities was a part of using a softmax activation in the model, but I can't seem to find anything in the Keras documentation that would give me probabilities instead of the predicted answer. Am I making some kind of silly mistake, or is this feature not available?

Answer

So it turns out that the problem was that I was not fully normalizing the data in the prediction script.

My prediction script should have had the following lines:

# these lines are copied from the example for loading MNIST data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784)
x_train = x_train.astype('float32') # this line was missing
x_train /= 255 # this line was missing too

Because the input data was not cast to float and divided by 255 (so that each pixel value would fall between 0 and 1), the predicted probabilities were just showing up as 1s and 0s.
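
For reference, here is a minimal sketch of the corrected prediction script with the normalization added, assuming the model was saved to test_model.h5 by the training script above (the slicing and print formatting here are illustrative):

import keras
from keras.datasets import mnist
import numpy as np

# load the model saved locally in test_model.h5
model_filepath = 'test_model.h5'
prev_model = keras.models.load_model(model_filepath)

# load the data and normalize it exactly as in the training script
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784)
x_train = x_train.astype('float32')
x_train /= 255

# take ten sample images (indices 1 through 10), as before
x_slice = x_train[1:11]

# each row of the result is a softmax distribution over the 10 digit classes
probabilities = prev_model.predict(x_slice)
for row in probabilities:
    predicted_class = np.argmax(row)  # index of the most likely class
    print(row, '-> class', predicted_class, 'p =', row[predicted_class])

With the inputs scaled to the same [0, 1] range used during training, predict returns the full probability vector for each example, and np.argmax recovers the predicted class.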
