Why is Keras training well but returning wrong predictions?

Problem Description

If I feed the model five Setosa flowers, I cannot get my model to predict that they are indeed Setosas.

Here is my code setup:

# Load libraries
import numpy as np
import pandas as pd
from keras import models
from keras import layers
from sklearn.utils import shuffle
from sklearn import preprocessing
from sklearn.model_selection import train_test_split

# Set random seed
np.random.seed(0)

# Step 1: Load data
iris = pd.read_csv("iris.csv")

X = iris.drop('species', axis=1)
y = pd.get_dummies(iris['species']).values

# Step 2: Preprocess data
scaler = preprocessing.StandardScaler() 
X = scaler.fit_transform(X)

X, y = shuffle(X, y)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

network = models.Sequential()
network.add(layers.Dense(units=8, activation="relu", input_shape=(4,)))
network.add(layers.Dense(units=3, activation="softmax"))

# Compile neural network
network.compile(loss="categorical_crossentropy", 
                optimizer="adam", 
                metrics=["accuracy"]) 

# Train neural network
history = network.fit(X_train, # Features
                      y_train, # Target
                      epochs= 200, 
                      verbose= 1, 
                      batch_size=10, # Number of observations per batch
                      validation_data=(X_test, y_test)) # Test data

The model trained well; here is the last epoch:

Epoch 200/200
112/112 [==============================] - 0s 910us/step - loss: 0.0740 - acc: 0.9911 - val_loss: 0.1172 - val_acc: 0.9737

Now, let's pull some predictions.

new_iris = iris.iloc[0:5, 0:4] # pull out the first five Setosas from original iris dataset; 
# prediction should give me Setosa since I am feeding it Setosas

np.around(network.predict(new_iris), decimals = 2) # predicts versicolor with high probability

array([[0.  , 0.95, 0.04],
       [0.  , 0.94, 0.06],
       [0.  , 0.96, 0.04],
       [0.  , 0.91, 0.09],
       [0.  , 0.96, 0.04]], dtype=float32)

Any ideas as to why this is the case?

Answer

You need to apply the transformation learned during training at test time.

new_iris = iris.iloc[0:5, 0:4] # pull out the first five Setosas from original iris dataset; 
new_iris = scaler.transform(new_iris)
np.around(network.predict(new_iris), decimals = 2) 

Output

array([[1.  , 0.  , 0.  ],
       [0.99, 0.01, 0.  ],
       [1.  , 0.  , 0.  ],
       [0.99, 0.01, 0.  ],
       [1.  , 0.  , 0.  ]], dtype=float32)
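
If you want species labels rather than raw probabilities, you can map the argmax of each row back to the class names. A minimal sketch, assuming the species values in iris.csv are "setosa", "versicolor" and "virginica", so that the one-hot columns produced by pd.get_dummies (which orders categories alphabetically) line up with this list:

# Column order matches pd.get_dummies, which sorts category values alphabetically
class_names = ['setosa', 'versicolor', 'virginica']

# Scale with the fitted scaler, predict, then take the most probable class per row
probs = network.predict(scaler.transform(iris.iloc[0:5, 0:4]))
predicted = [class_names[i] for i in np.argmax(probs, axis=1)]
print(predicted)  # should print ['setosa', 'setosa', 'setosa', 'setosa', 'setosa']

A more robust setup keeps the preprocessing attached to the model, for example by persisting the fitted scaler alongside the saved network (or fitting both inside a single pipeline), so the exact same transformation is applied at prediction time.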
