AttributeError: 'Model' object has no attribute 'predict_classes'


Problem description

I'm trying to predict on the validation data with pre-trained and fine-tuned DL models. The code follows the example available in the Keras blog on "building image classification models using very little data". Here is the code:

import numpy as np
from keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.models import Model
from keras.layers import Flatten, Dense
from sklearn.metrics import classification_report,confusion_matrix
from sklearn.metrics import roc_auc_score
import itertools
from keras.optimizers import SGD
from sklearn.metrics import roc_curve, auc
from keras import applications
from keras import backend as K
K.set_image_dim_ordering('tf')

# Plotting the confusion matrix
def plot_confusion_matrix(cm, classes,
                          normalize=False,  # if True, all values in the confusion matrix are between 0 and 1
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)

    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')

    print(cm)
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, cm[i, j],
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

#plot data
def generate_results(validation_labels, y_pred):
    fpr, tpr, _ = roc_curve(validation_labels, y_pred) ##(this implementation is restricted to a binary classification task)
    roc_auc = auc(fpr, tpr)
    plt.figure()
    plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
    plt.plot([0, 1], [0, 1], 'k--')
    plt.xlim([0.0, 1.05])
    plt.ylim([0.0, 1.05])
    plt.xlabel('False Positive Rate (FPR)')
    plt.ylabel('True Positive Rate (TPR)')
    plt.title('Receiver operating characteristic (ROC) curve')
    plt.show()
    print('Area Under the Curve (AUC): %f' % roc_auc)

img_width, img_height = 100,100
top_model_weights_path = 'modela.h5'
train_data_dir = 'data4/train'
validation_data_dir = 'data4/validation'
nb_train_samples = 20
nb_validation_samples = 20
epochs = 50
batch_size = 10
def save_bottleneck_features():
   datagen = ImageDataGenerator(rescale=1. / 255)
   model = applications.VGG16(include_top=False, weights='imagenet', input_shape=(100,100,3))
   generator = datagen.flow_from_directory(
               train_data_dir,
               target_size=(img_width, img_height),
               batch_size=batch_size,
               class_mode='binary',
               shuffle=False)
   bottleneck_features_train = model.predict_generator(
               generator, nb_train_samples // batch_size)
   np.save(open('bottleneck_features_train', 'wb'),bottleneck_features_train)

   generator = datagen.flow_from_directory(
               validation_data_dir,
               target_size=(img_width, img_height),
               batch_size=batch_size,
               class_mode='binary',
               shuffle=False)
   bottleneck_features_validation = model.predict_generator(
               generator, nb_validation_samples // batch_size)
   np.save(open('bottleneck_features_validation', 'wb'),bottleneck_features_validation)

def train_top_model():
   train_data = np.load(open('bottleneck_features_train', 'rb'))
   train_labels = np.array([0] * (nb_train_samples // 2) + [1] * (nb_train_samples // 2))
   validation_data = np.load(open('bottleneck_features_validation', 'rb'))
   validation_labels = np.array([0] * (nb_validation_samples // 2) + [1] * (nb_validation_samples // 2))
   model = Sequential()
   model.add(Flatten(input_shape=train_data.shape[1:]))
   model.add(Dense(512, activation='relu'))
   model.add(Dense(1, activation='sigmoid'))
   sgd = SGD(lr=1e-3, decay=0.00, momentum=0.99, nesterov=False) 
   model.compile(optimizer=sgd,
         loss='binary_crossentropy', metrics=['accuracy'])
   model.fit(train_data, train_labels,
          epochs=epochs,
          batch_size=batch_size,
   validation_data=(validation_data, validation_labels))
   model.save_weights(top_model_weights_path)
   print('Predicting on test data')
   y_pred = model.predict_classes(validation_data)
   print(y_pred.shape)
   print('Generating results')
   generate_results(validation_labels[:,], y_pred[:,])
   print('Generating the ROC_AUC_Scores') #Compute Area Under the Curve (AUC) from prediction scores
   print(roc_auc_score(validation_labels,y_pred)) #this implementation is restricted to the binary classification task or multilabel classification task in label indicator format.
   target_names = ['class 0(Normal)', 'class 1(Abnormal)']
   print(classification_report(validation_labels,y_pred,target_names=target_names))
   print(confusion_matrix(validation_labels,y_pred))
   cnf_matrix = (confusion_matrix(validation_labels,y_pred))
   np.set_printoptions(precision=2)
   plt.figure()
   # Plot non-normalized confusion matrix
   plot_confusion_matrix(cnf_matrix, classes=target_names,
                      title='Confusion matrix')
   plt.show()
save_bottleneck_features()
train_top_model()

# path to the model weights files.
weights_path = '../keras/examples/vgg16_weights.h5'
top_model_weights_path = 'modela.h5'
# dimensions of our images.
img_width, img_height = 100, 100
train_data_dir = 'data4/train'
validation_data_dir = 'data4/validation'
nb_train_samples = 20
nb_validation_samples = 20
epochs = 50
batch_size = 10

# build the VGG16 network
base_model = applications.VGG16(weights='imagenet', include_top=False, input_shape=(100,100,3))
print('Model loaded.')

train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary') 
top_model = Sequential()
top_model.add(Flatten(input_shape=base_model.output_shape[1:]))
top_model.add(Dense(512, activation='relu'))
top_model.add(Dense(1, activation='softmax'))
top_model.load_weights(top_model_weights_path)
model = Model(inputs=base_model.input, outputs=top_model(base_model.output))
# set the first 15 layers (up to the last conv block)
# to non-trainable (weights will not be updated)
for layer in model.layers[:15]:  # up to the layer before the last convolution block
    layer.trainable = False
model.summary()
# fine-tune the model
model.compile(loss='binary_crossentropy', optimizer=SGD(lr=1e-4, momentum=0.99), metrics=['accuracy'])
model.fit_generator(train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size,
    verbose=1)
model.save_weights(top_model_weights_path)
bottleneck_features_validation = model.predict_generator(validation_generator, nb_validation_samples // batch_size)
np.save(open('bottleneck_features_validation','wb'), bottleneck_features_validation)
validation_data = np.load(open('bottleneck_features_validation', 'rb'))
y_pred1 = model.predict_classes(validation_data)

The problem is that the pre-trained model trains on the data, predicts the classes perfectly, and produces the confusion matrix as well. However, when I proceed to fine-tune the model, I find that model.predict_classes no longer works. Here is the error:

File "C:/Users/rajaramans2/codes/untitled12.py", line 220, in <module>
    y_pred1 = model.predict_classes(validation_data)

AttributeError: 'Model' object has no attribute 'predict_classes'

I am confused because model.predict_classes worked well with the pre-trained model but not in the fine-tuning stage. The validation data has shape (20, 1) and dtype float32. Any help would be appreciated.

Recommended answer

The predict_classes method is only available for the Sequential class (which is the class of your first model) but not for the Model class (the class of your second model).

With the Model class, you can use the predict method, which will give you a vector of probabilities, and then take the argmax of that vector (e.g. with np.argmax(y_pred1, axis=1)).
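
For reference, here is a minimal sketch of that suggestion, reusing the model and validation_data names from the question. The 0.5-threshold variant is an added note (not part of the original answer) for the single-unit output used in this top model; it mirrors what Sequential.predict_classes does for binary models:

import numpy as np  # already imported at the top of the question's code

y_prob = model.predict(validation_data)  # vector of probabilities, shape (n_samples, n_outputs)

# Multi-class output (one probability column per class): take the argmax, as suggested above
y_pred1 = np.argmax(y_prob, axis=1)

# Single-unit binary output (shape (n_samples, 1)), as in this particular top model:
# threshold at 0.5 instead; this mirrors what Sequential.predict_classes does for binary models
# y_pred1 = (y_prob > 0.5).astype('int32').ravel()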
