keras predict memory swap increases indefinitely
Question
I implemented a classification program using Keras. I have a large set of images and I would like to predict each image in a for loop.
However, the swap memory grows every time a new image is processed. I tried deleting all the variables inside the predict function (and I am sure the problem lies inside this function), but the memory still increases.
for img in images:
    predict(img, model, categ_par, gl_par)
And the corresponding function:
def predict(image_path, model, categ_par, gl_par):
    print("[INFO] loading and preprocessing image...")

    orig = cv2.imread(image_path)

    image = load_img(image_path, target_size=(gl_par.img_width, gl_par.img_height))
    image = img_to_array(image)

    # important! otherwise the predictions will be '0'
    image = image / 255

    image = np.expand_dims(image, axis=0)

    # build the VGG16 network
    if (categ_par.method == 'VGG16'):
        model = applications.VGG16(include_top=False, weights='imagenet')

    if (categ_par.method == 'InceptionV3'):
        model = applications.InceptionV3(include_top=False, weights='imagenet')

    # get the bottleneck prediction from the pre-trained VGG16 model
    bottleneck_prediction = model.predict(image)

    # build top model
    model = Sequential()
    model.add(Flatten(input_shape=bottleneck_prediction.shape[1:]))
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(categ_par.n_class, activation='softmax'))

    model.load_weights(categ_par.top_model_weights_path)

    # use the bottleneck prediction on the top model to get the final classification
    class_predicted = model.predict_classes(bottleneck_prediction)
    probability_predicted = (model.predict_proba(bottleneck_prediction))

    classe = pd.DataFrame(list(zip(categ_par.class_indices.keys(), list(probability_predicted[0])))).\
        rename(columns={0: 'type', 1: 'prob'}).reset_index(drop=True)
    # print(classe)

    del model
    del bottleneck_prediction
    del image
    del orig
    del class_predicted
    del probability_predicted

    return classe.set_index(['type']).T
Answer
If you are using the TensorFlow backend, you will be building a model for each img in the for loop. TensorFlow just keeps appending graph onto graph, which means memory keeps rising. This is a well-known occurrence and must be dealt with during hyperparameter optimization, when you build many models, but also here.
Add the following import:

from keras import backend as K
and put this at the end of predict():
K.clear_session()
Or you can just build one model and feed it as input to the predict function, so you are not building a new one each time.