Saving the Hyperparameter Optimization of a Convolutional Neural Net

Problem Description

I am facing a problem with saving the hyperparameter training process of my Convolutional Neural Net. I have read a couple of blog posts, but somehow I am unable to do it.

I have the following code:

from tensorflow import keras
from tensorflow.keras.layers import (Input, Embedding, Conv1D, MaxPooling1D,
                                     Dropout, Flatten, Dense, concatenate)
from tensorflow.keras.models import Model


def ConvNet(embeddings, max_sequence_length, num_words, embedding_dim, trainable=False, extra_conv=True,
            lr=0.0001, dropout=0.7, filters=128, momentum=0.8, units=32, pool_size=3):
    embedding_layer = Embedding(num_words,
                                embedding_dim,
                                weights=[embeddings],
                                input_length=max_sequence_length,
                                trainable=trainable)

    sequence_input = Input(shape=(max_sequence_length,), dtype='int32')
    embedded_sequences = embedding_layer(sequence_input)

    # Parallel convolutional branches with different kernel sizes (Yoon Kim style).
    convs = []
    filter_sizes = [3, 4, 5]
    for filter_size in filter_sizes:
        l_conv = Conv1D(filters=filters, kernel_size=filter_size, activation='relu')(embedded_sequences)
        l_pool = MaxPooling1D(pool_size=pool_size)(l_conv)
        l_conv2 = Conv1D(filters=filters, kernel_size=3, activation='relu')(l_pool)
        l_pool2 = MaxPooling1D(pool_size=pool_size)(l_conv2)
        convs.append(l_pool2)

    l_merge = concatenate(convs, axis=1)

    # Add a 1D convnet with global maxpooling, instead of the Yoon Kim model.
    conv = Conv1D(filters=filters, kernel_size=3, activation='relu')(embedded_sequences)
    pool = MaxPooling1D(pool_size=pool_size)(conv)

    if extra_conv:
        x = Dropout(dropout)(l_merge)
    else:
        # Original Yoon Kim model.
        x = Dropout(dropout)(pool)
    x = Flatten()(x)
    x = Dense(units=units, activation='relu')(x)
    preds = Dense(1, activation='linear')(x)

    model = Model(sequence_input, preds)
    sgd = keras.optimizers.SGD(learning_rate=lr, momentum=momentum)
    # r_square_loss, rmse and r_square are custom loss/metric functions
    # defined elsewhere in the project.
    model.compile(loss=r_square_loss,
                  optimizer=sgd,
                  metrics=['mean_squared_error', rmse, r_square])

    model.summary()
    return model

I am optimizing the hyperparameters with the following function:

from hyperopt import fmin, hp, tpe, space_eval, Trials
from tensorflow.keras.callbacks import EarlyStopping


def train_and_score(args):
    # Build the model from the fixed params plus the sampled optimization args.
    # pool_size is part of the search space below, so it is passed through too.
    model = ConvNet(embeddings=train_embedding_weights, max_sequence_length=MAX_SEQUENCE_LENGTH,
                    num_words=len(train_word_index) + 1, embedding_dim=EMBEDDING_DIM,
                    trainable=False, extra_conv=True,
                    lr=args['lr'], dropout=args['dropout'], filters=args['filters'],
                    momentum=args['momentum'], units=args['units'], pool_size=args['pool_size'])
    early_stopping = EarlyStopping(monitor='mean_squared_error', patience=40, verbose=1, mode='auto')

    hist = model.fit(x_train, y_tr, epochs=args['epochs'], batch_size=args['batch_size'],
                     validation_split=0.2, shuffle=True, callbacks=[early_stopping])

    # Unpack and return the last validation loss from the history;
    # fmin minimizes this value.
    return hist.history['val_loss'][-1]

# Define the space to optimize over.
space = {
    'lr': hp.choice('lr', [0.1, 0.01, 0.001, 0.0001]),
    'dropout': hp.choice('dropout', [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]),
    'filters': hp.choice('filters', [32, 64, 128, 256]),
    'pool_size': hp.choice('pool_size', [2, 3]),
    'momentum': hp.choice('momentum', [0.4, 0.5, 0.6, 0.7, 0.8, 0.9]),
    'units': hp.choice('units', [32, 64, 128, 256]),
    'epochs': hp.choice('epochs', [20, 30, 40, 50, 60, 70]),
    'batch_size': hp.choice('batch_size', [20, 30, 40, 50, 60, 70, 80])
}

# Minimize the training score over the space.
trials = Trials()
best = fmin(fn=train_and_score,
            space=space,
            trials=trials,
            max_evals=10,
            algo=tpe.suggest)

# Print details about the best results and hyperparameters.
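# Note: with hp.choice, `best` holds the *indices* into the choice lists;
# space_eval(space, best) resolves them back to the actual values.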
print(best)
print(space_eval(space, best))

As of now, I have max_evals set to 10 to check that everything works. For the actual training process I would like to set it to 500 and let it run for a day... So here is my question: how do I save the training process? I think it would be enough to save just the best trial to a file or something, as this is a university project and I have to hand in "proof" that I trained the CNN.

Additional question: as of now, after the 10 evaluations, I take the best parameters and fill them manually into the code provided above to predict on the test set and compute some statistics such as MSE, R-squared, etc.

model = ConvNet(train_embedding_weights, MAX_SEQUENCE_LENGTH, len(train_word_index) + 1, EMBEDDING_DIM,
                trainable=False, extra_conv=True,
                lr=0.0001, dropout=0.6, filters=128,
                momentum=0.8, units=32, pool_size=2)

# Define callbacks.
early_stopping = EarlyStopping(monitor='mean_squared_error', patience=40, verbose=1, mode='auto')

hist = model.fit(x_train, y_tr, epochs=30, batch_size=20, validation_split=0.2, shuffle=False, callbacks=[early_stopping])

My dream would be to set max_evals to 500, store the results in an output file (just the best hyperparameter combination would be enough), and then automatically use the best hyperparameters found to predict on the test set and compute the statistics such as MSE, R-squared, etc.

Can anyone please help? I have been stuck here for many, many hours.

Thanks!

Recommended Answer

I do not have the exact answer to that question, but there is a "trick" here that might do it.

It proposes printing the content of every trial tested at the end of the code. Maybe you can also save the "trials" object in a pickle or something, so that you can parse it and check for yourself later. I have the exact same question, and I am very surprised that there is no "easy" solution, like the Keras callbacks for saving the best trained model.
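
Concretely, since the Trials object is an ordinary Python object, a minimal sketch (assuming the space, trials, and best variables from the code above are in scope, and hypothetical file names trials.pkl / best_params.json) would be:

import json
import pickle

from hyperopt import space_eval

# Persist the full Trials object so that every evaluation survives the run.
with open('trials.pkl', 'wb') as f:
    pickle.dump(trials, f)

# Also write the best hyperparameter combination to a human-readable file;
# space_eval resolves the hp.choice indices in `best` to actual values.
best_params = space_eval(space, best)
with open('best_params.json', 'w') as f:
    json.dump(best_params, f, indent=2)

# Later (e.g. in a separate session), reload and inspect every trial:
with open('trials.pkl', 'rb') as f:
    trials = pickle.load(f)
for trial in trials.trials:
    print(trial['result'], trial['misc']['vals'])

A pickled Trials object can also be passed back into fmin via its trials argument (with a larger max_evals) to resume an interrupted search, which is handy for a run that is supposed to last a whole day.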

I also suggest you change the title of your question, adding at least the "Hyperopt", "callbacks", and "saving model" keywords. With more attention, maybe our question will be answered :)
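
And regarding the additional question about filling in the best parameters by hand: here is a sketch of the automatic workflow (assuming ConvNet, space, best, and the training variables from the question are in scope, plus hypothetical x_test / y_test arrays for the held-out test set):

from hyperopt import space_eval

best_params = space_eval(space, best)

# Rebuild and retrain the model with the winning combination, no copy-paste.
model = ConvNet(embeddings=train_embedding_weights,
                max_sequence_length=MAX_SEQUENCE_LENGTH,
                num_words=len(train_word_index) + 1,
                embedding_dim=EMBEDDING_DIM,
                trainable=False, extra_conv=True,
                lr=best_params['lr'], dropout=best_params['dropout'],
                filters=best_params['filters'], momentum=best_params['momentum'],
                units=best_params['units'], pool_size=best_params['pool_size'])

model.fit(x_train, y_tr,
          epochs=best_params['epochs'], batch_size=best_params['batch_size'],
          validation_split=0.2, shuffle=True)

# Evaluate on the test set; this returns the compiled loss and metrics
# (mean squared error, rmse, r_square).
print(model.evaluate(x_test, y_test))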
