How to reshape text data to be suitable for an LSTM model in Keras


Problem description

Update 1:

The code I'm referring to is exactly the code in the book, which you can find here.

The only thing is that I don't want to have embed_size in the decoder part. That's why I think I don't need an embedding layer at all, because if I use an embedding layer I need to have embed_size in the decoder part (please correct me if I'm wrong).

Overall, I'm trying to adapt the same code without using the embedding layer, because I need to have vocab_size in the decoder part.

I think the suggestion provided in the comments could be correct (using one_hot_encoding); however, I faced this error:

When I do the one_hot_encoding:

tf.keras.backend.one_hot(indices=sent_wids, classes=vocab_size)

I get this error:

in check_num_samples you should specify the + steps_name + argument ValueError: If your data is in the form of symbolic tensors, you should specify the steps_per_epoch argument (instead of the batch_size argument, because symbolic tensors are expected to produce batches of input data)

The way that I have prepared the data is like this:

The shape of sent_lens is (87716, 200) and I want to reshape it in a way that I can feed it into the LSTM. Here, 200 stands for SEQUENCE_LEN and 87716 is the number of samples I have.
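For reference, a Keras LSTM layer consumes 3D input of shape (samples, timesteps, features), so a 2D matrix of integer word ids still needs a per-word feature dimension before it can be fed in. A minimal sketch with toy sizes (the names and values below are illustrative, not taken from the question):

import numpy as np

SEQUENCE_LEN = 200            # timesteps per sample, as in the question
TOY_VOCAB_SIZE = 50           # illustrative vocabulary size

# Integer word ids of shape (samples, SEQUENCE_LEN) -- 2D, like sent_lens
word_ids = np.random.randint(0, TOY_VOCAB_SIZE, size=(4, SEQUENCE_LEN))
print(word_ids.shape)         # (4, 200): not yet LSTM-ready

# The LSTM expects (samples, SEQUENCE_LEN, features); each word id has to be
# expanded into a vector (one-hot or embedding) before it matches an input
# layer declared as Input(shape=(SEQUENCE_LEN, VOCAB_SIZE)).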

Below is the code for the LSTM autoencoder:

from tensorflow.keras.layers import Input, LSTM, Bidirectional, RepeatVector
from tensorflow.keras.models import Model

# Encoder: one-hot sequences of shape (SEQUENCE_LEN, VOCAB_SIZE) -> latent vector
inputs = Input(shape=(SEQUENCE_LEN, VOCAB_SIZE), name="input")
encoded = Bidirectional(LSTM(LATENT_SIZE), merge_mode="sum", name="encoder_lstm")(inputs)

# Decoder: repeat the latent vector and unroll it back into a sequence
decoded = RepeatVector(SEQUENCE_LEN, name="repeater")(encoded)
decoded = LSTM(VOCAB_SIZE, return_sequences=True)(decoded)

autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer="sgd", loss="mse")
autoencoder.summary()

# Train the autoencoder to reconstruct its own input
history = autoencoder.fit(Xtrain, Xtrain, batch_size=BATCH_SIZE, epochs=NUM_EPOCHS)

Do I still need to do anything extra? If not, why can't I get this to work?

Please let me know which part is not clear and I will explain.

Thanks for your help :)

Recommended answer

So, as said in the comments, it turns out I just needed to do the one_hot_encoding.

When I did the one-hot encoding using tf.keras.backend, it threw the error that I have updated in my question.

Then I tried to_categorical(sent_wids, num_classes=VOCAB_SIZE) and it fixed it (however, I then ran into a memory error :D, which is a different story)!
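For context, a minimal sketch of that fix (toy sizes; sent_wids below is a stand-in for the real array): unlike the tf.keras.backend call, which produced the symbolic-tensor error above, to_categorical returns a plain NumPy array of shape (samples, SEQUENCE_LEN, VOCAB_SIZE), which matches the model's Input and can be passed straight to fit:

import numpy as np
from tensorflow.keras.utils import to_categorical

SEQUENCE_LEN = 200
VOCAB_SIZE = 50                      # toy value for illustration

# Stand-in for sent_wids: integer word ids of shape (samples, SEQUENCE_LEN)
sent_wids = np.random.randint(0, VOCAB_SIZE, size=(4, SEQUENCE_LEN))

Xtrain = to_categorical(sent_wids, num_classes=VOCAB_SIZE)
print(Xtrain.shape)                  # (4, 200, 50): (samples, SEQUENCE_LEN, VOCAB_SIZE)

With 87716 samples and a realistic vocabulary this array becomes very large, which is presumably where the memory error comes from.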

I should also mention that I tried sparse_categorical_crossentropy instead of the one_hot_encoding, though it did not work!
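For reference only (this is a sketch of the usual setup for that loss, not the poster's code): sparse_categorical_crossentropy replaces one-hot targets, not one-hot inputs; it expects integer word ids as targets and a model that outputs a softmax over the vocabulary at each timestep, which the mse autoencoder above does not, and that mismatch may be why it did not help here. Assuming the same constants as above, roughly:

from tensorflow.keras.layers import (Input, LSTM, Bidirectional, RepeatVector,
                                     TimeDistributed, Dense)
from tensorflow.keras.models import Model

# Inputs still need a feature dimension per timestep (one-hot or embedded);
# only the targets can stay as integer ids with this loss.
inputs = Input(shape=(SEQUENCE_LEN, VOCAB_SIZE))
encoded = Bidirectional(LSTM(LATENT_SIZE), merge_mode="sum")(inputs)
decoded = RepeatVector(SEQUENCE_LEN)(encoded)
decoded = LSTM(LATENT_SIZE, return_sequences=True)(decoded)
# Per-timestep softmax over the vocabulary
decoded = TimeDistributed(Dense(VOCAB_SIZE, activation="softmax"))(decoded)

seq_model = Model(inputs, decoded)
seq_model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")
# seq_model.fit(Xtrain, sent_wids, batch_size=BATCH_SIZE, epochs=NUM_EPOCHS)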

Thanks for all your help :)
