Restore original text from Keras’s imdb dataset


Problem Description


I want to restore the original IMDB review text from Keras’s imdb dataset.

First, when I load Keras’s imdb dataset, it returns sequences of word indices.

>>> (X_train, y_train), (X_test, y_test) = imdb.load_data()
>>> X_train[0]
[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 22665, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 21631, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 19193, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 10311, 8, 4, 107, 117, 5952, 15, 256, 4, 31050, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 12118, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]

I found the imdb.get_word_index() method; it returns a word-to-index dictionary like {‘create’: 984, ‘make’: 94, …}. For converting back, I created an index-to-word dictionary.

>>> word_index = imdb.get_word_index()
>>> index_word = {v:k for k,v in word_index.items()}

Then, I tried to restore the original text as follows.

>>> ' '.join(index_word.get(w) for w in X_train[5])
"the effort still been that usually makes for of finished sucking ended cbc's an because before if just though something know novel female i i slowly lot of above freshened with connect in of script their that out end his deceptively i i"

I’m not good at English, but I can tell this sentence is strange.

Why does this happen? How can I restore the original text?

Solution

Your example is coming out as gibberish; it's much worse than just some missing stop words.

If you re-read the docs for the start_char, oov_char, and index_from parameters of the [keras.datasets.imdb.load_data](https://keras.io/datasets/#imdb-movie-reviews-sentiment-classification) method, they explain what is happening:

start_char: int. The start of a sequence will be marked with this character. Set to 1 because 0 is usually the padding character.

oov_char: int. words that were cut out because of the num_words or skip_top limit will be replaced with this character.

index_from: int. Index actual words with this index and higher.
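
With those defaults, a bare load_data() call is equivalent to spelling the three parameters out. A minimal sketch to make them visible, using only keyword arguments that exist in keras.datasets.imdb.load_data:

import keras

# the documented defaults, written out explicitly
(train_x, train_y), (test_x, test_y) = keras.datasets.imdb.load_data(
    start_char=1,   # every encoded review begins with index 1
    oov_char=2,     # words cut by num_words/skip_top become index 2
    index_from=3)   # real word indices are shifted up by 3, so rank 1 becomes 4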

That dictionary you inverted assumes the word indices start from 1.

But the indices returned by my keras have <START> and <UNKNOWN> as indices 1 and 2. (And it assumes you will use 0 for <PADDING>.)
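
You can check the mismatch directly. A quick sketch, assuming the standard imdb.npz index, in which 'the' is the most frequent word:

import keras

word_index = keras.datasets.imdb.get_word_index()
print(word_index['the'])   # 1 -- raw rank of the most frequent word

(X_train, _), _ = keras.datasets.imdb.load_data()
print(X_train[0][0])       # 1 -- but in the data, 1 is the <START> marker
# 'the' itself is stored shifted as 1 + index_from = 4 throughout the reviews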

This works for me:

import keras

NUM_WORDS = 1000  # only use the top 1000 words
INDEX_FROM = 3    # word index offset applied by load_data

train, test = keras.datasets.imdb.load_data(num_words=NUM_WORDS, index_from=INDEX_FROM)
train_x, train_y = train
test_x, test_y = test

# get_word_index() numbers words from 1, but load_data shifted every index
# by INDEX_FROM, so shift the dictionary the same way and claim the freed-up
# low indices for the special markers.
word_to_id = keras.datasets.imdb.get_word_index()
word_to_id = {k: (v + INDEX_FROM) for k, v in word_to_id.items()}
word_to_id["<PAD>"] = 0
word_to_id["<START>"] = 1
word_to_id["<UNK>"] = 2
word_to_id["<UNUSED>"] = 3

id_to_word = {value: key for key, value in word_to_id.items()}
print(' '.join(id_to_word[i] for i in train_x[0]))

The punctuation is missing, but that's all:

"<START> this film was just brilliant casting <UNK> <UNK> story
 direction <UNK> really <UNK> the part they played and you could just
 imagine being there robert <UNK> is an amazing actor ..."
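
If you decode reviews often, it is convenient to wrap the join in a tiny helper (decode_review is my own function, not part of Keras); using .get makes it fall back to <UNK> for any id missing from the mapping:

def decode_review(ids, id_to_word):
    # map each id back to its word, defaulting to <UNK> for unknown ids
    return ' '.join(id_to_word.get(i, '<UNK>') for i in ids)

print(decode_review(test_x[0], id_to_word))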
