How is the Keras IMDB dataset preprocessed?

Question

I'm working on a sentiment analysis problem and have a dataset that is very similar to the Keras imdb dataset. When I load Keras's imdb dataset, it returns sequences of word indices.

from keras.datasets import imdb

(X_train, y_train), (X_test, y_test) = imdb.load_data()
X_train[0]
[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 22665, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 21631, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 19193, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 10311, 8, 4, 107, 117, 5952, 15, 256, 4, 31050, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 12118, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]

But I want to understand how this sequence is constructed. For my own dataset I used CountVectorizer with ngram_range=(1,2) to tokenize the words, but I would like to try to replicate the Keras approach.
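For reference, a minimal sketch of the CountVectorizer setup mentioned above (the two-review corpus is made up for illustration; scikit-learn 1.0+ is assumed for get_feature_names_out):

from sklearn.feature_extraction.text import CountVectorizer

corpus = ["this movie was great", "this movie was terrible"]
vectorizer = CountVectorizer(ngram_range=(1, 2))  # unigrams and bigrams
X = vectorizer.fit_transform(corpus)              # sparse document-term count matrix
print(vectorizer.get_feature_names_out())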

Solution

Each word in the imdb dataset is replaced with an integer that represents how frequently it occurs in the dataset: the index is the word's frequency rank, so smaller indices correspond to more frequent words. When you call the load_data function for the first time, it downloads the dataset.
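As a quick way to see this frequency ranking, Keras exposes the raw word-to-rank mapping through imdb.get_word_index(); for example (exact ranks depend on the dataset, but 'the' is the most frequent word):

from keras.datasets import imdb

word_index = imdb.get_word_index()  # maps each word to its frequency rank
print(word_index['the'])            # 1 -- the most frequent word in the corpus
print(word_index['movie'])          # a small rank, since 'movie' is very common here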

To see how the values are calculated, let's take a snippet from the source code (a link is provided at the end):

idx = len(x_train)  # length of the original training list: the split point in xs
x_train, y_train = np.array(xs[:idx]), np.array(labels[:idx])
x_test, y_test = np.array(xs[idx:]), np.array(labels[idx:])

In other words, x_train is a numpy array built from the first idx entries of the list xs, where idx is the length of the original training split;

xs is the list formed from all the reviews in x_train and x_test: each item (movie review) is taken from the dataset, and its words are stored as their frequency ranks. Each word's index is then shifted by index_from, which specifies the actual index to start from (defaults to 3), and the start character (1 by default) is prepended to every review; the values therefore never start at 0, since index 0 is reserved for padding with zeros.
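Given these offsets, a loaded review can be decoded back into words by reversing the shift. A minimal sketch using the load_data defaults (the <PAD>/<START>/<UNK> labels are illustrative placeholders):

from keras.datasets import imdb

(X_train, y_train), _ = imdb.load_data()
word_index = imdb.get_word_index()  # word -> frequency rank

# Reverse the index_from shift; 0, 1 and 2 are reserved for padding,
# the start character and out-of-vocabulary words respectively.
index_to_word = {rank + 3: word for word, rank in word_index.items()}
index_to_word.update({0: '<PAD>', 1: '<START>', 2: '<UNK>'})

print(' '.join(index_to_word.get(i, '<UNK>') for i in X_train[0]))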

The numpy arrays x_train, y_train, x_test and y_test are formed in this manner and returned by the load_data function.
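To replicate this encoding on your own dataset, you can rank words by corpus frequency and apply the same offsets. A rough sketch under those assumptions (build_keras_style_sequences is a hypothetical helper, and plain whitespace splitting stands in for real tokenization):

from collections import Counter

def build_keras_style_sequences(reviews, start_char=1, index_from=3):
    # Tokenize, count word frequencies, and rank them: rank 1 is the
    # most frequent word, mirroring Keras's word_index.
    tokenized = [review.lower().split() for review in reviews]
    counts = Counter(word for review in tokenized for word in review)
    word_index = {word: rank for rank, (word, _) in
                  enumerate(counts.most_common(), start=1)}
    # Shift every rank by index_from and prepend the start character,
    # just as imdb.load_data does.
    sequences = [[start_char] + [word_index[word] + index_from for word in review]
                 for review in tokenized]
    return sequences, word_index

sequences, word_index = build_keras_style_sequences(
    ["this movie was great great", "this movie was bad"])
print(sequences)  # [[1, 4, 5, 6, 7, 7], [1, 4, 5, 6, 8]]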

The source code is available here.

https://github.com/keras-team/keras/blob/master/keras/datasets/imdb.py
