How to get word details from TF Vector RDD in Spark ML Lib?

Question

I have created term frequencies using HashingTF in Spark, and I obtained the term frequency of each word using tf.transform.

But the results are shown in this format:

[<hashIndexOfHashBucketOfWord1>, <hashIndexOfHashBucketOfWord2>, ...],
[termFrequencyOfWord1, termFrequencyOfWord2, ...]

For example:

(1048576,[105,3116],[1.0,2.0])
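
For reference, a minimal sketch of how such a vector is typically produced, assuming documents is an RDD of token lists (the sample data here is illustrative):

from pyspark.mllib.feature import HashingTF

# Hypothetical input: an RDD where each element is a list of tokens
documents = sc.parallelize([["foo", "bar"], ["foo", "foobar", "baz"]])

tf = HashingTF()  # the default numFeatures is 2^20 = 1048576
tf_vectors = tf.transform(documents)
tf_vectors.first()
# A SparseVector of the form (1048576, [<bucket indices>], [<term frequencies>])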

I am able to get a word's index in the hash bucket using tf.indexOf("word").

But how can I get the word back from the index?

Answer

Well, you can't. Since hashing is non-injective, there is no inverse function. In other words, an infinite number of tokens can map to a single bucket, so it is impossible to tell which one is actually there.
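
A quick way to see the collisions for yourself (an illustrative sketch, using a deliberately tiny bucket count; the word list is arbitrary):

from collections import defaultdict
from pyspark.mllib.feature import HashingTF

tf = HashingTF(numFeatures=10)  # deliberately tiny to make collisions obvious

buckets = defaultdict(list)
for word in ["foo", "bar", "baz", "foobar", "spark", "hash", "lorem", "ipsum"]:
    buckets[tf.indexOf(word)].append(word)

# With only 10 buckets, some buckets hold several words; given only the
# bucket index, there is no way to tell which word produced it.
print({index: words for index, words in buckets.items() if len(words) > 1})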

If you're using a large hash and the number of unique tokens is relatively low, you can try to build a lookup table from each bucket to the possible tokens in your dataset. It is a one-to-many mapping, but if the above conditions are met, the number of conflicts should be relatively low.
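
A rough sketch of such a lookup table, assuming documents is the same RDD of token lists that was fed to HashingTF:

from pyspark.mllib.feature import HashingTF

tf = HashingTF()  # must use the same numFeatures as the original transform
documents = sc.parallelize([["foo", "bar"], ["foo", "foobar", "baz"]])

# Map each distinct token to its bucket, then group: bucket -> candidate tokens
lookup = (documents
    .flatMap(lambda tokens: tokens)
    .distinct()
    .map(lambda token: (tf.indexOf(token), token))
    .groupByKey()
    .mapValues(list)
    .collectAsMap())

# lookup[i] lists every token from the dataset that hashes to bucket i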

If you need a reversible transformation, you can combine Tokenizer with StringIndexer and build a sparse feature vector manually.
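
One possible sketch of that approach (illustrative only; the column names and the to_sparse helper are assumptions, and it relies on StringIndexerModel.labels to recover the token-to-index mapping):

from collections import Counter

from pyspark.ml.feature import Tokenizer, StringIndexer
from pyspark.mllib.linalg import Vectors
from pyspark.sql.functions import explode

df = sc.parallelize([
    (1, "foo bar"), (2, "foo foobar baz")
]).toDF(["id", "text"])

tokenized = Tokenizer(inputCol="text", outputCol="tokens").transform(df)

# Fit a StringIndexer on the individual tokens; its labels give a
# reversible token <-> index mapping
tokens = tokenized.select(explode("tokens").alias("token"))
indexer = StringIndexer(inputCol="token", outputCol="index").fit(tokens)
token_to_index = {t: i for i, t in enumerate(indexer.labels)}
vocab_size = len(indexer.labels)

def to_sparse(ts):
    # Count each token and emit a sparse vector over the indexed vocabulary
    counts = Counter(token_to_index[t] for t in ts)
    return Vectors.sparse(vocab_size, sorted(counts.items()))

vectors = tokenized.rdd.map(lambda row: (row.id, to_sparse(row.tokens)))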

See also: What hashing function does Spark use for HashingTF, and how do I duplicate it?

Edit:

In Spark 1.5+ (PySpark 1.6+) you can use CountVectorizer, which applies a reversible transformation and stores the vocabulary.

Python:

from pyspark.ml.feature import CountVectorizer

df = sc.parallelize([
    (1, ["foo", "bar"]), (2, ["foo", "foobar", "baz"])
]).toDF(["id", "tokens"])

vectorizer = CountVectorizer(inputCol="tokens", outputCol="features").fit(df)
vectorizer.vocabulary
## ['foo', 'baz', 'bar', 'foobar']

Scala:

import org.apache.spark.ml.feature.{CountVectorizer, CountVectorizerModel}

val df = sc.parallelize(Seq(
    (1, Seq("foo", "bar")), (2, Seq("foo", "foobar", "baz"))
)).toDF("id", "tokens")

val model: CountVectorizerModel = new CountVectorizer()
  .setInputCol("tokens")
  .setOutputCol("features")
  .fit(df)

model.vocabulary
// Array[String] = Array(foo, baz, bar, foobar)

where the element at the 0th position corresponds to index 0, the element at the 1st position to index 1, and so on.
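
For example, a hypothetical decode helper can map a feature vector back to (word, count) pairs through the stored vocabulary (using the Python vectorizer and df defined above):

# Hypothetical helper: map sparse feature indices back to words
def decode(features, vocabulary):
    return [(vocabulary[int(i)], v)
            for i, v in zip(features.indices, features.values)]

row = vectorizer.transform(df).first()
decode(row.features, vectorizer.vocabulary)
# [(word, count), ...] for every non-zero entry in the vector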
