How to get word details from TF Vector RDD in Spark ML Lib
Question
I have created term frequencies using HashingTF in Spark, and I got the term frequency for each word using tf.transform.
But the results are shown in this format:
[<hashIndexofHashBucketofWord1>,<hashIndexofHashBucketofWord2> ...]
,[termFrequencyofWord1, termFrequencyOfWord2 ....]
For example:
(1048576,[105,3116],[1.0,2.0])
(1048576,[105,3116],[1.0,2.0])
I am able to get the index of the hash bucket using tf.indexOf("word"). But how can I get the word back from the index?
Answer
Well, you can't. Since hashing is non-injective, there is no inverse function. In other words, an infinite number of tokens can map to a single bucket, so it is impossible to tell which one is actually there.
If you're using a large hash space and the number of unique tokens is relatively low, you can try to build a lookup table from bucket index to possible tokens from your dataset. It is a one-to-many mapping, but if the above conditions are met, the number of conflicts should be relatively low.
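A minimal sketch of such a lookup table in plain Python, assuming you can iterate over all tokens in your corpus (the modular hash here is illustrative; in practice you would use the same hash as your HashingTF instance, e.g. via tf.indexOf):

```python
from collections import defaultdict

def build_bucket_lookup(corpus_tokens, num_features=1048576):
    """Map each bucket index back to the set of tokens that hash into it.
    One-to-many in general, but with a large hash space and few unique
    tokens, most buckets will hold a single candidate."""
    lookup = defaultdict(set)
    for token in corpus_tokens:
        lookup[hash(token) % num_features].add(token)
    return lookup

lookup = build_bucket_lookup(["foo", "bar", "foo", "baz"])
# Each observed bucket index now maps to its candidate token(s).
```

Reversing a vector index then becomes a dictionary lookup, with ambiguity only where a collision actually occurred in your data.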
If you need a reversible transformation, you can combine Tokenizer and StringIndexer and build a sparse feature vector manually.
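The idea can be sketched in plain Python: assign indices by descending frequency (mimicking StringIndexer's ordering) and emit a sparse (size, indices, values) vector. The function names here are illustrative, not Spark API:

```python
from collections import Counter

def fit_vocabulary(docs):
    """Assign each token an index by descending frequency,
    mimicking StringIndexer's frequency-based ordering."""
    freq = Counter(t for doc in docs for t in doc)
    return {tok: i for i, (tok, _) in enumerate(freq.most_common())}

def to_sparse(doc, vocab):
    """Build a sparse (size, indices, values) term-frequency vector."""
    counts = Counter(vocab[t] for t in doc if t in vocab)
    indices = sorted(counts)
    return len(vocab), indices, [float(counts[i]) for i in indices]

docs = [["foo", "bar"], ["foo", "foobar", "baz"]]
vocab = fit_vocabulary(docs)
inverse = {i: tok for tok, i in vocab.items()}  # reversible by construction
size, indices, values = to_sparse(docs[1], vocab)
```

Because every token gets its own index, the inverse mapping is exact, unlike with hashing.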
See also: What hashing function does Spark HashingTF use, and how can I duplicate it?
Edit
In Spark 1.5+ (PySpark 1.6+) you can use CountVectorizer, which applies a reversible transformation and stores the vocabulary.
Python:
from pyspark.ml.feature import CountVectorizer

df = sc.parallelize([
    (1, ["foo", "bar"]), (2, ["foo", "foobar", "baz"])
]).toDF(["id", "tokens"])

vectorizer = CountVectorizer(inputCol="tokens", outputCol="features").fit(df)
vectorizer.vocabulary
## ('foo', 'baz', 'bar', 'foobar')
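With the vocabulary stored, mapping a feature index back to its word is just a list lookup, since a word's position in the vocabulary is its feature index. A plain-Python sketch (the vocabulary ordering below is only illustrative):

```python
# CountVectorizerModel.vocabulary is ordered: position == feature index,
# so the word <-> index mapping is reversible in both directions.
vocabulary = ["foo", "baz", "bar", "foobar"]  # e.g. from vectorizer.vocabulary

index_of = {word: i for i, word in enumerate(vocabulary)}
assert vocabulary[index_of["bar"]] == "bar"  # word -> index -> word round-trip
```

This is exactly the inverse lookup that HashingTF cannot provide.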
Scala:
import org.apache.spark.ml.feature.{CountVectorizer, CountVectorizerModel}
val df = sc.parallelize(Seq(
(1, Seq("foo", "bar")), (2, Seq("foo", "foobar", "baz"))
)).toDF("id", "tokens")
val model: CountVectorizerModel = new CountVectorizer()
.setInputCol("tokens")
.setOutputCol("features")
.fit(df)
model.vocabulary
// Array[String] = Array(foo, baz, bar, foobar)