Convert Sparse Vector to Dense Vector in Pyspark

This article describes how to convert a sparse vector to a dense vector in Pyspark.

Problem Description


I have a sparse vector like this

>>> countVectors.rdd.map(lambda vector: vector[1]).collect()
[SparseVector(13, {0: 1.0, 2: 1.0, 3: 1.0, 6: 1.0, 8: 1.0, 9: 1.0, 10: 1.0, 12: 1.0}), SparseVector(13, {0: 1.0, 1: 1.0, 2: 1.0, 4: 1.0}), SparseVector(13, {0: 1.0, 1: 1.0, 3: 1.0, 4: 1.0, 7: 1.0}), SparseVector(13, {1: 1.0, 2: 1.0, 5: 1.0, 11: 1.0})]

I am trying to convert this into a dense vector in pyspark 2.0.0 like this:

>>> frequencyVectors = countVectors.rdd.map(lambda vector: vector[1])
>>> frequencyVectors.map(lambda vector: Vectors.dense(vector)).collect()

I am getting an error like this:

16/12/26 14:03:35 ERROR Executor: Exception in task 0.0 in stage 13.0 (TID 13)
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/opt/BIG-DATA/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 172, in main
    process()
  File "/opt/BIG-DATA/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 167, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/opt/BIG-DATA/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream
    vs = list(itertools.islice(iterator, batch))
  File "<stdin>", line 1, in <lambda>
  File "/opt/BIG-DATA/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/mllib/linalg/__init__.py", line 878, in dense
    return DenseVector(elements)
  File "/opt/BIG-DATA/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/mllib/linalg/__init__.py", line 286, in __init__
    ar = np.array(ar, dtype=np.float64)
  File "/opt/BIG-DATA/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/ml/linalg/__init__.py", line 701, in __getitem__
    raise ValueError("Index %d out of bounds." % index)
ValueError: Index 13 out of bounds.

How can I achieve this conversion? Is there anything wrong here?

Solution

This resolved my issue

frequencyDenseVectors = frequencyVectors.map(lambda vector: DenseVector(vector.toArray()))
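
For reference, here is a minimal, self-contained sketch of the same fix. The input data is built by hand with pyspark.ml.linalg.SparseVector instead of coming from CountVectorizer, the SparkSession setup is assumed, and the variable names simply mirror the question:

from pyspark.sql import SparkSession
from pyspark.ml.linalg import SparseVector, DenseVector

spark = SparkSession.builder.appName("sparse-to-dense").getOrCreate()

# Hand-built stand-in for the CountVectorizer output: (id, SparseVector) rows.
rows = [
    (0, SparseVector(13, {0: 1.0, 2: 1.0, 3: 1.0, 6: 1.0})),
    (1, SparseVector(13, {0: 1.0, 1: 1.0, 2: 1.0, 4: 1.0})),
]
countVectors = spark.createDataFrame(rows, ["id", "features"])

# Pull out the vector column, then turn each SparseVector into a DenseVector
# by materializing it as a NumPy array first; DenseVector accepts that array directly.
frequencyVectors = countVectors.rdd.map(lambda row: row[1])
frequencyDenseVectors = frequencyVectors.map(lambda v: DenseVector(v.toArray()))

print(frequencyDenseVectors.collect())

Going through toArray() avoids iterating the vector element by element. Judging from the traceback, which mixes pyspark/mllib/linalg and pyspark/ml/linalg frames, the original error most likely comes from passing a pyspark.ml.linalg.SparseVector to the pyspark.mllib.linalg Vectors.dense helper; keeping the imports consistent, or converting via toArray() as above, sidesteps that.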

