How to build a sparse matrix in PySpark?
Question
I am new to Spark. I would like to build a sparse user-id × item-id matrix, specifically for a recommendation engine. I know how I would do this in plain Python with NumPy; how does one do it in PySpark? The table currently looks like this:
Session ID | Item ID | Rating
-----------|---------|-------
     1     |    2    |   1
     1     |    3    |   5
import numpy as np

# Pull the (session_id, item_id, rating) triples out of the DataFrame
data = df[['session_id', 'item_id', 'rating']].values

# Map the raw ids to contiguous zero-based row/column indices
rows, row_pos = np.unique(data[:, 0], return_inverse=True)
cols, col_pos = np.unique(data[:, 1], return_inverse=True)

# Build a dense pivot table and fill in the ratings
pivot_table = np.zeros((len(rows), len(cols)), dtype=data.dtype)
pivot_table[row_pos, col_pos] = data[:, 2]
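Note that the NumPy snippet above actually builds a *dense* pivot table. A genuinely sparse local version can be sketched with SciPy's `csr_matrix` (an assumption here: SciPy is available; the hard-coded triples stand in for the DataFrame rows from the table above):

```python
import numpy as np
from scipy.sparse import csr_matrix

# Same (session_id, item_id, rating) triples as in the table above
data = np.array([[1, 2, 1],
                 [1, 3, 5]])

# Map raw ids to contiguous row/column indices, as np.unique does above
rows, row_pos = np.unique(data[:, 0], return_inverse=True)
cols, col_pos = np.unique(data[:, 1], return_inverse=True)

# Build a compressed-sparse-row matrix instead of a dense pivot table
sparse_pivot = csr_matrix((data[:, 2], (row_pos, col_pos)),
                          shape=(len(rows), len(cols)))
```

With the two sample rows this yields a 1×2 matrix holding the ratings 1 and 5, since both rows share session id 1.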
Answer
Something like this:
from pyspark.mllib.linalg.distributed import CoordinateMatrix, MatrixEntry

# Create an RDD of (row, col, value) triples
coordinates = sc.parallelize([(1, 2, 1), (1, 3, 5)])

# Wrap each triple in a MatrixEntry to get a distributed sparse matrix
matrix = CoordinateMatrix(coordinates.map(lambda coords: MatrixEntry(*coords)))