repartition a dense matrix in pyspark

Question

I have a dense matrix (100*100) in PySpark, and I want to repartition it into ten groups, each containing 10 rows.

from pyspark import SparkContext
from pyspark.mllib.linalg import Matrices
from pyspark.mllib.random import RandomRDDs

sc = SparkContext("local", "Simple App")

# 100x100 dense matrix backed by 10,000 uniform random values
dm2 = Matrices.dense(100, 100, RandomRDDs.uniformRDD(sc, 10000).collect())
newRdd = sc.parallelize(dm2.toArray())
rerdd = newRdd.repartition(10)

The above code results in rerdd containing 100 elements. I want to present this matrix dm2 as row-wise partitioned blocks (e.g., 10 rows per partition).

Answer

It doesn't make much sense, but you can, for example, do something like this:

import numpy as np

mat = Matrices.dense(100, 100, np.arange(10000))

n_par = 10
n_row = 100

rdd = (sc
    .parallelize(
        # Add row indices
        enumerate(
            # Extract and reshape values into n_row rows
            mat.values.reshape(n_row, -1)))
    # Partition by block and sort by row index within each partition
    .repartitionAndSortWithinPartitions(n_par, lambda i: i // n_par))
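
A note on this sketch: enumerate attaches a row index i to every row, and the partitioner lambda i: i // n_par sends rows 0-9 to partition 0, rows 10-19 to partition 1, and so on. This only lines up because the rows per block (n_row // n_par = 10) happens to equal n_par here; for other shapes you would partition by i // (n_row // n_par). Also note that DenseMatrix stores its values column-major, so mat.values.reshape(n_row, -1) actually yields the matrix's columns as rows; for this square demo it doesn't affect the partitioning logic, but use mat.toArray() if you need the true rows.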

Check the number of partitions and the number of rows per partition:

rdd.glom().map(len).collect()
## [10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
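
As an extra sanity check (rdd.getNumPartitions() is part of the standard RDD API), you can also assert the partition count directly:

assert rdd.getNumPartitions() == n_par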

Check that the first row contains the desired data:

assert np.all(rdd.first()[1] == np.arange(100))
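
Finally, a minimal sketch (assuming the rdd built above) of collapsing each partition into a single 10x100 NumPy block; since repartitionAndSortWithinPartitions already sorted each partition by row index, vstack preserves row order:

blocks = (rdd
    .glom()  # one list of (index, row) pairs per partition
    .map(lambda part: np.vstack([row for _, row in part])))

assert blocks.count() == n_par
assert blocks.first().shape == (10, 100)  # n_row // n_par rows of length 100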
