Spark: how can I evenly distribute my records across all partitions?
Problem description
I have an RDD with 30 records (key/value pairs: the key is a timestamp and the value is a JPEG byte array),
and I am running 30 executors. I want to repartition this RDD into 30 partitions so that every partition gets exactly one record and is assigned to one executor.
When I use rdd.repartition(30),
it repartitions my RDD into 30 partitions, but some partitions get 2 records, some get 1 record, and some get no records at all.
Is there any way in Spark to evenly distribute my records across all partitions?
Recommended answer
The salting technique can be used: add a new "fake" key component and use it alongside the current key so the data distributes more evenly across partitions.
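As a minimal sketch of the idea (plain Python, no Spark; the function name and sample data are hypothetical), salting pairs each record with a sequential "fake" key component, so bucketing on that component spreads the records exactly evenly:

```python
def salt_and_partition(records, num_partitions):
    """Pair each (key, value) with a sequential salt and bucket records by
    salt % num_partitions, giving a perfectly even spread."""
    partitions = [[] for _ in range(num_partitions)]
    for salt, (key, value) in enumerate(records):
        # The salted key keeps the real key but adds the fake component.
        salted_key = (salt % num_partitions, key)
        partitions[salt % num_partitions].append((salted_key, value))
    return partitions

# 30 timestamp-keyed records into 30 partitions: one record per partition.
records = [(f"2024-01-01T00:00:{i:02d}", b"jpeg-bytes") for i in range(30)]
parts = salt_and_partition(records, 30)
```

In Spark itself the same idea could be expressed by generating the salt with `rdd.zipWithIndex()` and then calling `partitionBy` on the salted key; the sketch above only illustrates the distribution logic.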