Does dataFrameWriter partitionBy shuffle the data?


Question

I have data partitioned one way, and I just want to partition it another way. So it will basically be something like this:

sqlContext.read().parquet("...").write().partitionBy("...").parquet("...")

I wonder whether this will trigger a shuffle, or whether all the data will be repartitioned locally, because in this context a partition just means a directory in HDFS, and data from the same partition doesn't have to be on the same node to be written into the same directory in HDFS.

Answer

Neither partitionBy nor bucketBy shuffles the data. There are cases, though, when repartitioning the data first can be a good idea:

df.repartition(...).write.partitionBy(...)

Otherwise, the number of output files is bounded by the number of partitions multiplied by the cardinality of the partitioning column.
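To make that concrete, here is a minimal Scala sketch of the two write patterns. The SparkSession setup, the paths, and the column name "country" are hypothetical illustrations, not part of the original answer:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().appName("partitionBy-sketch").getOrCreate()

// Hypothetical input path and partitioning column ("country").
val df = spark.read.parquet("/data/input")

// No shuffle here: each existing partition of df may write one file per
// distinct "country" value it holds, so the output can contain up to
// (number of partitions) x (cardinality of "country") files.
df.write.partitionBy("country").parquet("/data/out_unrepartitioned")

// Repartitioning by the same column first does add a shuffle, but it
// co-locates rows with equal values, so each output directory typically
// ends up with far fewer (often just one) files.
df.repartition(col("country"))
  .write.partitionBy("country")
  .parquet("/data/out_repartitioned")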
