Spark: Difference between numPartitions in read.jdbc(..numPartitions..) and repartition(..numPartitions..)
Question description

I'm perplexed by the behaviour of the numPartitions parameter in the following methods:

- DataFrameReader.jdbc
- Dataset.repartition

The official docs of DataFrameReader.jdbc say the following about its numPartitions parameter:

numPartitions: the number of partitions. This, along with lowerBound (inclusive), upperBound (exclusive), form partition strides for generated WHERE clause expressions used to split the column columnName evenly.

And the official docs of Dataset.repartition say:

Returns a new Dataset that has exactly numPartitions partitions.
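To make the quoted "partition strides" concrete, here is a simplified, hypothetical sketch of how numPartitions, lowerBound and upperBound could be turned into per-partition WHERE clauses. This is not Spark's exact algorithm (the real implementation differs in details such as stride rounding), but it shows the idea:

```python
def partition_predicates(column, lower_bound, upper_bound, num_partitions):
    """Simplified sketch: split [lower_bound, upper_bound) into
    num_partitions strides, one WHERE clause per JDBC query.
    Not Spark's exact algorithm."""
    stride = (upper_bound - lower_bound) // num_partitions
    predicates = []
    for i in range(num_partitions):
        lo = lower_bound + i * stride
        hi = lo + stride
        if i == 0:
            # first stride is left-unbounded: it also picks up rows
            # below lowerBound and NULLs (the bounds do not filter rows)
            predicates.append(f"{column} < {hi} OR {column} IS NULL")
        elif i == num_partitions - 1:
            # last stride is right-unbounded: rows at or above its lower edge
            predicates.append(f"{column} >= {lo}")
        else:
            predicates.append(f"{lo} <= {column} AND {column} < {hi}")
    return predicates

# One SELECT ... WHERE <predicate> is issued per partition,
# i.e. numPartitions concurrent JDBC queries read the table.
for p in partition_predicates("id", 0, 100, 4):
    print(p)
# → id < 25 OR id IS NULL
#   25 <= id AND id < 50
#   50 <= id AND id < 75
#   id >= 75
```

This is also why lowerBound/upperBound should bracket the bulk of the data: rows outside the range all land in the first or last stride, skewing those partitions.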
My current understanding:
- The numPartitions parameter in the DataFrameReader.jdbc method controls the degree of parallelism in reading the data from the database.
- The numPartitions parameter in Dataset.repartition controls the number of output files that will be generated when this DataFrame is written to disk.
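The two bullets above can be condensed into a toy model (plain Python with hypothetical names, not the Spark API) of how a partition count fixed at read time flows through to the number of output files:

```python
class ToyFrame:
    """Toy stand-in for a DataFrame (not the Spark API): it only
    tracks how many partitions the data is split into."""

    def __init__(self, num_partitions):
        self.num_partitions = num_partitions

    def select(self, *cols):
        # narrow transformations keep the partition count unchanged
        return ToyFrame(self.num_partitions)

    def repartition(self, n):
        # repartition resets the count (a full shuffle in real Spark)
        return ToyFrame(n)

    def write(self):
        # like df.write: one part-file per partition
        return [f"part-{i:05d}" for i in range(self.num_partitions)]

# like spark.read.jdbc(..., numPartitions=4): 4 parallel reads
df = ToyFrame(num_partitions=4)
print(len(df.select("x").write()))     # → 4 files, no repartition needed
print(len(df.repartition(8).write()))  # → 8 files after explicit repartition
```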
My questions:

- If I read a DataFrame via DataFrameReader.jdbc and then write it to disk (without invoking the repartition method), would there still be as many files in the output as there would have been had I written the DataFrame to disk after invoking repartition on it?
- If the answer to the above is yes, is it then redundant to invoke the repartition method on a DataFrame that was read using the DataFrameReader.jdbc method (with the numPartitions parameter)?
- Should the numPartitions parameter of the DataFrameReader.jdbc method be called something like 'parallelism' instead?

Answer
Short answer: There is (almost) no difference in the behaviour of the numPartitions parameter in the two methods.

read.jdbc(..numPartitions..)

Here the numPartitions parameter controls:

- the degree of parallelism of the connections made to MySQL (or any other RDBMS) for reading the data into a DataFrame, and
- the degree of parallelism of all subsequent operations on the read DataFrame, including writing to disk, until the repartition method is invoked on it.

repartition(..numPartitions..)

Here the numPartitions parameter controls the degree of parallelism that would be exhibited in performing any operation on the DataFrame, including writing to disk.

So basically the DataFrame obtained on reading a MySQL table using the spark.read.jdbc(..numPartitions..) method behaves the same (exhibits the same degree of parallelism in operations performed over it) as if it was read without parallelism and the repartition(..numPartitions..) method was invoked on it afterwards (obviously with the same value of numPartitions).

To answer the exact questions:

If I read a DataFrame via DataFrameReader.jdbc and then write it to disk (without invoking the repartition method), would there still be as many files in the output as there would have been had I written the DataFrame to disk after invoking repartition on it?

Yes. Assuming that the read task has been parallelized by providing the appropriate parameters (columnName, lowerBound, upperBound and numPartitions), all operations on the resulting DataFrame, including the write, will be performed in parallel. Quoting the official docs here:

numPartitions: The maximum number of partitions that can be used for parallelism in table reading and writing. This also determines the maximum number of concurrent JDBC connections. If the number of partitions to write exceeds this limit, we decrease it to this limit by calling coalesce(numPartitions) before writing.

Then is it redundant to invoke the repartition method on a DataFrame that was read using the DataFrameReader.jdbc method (with the numPartitions parameter)?

Yes. Unless you invoke the other variations of the repartition method (the ones that take a columnExprs param), invoking repartition on such a DataFrame (with the same numPartitions value) is redundant. However, I'm not sure if forcing the same degree of parallelism on an already-parallelized DataFrame also shuffles data among executors unnecessarily. I will update the answer once I come across it.
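The JDBC write rule quoted in the docs above (coalescing down to the numPartitions limit before writing) amounts to a simple cap on concurrency. A hypothetical helper, not a Spark API, sketching that rule:

```python
def jdbc_write_partitions(df_partitions, num_partitions=None):
    """Sketch of the documented JDBC *write* behaviour: the numPartitions
    option is an upper bound on concurrent JDBC connections, so Spark
    coalesces down to it but never repartitions up.
    Hypothetical helper, not a Spark API."""
    if num_partitions is not None and df_partitions > num_partitions:
        return num_partitions  # coalesce(num_partitions) before writing
    return df_partitions       # fewer partitions are left as-is

print(jdbc_write_partitions(16, num_partitions=8))  # → 8 (coalesced down)
print(jdbc_write_partitions(4, num_partitions=8))   # → 4 (left as-is)
```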