SparkSQL PostgreSQL DataFrame partitions
Question
I have a very simple setup of SparkSQL connecting to a Postgres DB, and I'm trying to get a DataFrame from a table, with the DataFrame split into X partitions (let's say 2). The code is the following:
// Note: Spark's JDBC source expects "dbtable" to be a table name or a
// parenthesised subquery with an alias, not a bare SELECT statement.
Map<String, String> options = new HashMap<String, String>();
options.put("url", DB_URL);
options.put("driver", POSTGRES_DRIVER);
options.put("dbtable", "(select ID, OTHER from TABLE limit 1000) as t");
options.put("partitionColumn", "ID");  // column used to split the read
options.put("lowerBound", "100");      // used to compute the stride, not to filter
options.put("upperBound", "500");      // used to compute the stride, not to filter
options.put("numPartitions", "2");     // number of partitions / parallel JDBC queries
DataFrame housingDataFrame = sqlContext.read().format("jdbc").options(options).load();
For some reason, one partition of the DataFrame contains almost all the rows.
From what I can understand, lowerBound/upperBound are the parameters used to fine-tune this. In SparkSQL's documentation (Spark 1.4.0 - spark-sql_2.11) it says they are used to define the stride, not to filter/range the partition column. But that raises several questions:
- Is the stride the frequency (number of elements returned per query) with which Spark queries the DB for each executor (partition)?
- If not, what is the purpose of these parameters, what do they depend on, and how can I balance my DataFrame partitions in a stable way? (I'm not asking for all partitions to contain the same number of elements, just for a rough balance; e.g. with 2 partitions and 100 elements, 55/45, 60/40 or even 65/35 would do.)
I can't seem to find a clear answer to these questions and was wondering if some of you could clear these points up for me, because right now this is hurting my cluster performance when processing X million rows: all the heavy lifting goes to one single executor.
Cheers and thanks for your time.
Answer
The lower and upper bounds are indeed used against the partitioning column; refer to this code (current version at the moment of writing this):
The function columnPartition contains the partitioning logic and the use of the lower/upper bounds.
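To make the skew concrete, here is a plain-Java sketch of what columnPartition does in Spark 1.4: it computes a stride from the bounds and the partition count, then builds one WHERE clause per partition. The class and method names below are my own, but the stride arithmetic and the clause shapes follow the Spark 1.4 source; treat the exact strings as illustrative.

```java
import java.util.ArrayList;
import java.util.List;

public class JdbcPartitionSketch {

    // Approximates Spark 1.4's JDBCRelation.columnPartition:
    // stride = upperBound / numPartitions - lowerBound / numPartitions,
    // then one half-open range per partition, with the first and last
    // partitions left unbounded below and above respectively.
    static List<String> partitionClauses(String column, long lowerBound,
                                         long upperBound, int numPartitions) {
        long stride = upperBound / numPartitions - lowerBound / numPartitions;
        List<String> clauses = new ArrayList<String>();
        long currentValue = lowerBound;
        for (int i = 0; i < numPartitions; i++) {
            String lowerClause = (i == 0) ? null : column + " >= " + currentValue;
            currentValue += stride;
            String upperClause =
                (i == numPartitions - 1) ? null : column + " < " + currentValue;
            if (lowerClause == null) {
                clauses.add(upperClause);
            } else if (upperClause == null) {
                clauses.add(lowerClause);
            } else {
                clauses.add(lowerClause + " AND " + upperClause);
            }
        }
        return clauses;
    }

    public static void main(String[] args) {
        // The question's settings: lowerBound=100, upperBound=500, 2 partitions
        // => stride = 500/2 - 100/2 = 200
        for (String clause : partitionClauses("ID", 100, 500, 2)) {
            System.out.println(clause);
        }
        // Prints:
        // ID < 300
        // ID >= 300
    }
}
```

Note what this implies for the question: the bounds never filter rows. With lowerBound=100 and upperBound=500, the two JDBC queries are `WHERE ID < 300` and `WHERE ID >= 300`, so every row with ID below 100 still lands in the first partition and every row at or above 500 in the last. If most IDs in the table fall outside (or on one side of) the [lowerBound, upperBound] range, one partition ends up with nearly all the data, which is exactly the skew observed. Choosing bounds close to the actual min/max of the column gives the rough 55/45-style balance the question asks for.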