How does Spark SQL decide the number of partitions it will use when loading data from a Hive table?


Question


This question is the same as Number of partitions of a spark dataframe created by reading the data from Hive table.

But I think that question did not get a correct answer. Note that the question asks how many partitions will be created when the dataframe is created as a result of executing a SQL query against a Hive table using the SparkSession.sql method.
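To make the scenario concrete, this is roughly what I mean (a sketch; the database and table names are made up):

import org.apache.spark.sql.SparkSession

// Assumes a Hive metastore is reachable; some_db.some_table is hypothetical.
val spark = SparkSession.builder()
  .appName("partition-count-check")
  .enableHiveSupport()
  .getOrCreate()

val df = spark.sql("SELECT * FROM some_db.some_table")

// The number in question: partitions of the RDD backing the DataFrame.
println(df.rdd.getNumPartitions)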

IIUC, the question above is distinct from asking how many partitions will be created when the dataframe is created by executing code like spark.read.json("examples/src/main/resources/people.json"), which loads the data directly from the filesystem (which could be HDFS). I think the answer to this latter question is given by spark.sql.files.maxPartitionBytes:

spark.sql.files.maxPartitionBytes (default: 134217728, i.e. 128 MB): The maximum number of bytes to pack into a single partition when reading files.
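For example, when reading a file directly, lowering that setting should yield more partitions for the same input (a sketch reusing the SparkSession above; the exact count also depends on the file sizes):

// Value is in bytes; 32 MB here instead of the 128 MB default.
spark.conf.set("spark.sql.files.maxPartitionBytes", 32L * 1024 * 1024)

val people = spark.read.json("examples/src/main/resources/people.json")
println(people.rdd.getNumPartitions)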

Experimentally, I have tried creating a dataframe from a Hive table, and the number of partitions I get is not explained by (total data in the Hive table) / spark.sql.files.maxPartitionBytes.

Also, adding to the OP, it would be good to know how the number of partitions can be controlled, i.e., when one wants to force Spark to use a different number than it would use by default.
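For what it's worth, the count can of course be changed after the DataFrame exists, e.g. (a sketch, with arbitrary target counts):

// Force a specific partition count; repartition triggers a shuffle.
val df200 = df.repartition(200)

// coalesce only merges existing partitions, so it can only reduce the count.
val df10 = df.coalesce(10)

But what I am really asking about is the number Spark picks at read time, and how to influence that.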

References:

https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala

https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala

Solution

TL;DR: The default number of partitions when reading data from Hive is governed by the HDFS blockSize. The number of partitions can be increased by setting mapreduce.job.maps to an appropriate value, and decreased by setting mapreduce.input.fileinputformat.split.minsize to an appropriate value.
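For example, from a Spark application those two properties can be set on the Hadoop configuration used by the HadoopRDD (a sketch; it assumes an existing SparkSession named spark with Hive support, and the 200 / 256 MB values are only illustrative):

// More requested map tasks -> smaller goalSize in the old API -> more splits.
spark.sparkContext.hadoopConfiguration.setInt("mapreduce.job.maps", 200)

// A larger minimum split size -> larger splits -> fewer partitions (256 MB here).
spark.sparkContext.hadoopConfiguration.setLong(
  "mapreduce.input.fileinputformat.split.minsize", 256L * 1024 * 1024)

val dfFromHive = spark.sql("SELECT * FROM some_db.some_table")
println(dfFromHive.rdd.getNumPartitions)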

Spark SQL creates an instance of HadoopRDD when loading data from a Hive table, which is described as:

An RDD that provides core functionality for reading data stored in Hadoop (e.g., files in HDFS, sources in HBase, or S3), using the older MapReduce API (org.apache.hadoop.mapred).

HadoopRDD in turn splits input files according to the computeSplitSize method defined in org.apache.hadoop.mapreduce.lib.input.FileInputFormat (the new API) and org.apache.hadoop.mapred.FileInputFormat (the old API).

New API:

protected long computeSplitSize(long blockSize, long minSize,
                                long maxSize) {
  return Math.max(minSize, Math.min(maxSize, blockSize));
}

Old API:

protected long computeSplitSize(long goalSize, long minSize,
                                long blockSize) {
  return Math.max(minSize, Math.min(goalSize, blockSize));
}

computeSplitSize splits files according to the HDFS blockSize, but if the blockSize is less than minSize or greater than maxSize, it is clamped to those extremes. The HDFS blockSize can be obtained with:

hdfs getconf -confKey dfs.blocksize
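To make the clamping concrete, here is a small worked sketch of the new-API formula with assumed values (128 MB block size; the defaults of minSize = 1 and an effectively unbounded maxSize):

// Re-implementation of the new-API formula above, for illustration only.
def computeSplitSize(blockSize: Long, minSize: Long, maxSize: Long): Long =
  math.max(minSize, math.min(maxSize, blockSize))

val blockSize = 128L * 1024 * 1024   // 134217728 bytes

computeSplitSize(blockSize, 1L, Long.MaxValue)
// = 134217728: with the defaults, the split size equals the block size.

computeSplitSize(blockSize, 256L * 1024 * 1024, Long.MaxValue)
// = 268435456: a larger minSize wins over the block size, giving fewer, larger splits.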

According to Hadoop: The Definitive Guide, Table 8.5, the minSize is obtained from mapreduce.input.fileinputformat.split.minsize and the maxSize is obtained from mapreduce.input.fileinputformat.split.maxsize.

However, the book also mentions regarding mapreduce.input.fileinputformat.split.maxsize that:

This property is not present in the old MapReduce API (with the exception of CombineFileInputFormat). Instead, it is calculated indirectly as the size of the total input for the job, divided by the guide number of map tasks specified by mapreduce.job.maps (or the setNumMapTasks() method on JobConf).

This post also calculates the maxSize as the total input size divided by the number of map tasks.
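Putting the old-API pieces together with assumed numbers (10 GB of total input, mapreduce.job.maps = 200, 128 MB blocks, minSize = 1):

val totalInputBytes = 10L * 1024 * 1024 * 1024   // 10 GB of input, assumed
val numMapTasks     = 200L                       // mapreduce.job.maps, assumed
val blockSize       = 128L * 1024 * 1024
val minSize         = 1L

val goalSize  = totalInputBytes / numMapTasks                    // ~51 MB
val splitSize = math.max(minSize, math.min(goalSize, blockSize)) // ~51 MB

// Roughly totalInputBytes / splitSize splits, i.e. about 200 partitions,
// which is how raising mapreduce.job.maps increases the partition count.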
