How to efficiently query a Hive table in Spark using HiveContext?
Question
I have a 1.6 TB Hive table with time series data. I am using Hive 1.2.1 and Spark 1.6.1 in Scala.
Following is the query I have in my code, but I always get a Java out-of-memory error:
val sid_data_df = hiveContext.sql(s"SELECT time, total_field, sid, year, date FROM tablename WHERE sid = '$stationId' ORDER BY time LIMIT 4320000 ")
By iteratively selecting a few records at a time from the Hive table, I am trying to do a sliding window on the resultant dataframe.
I have a cluster of 4 nodes with 122 GB of memory and 44 vCores each. I am using 425 GB of the 488 GB of memory available. I am invoking spark-submit with the following parameters:
--num-executors 16 --driver-memory 4g --executor-memory 22G --executor-cores 10
--conf "spark.sql.shuffle.partitions=1800"
--conf "spark.shuffle.memoryFraction=0.6"
--conf "spark.storage.memoryFraction=0.4"
--conf "spark.yarn.executor.memoryOverhead=2600"
--conf "spark.yarn.nodemanager.resource.memory-mb=123880"
--conf "spark.yarn.nodemanager.resource.cpu-vcores=43"
Kindly give me suggestions on how to optimize this and successfully fetch the data from the Hive table.
Thanks
Answer
The problem is likely here:
LIMIT 4320000
You should avoid using LIMIT to subset a large number of records. In Spark, LIMIT moves all rows to a single partition and is likely to cause serious performance and stability issues.
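As a minimal sketch of the alternative (not from the original answer), you can bound the scan with a time-range predicate instead of `ORDER BY ... LIMIT`, so the filter can be pushed down and the matching rows stay distributed across partitions. The table and column names below are taken from the question; the time-range parameters are hypothetical:

```scala
// Sketch: build a bounded query with a time-range predicate instead of
// ORDER BY ... LIMIT. startTime/endTime are assumed to be in the same
// units as the table's `time` column.
def boundedQuery(stationId: String, startTime: Long, endTime: Long): String =
  s"""SELECT time, total_field, sid, year, date
     |FROM tablename
     |WHERE sid = '$stationId' AND time >= $startTime AND time < $endTime""".stripMargin

// Usage (hypothetical range):
// val sid_data_df = hiveContext.sql(boundedQuery(stationId, windowStart, windowEnd))
```

Advancing `startTime`/`endTime` between iterations gives the same "a few records at a time" effect as LIMIT, without funneling everything through one partition.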
See, for example, How to optimize below spark code (scala)?
I am trying to do a sliding window on this resultant dataframe iteratively by selecting a few records at a time.
This doesn't sound right. Sliding window operations can usually be achieved with some combination of window functions and timestamp-based window buckets.
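To illustrate what timestamp-based bucketing means here (a sketch, not code from the original answer): each record is assigned to every sliding window whose interval covers its timestamp, and aggregation is then done per bucket in a single distributed pass. The helper below computes those overlapping window starts for one timestamp; in Spark SQL itself this kind of logic is typically expressed with window specs such as `Window.partitionBy(...).orderBy(...).rangeBetween(...)`:

```scala
// Sketch: for a timestamp t, return the start times of all sliding windows
// [start, start + windowSize) that contain t, where window starts are
// multiples of `slide`. Negative starts are excluded for simplicity.
def windowStarts(t: Long, windowSize: Long, slide: Long): Seq[Long] =
  Iterator
    .iterate((t / slide) * slide)(_ - slide)      // walk back one slide at a time
    .takeWhile(s => s > t - windowSize && s >= 0) // stop once the window no longer covers t
    .toSeq

// e.g. with 20-unit windows sliding every 10 units, t = 25 falls in
// the windows starting at 20 and 10: [20, 40) and [10, 30).
```

Grouping records by these bucket keys keeps the computation distributed, instead of pulling an ordered prefix of the table into one place.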