How is virtual memory calculated in Spark?
Question
I am using Spark on Hadoop and want to know how Spark allocates virtual memory to an executor.
Per YARN's vmem-pmem ratio, a container is allowed 2.1 times its physical memory as virtual memory.
Hence, if XMX is 1 GB, then 1 GB * 2.1 = 2.1 GB of virtual memory is allowed for the container.
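That 2.1 factor comes from the NodeManager setting yarn.nodemanager.vmem-pmem-ratio, whose default is 2.1. As a sketch, it can be overridden in yarn-site.xml (the value shown is just the default):

```xml
<!-- yarn-site.xml: ratio of virtual to physical memory the
     NodeManager permits per container (default is 2.1) -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>
```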
How does this work on Spark? And is the statement below correct?
If I give executor memory = 1 GB, then
Total virtual memory = 1 GB * 2.1 * spark.yarn.executor.memoryOverhead. Is this true?
If not, how is virtual memory for an executor calculated in Spark?
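For reference, the overhead is added to the executor memory, not multiplied by it: YARN sizes the container as spark.executor.memory plus spark.yarn.executor.memoryOverhead (which defaults to the larger of 384 MB and 10% of the executor memory), and the vmem-pmem ratio then applies to that container size. A minimal sketch of the arithmetic, assuming those defaults (the function name is illustrative, and this ignores any rounding YARN's scheduler applies to allocation sizes):

```python
def executor_container_mb(executor_memory_mb,
                          memory_overhead_mb=None,
                          vmem_pmem_ratio=2.1):
    # spark.yarn.executor.memoryOverhead defaults to
    # max(384 MB, 10% of spark.executor.memory)
    if memory_overhead_mb is None:
        memory_overhead_mb = max(384, int(0.10 * executor_memory_mb))
    # Physical memory YARN reserves for the whole container
    pmem = executor_memory_mb + memory_overhead_mb
    # Virtual-memory ceiling the NodeManager enforces on it
    vmem = pmem * vmem_pmem_ratio
    return pmem, vmem

pmem, vmem = executor_container_mb(1024)  # 1 GB executor
print(pmem, round(vmem, 1))  # 1408 2956.8
```

So with a 1 GB executor the container is 1024 + 384 = 1408 MB of physical memory, and the virtual-memory limit is 1408 * 2.1 ≈ 2956.8 MB, not 1 GB * 2.1 * overhead.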
Answer
For Spark executor resources, yarn-client and yarn-cluster modes use the same configurations:
In spark-defaults.conf, spark.executor.memory is set to 2 GB.
I got this from: Resource Allocation Configuration for Spark on YARN
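As a concrete illustration of the setting the answer refers to, a minimal spark-defaults.conf fragment might look like this (2g matches the example above; the overhead value, given in MB, is just the default made explicit):

```
spark.executor.memory               2g
spark.yarn.executor.memoryOverhead  384
```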