The value of the "spark.yarn.executor.memoryOverhead" setting?


Question

Should the value of spark.yarn.executor.memoryOverhead in a Spark job with YARN be allocated to the app, or is it just the max value?

Recommended answer

spark.yarn.executor.memoryOverhead

is just the max value. The goal is to calculate the overhead as a percentage of the real executor memory, as used by RDDs and DataFrames.

--executor-memory/spark.executor.memory

controls the executor heap size, but JVMs can also use some memory off heap, for example for interned strings and direct byte buffers.

The value of the spark.yarn.executor.memoryOverhead property is added to the executor memory to determine the full memory request to YARN for each executor. It defaults to max(executorMemory * 0.10, 384), in MB.
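The default formula above can be sketched as a small helper (the function name and the 10%/384 MB constants are illustrative; the constants match the documented defaults):

```python
def yarn_executor_request(executor_memory_mb, overhead_fraction=0.10, min_overhead_mb=384):
    """Return (overhead, total YARN request) in MB for one executor.

    Mirrors the default: overhead = max(executorMemory * 0.10, 384).
    """
    overhead = max(int(executor_memory_mb * overhead_fraction), min_overhead_mb)
    return overhead, executor_memory_mb + overhead

# e.g. with --executor-memory 8g (8192 MB):
overhead, total = yarn_executor_request(8192)
print(overhead, total)  # → 819 9011
```

So an 8 GB executor actually costs YARN roughly 8.8 GB per container, and small executors (under ~3.8 GB) always pay the 384 MB floor.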

The executors will use a memory allocation based on the spark.executor.memory property plus an overhead defined by spark.yarn.executor.memoryOverhead.
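For illustration, a spark-submit invocation might set the heap size and override the default overhead explicitly (the jar name is a placeholder; note that in Spark 2.3+ this property was renamed spark.executor.memoryOverhead):

```shell
# Request a 4 GiB heap per executor and override the default overhead
# (which would be max(4096 * 0.10, 384) = 409 MB) with an explicit 1 GiB,
# so YARN receives a container request of roughly 4g + 1g per executor.
spark-submit \
  --master yarn \
  --executor-memory 4g \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  your_app.jar
```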
