Spark ignores SPARK_WORKER_MEMORY?

Problem description

I'm using standalone cluster mode with Spark 1.5.2.

Even though I set SPARK_WORKER_MEMORY in spark-env.sh, it looks like this setting is ignored.

I can't find any indication in the scripts under bin/sbin that -Xms/-Xmx are set.
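
One quick way to see where the 1g heap comes from is to search the distribution itself for heap flags and the memory-related environment variables. A minimal sketch, run from the Spark install directory (in the standalone scripts, the daemons' own heap is typically governed by SPARK_DAEMON_MEMORY, which defaults to 1g, rather than SPARK_WORKER_MEMORY):

# Look for hard-coded heap flags and memory env vars in the launch scripts
grep -rn -e '-Xmx' -e 'SPARK_DAEMON_MEMORY' -e 'SPARK_WORKER_MEMORY' bin/ sbin/ conf/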

If I run the ps command on the worker pid, it looks like the heap is set to 1G:

[hadoop@sl-env1-hadoop1 spark-1.5.2-bin-hadoop2.6]$ ps -ef | grep 20232
hadoop   20232     1  0 02:01 ?        00:00:22 /usr/java/latest//bin/java 
-cp /workspace/3rd-party/spark/spark-1.5.2-bin-hadoop2.6/sbin/../conf/:/workspace/
3rd-party/spark/spark-1.5.2-bin-hadoop2.6/lib/spark-assembly-1.5.2-hadoop2.6.0.jar:/workspace/
3rd-party/spark/spark-1.5.2-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/workspace/
3rd-party/spark/spark-1.5.2-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/workspace/
3rd-party/spark/spark-1.5.2-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/workspace/
3rd-party/hadoop/2.6.3//etc/hadoop/ -Xms1g -Xmx1g org.apache.spark.deploy.worker.Worker 
--webui-port 8081 spark://10.52.39.92:7077

spark-defaults.conf:

spark.master            spark://10.52.39.92:7077
spark.serializer        org.apache.spark.serializer.KryoSerializer
spark.executor.memory   2g
spark.executor.cores    1

spark-env.sh:

export SPARK_MASTER_IP=10.52.39.92
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=12g

Am I missing something?

Thanks.

Recommended answer

This is my configuration for cluster mode, in spark-defaults.conf:

spark.driver.memory 5g
spark.executor.memory   6g
spark.executor.cores    4

Do you have something like this?

If you don't add this configuration (with your own values), each Spark executor will get 1 GB of RAM by default.
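
Also note that spark-env.sh is only sourced when the standalone daemons start, so a changed SPARK_WORKER_MEMORY only takes effect after the worker is restarted. A minimal sketch, assuming the stock scripts that ship in sbin/ of the same distribution:

# Restart the standalone master and workers so spark-env.sh is re-read
sbin/stop-all.sh
sbin/start-all.sh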

Otherwise, you can pass these options to ./spark-submit like this:

# Run on a YARN cluster; --deploy-mode can be "client" for client mode
export HADOOP_CONF_DIR=XXX
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \
  --executor-memory 20G \
  --num-executors 50 \
  /path/to/examples.jar \
  1000
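
For a standalone cluster like the one in the question, the same idea applies with the standalone master URL. A hedged sketch that reuses the master address and executor memory from the question's own configuration (--total-executor-cores is an illustrative addition, not something from the original post):

# Run on the standalone cluster from the question
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://10.52.39.92:7077 \
  --executor-memory 2g \
  --total-executor-cores 4 \
  /path/to/examples.jar \
  1000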

Try checking the master web UI at http://(ip-or-name-of-master):8080 when you run an application, to see whether resources have been allocated correctly.
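
If a browser is inconvenient, the standalone master UI can also be queried for the same status (workers, memory used, running applications) as JSON; a quick sketch, assuming the default web UI port of 8080:

# Fetch the standalone master's cluster status as JSON
curl http://10.52.39.92:8080/json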
