Can a PySpark Kernel (JupyterHub) run in yarn-client mode?

Problem Description


My Current Setup:

  • Spark EC2 Cluster with HDFS and YARN
  • JupyterHub (0.7.0)
  • PySpark kernel with Python 2.7

The very simple code that I am using for this question:

rdd = sc.parallelize([1, 2])
rdd.collect()

The PySpark kernel that works as expected in Spark standalone has the following environment variable in its kernel.json file:

"PYSPARK_SUBMIT_ARGS": "--master spark://<spark_master>:7077 pyspark-shell"

However, when I try to run in yarn-client mode it gets stuck forever, and the output in the JupyterHub logs is:

16/12/12 16:45:21 WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
16/12/12 16:45:36 WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
16/12/12 16:45:51 WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
16/12/12 16:46:06 WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

As described here, I have added the HADOOP_CONF_DIR environment variable to point to the directory containing the Hadoop configuration, and changed the PYSPARK_SUBMIT_ARGS --master property to "yarn-client". I can also confirm that no other jobs are running during this time and that the workers are correctly registered.
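
One way to narrow this down from inside the notebook is to check which master and executor resources the session actually picked up. A minimal sketch (it assumes the kernel has already created a SparkContext named sc via pyspark's shell.py, as in this setup):

import os

# Confirm the YARN-related settings really reached the kernel process.
print(os.environ.get("PYSPARK_SUBMIT_ARGS"))
print(os.environ.get("HADOOP_CONF_DIR"))

# sc is the SparkContext created at kernel startup.
print(sc.master)                                        # expect "yarn-client" (or "yarn")
print(sc.getConf().get("spark.executor.memory", "<not set>"))
print(sc.getConf().get("spark.executor.cores", "<not set>"))

A common cause of the warning above is requesting more executor memory or cores than any single NodeManager can offer, so it is worth comparing these values against the resources shown in the YARN ResourceManager UI.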

I am under the impression that it is possible to configure a JupyterHub notebook with a PySpark kernel to run with YARN, as other people have done it. If this is indeed the case, what am I doing wrong?

Solution

In order to have PySpark work in YARN mode, you'll have to do some additional configuration:

  1. Configure YARN for a remote connection by copying the hadoop-yarn-server-web-proxy-<version>.jar from your YARN cluster into the <local hadoop directory>/hadoop-<version>/share/hadoop/yarn/ directory of your Jupyter instance (you need a local Hadoop installation)

  2. Copy the hive-site.xml of your cluster into <local spark directory>/spark-<version>/conf/

  3. Copy the yarn-site.xml of your cluster into <local hadoop directory>/hadoop-<version>/etc/hadoop/

  4. Set environment variables:

    • export HADOOP_HOME=<local hadoop directory>/hadoop-<version>
    • export SPARK_HOME=<local spark directory>/spark-<version>
    • export HADOOP_CONF_DIR=<local hadoop directory>/hadoop-<version>/etc/hadoop
    • export YARN_CONF_DIR=<local hadoop directory>/hadoop-<version>/etc/hadoop
  5. Now, you can create your kernel:

     vim /usr/local/share/jupyter/kernels/pyspark/kernel.json

     {
       "display_name": "pySpark (Spark 2.1.0)",
       "language": "python",
       "argv": [
         "/opt/conda/envs/python35/bin/python",
         "-m",
         "ipykernel",
         "-f",
         "{connection_file}"
       ],
       "env": {
         "PYSPARK_PYTHON": "/opt/conda/envs/python35/bin/python",
         "SPARK_HOME": "/opt/mapr/spark/spark-2.1.0",
         "PYTHONPATH": "/opt/mapr/spark/spark-2.1.0/python/lib/py4j-0.10.4-src.zip:/opt/mapr/spark/spark-2.1.0/python/",
         "PYTHONSTARTUP": "/opt/mapr/spark/spark-2.1.0/python/pyspark/shell.py",
         "PYSPARK_SUBMIT_ARGS": "--master yarn pyspark-shell"
       }
     }

  6. Relaunch your JupyterHub and you should see the pySpark kernel; a quick smoke test is sketched below. The root user usually doesn't have YARN permission because of its low uid, so you should connect to JupyterHub as another user
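
Once the kernel is up, a quick smoke test can confirm the session is really running on YARN and that executors are being granted. A sketch (again assuming sc has been created by the kernel's shell.py startup):

# The master should now be YARN, and a YARN application should exist for this kernel.
print(sc.master)           # e.g. "yarn"
print(sc.applicationId)    # the application id visible in the ResourceManager UI

# The minimal job from the question should now return [1, 2] instead of hanging.
rdd = sc.parallelize([1, 2])
print(rdd.collect())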
