spark-submit on Kubernetes cluster


Question

I have created a simple word-count program JAR file, which has been tested and works fine. However, when I try to run the same JAR on my Kubernetes cluster, it throws an error. Below is my spark-submit command along with the error thrown.

spark-submit --master k8s://https://192.168.99.101:8443 --deploy-mode cluster --name WordCount --class com.sample.WordCount --conf spark.executor.instances=5 --conf spark.kubernetes.container.image=debuggerrr/spark-new:spark-new local:///C:/Users/siddh/OneDrive/Desktop/WordCountSample/target/WordCountSample-0.0.1-SNAPSHOT.jar local:///C:/Users/siddh/OneDrive/Desktop/initialData.txt  

The last local argument is the data file on which the word-count program will run and from which it will fetch the results.

Below is my error:

    status: [ContainerStatus(containerID=null, image=gcr.io/spark-operator/spark:v2.4.5, imageID=, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=null, waiting=ContainerStateWaiting(message=Back-off pulling image "gcr.io/spark-operator/spark:v2.4.5", reason=ImagePullBackOff, additionalProperties={}), additionalProperties={}), additionalProperties={started=false})]
20/02/11 22:48:13 INFO LoggingPodStatusWatcherImpl: State changed, new state:
         pod name: wordcount-1581441237366-driver
         namespace: default
         labels: spark-app-selector -> spark-386c19d289a54e2da1733376821985b1, spark-role -> driver
         pod uid: a9e74d13-cf77-4de0-a16d-a71a21118ef8
         creation time: 2020-02-11T17:13:59Z
         service account name: default
         volumes: spark-local-dir-1, spark-conf-volume, default-token-wbvkb
         node name: minikube
         start time: 2020-02-11T17:13:59Z
         container images: gcr.io/spark-operator/spark:v2.4.5
         phase: Running
         status: [ContainerStatus(containerID=docker://7b46d9483cf22d94c7553455dd06a6a9530b2947a6db71d089cfe9dcce656c26, image=gcr.io/spark-operator/spark:v2.4.5, imageID=docker-pullable://gcr.io/spark-operator/spark@sha256:0d2c7d9d66fb83a0311442f0d2830280dcaba601244d1d8c1704d72f5806cc4c, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=true, restartCount=0, state=ContainerState(running=ContainerStateRunning(startedAt=2020-02-11T17:18:11Z, additionalProperties={}), terminated=null, waiting=null, additionalProperties={}), additionalProperties={started=true})]
20/02/11 22:48:19 INFO LoggingPodStatusWatcherImpl: State changed, new state:
         pod name: wordcount-1581441237366-driver
         namespace: default
         labels: spark-app-selector -> spark-386c19d289a54e2da1733376821985b1, spark-role -> driver
         pod uid: a9e74d13-cf77-4de0-a16d-a71a21118ef8
         creation time: 2020-02-11T17:13:59Z
         service account name: default
         volumes: spark-local-dir-1, spark-conf-volume, default-token-wbvkb
         node name: minikube
         start time: 2020-02-11T17:13:59Z
         container images: gcr.io/spark-operator/spark:v2.4.5
         phase: Failed
         status: [ContainerStatus(containerID=docker://7b46d9483cf22d94c7553455dd06a6a9530b2947a6db71d089cfe9dcce656c26, image=gcr.io/spark-operator/spark:v2.4.5, imageID=docker-pullable://gcr.io/spark-operator/spark@sha256:0d2c7d9d66fb83a0311442f0d2830280dcaba601244d1d8c1704d72f5806cc4c, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=ContainerStateTerminated(containerID=docker://7b46d9483cf22d94c7553455dd06a6a9530b2947a6db71d089cfe9dcce656c26, exitCode=1, finishedAt=2020-02-11T17:18:18Z, message=null, reason=Error, signal=null, startedAt=2020-02-11T17:18:11Z, additionalProperties={}), waiting=null, additionalProperties={}), additionalProperties={started=false})]
20/02/11 22:48:21 INFO LoggingPodStatusWatcherImpl: Container final statuses:


         Container name: spark-kubernetes-driver
         Container image: gcr.io/spark-operator/spark:v2.4.5
         Container state: Terminated
         Exit code: 1
20/02/11 22:48:21 INFO Client: Application WordCount finished.
20/02/11 22:48:23 INFO ShutdownHookManager: Shutdown hook called
20/02/11 22:48:23 INFO ShutdownHookManager: Deleting directory C:\Users\siddh\AppData\Local\Temp\spark-1a3ee936-d430-4f9d-976c-3305617678df

How do I resolve this error? How can I pass the local file?
NOTE: The JAR file and the data file are present on my desktop and not in the Docker image.

Solution

Unfortunately, passing local files to the job is not yet supported in the official release of Spark on Kubernetes. There is a solution in a Spark fork that requires adding a Resource Staging Server deployment to the cluster, but it is not included in the released builds.

Why is it not so easy to support? Consider how the network communication between your machine and the Spark pods in Kubernetes would have to be configured: to pull your local JARs, the Spark pod would have to be able to reach your machine (you would probably need to run a web server locally and expose its endpoints), and vice versa, to push a JAR from your machine to the Spark pod, your spark-submit script would need to reach the Spark pod (which can be done via a Kubernetes Ingress and requires several more components to be integrated).

The solution Spark does support is to store your artifacts (JARs) in an HTTP-accessible location, including HDFS-compatible storage systems. Please refer to the official docs.
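As a rough sketch (not from the original answer), assuming the JAR has been uploaded to an HTTP server and the input data to an HDFS path reachable from the cluster, the submission could look like the following; the host names and paths are placeholders, not real endpoints:

    # Hypothetical example: JAR served over HTTP, input data on HDFS.
    # Replace the hosts and paths with locations reachable from inside the cluster.
    spark-submit \
      --master k8s://https://192.168.99.101:8443 \
      --deploy-mode cluster \
      --name WordCount \
      --class com.sample.WordCount \
      --conf spark.executor.instances=5 \
      --conf spark.kubernetes.container.image=debuggerrr/spark-new:spark-new \
      https://file-server.example.com/WordCountSample-0.0.1-SNAPSHOT.jar \
      hdfs://namenode.example.com:8020/data/initialData.txt

Note that the last argument is only a program argument, so the word-count code itself must be able to read that path from inside the cluster (hence HDFS or a similar shared storage system rather than a desktop path).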

Hope it helps.
