Unable to mount the config file using ConfigMap on a pod running SparkApplication

Question

My goal is to set up a ConfigMap and then use the config file in the Spark application. Here are the details:

I have a config file (test_config.cfg) that looks like this:

[test_tracker]
url = http://localhost:8080/testsomething/
username = TEST
password = SECRET
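
For context, the Spark application would typically read this file with Python's standard configparser. A minimal sketch, assuming the file ends up at the mount path targeted later in the spec (the section and key names mirror the file above):

import configparser

# Hypothetical snippet from the Spark application:
# read the config file from where the ConfigMap is mounted.
config = configparser.ConfigParser()
config.read('/mnt/config-maps/test_config.cfg')

url = config['test_tracker']['url']            # http://localhost:8080/testsomething/
username = config['test_tracker']['username']  # TEST
password = config['test_tracker']['password']  # SECRET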

I created the ConfigMap by running the following command:

kubectl create configmap testcfg1 --from-file test_config.cfg
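
One quick way to verify the ConfigMap exists and carries the file as a key (a verification step added here for illustration, not part of the original post):

kubectl get configmap testcfg1 -o yaml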

Now, I have a YAML file (testprog.yaml) with a SparkApplication spec that looks like this:

apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: testprog
  namespace: default
spec:
  type: Python
  pythonVersion: "3"
  mode: cluster
  image: "<ip-url>:5000/schemamatcher/schemamatcher-spark-py:latest"
  imagePullPolicy: Always
  mainApplicationFile: local:///opt/spark/dependencies/testprog.py
  arguments: ['s3a://f1.parquet', 's3a://f2.parquet', '--tokenizer-type', 'param']
  sparkVersion: "3.0.0"
  restartPolicy:
    type: OnFailure
    onFailureRetries: 3
    onFailureRetryInterval: 10
    onSubmissionFailureRetries: 5
    onSubmissionFailureRetryInterval: 20
  driver:
    cores: 1
    coreLimit: "1200m"
    memory: "16g"
    labels:
      version: 3.0.0
    serviceAccount: default
    configMaps:
      - name: testcfg1
        path: /mnt/config-maps
  executor:
    cores: 1
    instances: 2
    memory: "20g"
    labels:
      version: 3.0.0
  hadoopConf:
    "fs.s3a.access.key": minio
    "fs.s3a.secret.key": minio123
    "fs.s3a.endpoint": http://<ip-url>:9000

Now, I am able to run the program using:

kubectl apply -f testprog.yaml
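
To watch the application come up, something like the following should work (the label is the one Spark itself attaches to driver pods; the application name matches the metadata above):

kubectl get sparkapplication testprog
kubectl get pods -l spark-role=driver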

The pod runs fine and doesn't throw any errors, but I am unable to see my config file at the given path, and I don't understand why. While the pod is executing, I do:

kubectl exec --stdin --tty test-driver -- /bin/bash

and when I look for the config file at /mnt/config-maps, I don't see anything. I tried a couple of things, but no luck. Besides, some of the documentation says a mutating admission webhook should be set up; I think the previous guy did that, but I am not sure how to check it (I think it is there).
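
For reference, one way to check whether the operator's mutating admission webhook is actually registered (the exact object names are cluster-specific, and the namespace below assumes a default operator install):

kubectl get mutatingwebhookconfigurations
kubectl get pods -n spark-operator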

Any help would be great, as I am new and still learning about k8s.

Update: I have also tried updating the spec like this and running it, still no luck.

  volumes:
    - name: config
      configMap:
        name: testcfg1
  driver:
    cores: 1
    coreLimit: "1200m"
    memory: "16g"
    labels:
      version: 3.0.0
    serviceAccount: default
    volumeMounts:
      - name: config
        mountPath: /opt/spark
  executor:
    cores: 1
    instances: 2
    memory: "20g"
    labels:
      version: 3.0.0
    volumeMounts:
      - name: config
        mountPath: /opt/spark

Answer

Not sure if this issue was solved in Spark v3.0.0 (which you seem to be using), but there was a bug in Spark on Kubernetes that prevented ConfigMaps from mounting properly. See this discussion: https://stackoverflow.com/a/58508313/8570169
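
One detail worth flagging in the updated spec above (an editorial note, not part of the accepted answer): the volumes/volumeMounts route also relies on the operator's mutating webhook, and mountPath: /opt/spark would shadow the Spark installation baked into the image. A sketch of the same mount at a non-conflicting path (the path is illustrative):

  volumes:
    - name: config
      configMap:
        name: testcfg1
  driver:
    volumeMounts:
      - name: config
        mountPath: /mnt/config-maps   # any path the image doesn't already use; /opt/spark holds the Spark distribution
  executor:
    volumeMounts:
      - name: config
        mountPath: /mnt/config-maps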
