Zeppelin k8s: change interpreter pod configuration


Problem Description

I've configured Zeppelin on Kubernetes using the following Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: zeppelin
  labels: [...]
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: zeppelin
      app.kubernetes.io/instance: zeppelin
  template:
    metadata:
      labels:
        app.kubernetes.io/name: zeppelin
        app.kubernetes.io/instance: zeppelin
    spec:
      serviceAccountName: zeppelin
      containers:
        - name: zeppelin
          image: "apache/zeppelin:0.9.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          [...]
          env:
            - name: ZEPPELIN_PORT
              value: "8080"
            - name: ZEPPELIN_K8S_CONTAINER_IMAGE
              value: apache/zeppelin:0.9.0
            - name: ZEPPELIN_RUN_MODE
              value: k8s
            - name: ZEPPELIN_K8S_SPARK_CONTAINER_IMAGE
              value: docker-registry.default.svc:5000/ra-iot-dev/spark:2.4.5

When a new paragraph job is run, Zeppelin, since it is running in k8s mode, creates a pod:

$ kubectl get pods
NAME                         READY   STATUS                  RESTARTS   AGE
spark-ghbvld                 0/1     Completed               0          9m   -----<<<<<<<
spark-master-0               1/1     Running                 0          38m
spark-worker-0               1/1     Running                 0          38m
zeppelin-6cc658d59f-gk2lp    1/1     Running                 0          24m

In short, this pod first copies the Spark home folder from ZEPPELIN_K8S_SPARK_CONTAINER_IMAGE into the main container (via an init container) and then runs the interpreter.

Here is where the problem appears. I'm getting this error message on the created pod:

Interpreter launch command:  /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dfile.encoding=UTF-8 -Dlog4j.configuration=file:///zeppelin/conf/log4j.properties -Dzeppelin.log.file='/zeppelin/logs/zeppelin-interpreter-spark-shared_process--spark-ghbvld.log' -Xms1024m -Xmx2048m -XX:MaxPermSize=512m -cp ":/zeppelin/interpreter/spark/dep/*:/zeppelin/interpreter/spark/*::/zeppelin/interpreter/zeppelin-interpreter-shaded-0.9.0-preview1.jar" org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer zeppelin-6cc658d59f-gk2lp.ra-iot-dev.svc 36161 "spark-shared_process" 12321:12321
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/zeppelin/interpreter/spark/dep/zeppelin-spark-dependencies-0.9.0-preview1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/zeppelin/interpreter/spark/spark-interpreter-0.9.0-preview1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings  for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
 WARN [2020-06-05 06:35:05,694] ({main} ZeppelinConfiguration.java[create]:159) - Failed to load configuration, proceeding with a default
 INFO [2020-06-05 06:35:05,745] ({main} ZeppelinConfiguration.java[create]:171) - Server Host: 0.0.0.0
Exception in thread "main" java.lang.NumberFormatException: For input string: "tcp://172.30.203.33:80"
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
    at java.lang.Integer.parseInt(Integer.java:580)
    at java.lang.Integer.parseInt(Integer.java:615)
    at org.apache.zeppelin.conf.ZeppelinConfiguration.getInt(ZeppelinConfiguration.java:248)
    at org.apache.zeppelin.conf.ZeppelinConfiguration.getInt(ZeppelinConfiguration.java:243)
    at org.apache.zeppelin.conf.ZeppelinConfiguration.getServerPort(ZeppelinConfiguration.java:327)
    at org.apache.zeppelin.conf.ZeppelinConfiguration.create(ZeppelinConfiguration.java:173)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.<init>(RemoteInterpreterServer.java:144)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.<init>(RemoteInterpreterServer.java:152)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.main(RemoteInterpreterServer.java:321)

As you can see, the main problem is:

Exception in thread "main" java.lang.NumberFormatException: For input string: "tcp://172.30.203.33:80"

I've tried adding a zeppelin.server.port property to the interpreter configuration in the Zeppelin web frontend (Interpreters -> Spark Interpreter -> add property).

However, the problem persists.

Any ideas on how to override zeppelin.server.port, or ZEPPELIN_PORT, on the generated interpreter pod?

I also dumped the interpreter pod manifest created by Zeppelin:

$ kubectl get pods -o=yaml spark-ghbvld
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"app":"spark-ghbvld","interpreterGroupId":"spark-shared_process","interpreterSettingName":"spark"},"name":"spark-ghbvld","namespace":"ra-iot-dev"},"spec":{"automountServiceAccountToken":true,"containers":[{"command":["sh","-c","$(ZEPPELIN_HOME)/bin/interpreter.sh -d $(ZEPPELIN_HOME)/interpreter/spark -r 12321:12321 -c zeppelin-6cc658d59f-gk2lp.ra-iot-dev.svc -p 36161 -i spark-shared_process -l /tmp/local-repo -g spark"],"env":[{"name":"PYSPARK_PYTHON","value":"python"},{"name":"PYSPARK_DRIVER_PYTHON","value":"python"},{"name":"SERVICE_DOMAIN","value":null},{"name":"ZEPPELIN_HOME","value":"/zeppelin"},{"name":"INTERPRETER_GROUP_ID","value":"spark-shared_process"},{"name":"SPARK_HOME","value":null}],"image":"apache/zeppelin:0.9.0","lifecycle":{"preStop":{"exec":{"command":["sh","-c","ps -ef | grep org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer | grep -v grep | awk '{print $2}' | xargs kill"]}}},"name":"spark","volumeMounts":[{"mountPath":"/spark","name":"spark-home"}]}],"initContainers":[{"command":["sh","-c","cp -r /opt/spark/* /spark/"],"image":"docker-registry.default.svc:5000/ra-iot-dev/spark:2.4.5","name":"spark-home-init","volumeMounts":[{"mountPath":"/spark","name":"spark-home"}]}],"restartPolicy":"Never","terminationGracePeriodSeconds":30,"volumes":[{"emptyDir":{},"name":"spark-home"}]}}
    openshift.io/scc: anyuid
  creationTimestamp: "2020-06-05T06:34:36Z"
  labels:
    app: spark-ghbvld
    interpreterGroupId: spark-shared_process
    interpreterSettingName: spark
  name: spark-ghbvld
  namespace: ra-iot-dev
  resourceVersion: "224863130"
  selfLink: /api/v1/namespaces/ra-iot-dev/pods/spark-ghbvld
  uid: a04a0d70-a6f6-11ea-9e39-0050569f5f65
spec:
  automountServiceAccountToken: true
  containers:
  - command:
    - sh
    - -c
    - $(ZEPPELIN_HOME)/bin/interpreter.sh -d $(ZEPPELIN_HOME)/interpreter/spark -r
      12321:12321 -c zeppelin-6cc658d59f-gk2lp.ra-iot-dev.svc -p 36161 -i spark-shared_process
      -l /tmp/local-repo -g spark
    env:
    - name: PYSPARK_PYTHON
      value: python
    - name: PYSPARK_DRIVER_PYTHON
      value: python
    - name: SERVICE_DOMAIN
    - name: ZEPPELIN_HOME
      value: /zeppelin
    - name: INTERPRETER_GROUP_ID
      value: spark-shared_process
    - name: SPARK_HOME
    image: apache/zeppelin:0.9.0
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        exec:
          command:
          - sh
          - -c
          - ps -ef | grep org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer
            | grep -v grep | awk '{print $2}' | xargs kill
    name: spark
    resources: {}
    securityContext:
      capabilities:
        drop:
        - MKNOD
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /spark
      name: spark-home
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-n4lpw
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: default-dockercfg-qs7sj
  initContainers:
  - command:
    - sh
    - -c
    - cp -r /opt/spark/* /spark/
    image: docker-registry.default.svc:5000/ra-iot-dev/spark:2.4.5
    imagePullPolicy: IfNotPresent
    name: spark-home-init
    resources: {}
    securityContext:
      capabilities:
        drop:
        - MKNOD
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /spark
      name: spark-home
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-n4lpw
      readOnly: true
  nodeName: node2.si-origin-cluster.t-systems.es
  nodeSelector:
    region: primary
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext:
    seLinuxOptions:
      level: s0:c30,c0
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - emptyDir: {}
    name: spark-home
  - name: default-token-n4lpw
    secret:
      defaultMode: 420
      secretName: default-token-n4lpw
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-06-05T06:35:03Z"
    reason: PodCompleted
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-06-05T06:35:07Z"
    reason: PodCompleted
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: null
    reason: PodCompleted
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-06-05T06:34:37Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://8c3977241a20be1600180525e4f8b737c8dc5954b6dc0826a7fc703ff6020a70
    image: docker.io/apache/zeppelin:0.9.0
    imageID: docker-pullable://docker.io/apache/zeppelin@sha256:0691909f6884319d366f5d3a5add8802738d6240a83b2e53e980caeb6c658092
    lastState: {}
    name: spark
    ready: false
    restartCount: 0
    state:
      terminated:
        containerID: docker://8c3977241a20be1600180525e4f8b737c8dc5954b6dc0826a7fc703ff6020a70
        exitCode: 0
        finishedAt: "2020-06-05T06:35:05Z"
        reason: Completed
        startedAt: "2020-06-05T06:35:05Z"
  hostIP: 10.49.160.21
  initContainerStatuses:
  - containerID: docker://34701d70eec47367a928dc382326014c76fc49c95be92562e68911f36b4c6242
    image: docker-registry.default.svc:5000/ra-iot-dev/spark:2.4.5
    imageID: docker-pullable://docker-registry.default.svc:5000/ra-iot-dev/spark@sha256:1cbcdacbcc55b2fc97795a4f051429f69ff3666abbd936e08e180af93a11ab65
    lastState: {}
    name: spark-home-init
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: docker://34701d70eec47367a928dc382326014c76fc49c95be92562e68911f36b4c6242
        exitCode: 0
        finishedAt: "2020-06-05T06:35:02Z"
        reason: Completed
        startedAt: "2020-06-05T06:35:02Z"
  phase: Succeeded
  podIP: 10.131.0.203
  qosClass: BestEffort
  startTime: "2020-06-05T06:34:37Z"

Environment variables:

PATH=/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=spark-xonray2
PYSPARK_PYTHON=python
PYSPARK_DRIVER_PYTHON=python
SERVICE_DOMAIN=
ZEPPELIN_HOME=/zeppelin
INTERPRETER_GROUP_ID=spark-shared_process
SPARK_HOME=
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
MONGODB_PORT_27017_TCP_PORT=27017
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.50.211
ZEPPELIN_PORT_80_TCP_ADDR=172.30.57.29
MONGODB_PORT=tcp://172.30.240.109:27017
MONGODB_PORT_27017_TCP=tcp://172.30.240.109:27017
SPARK_MASTER_SVC_PORT_7077_TCP_PROTO=tcp
SPARK_MASTER_SVC_PORT_7077_TCP_ADDR=172.30.88.254
SPARK_MASTER_SVC_PORT_80_TCP=tcp://172.30.88.254:80
MONGODB_PORT_27017_TCP_PROTO=tcp
KAFKA_0_EXTERNAL_PORT=tcp://172.30.235.145:9094
KAFKA_PORT_9092_TCP=tcp://172.30.164.40:9092
KUBERNETES_PORT_53_UDP_PROTO=udp
ZOOKEEPER_PORT_2888_TCP=tcp://172.30.222.17:2888
ZEPPELIN_PORT_80_TCP=tcp://172.30.57.29:80
ZEPPELIN_PORT_80_TCP_PROTO=tcp
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.133.154
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.245.33:1
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.117.125:1
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
ORION_PORT_80_TCP_ADDR=172.30.55.76
SPARK_MASTER_SVC_PORT_7077_TCP_PORT=7077
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.229.165
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
KAFKA_0_EXTERNAL_PORT_9094_TCP_ADDR=172.30.235.145
KAFKA_PORT_9092_TCP_PORT=9092
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.245.33
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
ZOOKEEPER_SERVICE_HOST=172.30.222.17
ZEPPELIN_SERVICE_PORT=80
KAFKA_0_EXTERNAL_SERVICE_PORT=9094
GREENPLUM_SERVICE_PORT_HTTP=5432
KAFKA_0_EXTERNAL_SERVICE_HOST=172.30.235.145
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_SERVICE_PORT=1
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
KUBERNETES_SERVICE_HOST=172.30.0.1
ZOOKEEPER_PORT_2181_TCP_PROTO=tcp
ZOOKEEPER_PORT_3888_TCP_PORT=3888
ORION_PORT_80_TCP_PORT=80
MONGODB_SERVICE_PORT_MONGODB=27017
KUBERNETES_PORT_443_TCP_ADDR=172.30.0.1
ZOOKEEPER_PORT_2888_TCP_ADDR=172.30.222.17
SPARK_MASTER_SVC_SERVICE_PORT_HTTP=80
GREENPLUM_SERVICE_PORT=5432
GREENPLUM_PORT_5432_TCP_PORT=5432
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_SERVICE_PORT=1
ZOOKEEPER_PORT_3888_TCP=tcp://172.30.222.17:3888
ZOOKEEPER_PORT_3888_TCP_PROTO=tcp
MONGODB_SERVICE_PORT=27017
KAFKA_SERVICE_PORT_TCP_CLIENT=9092
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.50.211
ZOOKEEPER_SERVICE_PORT_TCP_CLIENT=2181
ZOOKEEPER_SERVICE_PORT_FOLLOWER=2888
KAFKA_SERVICE_PORT=9092
SPARK_MASTER_SVC_PORT_80_TCP_PORT=80
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.50.211:1
ORION_SERVICE_HOST=172.30.55.76
KAFKA_PORT_9092_TCP_PROTO=tcp
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_53_UDP=udp://172.30.0.1:53
KUBERNETES_PORT_53_UDP_ADDR=172.30.0.1
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
ZOOKEEPER_PORT_3888_TCP_ADDR=172.30.222.17
ZEPPELIN_SERVICE_PORT_HTTP=80
ORION_PORT_80_TCP=tcp://172.30.55.76:80
GREENPLUM_PORT_5432_TCP_PROTO=tcp
SPARK_MASTER_SVC_PORT=tcp://172.30.88.254:7077
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.178.127
MONGODB_SERVICE_HOST=172.30.240.109
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_SERVICE_PORT=1
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.245.33
KUBERNETES_PORT_53_TCP_ADDR=172.30.0.1
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_PORT=tcp://172.30.178.127:1
ORION_PORT=tcp://172.30.55.76:80
GREENPLUM_PORT_5432_TCP_ADDR=172.30.0.147
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.229.165:1
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_PORT=tcp://172.30.50.211:1
ORION_SERVICE_PORT=80
ORION_PORT_80_TCP_PROTO=tcp
KAFKA_0_EXTERNAL_PORT_9094_TCP=tcp://172.30.235.145:9094
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_PORT=tcp://172.30.167.19:1
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_SERVICE_PORT=1
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.229.165
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_PROTO=tcp
KAFKA_0_EXTERNAL_SERVICE_PORT_TCP_KAFKA=9094
KAFKA_0_EXTERNAL_PORT_9094_TCP_PROTO=tcp
SPARK_MASTER_SVC_SERVICE_HOST=172.30.88.254
KUBERNETES_SERVICE_PORT_DNS_TCP=53
KUBERNETES_PORT_53_UDP_PORT=53
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.178.127:1
ZEPPELIN_SERVICE_HOST=172.30.57.29
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_SERVICE_PORT=1
SPARK_MASTER_SVC_PORT_80_TCP_ADDR=172.30.88.254
KUBERNETES_PORT=tcp://172.30.0.1:443
ZOOKEEPER_PORT_2181_TCP_PORT=2181
ZOOKEEPER_PORT_2888_TCP_PROTO=tcp
SPARK_MASTER_SVC_SERVICE_PORT=7077
GLUSTERFS_DYNAMIC_247E77C4_9F59_11EA_9E39_0050569F5F65_PORT=tcp://172.30.245.33:1
GLUSTERFS_DYNAMIC_025DF8B3_A642_11EA_9E39_0050569F5F65_PORT=tcp://172.30.229.165:1
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_PORT_1_TCP_PORT=1
ZOOKEEPER_SERVICE_PORT_TCP_ELECTION=3888
ZOOKEEPER_PORT=tcp://172.30.222.17:2181
ZOOKEEPER_PORT_2181_TCP_ADDR=172.30.222.17
SPARK_MASTER_SVC_PORT_7077_TCP=tcp://172.30.88.254:7077
KUBERNETES_SERVICE_PORT_DNS=53
KUBERNETES_PORT_443_TCP=tcp://172.30.0.1:443
ZEPPELIN_PORT_80_TCP_PORT=80
KAFKA_0_EXTERNAL_PORT_9094_TCP_PORT=9094
GREENPLUM_SERVICE_HOST=172.30.0.147
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_PORT=tcp://172.30.117.125:1
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_PORT=tcp://172.30.133.154:1
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.133.154
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.167.19
KUBERNETES_PORT_53_TCP_PROTO=tcp
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_SERVICE_PORT=1
ORION_SERVICE_PORT_HTTP=80
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.167.19:1
SPARK_MASTER_SVC_SERVICE_PORT_CLUSTER=7077
KAFKA_SERVICE_HOST=172.30.164.40
GREENPLUM_PORT=tcp://172.30.0.147:5432
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.117.125
KUBERNETES_PORT_53_TCP=tcp://172.30.0.1:53
GLUSTERFS_DYNAMIC_243832C0_9F59_11EA_9E39_0050569F5F65_PORT_1_TCP_ADDR=172.30.178.127
ZEPPELIN_PORT=tcp://172.30.57.29:80
KAFKA_PORT=tcp://172.30.164.40:9092
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_53_TCP_PORT=53
SPARK_MASTER_SVC_PORT_80_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT=443
GLUSTERFS_DYNAMIC_C6419348_9FEB_11EA_9E39_0050569F5F65_SERVICE_PORT=1
GLUSTERFS_DYNAMIC_85199C48_9E97_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.117.125
MONGODB_PORT_27017_TCP_ADDR=172.30.240.109
GREENPLUM_PORT_5432_TCP=tcp://172.30.0.147:5432
KUBERNETES_PORT_443_TCP_PROTO=tcp
KAFKA_PORT_9092_TCP_ADDR=172.30.164.40
ZOOKEEPER_SERVICE_PORT=2181
ZOOKEEPER_PORT_2181_TCP=tcp://172.30.222.17:2181
ZOOKEEPER_PORT_2888_TCP_PORT=2888
GLUSTERFS_DYNAMIC_2ECF75EB_A4D2_11EA_9E39_0050569F5F65_PORT_1_TCP=tcp://172.30.133.154:1
GLUSTERFS_DYNAMIC_3673E93E_9E97_11EA_9E39_0050569F5F65_SERVICE_HOST=172.30.167.19
Z_VERSION=0.9.0-preview1
LOG_TAG=[ZEPPELIN_0.9.0-preview1]:
Z_HOME=/zeppelin
LANG=en_US.UTF-8
LC_ALL=en_US.UTF-8
ZEPPELIN_ADDR=0.0.0.0
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
HOME=/

Recommended Answer

ZEPPELIN_PORT is set by Kubernetes service discovery (service links), because your pod/service name is zeppelin!

Either rename the pod/service to something else, or disable the discovery environment variables; see https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#accessing-the-service. That is simply enableServiceLinks: false in your Zeppelin pod template definition.
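Applied to the Deployment from the question, disabling service links is a one-line addition to the pod template spec. A trimmed sketch showing only the relevant fields:

```yaml
# Pod template excerpt: with enableServiceLinks set to false, the kubelet no
# longer injects ZEPPELIN_PORT / ZEPPELIN_SERVICE_* variables for the
# "zeppelin" Service, so the pod sees only the explicitly declared env vars.
spec:
  template:
    spec:
      enableServiceLinks: false
      serviceAccountName: zeppelin
      containers:
        - name: zeppelin
          image: "apache/zeppelin:0.9.0"
          env:
            - name: ZEPPELIN_PORT
              value: "8080"
```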
