Kubernetes - connect to cassandra from job to different pod
When I'm trying to execute the following command

["/bin/sh", "-c", "cqlsh cassandra.my-namespace.svc.cluster.local -f /path/to/schema.cql"]

from my Job, I am receiving the following error:

Traceback (most recent call last):
  File "/usr/bin/cqlsh.py", line 2443, in <module>
    main(*read_options(sys.argv[1:], os.environ))
  File "/usr/bin/cqlsh.py", line 2421, in main
    encoding=options.encoding)
  File "/usr/bin/cqlsh.py", line 485, in __init__
    load_balancing_policy=WhiteListRoundRobinPolicy([self.hostname]),
  File "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.11.0-bb96859b.zip/cassandra-driver-3.11.0-bb96859b/cassandra/policies.py", line 417, in __init__
socket.gaierror: [Errno -2] Name or service not known

My Job is defined as Helm Hook with post-install annotation. My Cassandra Pod is defined using StatefulSet.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra
  replicas: 1
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra:3
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 7000
              name: intra-node
            - containerPort: 7001
              name: tls-intra-node
            - containerPort: 7199
              name: jmx
            - containerPort: 9042
              name: cql
          env:
            - name: CASSANDRA_SEEDS
              value: cassandra-0.cassandra.default.svc.cluster.local
            - name: MAX_HEAP_SIZE
              value: 256M
            - name: HEAP_NEWSIZE
              value: 100M
            - name: CASSANDRA_CLUSTER_NAME
              value: "Cassandra"
            - name: CASSANDRA_DC
              value: "DC1"
            - name: CASSANDRA_RACK
              value: "Rack1"
            - name: CASSANDRA_ENDPOINT_SNITCH
              value: GossipingPropertyFileSnitch
          volumeMounts:
            - name: cassandra-data
              mountPath: /var/lib/cassandra/data
  volumeClaimTemplates:
    - metadata:
        name: cassandra-data
        annotations:  # comment line if you want to use a StorageClass
          # or specify which StorageClass
          volume.beta.kubernetes.io/storage-class: ""   # comment line if you
          # want to use a StorageClass or specify which StorageClass
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi

And this is my Service:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  clusterIP: None
  ports:
    - port: 9042
  selector:
    app: cassandra

When I run the cqlsh command manually from the container, everything works. Unfortunately, the automated solution throws the mentioned error.

Am I missing something in the Service configuration? I assumed that since I am connecting to the Service from a Pod created by the Job, it should work.

EDIT: Job looks like this:

apiVersion: batch/v1
kind: Job
metadata:
  name: init-db
spec:
  template:
    metadata: 
      name: init-db
      annotations: 
        "helm.sh/hook": post-install
    spec:
      containers:
      - name: cqlsh
        image: <cassandra-image>
        command: ["/bin/sh", "-c", "cqlsh cassandra.my-namespace.svc.cluster.local -f /path/to/schema.cql"]
        volumeMounts:
        - name: cass-init
          mountPath: /etc/config
    volumes:
      ...

And here is the content of /etc/resolv.conf:

nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
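The resolv.conf above shows the standard in-cluster search domains, so the question is whether the service FQDN resolves at all from inside the cluster. One way to check is a throwaway debug Pod (a sketch only; the Pod name `dns-debug` and the busybox image are arbitrary choices, not part of the original setup):

```yaml
# Hypothetical one-off debug Pod: runs nslookup against the service FQDN
# and exits. Inspect the result with: kubectl logs dns-debug
apiVersion: v1
kind: Pod
metadata:
  name: dns-debug
spec:
  restartPolicy: Never
  containers:
    - name: nslookup
      image: busybox:1.28
      command: ["nslookup", "cassandra.my-namespace.svc.cluster.local"]
```

If this lookup also fails, the problem lies with DNS or the namespace in the FQDN rather than with the Service spec itself.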

Solution

The error you posted indicates that wherever you're running the cqlsh command, it cannot resolve the service name.

Depending on how your k8s cluster is configured and where the job runs (inside the same k8s cluster or external), you'll need to expose access to the pods with Ingress or NodePort.
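For the external case, a NodePort Service could look roughly like this (a sketch only; the Service name and the node port 30042 are arbitrary choices, and the selector assumes the same `app: cassandra` label used above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cassandra-external
spec:
  type: NodePort
  selector:
    app: cassandra
  ports:
    - port: 9042        # service port
      targetPort: 9042  # container CQL port
      nodePort: 30042   # any free port in the default 30000-32767 range
```

A Job running inside the same cluster, as a Helm post-install hook normally does, would instead use the in-cluster Service name directly.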

Aleš Nosek has a good explanation of how to access pods in his blog post here. Cheers!
