Kubernetes PetSet DNS not working


Question

I have a Kubernetes PetSet with name == elasticsearch and serviceName == es. It does create pods and, as expected, they have names like elasticsearch-0 and elasticsearch-1. However, DNS does not seem to be working. elasticsearch-0.es does not resolve (nor does elasticsearch-0.default, etc.). If you look at the generated SRV records, they seem to be random instead of predictable:

# nslookup -type=srv elasticsearch
Server:        10.1.0.2
Address:    10.1.0.2#53

elasticsearch.default.svc.cluster.local    service = 10 100 0 9627d60e.elasticsearch.default.svc.cluster.local.

Does anyone have any ideas?

Details

Here's the actual PetSet and Service definition:

---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  ports:
  - name: rest
    port: 9200
  - name: native
    port: 9300
  clusterIP: None
  selector:
    app: elasticsearch
---
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: elasticsearch
spec:
  serviceName: "es"
  replicas: 2
  template:
    metadata:
      labels:
        app: elasticsearch
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: elasticsearch
        image: 672129611065.dkr.ecr.us-west-2.amazonaws.com/elasticsearch:v1
        ports:
          - containerPort: 9200
          - containerPort: 9300
        volumeMounts:
        - name: es-data
          mountPath: /usr/share/elasticsearch/data
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: ES_CLUSTER_NAME
            value: EsEvents
  volumeClaimTemplates:
  - metadata:
      name: es-data
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

Answer

This was an issue of me mis-reading the documentation. The docs say:

The network identity has 2 parts. First, we created a headless Service that controls the domain within which we create Pets. The domain managed by this Service takes the form: $(service name).$(namespace).svc.cluster.local, where "cluster.local" is the cluster domain. As each pet is created, it gets a matching DNS subdomain, taking the form: $(petname).$(governing service domain), where the governing service is defined by the serviceName field on the Pet Set.
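Applied to the manifests above, that scheme means pod elasticsearch-0 would be expected to get the DNS name elasticsearch-0.es.default.svc.cluster.local, since serviceName is "es" and the PetSet is in the default namespace.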

I took this to mean that the value of the serviceName field is the value of the "governing service domain", but that's not what it means. It means that the value of serviceName must match the name of an existing headless service, and that service will be used as the governing service. If no such service exists you don't get an error; you just get random DNS names for your pets.
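For reference, a minimal sketch of one way to fix this, assuming the PetSet above stays as-is: add a headless Service whose name matches the PetSet's serviceName ("es"); equivalently, the PetSet's serviceName could be changed to "elasticsearch" so it matches the existing Service.

---
# Sketch of a headless Service named to match serviceName: "es" in the PetSet above.
apiVersion: v1
kind: Service
metadata:
  name: es
  labels:
    app: elasticsearch
spec:
  clusterIP: None        # headless, so each pet gets its own DNS record
  ports:
  - name: rest
    port: 9200
  - name: native
    port: 9300
  selector:
    app: elasticsearch

With a governing Service in place, names like elasticsearch-0.es.default.svc.cluster.local should resolve, and the SRV records for es should point at the pet hostnames rather than the hashed names shown in the question.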
