Configure Kube DNS when running Kubernetes via Docker


Question

I am trying to prepare a dev environment for my team, so we can develop, stage, and deploy with the same (or nearly the same) environment.

Getting a Kubernetes cluster running locally via http://kubernetes.io/v1.0/docs/getting-started-guides/docker.html was nice and simple. I could then use kubectl to start the pods and services for my application.

However, the service IP addresses are going to be different each time you start up, which is a problem if your code needs to use them. On Google Container Engine, kube DNS means you can access a service by name, so the code that uses the service can remain constant between deployments.
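As a minimal sketch of what that buys you (the service name my-service and the /healthz path are hypothetical, and cluster.local matches the domain used later in this post), code can target a stable DNS name instead of a changing cluster IP:

# inside any pod in the "default" namespace, assuming a service named "my-service" exposing port 8080
curl http://my-service.default.svc.cluster.local:8080/healthz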

Now, I know we could piece together the IP and port via environment variables, but I wanted the setup to be as identical as possible.
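For reference, that workaround would look roughly like this (again with a hypothetical service called my-service); Kubernetes injects {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables into pods started after the service exists:

# inside a pod, composing the address from the injected environment variables
curl http://${MY_SERVICE_SERVICE_HOST}:${MY_SERVICE_SERVICE_PORT}/healthz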

So I followed some instructions found in various places, both here and in the Kubernetes repo, like this.

Sure enough, with a little editing of the yml files, KubeDNS starts up.

But an nslookup on kubernetes.default fails. The health check on the DNS also fails (because it can't resolve the test lookup), and the instance is shut down and restarted.
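For context, this is the kind of check that fails (assuming a simple busybox pod is already running, as in the Kubernetes DNS debugging docs):

kubectl exec busybox -- nslookup kubernetes.default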

Running kubectl cluster-info results in:

Kubernetes master is running at http://localhost:8080
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns

So all good. However, hitting that endpoint results in:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for \"kube-dns\"",
  "code": 500
}
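(For reference, "hitting that endpoint" here just means a plain GET against the URL that cluster-info printed, something like:)

curl http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns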

I am now at a loss, and I know it must be something obvious or easy to fix, since everything else seems to be working. Here is how I start up the cluster and DNS.

# Run etcd
docker run --net=host \
 -d gcr.io/google_containers/etcd:2.0.12 /usr/local/bin/etcd  \
 --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data

# Run the master
docker run \
    --volume=/:/rootfs:ro \
    --volume=/sys:/sys:ro \
    --volume=/dev:/dev \
    --volume=/var/lib/docker/:/var/lib/docker:ro \
    --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
    --volume=/var/run:/var/run:rw \
    --net=host \
    --privileged=true \
    -d \
    gcr.io/google_containers/hyperkube:v1.0.6 \
    /hyperkube kubelet --containerized --hostname-override="127.0.0.1" \
     --address="0.0.0.0" --api-servers=http://localhost:8080 \
      --config=/etc/kubernetes/manifests \
      --cluster_dns=10.0.0.10  --cluster_domain=cluster.local

# Run the service proxy
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v1.0.6 \
 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2

# Forward the local port - after this you should be able to use kubectl locally

machine=default; ssh -i ~/.docker/machine/machines/$machine/id_rsa docker@$(docker-machine ip $machine) -L 8080:localhost:8080

All the containers spin up ok, and kubectl get nodes reports ok. Note that I pass in the DNS flags.
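The sanity checks at this point were nothing exotic, just confirming the node registers and the kube-system pods appear:

kubectl get nodes
kubectl get pods --namespace=kube-system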

I then start the DNS rc with this file, which is an edited version of the one from the repo:

apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v9
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v9
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v9
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v9
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: gcr.io/google_containers/etcd:2.0.9
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: gcr.io/google_containers/kube2sky:1.11
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/kube2sky"
        - -domain=cluster.local
      - name: skydns
        image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://localhost:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain=cluster.local
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 1
          timeoutSeconds: 5
      - name: healthz
        image: gcr.io/google_containers/exechealthz:1.0
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      dnsPolicy: Default  # Don't use cluster DNS.

Then I start the service (again based on the file in the repo):

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP:  10.0.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
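Both files are then created in the usual way (skydns-rc.yaml is the file referred to later in this post; the service file name is just what I happened to call it):

kubectl create -f skydns-rc.yaml
kubectl create -f skydns-svc.yaml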

I made the assumption, based on another SO question, that clusterIP is the value I passed into the master, and not the IP of the host machine. I am sure it has to be something obvious or simple that I have missed. Anyone out there who can help?
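In other words, the assumption is just that these two values agree (and, presumably, that 10.0.0.10 sits inside the apiserver's service IP range):

# kubelet flags from the master's docker run above
--cluster_dns=10.0.0.10 --cluster_domain=cluster.local

# skydns service definition
clusterIP: 10.0.0.10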

Thanks!

UPDATE

I found this closed issue over in the GitHub repo. It seems I have an identical problem.

I have added to the thread on GitHub and tried lots of things, but still no progress. I tried using different images, but they had different errors (or the same error presenting itself differently, I couldn't tell).

Everything I have found relating to this suggests IP restrictions or firewall/security settings. So I decided to curl the API from the container itself.

docker exec  49705c38846a  echo $(curl http://0.0.0.0:8080/api/v1/services?labels=)

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   908  100   908    0     0   314k      0 --:--:-- --:--:-- --:--:--  443k
{ "kind": "ServiceList", "apiVersion": "v1", "metadata": { "selfLink": "/api/v1/services", "resourceVersion": "948" }, "items": [ { "metadata": { "name": "kubernetes", "namespace": "default", "selfLink": "/api/v1/namespaces/default/services/kubernetes", "uid": "369a9307-796e-11e5-87de-7a0704d1fdad", "resourceVersion": "6", "creationTimestamp": "2015-10-23T10:09:57Z", "labels": { "component": "apiserver", "provider": "kubernetes" } }, "spec": { "ports": [ { "protocol": "TCP", "port": 443, "targetPort": 443, "nodePort": 0 } ], "clusterIP": "10.0.0.1", "type": "ClusterIP", "sessionAffinity": "None" }, "status": { "loadBalancer": {} } } ] }

That seems like a valid response to me, so why the JSON parse error coming from kube2sky?

Failed to list *api.Service: couldn't get version/kind; json parse error: invalid character '<' looking for beginning of value
Failed to list *api.Endpoints: couldn't get version/kind; json parse error: invalid character '<' looking for beginning of value

Answer

The problem was with the networking: kube2sky was not reaching the API, so it couldn't get the services.

Change the docker run for the master from

--config=/etc/kubernetes/manifests

to

--config=/etc/kubernetes/manifests-multi

Then in skydns-rc.yaml, as well as setting the domain for kube2sky, set the host IP address:

- -kube_master_url=http://192.168.99.100:8080 #<- your docker machine IP
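Put together, the kube2sky container section ends up looking roughly like this (192.168.99.100 stands in for whatever docker-machine ip reports on your machine):

      - name: kube2sky
        image: gcr.io/google_containers/kube2sky:1.11
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        args:
        - -domain=cluster.local
        - -kube_master_url=http://192.168.99.100:8080  # your docker-machine IP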

Without manifests-multi, the host IP is not accessible.

This was a simple change, but it took a while to track down.

I have created a simple setup on GitHub and will maintain it, so people don't have to go through this pain just to get a local dev environment up and running.

https://github.com/justingrayston/kubernetes-docker-dns
