DNS in Kubernetes not working
Question

I am following the example at https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/dns, but I cannot get the nslookup output shown in the example.
When I execute
kubectl exec busybox -- nslookup kubernetes
it should return
Server: 10.0.0.10
Address 1: 10.0.0.10
Name: kubernetes
Address 1: 10.0.0.1
but I only get
nslookup: can't resolve 'kubernetes'
Server: 10.0.2.3
Address 1: 10.0.2.3
error: Error executing remote command: Error executing command in container: Error executing in Docker Container: 1
My Kubernetes is running on a VM, and its ifconfig output is as below:
docker0 Link encap:Ethernet HWaddr 56:84:7a:fe:97:99
inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::5484:7aff:fefe:9799/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:50 errors:0 dropped:0 overruns:0 frame:0
TX packets:34 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2899 (2.8 KB) TX bytes:2343 (2.3 KB)
eth0 Link encap:Ethernet HWaddr 08:00:27:ed:09:81
inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:feed:981/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4735 errors:0 dropped:0 overruns:0 frame:0
TX packets:2762 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:367445 (367.4 KB) TX bytes:280749 (280.7 KB)
eth1 Link encap:Ethernet HWaddr 08:00:27:1f:0d:84
inet addr:192.168.144.17 Bcast:192.168.144.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe1f:d84/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3 errors:0 dropped:0 overruns:0 frame:0
TX packets:19 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:330 (330.0 B) TX bytes:1746 (1.7 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:127976 errors:0 dropped:0 overruns:0 frame:0
TX packets:127976 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:13742978 (13.7 MB) TX bytes:13742978 (13.7 MB)
veth142cdac Link encap:Ethernet HWaddr e2:b6:29:d1:f5:dc
inet6 addr: fe80::e0b6:29ff:fed1:f5dc/64 Scope:Link
UP BROADCAST RUNNING MTU:1500 Metric:1
RX packets:18 errors:0 dropped:0 overruns:0 frame:0
TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1336 (1.3 KB) TX bytes:1336 (1.3 KB)
Here are the steps I tried to start Kubernetes:
vagrant@kubernetes:~/kubernetes$ hack/local-up-cluster.sh
+++ [0623 11:18:47] Building go targets for linux/amd64:
cmd/kube-proxy
cmd/kube-apiserver
cmd/kube-controller-manager
cmd/kubelet
cmd/hyperkube
cmd/kubernetes
plugin/cmd/kube-scheduler
cmd/kubectl
cmd/integration
cmd/gendocs
cmd/genman
cmd/genbashcomp
cmd/genconversion
cmd/gendeepcopy
examples/k8petstore/web-server
github.com/onsi/ginkgo/ginkgo
test/e2e/e2e.test
+++ [0623 11:18:52] Placing binaries
curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
API SERVER port is free, proceeding...
Starting etcd
etcd -data-dir /tmp/test-etcd.FcQ75s --bind-addr 127.0.0.1:4001 >/dev/null 2>/dev/null
Waiting for etcd to come up.
+++ [0623 11:18:53] etcd:
{"action":"set","node":{"key":"/_test","value":"","modifiedIndex":3,"createdIndex":3}}
Waiting for apiserver to come up
+++ [0623 11:18:55] apiserver:
{
  "kind": "PodList",
  "apiVersion": "v1beta3",
  "metadata": {
    "selfLink": "/api/v1beta3/pods",
    "resourceVersion": "11"
  },
  "items": []
}
Local Kubernetes cluster is running. Press Ctrl-C to shut it down.
Logs:
/tmp/kube-apiserver.log
/tmp/kube-controller-manager.log
/tmp/kube-proxy.log
/tmp/kube-scheduler.log
/tmp/kubelet.log
To start using your cluster, open up another terminal/tab and run:
cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true
cluster/kubectl.sh config set-context local --cluster=local
cluster/kubectl.sh config use-context local
cluster/kubectl.sh
Then in a new terminal window, I executed:
cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true
cluster/kubectl.sh config set-context local --cluster=local
cluster/kubectl.sh config use-context local
After that, I created the busybox pod with
kubectl create -f busybox.yaml
The content of busybox.yaml is from https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cluster/addons/dns/README.md
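For reference, a busybox test pod manifest of the kind that README describes looks roughly like the following. This is a sketch, not a copy of the README's exact file; field names reflect the API of that era and may differ slightly in your version:

```yaml
# A minimal busybox pod that sleeps so you can exec nslookup inside it.
apiVersion: v1beta3
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    # Keep the container alive; busybox exits immediately without a command.
    command:
    - sleep
    - "3600"
  restartPolicy: Always
```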
Accepted answer
It doesn't appear that local-up-cluster.sh supports DNS out of the box. For DNS to work, the kubelet needs to be passed the flags --cluster_dns=<ip-of-dns-service> and --cluster_domain=cluster.local at startup. These flags aren't included in the set of flags passed to the kubelet, so the kubelet won't try to contact the DNS pod that you've created for name resolution services.
To fix this, you can modify the script to add these two flags to the kubelet, and then when you create the DNS service you need to make sure that you set the same IP address that you passed to the --cluster_dns flag as the portalIP field of the service spec (see an example here).