calico-policy-controller requests etcd2 certificates of a different CoreOS server
Problem description
I have two CoreOS stable servers. Each one runs an etcd2 server, and they share the same discovery URL. Each server generated a different certificate for its etcd2 daemon. I installed the Kubernetes controller on one (coreos-2.tux-in.com) and a worker on coreos-3.tux-in.com. Calico is configured to use the etcd2 certificates of coreos-2.tux-in.com, but it seems that Kubernetes started the calico-policy-controller on coreos-3.tux-in.com, so it can't find those etcd2 certificates. The coreos-2.tux-in.com certificate file names start with etcd1, and the coreos-3.tux-in.com certificate file names start with etcd2.
So, do I just place the certificates of both etcd2 daemons on both CoreOS servers? Do I need to restrict the kube-policy-controller to start on coreos-2.tux-in.com? What should I do here?
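For context on the second option: if pinning the controller to one node turned out to be the right fix, it would look something like the sketch below. This is only an illustration, not a recommendation; the label key/value `etcd-certs: etcd1` is made up here, and the snippet shows only the relevant part of a pod template.

```yaml
# Hypothetical sketch: pin the policy controller to the node that has the
# etcd1 certificates, by first labeling that node:
#
#   kubectl label node coreos-2.tux-in.com etcd-certs=etcd1
#
# and then adding a nodeSelector to the pod template in the ReplicaSet:
spec:
  template:
    spec:
      nodeSelector:
        etcd-certs: etcd1   # made-up label, matches the kubectl command above
```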
Here is my /srv/kubernetes/manifests/calico.yaml file:
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "https://coreos-2.tux-in.com:2379,https://coreos-3.tux-in.com:2379"
  etcd_ca: "/etc/ssl/etcd/ca.pem"
  etcd_key: "/etc/ssl/etcd/etcd1-key.pem"
  etcd_cert: "/etc/ssl/etcd/etcd1.pem"
  # The CNI network configuration to install on each node. The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "calico",
      "type": "flannel",
      "delegate": {
        "type": "calico",
        "etcd_endpoints": "__ETCD_ENDPOINTS__",
        "etcd_ca": "/etc/ssl/etcd/ca.pem",
        "etcd_key": "/etc/ssl/etcd/etcd1-key.pem",
        "etcd_cert": "/etc/ssl/etcd/etcd1.pem",
        "log_level": "info",
        "policy": {
          "type": "k8s",
          "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
          "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
        },
        "kubernetes": {
          "kubeconfig": "/etc/kubernetes/cni/net.d/__KUBECONFIG_FILENAME__"
        }
      }
    }
---
# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key":"CriticalAddonsOnly", "operator":"Exists"}]
    spec:
      hostNetwork: true
      containers:
        # Runs calico/node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v1.0.0
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            - name: ETCD_CA_CERT_FILE
              value: "/etc/ssl/etcd/ca.pem"
            - name: ETCD_CERT_FILE
              value: "/etc/ssl/etcd/etcd1.pem"
            - name: ETCD_KEY_FILE
              value: "/etc/ssl/etcd/etcd1-key.pem"
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              value: "none"
            # Disable file logging so 'kubectl logs' works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            - name: NO_DEFAULT_POOLS
              value: "true"
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /etc/resolv.conf
              name: dns
              readOnly: true
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v1.5.2
          imagePullPolicy: Always
          command: ["/install-cni.sh"]
          env:
            # CNI configuration filename
            - name: CNI_CONF_NAME
              value: "10-calico.conf"
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/kubernetes/cni/net.d
        - name: dns
          hostPath:
            path: /etc/resolv.conf
---
# This manifest deploys the Calico policy controller on Kubernetes.
# See https://github.com/projectcalico/k8s-policy
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: calico-policy-controller
  namespace: kube-system
  labels:
    k8s-app: calico-policy
spec:
  # The policy controller can only have a single active instance.
  replicas: 1
  template:
    metadata:
      name: calico-policy-controller
      namespace: kube-system
      labels:
        k8s-app: calico-policy
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key":"CriticalAddonsOnly", "operator":"Exists"}]
    spec:
      # The policy controller must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      containers:
        - name: calico-policy-controller
          image: calico/kube-policy-controller:v0.4.0
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            - name: ETCD_CA_CERT_FILE
              value: "/etc/ssl/etcd/ca.pem"
            - name: ETCD_CERT_FILE
              value: "/etc/ssl/etcd/etcd1.pem"
            - name: ETCD_KEY_FILE
              value: "/etc/ssl/etcd/etcd1-key.pem"
            # The location of the Kubernetes API. Use the default Kubernetes
            # service for API access.
            - name: K8S_API
              value: "https://kubernetes.default:443"
            # Since we're running in the host namespace and might not have KubeDNS
            # access, configure the container's /etc/hosts to resolve
            # kubernetes.default to the correct service clusterIP.
            - name: CONFIGURE_ETC_HOSTS
              value: "true"
Accepted answer
OK, so my setup at home is two PCs with CoreOS installed on them for testing, so in my scenario I have two etcd2 servers, one on each machine.

In general, Kubernetes is meant for a large fleet of servers, and the recommendation is not to run the etcd2 servers and the Kubernetes containers on the same machine. Since in my case I must have both on the same server, I opened etcd2 on the loopback device 127.0.0.1, port 4001, over plain http. Now, whenever I need to configure etcd2 endpoints in Kubernetes containers, I just point them at http://127.0.0.1:4001, which reaches the etcd2 on that same server without requiring SSL certificates. So services on the same server don't need HTTPS for etcd2, but services outside the server still do.
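The change described above can be sketched as a systemd drop-in for the etcd2 unit. `ETCD_LISTEN_CLIENT_URLS` and `ETCD_ADVERTISE_CLIENT_URLS` are etcd's standard configuration environment variables; the drop-in path and the exact URLs below are assumptions about this particular setup, not a verbatim copy of the author's files.

```ini
# /etc/systemd/system/etcd2.service.d/40-listen-urls.conf  (hypothetical path)
# Keep serving TLS on 2379 for remote clients, and additionally listen on
# plain HTTP on the loopback device so local Kubernetes containers can
# reach etcd without client certificates.
[Service]
Environment="ETCD_LISTEN_CLIENT_URLS=https://coreos-2.tux-in.com:2379,http://127.0.0.1:4001"
Environment="ETCD_ADVERTISE_CLIENT_URLS=https://coreos-2.tux-in.com:2379"
```

After a `systemctl daemon-reload` and restart of etcd2, on-host components can then use `http://127.0.0.1:4001` as their `etcd_endpoints`, with no `etcd_ca`/`etcd_key`/`etcd_cert` entries needed for that endpoint.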