kube-controller-manager doesn't start when using "cloud-provider=aws" with kubeadm


Problem description

I'm trying to set up the Kubernetes integration with AWS, but kube-controller-manager doesn't start. (BTW: everything works perfectly without the AWS option.)

Here is what I did:

-1-

ubuntu@ip-172-31-17-233:~$ more /etc/kubernetes/aws.conf

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
cloudProvider: aws
kubernetesVersion: 1.10.3

-2-

ubuntu@ip-172-31-17-233:~$ more /etc/kubernetes/cloud-config.conf

[Global]
KubernetesClusterTag=kubernetes
KubernetesClusterID=kubernetes

(I tried lots of combinations here, following the examples I found, including "ws_access_key_id", "aws_secret_access_key", omitting the .conf, or removing the file entirely, but nothing worked.)

-3-

ubuntu@ip-172-31-17-233:~$ sudo kubeadm init --config /etc/kubernetes/aws.conf

[init] Using Kubernetes version: v1.10.3
[init] Using Authorization modes: [Node RBAC]
[init] WARNING: For cloudprovider integrations to work --cloud-provider must be set for all kubelets in the cluster.
        (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf should be edited for this purpose)
[preflight] Running pre-flight checks.
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [ip-172-31-17-233 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.17.233]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [ip-172-31-17-233] and IPs [172.31.17.233]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 19.001348 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node ip-172-31-17-233 as master by adding a label and a taint
[markmaster] Master ip-172-31-17-233 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: x8hi0b.uxjr40j9gysc7lcp
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.31.17.233:6443 --token x8hi0b.uxjr40j9gysc7lcp --discovery-token-ca-cert-hash sha256:8ad9dfbcacaeba5bc3242c811b1e83c647e2e88f98b0d783875c2053f7a40f44

-4-

ubuntu@ip-172-31-17-233:~$ mkdir -p $HOME/.kube
ubuntu@ip-172-31-17-233:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite '/home/ubuntu/.kube/config'? y
ubuntu@ip-172-31-17-233:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

-5-

ubuntu@ip-172-31-17-233:~$ kubectl get pods --all-namespaces

NAMESPACE     NAME                                       READY     STATUS             RESTARTS   AGE
kube-system   etcd-ip-172-31-17-233                      1/1       Running            0          40s
kube-system   kube-apiserver-ip-172-31-17-233            1/1       Running            0          45s
kube-system   kube-controller-manager-ip-172-31-17-233   0/1       CrashLoopBackOff   3          1m
kube-system   kube-scheduler-ip-172-31-17-233            1/1       Running            0          35s

kubectl version

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Any idea? I'm new to Kubernetes, and I have no idea what I can do...

Thanks, Michal.

Answer

Any idea?

Check the following points as potential issues:

  • The kubelet has the proper provider set; check that /etc/systemd/system/kubelet.service.d/20-cloud-provider.conf contains:

Environment="KUBELET_EXTRA_ARGS=--cloud-provider=aws --cloud-config=/etc/kubernetes/cloud-config.conf"

If not, add it and restart the kubelet service.
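
For example, a minimal sketch of applying that change; it assumes the drop-in only needs to carry this one variable, and the [Service] header is what makes the Environment= line valid systemd syntax:

# Write the drop-in (adjust the path if your setup already uses a different one)
sudo tee /etc/systemd/system/kubelet.service.d/20-cloud-provider.conf <<'EOF'
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=aws --cloud-config=/etc/kubernetes/cloud-config.conf"
EOF

# Reload unit files and restart the kubelet so the new flags take effect
sudo systemctl daemon-reload
sudo systemctl restart kubelet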

/etc/kubernetes/manifests/中,检查以下文件的配置是否正确:

In /etc/kubernetes/manifests/, check that the following files have the proper configuration:

  • kube-controller-manager.yaml and kube-apiserver.yaml:

--cloud-provider=aws

If not, just add it, and the pod will be restarted automatically.
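
A quick way to verify, assuming the default manifest paths kubeadm wrote in step 3 above:

# Check that the flag is present in both static pod manifests
grep -n -e '--cloud-provider=aws' \
  /etc/kubernetes/manifests/kube-apiserver.yaml \
  /etc/kubernetes/manifests/kube-controller-manager.yaml

# If a file has no match, add "- --cloud-provider=aws" as another entry in the
# container's command: list; the kubelet notices the change and recreates the pod.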

If you could supply the logs requested by Artem in the comments, that could shed more light on the issue.
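
For reference, the crash output can be pulled straight from the static pod (the pod name is taken from the kubectl get pods output above):

# Logs of the current attempt
kubectl -n kube-system logs kube-controller-manager-ip-172-31-17-233

# Logs of the previous (crashed) attempt, which usually contain the actual error
kubectl -n kube-system logs kube-controller-manager-ip-172-31-17-233 --previous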

As requested in the comments, a short overview of IAM policy handling:

  • Create a new IAM policy (or edit an existing one appropriately), say k8s-default-policy. Given below is quite a liberal policy; you can fine-tune the exact settings to match your security preferences. Pay attention to the load balancer section in your case. In the description, put something along the lines of "Allows EC2 instances to call AWS services on your behalf." or similar...

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::kubernetes-*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "ec2:AttachVolume",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "ec2:DetachVolume",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["ec2:*"],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": ["elasticloadbalancing:*"],
      "Resource": ["*"]
    }
  ]
}

  • Create a new role (or edit an existing one appropriately) and attach the previous policy to it, e.g. attach k8s-default-policy to k8s-default-role.
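
    If you prefer the CLI over the console, a rough sketch of the same two steps; policy.json stands for the JSON document above saved to a file, trust.json for a standard EC2 trust policy, and <account-id> is a placeholder:

# Create the policy from the JSON document shown above (saved as policy.json)
aws iam create-policy --policy-name k8s-default-policy \
  --policy-document file://policy.json

# Create the role with a standard EC2 trust policy (saved as trust.json), e.g.
#   {"Version":"2012-10-17","Statement":[{"Effect":"Allow",
#    "Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}
aws iam create-role --role-name k8s-default-role \
  --assume-role-policy-document file://trust.json

# Attach the policy to the role
aws iam attach-role-policy --role-name k8s-default-role \
  --policy-arn arn:aws:iam::<account-id>:policy/k8s-default-policy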

    Attach the role to the instances that need to handle AWS resources. You can create different roles for the master and for workers if you need to: EC2 -> Instances -> (select instance) -> Actions -> Instance Settings -> Attach/Replace IAM Role -> (select appropriate role).
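
    The CLI route goes through an instance profile; a sketch, with the instance id as a placeholder and the profile name made up for illustration:

# An EC2 instance picks up a role via an instance profile
aws iam create-instance-profile --instance-profile-name k8s-default-profile
aws iam add-role-to-instance-profile --instance-profile-name k8s-default-profile \
  --role-name k8s-default-role

# Associate the profile with the master instance (repeat for workers with their profile)
aws ec2 associate-iam-instance-profile --instance-id <instance-id> \
  --iam-instance-profile Name=k8s-default-profile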

    Also, apart from this, check that all resources in question are tagged with the kubernetes tag.
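
    For example, tagging with the same value as KubernetesClusterTag=kubernetes from cloud-config.conf above; the tag key here is the one the legacy in-tree AWS provider commonly looks for, and the resource ids are placeholders:

# Tag the instances, subnets and security groups the cluster uses
aws ec2 create-tags \
  --resources <instance-id> <subnet-id> <security-group-id> \
  --tags Key=KubernetesCluster,Value=kubernetes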
