AWS load balancer is not registered with instances
Question
I use kubeadm to launch a cluster on AWS. I can successfully create a load balancer on AWS using kubectl, but the load balancer is not registered with any EC2 instances. As a result, the service cannot be accessed from the public internet.
From my observation, when the ELB is created it cannot find any healthy instances under any of the subnets. I am fairly sure I have tagged all my instances correctly.
Update: reading the logs from kube-controller-manager, I see that my node does not have a ProviderID set. According to a GitHub comment, the ELB will ignore nodes whose instance ID cannot be determined from the provider. Could this be causing the issue? How should I set the providerID?
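One way to confirm this is to list each node together with its spec.providerID; an empty column means the cloud provider never assigned one. The snippet below is a minimal sketch: the kubectl invocation assumes working cluster access, and the sample value checked locally is hypothetical.

```shell
# To see whether nodes have a providerID (requires kubectl access to the cluster):
#   kubectl get nodes -o custom-columns='NAME:.metadata.name,PROVIDER_ID:.spec.providerID'
#
# On AWS the providerID has the form "aws:///<availability-zone>/<instance-id>".
# A small local shape check (the value below is a hypothetical example):
provider_id="aws:///ap-southeast-2a/i-0abc123def456"
case "$provider_id" in
  aws:///*/i-*) echo "providerID looks valid" ;;
  *)            echo "providerID missing or malformed" ;;
esac
```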
apiVersion: v1
kind: Service
metadata:
  name: load-balancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "elb"
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: replica
  type: LoadBalancer
Deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: replica-deployment
  labels:
    app: replica
spec:
  replicas: 1
  selector:
    matchLabels:
      app: replica
  template:
    metadata:
      labels:
        app: replica
    spec:
      containers:
      - name: web
        image: web
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        - containerPort: 443
        command: ["/bin/bash"]
        args: ["-c", "script_to_start_server.sh"]
Node output, status section
status:
  addresses:
  - address: 172.31.35.209
    type: InternalIP
  - address: k8s
    type: Hostname
  allocatable:
    cpu: "4"
    ephemeral-storage: "119850776788"
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 16328856Ki
    pods: "110"
  capacity:
    cpu: "4"
    ephemeral-storage: 130046416Ki
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 16431256Ki
    pods: "110"
  conditions:
  - lastHeartbeatTime: 2018-07-12T04:01:54Z
    lastTransitionTime: 2018-07-11T15:45:06Z
    message: kubelet has sufficient disk space available
    reason: KubeletHasSufficientDisk
    status: "False"
    type: OutOfDisk
  - lastHeartbeatTime: 2018-07-12T04:01:54Z
    lastTransitionTime: 2018-07-11T15:45:06Z
    message: kubelet has sufficient memory available
    reason: KubeletHasSufficientMemory
    status: "False"
    type: MemoryPressure
  - lastHeartbeatTime: 2018-07-12T04:01:54Z
    lastTransitionTime: 2018-07-11T15:45:06Z
    message: kubelet has no disk pressure
    reason: KubeletHasNoDiskPressure
    status: "False"
    type: DiskPressure
  - lastHeartbeatTime: 2018-07-12T04:01:54Z
    lastTransitionTime: 2018-07-11T15:45:06Z
    message: kubelet has sufficient PID available
    reason: KubeletHasSufficientPID
    status: "False"
    type: PIDPressure
  - lastHeartbeatTime: 2018-07-12T04:01:54Z
    lastTransitionTime: 2018-07-11T15:45:06Z
    message: kubelet is posting ready status. AppArmor enabled
    reason: KubeletReady
    status: "True"
    type: Ready
How can I fix this issue?
Thanks!
Recommended answer
In my case, the issue was that the worker nodes were not getting the providerID assigned properly.
I managed to patch the node like so:

kubectl patch node ip-xxxxx.ap-southeast-2.compute.internal -p '{"spec":{"providerID":"aws:///ap-southeast-2a/i-0xxxxx"}}'
to add the ProviderID. Then, when I deployed the service, the ELB got created, the node group got added, and it worked end to end. This is not a straightforward answer, but until I find a better solution, let it remain here.
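Instead of hard-coding the availability zone and instance ID in the patch, the values can be read from the EC2 instance metadata service (169.254.169.254) on each worker node. The sketch below assumes that fetch has been done; `build_provider_id` is a hypothetical helper, and the IDs in the example are placeholders, not values from the question.

```shell
# The AZ and instance ID would normally come from the EC2 metadata service, e.g.:
#   az=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
#   instance_id=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

# Hypothetical helper: build the providerID string the AWS cloud provider expects,
# in the form "aws:///<availability-zone>/<instance-id>".
build_provider_id() {
  printf 'aws:///%s/%s' "$1" "$2"
}

# The node could then be patched with the generated value (node name is a placeholder):
#   kubectl patch node ip-xxxxx.ap-southeast-2.compute.internal \
#     -p "{\"spec\":{\"providerID\":\"$(build_provider_id "$az" "$instance_id")\"}}"
build_provider_id ap-southeast-2a i-0abc123
```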