AKS reporting "Insufficient pods"

This article explains how to deal with AKS reporting "Insufficient pods"; the question and the recommended answer below may be a useful reference for anyone hitting the same error.

Problem description


I've gone through the Azure Cats&Dogs tutorial described here and I am getting an error in the final step where the apps are launched in AKS. Kubernetes is reporting that I have insufficient pods, but I'm not sure why this would be. I ran through this same tutorial a few weeks ago without problems.

$ kubectl apply -f azure-vote-all-in-one-redis.yaml
deployment.apps/azure-vote-back created
service/azure-vote-back created
deployment.apps/azure-vote-front created
service/azure-vote-front created

$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
azure-vote-back-655476c7f7-mntrt    0/1     Pending   0          6s
azure-vote-front-7c7d7f6778-mvflj   0/1     Pending   0          6s
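
A quick way to see why a pod stays Pending is kubectl describe, which prints the scheduler's events for that pod; the pod name below is taken from the listing above:

# The Events section at the end of the output shows the FailedScheduling reason
$ kubectl describe pod azure-vote-back-655476c7f7-mntrt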

$ kubectl get events
LAST SEEN   TYPE      REASON                 KIND         MESSAGE
3m36s       Warning   FailedScheduling       Pod          0/1 nodes are available: 1 Insufficient pods.
84s         Warning   FailedScheduling       Pod          0/1 nodes are available: 1 Insufficient pods.
70s         Warning   FailedScheduling       Pod          skip schedule deleting pod: default/azure-vote-back-655476c7f7-l5j28
9s          Warning   FailedScheduling       Pod          0/1 nodes are available: 1 Insufficient pods.
53m         Normal    SuccessfulCreate       ReplicaSet   Created pod: azure-vote-back-655476c7f7-kjld6
99s         Normal    SuccessfulCreate       ReplicaSet   Created pod: azure-vote-back-655476c7f7-l5j28
24s         Normal    SuccessfulCreate       ReplicaSet   Created pod: azure-vote-back-655476c7f7-mntrt
53m         Normal    ScalingReplicaSet      Deployment   Scaled up replica set azure-vote-back-655476c7f7 to 1
99s         Normal    ScalingReplicaSet      Deployment   Scaled up replica set azure-vote-back-655476c7f7 to 1
24s         Normal    ScalingReplicaSet      Deployment   Scaled up replica set azure-vote-back-655476c7f7 to 1
9s          Warning   FailedScheduling       Pod          0/1 nodes are available: 1 Insufficient pods.
3m36s       Warning   FailedScheduling       Pod          0/1 nodes are available: 1 Insufficient pods.
53m         Normal    SuccessfulCreate       ReplicaSet   Created pod: azure-vote-front-7c7d7f6778-rmbqb
24s         Normal    SuccessfulCreate       ReplicaSet   Created pod: azure-vote-front-7c7d7f6778-mvflj
53m         Normal    ScalingReplicaSet      Deployment   Scaled up replica set azure-vote-front-7c7d7f6778 to 1
53m         Normal    EnsuringLoadBalancer   Service      Ensuring load balancer
52m         Normal    EnsuredLoadBalancer    Service      Ensured load balancer
46s         Normal    DeletingLoadBalancer   Service      Deleting load balancer
24s         Normal    ScalingReplicaSet      Deployment   Scaled up replica set azure-vote-front-7c7d7f6778 to 1

$ kubectl get nodes
NAME                       STATUS   ROLES   AGE    VERSION
aks-nodepool1-27217108-0   Ready    agent   7d4h   v1.9.9


The only thing I can think of that has changed is that I have other (larger) clusters running now as well, and the main reason I went through this Cats&Dogs tutorial again was because I hit this same problem today with my other clusters. Is this a resource limit issue with my Azure account?


Update 10-20/3:15 PST: Notice how these three clusters all show that they use the same nodepool, even though they were created in different resource groups. Also note how the "get-credentials" call for gem2-cluster reports an error. I did have a cluster earlier called gem2-cluster which I deleted and recreated using the same name (in fact I deleted the whole resource group). What's the correct process for doing this?

$ az aks get-credentials --name gem1-cluster --resource-group gem1-rg
Merged "gem1-cluster" as current context in /home/psteele/.kube/config

$ kubectl get nodes -n gem1
NAME                       STATUS   ROLES   AGE     VERSION
aks-nodepool1-27217108-0   Ready    agent   3h26m   v1.9.11

$ az aks get-credentials --name gem2-cluster --resource-group gem2-rg
A different object named gem2-cluster already exists in clusters

$ az aks get-credentials --name gem3-cluster --resource-group gem3-rg
Merged "gem3-cluster" as current context in /home/psteele/.kube/config

$ kubectl get nodes -n gem1
NAME                       STATUS   ROLES   AGE   VERSION
aks-nodepool1-14202150-0   Ready    agent   26m   v1.9.11

$ kubectl get nodes -n gem2
NAME                       STATUS   ROLES   AGE   VERSION
aks-nodepool1-14202150-0   Ready    agent   26m   v1.9.11

$ kubectl get nodes -n gem3
NAME                       STATUS   ROLES   AGE   VERSION
aks-nodepool1-14202150-0   Ready    agent   26m   v1.9.11
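
One way to clear the stale kubeconfig entry left over from the deleted gem2-cluster is to drop the old context and cluster entries, or to let the Azure CLI replace them when merging new credentials; a sketch using the names from the output above:

# Remove the stale entries for the deleted cluster from ~/.kube/config
$ kubectl config delete-context gem2-cluster
$ kubectl config delete-cluster gem2-cluster

# Or let the Azure CLI overwrite the old entry when merging the new credentials
$ az aks get-credentials --name gem2-cluster --resource-group gem2-rg --overwrite-existing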

Recommended answer


What is your max-pods set to? This is a normal error when you've reached the limit of pods per node.


You can check your current maximum number of pods per node with:

$ kubectl get nodes -o yaml | grep pods
  pods: "30"
  pods: "30"

And your current number of running pods:

$ kubectl get pods --all-namespaces | grep Running | wc -l
  18
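
If more capacity is needed: max-pods is fixed when a node pool is created, so the usual options are to scale out the existing cluster or to create a new cluster with a higher --max-pods value. A sketch with placeholder resource group and cluster names:

# Add a node so the scheduler has more pod capacity (names are placeholders)
$ az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 2

# Or create a cluster whose nodes accept more pods (--max-pods is set when the node pool is created)
$ az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --max-pods 60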

That wraps up this article on AKS reporting "Insufficient pods"; hopefully the recommended answer above helps.
