gke-resource-quotas applied on clusters with 10+ nodes

Question

The GKE documentation about resource quotas says that those hard limits are only applied for clusters with 10 or fewer nodes.

Even though we have more than 10 nodes, this quota has been created and cannot be deleted.
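
A quick way to check whether this quota actually exists on a given cluster is to list it across namespaces and inspect its limits and recorded usage (a minimal sketch; <namespace> is a placeholder for wherever your workloads run):

kubectl get resourcequota --all-namespaces | grep gke-resource-quotas   # list the GKE-managed quota everywhere
kubectl describe resourcequota gke-resource-quotas -n <namespace>       # show hard limits vs. current usage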

Is this a bug on GKE's side, or is it intentional and the documentation is incorrect?

Answer

I experienced a really strange error today using GKE. Our hosted gitlab-runner stopped running new jobs, and the message was:

pods "xxxx" is forbidden: exceeded quota: gke-resource-quotas, requested: pods=1, used: pods=1500, limited: pods=1500

So the quota resource is non-editable (as the documentation says). The problem, however, was that there were just 5 pods running, not 1500, so it may be a Kubernetes bug in how the usage count was calculated; I'm not sure. After upgrading the control plane and the nodes, the error didn't go away, and I didn't know how to reset the counter.
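
To see the mismatch concretely, one can compare the real number of pods in the namespace against the usage the quota has recorded (a sketch using the gitlab-runner namespace from the error above; status.used is maintained by the quota controller):

kubectl get pods -n gitlab-runner --no-headers | wc -l    # actual pods in the namespace (5 in this case)
kubectl get resourcequota gke-resource-quotas -n gitlab-runner \
  -o jsonpath='{.status.used.pods}'                       # usage recorded by the quota (1500 in this case)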

What did work for me was to simply delete this resource quota. I was surprised that this was even allowed /shrug.

kubectl delete resourcequota gke-resource-quotas -n gitlab-runner

After that, the same resource quota was recreated, and the pods were able to run again.
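
Presumably GKE's own controller recreates the object with a freshly computed usage count; a way to verify after the delete (again assuming the gitlab-runner namespace) is to dump the recreated quota and check that status.used.pods now matches the real pod count:

kubectl get resourcequota gke-resource-quotas -n gitlab-runner -o yaml   # status.used should be reset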
