How to reduce CPU limits of kubernetes system resources?


Question

I'd like to keep the number of cores in my GKE cluster below 3. This becomes much more feasible if the CPU limits of the K8s replication controllers and pods are reduced from 100m to at most 50m. Otherwise, the K8s pods alone take 70% of one core.

I decided against increasing the CPU power of a node. This would be conceptually wrong in my opinion because the CPU limit is defined to be measured in cores. Instead, I did the following:

  • Replace limitranges/limits with a version that uses "50m" as the default CPU limit (not strictly necessary, but cleaner in my opinion)
  • Patch all replication controllers in the kube-system namespace to use 50m for all containers
  • Delete their pods
  • Replace all non-rc pods in the kube-system namespace with versions that use 50m for all containers
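The patch-and-recreate steps above could be sketched with kubectl roughly as follows. This is only an illustration, not the asker's exact commands: `SOME_RC` and the label selector are placeholders, the container index assumes a single-container pod template, and the `--type=json` flag assumes a kubectl version that supports JSON Patch.

```shell
# Sketch only: lower the CPU limit of one kube-system replication
# controller to 50m, then delete its pods so they are recreated with
# the new limit. SOME_RC and the selector are placeholders.
kubectl patch rc SOME_RC --namespace=kube-system --type=json -p \
  '[{"op": "replace",
     "path": "/spec/template/spec/containers/0/resources/limits/cpu",
     "value": "50m"}]'

# The controller will recreate the deleted pods from the patched template.
kubectl delete pods --namespace=kube-system -l SELECTOR_OF_SOME_RC
```

Repeating this for every controller (and every standalone pod) in kube-system is exactly the kind of manual, per-object work the question describes.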

This is a lot of work and probably fragile. Any further changes in upcoming versions of K8s, or changes in the GKE configuration, may break it.

So, is there a better way?

Answer

Changing the default Namespace's LimitRange spec.limits.defaultRequest.cpu should be a legitimate solution for changing the default for new Pods. Note that LimitRange objects are namespaced, so if you use extra Namespaces you probably want to think about what a sane default is for them.
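As a minimal sketch, such a LimitRange could look like the manifest below; the object name and the 50m value are illustrative, not taken from the answer:

```shell
# Sketch: give new Pods in the default namespace a 50m default CPU
# request via a LimitRange (name and values are illustrative).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-defaults
  namespace: default
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 50m    # applied when a container specifies no CPU request
EOF
```

Because LimitRange objects are namespaced, a manifest like this would be needed per namespace, each with its own sensible default.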

As you point out, this will not affect existing objects or objects in the kube-system Namespace.

The objects in the kube-system Namespace were mostly sized empirically - based on observed values. Changing those might have detrimental effects, but maybe not if your cluster is very small.

We have an open issue (https://github.com/kubernetes/kubernetes/issues/13048) to adjust the kube-system requests based on total cluster size, but that is not implemented yet. We have another open issue (https://github.com/kubernetes/kubernetes/issues/13695) to perhaps use a lower QoS for some kube-system resources, but again - not implemented yet.

Of these, I think that #13048 is the right way to implement what you're asking for. For now, the answer to "is there a better way" is sadly "no". We chose defaults for medium-sized clusters - for very small clusters you probably need to do what you are doing.
