Allocate or Limit resource for pods in Kubernetes?



The resource limit of the Pod has been set as:

resources:
  limits:
    cpu: 500m
    memory: 5Gi

and there is 10G of memory left on the node.

I created 5 pods in a short time successfully, and the node may still have some memory left, e.g. 8G.

Memory usage grows as time goes on and eventually reaches the limits (5G x 5 = 25G > 10G), at which point the node stops responding.

In order to ensure availability, is there a way to set a resource limit on the node itself?

Update

The core problem is that a pod's memory usage does not always equal its limit, especially right after it starts. So an unlimited number of pods can be created very quickly, and they then drive all nodes to full load. That's not good. There should be a way to allocate resources up front rather than just setting a limit.
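The overcommit described above can be sketched with toy arithmetic (not Kubernetes code; the numbers are the ones from this question):

```python
# Toy sketch: why admitting pods based on their *current* usage
# overcommits a node once the pods grow toward their limits.
NODE_MEM_GI = 10
POD_LIMIT_GI = 5
STARTUP_USAGE_GI = 0.5  # assumed: pods use little memory right after start

pods = 5
usage_at_start = pods * STARTUP_USAGE_GI  # 2.5 Gi -> node looks fine now
worst_case = pods * POD_LIMIT_GI          # 25 Gi  -> far over capacity later

print(usage_at_start <= NODE_MEM_GI)  # all 5 pods fit at startup
print(worst_case > NODE_MEM_GI)       # but can crush the node over time
```

This is exactly what requests are for: reserving the worst-case amount at scheduling time instead of looking at the current usage.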

Update 2

I tested again with both the limits and the requests set:

resources:
  limits:
    cpu: 500m
    memory: 5Gi
  requests:
    cpu: 500m
    memory: 5Gi

The total memory is 15G with 14G free, but 3 pods are scheduled and running successfully:

> free -mh
              total        used        free      shared  buff/cache   available
Mem:            15G        1.1G        8.3G        3.4M        6.2G         14G
Swap:            0B          0B          0B

> docker stats

CONTAINER           CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O
44eaa3e2d68c        0.63%               1.939 GB / 5.369 GB   36.11%              0 B / 0 B           47.84 MB / 0 B
87099000037c        0.58%               2.187 GB / 5.369 GB   40.74%              0 B / 0 B           48.01 MB / 0 B
d5954ab37642        0.58%               1.936 GB / 5.369 GB   36.07%              0 B / 0 B           47.81 MB / 0 B

It seems that the node will be crushed soon XD

Update 3

Now I change the resource spec to a request of 5G and a limit of 8G:

resources:
  limits:
    cpu: 500m
    memory: 8Gi
  requests:
    cpu: 500m
    memory: 5Gi

The results are:

According to the k8s source code for the resource check:

The total memory is only 15G, and all the pods need 24G in total, so all the pods may be killed. (A single one of my containers will usually consume more than 16G if not limited.)

It means you'd better keep the requests exactly equal to the limits in order to avoid pods being killed or the node crashing. If the requests value is not specified, it defaults to the limit, so what exactly are requests used for? I think limits alone would be enough, or, contrary to what K8s claims, IMO I would rather set the resource request greater than the limit, in order to ensure node availability.

Update 4

Kubernetes 1.1 schedules pods' memory requests via the formula:

(capacity - memoryRequested) >= podRequest.memory

It seems that Kubernetes does not take actual memory usage into account, as Vishnu Kannan said. So the node will be crushed if other apps consume too much memory.

Fortunately, since commit e64fe822, the formula has been changed to:

(allocatable - memoryRequested) >= podRequest.memory

Waiting for k8s v1.2!
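The difference between the two checks can be sketched as follows (illustrative only, not the actual Kubernetes source; the `allocatable` value is a hypothetical capacity minus system reservations):

```python
# Toy sketch of the scheduler's memory fit check before and after
# commit e64fe822. Function names and numbers are illustrative.

def fits_v1_1(capacity, memory_requested, pod_request):
    # v1.1: checked against raw node capacity
    return (capacity - memory_requested) >= pod_request

def fits_v1_2(allocatable, memory_requested, pod_request):
    # v1.2: checked against allocatable, i.e. capacity minus
    # memory reserved for system daemons and the kubelet
    return (allocatable - memory_requested) >= pod_request

capacity = 15     # Gi: total node memory
allocatable = 12  # Gi: hypothetical value after system reservations
requested = 8     # Gi: already requested by pods on the node
pod = 5           # Gi: the new pod's memory request

print(fits_v1_1(capacity, requested, pod))     # True:  15 - 8 >= 5
print(fits_v1_2(allocatable, requested, pod))  # False: 12 - 8 <  5
```

With the v1.2 formula the scheduler stops counting memory that the node can never hand out to pods, which is what protects the node from being overcommitted by non-pod consumers.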

Solution

Kubernetes resource specifications have two fields, requests and limits.

limits place a cap on how much of a resource a container can use. For memory, if a container goes above its limit, it will be OOM killed. For CPU, its usage may be throttled.

requests are different in that they ensure the node that the pod is put on has at least that much capacity available for it. If you want to make sure that your pods will be able to grow to a particular size without the node running out of resources, specify a request of that size. This will limit how many pods you can schedule, though -- a 10G node will only be able to fit 2 pods with a 5G memory request.
