Allocate or Limit resource for pods in Kubernetes?


Problem description


The resource limit of the pod has been set as:

resources:
  limits:
    cpu: 500m
    memory: 5Gi

and there's 10G mem left on the node.

I've created 5 pods in a short time successfully, and the node may still have some memory left, e.g. 8G.

The memory usage grows as time goes on and reaches the limit (5G x 5 = 25G > 10G), and then the node becomes unresponsive.

To ensure availability, is there a way to set a resource limit on the node?
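As far as I know there is no per-node limit object in Kubernetes; the closest built-in option seems to be a namespace-level LimitRange, which gives every container a default limit (and a maximum) if it doesn't set one. A minimal sketch, with illustrative names and sizes:

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range    # illustrative name
  namespace: default
spec:
  limits:
  - type: Container
    default:               # limit applied to containers that omit one
      memory: 5Gi
    max:                   # hard cap for any single container
      memory: 5Gi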

Update

The core problem is that pod memory usage does not always equal the limit, especially right after a pod starts. So an effectively unlimited number of pods can be created very quickly and eventually push every node to full load. That's not good. There should be a way to allocate resources rather than just set a limit.
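One way to stop an unbounded number of pods from being created in the first place would be a namespace ResourceQuota; a minimal sketch, with purely illustrative numbers:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-quota          # illustrative name
  namespace: default
spec:
  hard:
    pods: "5"              # at most 5 pods in this namespace
    memory: 25Gi           # total memory requested across all pods in the namespace

Note that with a memory quota in place, pods that don't specify a memory request are typically rejected, so this pairs naturally with a LimitRange like the one sketched above.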

Update 2

I've tested again with both limits and requests set:

resources:
  limits:
    cpu: 500m
    memory: 5Gi
  requests:
    cpu: 500m
    memory: 5Gi

The total memory is 15G with 14G available, yet 3 pods are scheduled and running successfully:

> free -mh
              total        used        free      shared  buff/cache   available
Mem:            15G        1.1G        8.3G        3.4M        6.2G         14G
Swap:            0B          0B          0B

> docker stats

CONTAINER           CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O
44eaa3e2d68c        0.63%               1.939 GB / 5.369 GB   36.11%              0 B / 0 B           47.84 MB / 0 B
87099000037c        0.58%               2.187 GB / 5.369 GB   40.74%              0 B / 0 B           48.01 MB / 0 B
d5954ab37642        0.58%               1.936 GB / 5.369 GB   36.07%              0 B / 0 B           47.81 MB / 0 B

It seems that the node will be crushed soon XD

Update 3

Now I change the resource spec to request 8G and limit 5G:

resources:
  limits:
    cpu: 500m
    memory: 5Gi
  requests:
    cpu: 500m
    memory: 8Gi

The results are:

According to the k8s source code about the resource check:

The total memory is only 15G, and all the pods need 24G, so all the pods may be killed. (A single one of my containers usually costs more than 16G if not limited.)

It means you'd better keep the requests exactly equal to the limits in order to avoid pods being killed or the node being crushed. If the requests value is not specified, it defaults to the limit, so what exactly are requests used for? I think limits alone are totally enough, or, IMO, contrary to what K8s claims, I would rather set the resource request greater than the limit in order to ensure the availability of the nodes.

Update 4

Kubernetes 1.1 schedules pod memory requests via the formula:

(capacity - memoryRequested) >= podRequest.memory
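Plugging in the Update 2 numbers, and assuming the node reports roughly 15Gi of capacity, the check passes for the third 5Gi-request pod even though the three pods' limits also add up to 15Gi:

(15Gi - 2 x 5Gi) >= 5Gi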

It seems that Kubernetes does not care about actual memory usage, as Vishnu Kannan said. So the node will be crushed if other apps are already using a lot of memory.

Fortunately, as of commit e64fe822, the formula has been changed to:

(allocatable - memoryRequested) >= podRequest.memory
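For what it's worth, my rough understanding of allocatable (not a quote of the source) is capacity minus whatever is reserved for the system and the Kubernetes daemons:

allocatable = capacity - system-reserved - kube-reserved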

Waiting for k8s v1.2!

Solution

Kubernetes resource specifications have two fields, request and limit.

limits place a cap on how much of a resource a container can use. For memory, if a container goes above its limits, it will be OOM killed. For CPU, its usage may be throttled.

requests are different in that they ensure the node that the pod is put on has at least that much capacity available for it. If you want to make sure that your pods will be able to grow to a particular size without the node running out of resources, specify a request of that size. This will limit how many pods you can schedule, though -- a 10G node will only be able to fit 2 pods with a 5G memory request.
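To make that concrete, here is a minimal sketch of a pod spec (name, image and sizes are illustrative) in which the request is what the scheduler reserves on the node and the limit is the cap the container may burst up to:

apiVersion: v1
kind: Pod
metadata:
  name: demo-app           # illustrative name
spec:
  containers:
  - name: app
    image: nginx           # illustrative image
    resources:
      requests:
        cpu: 500m
        memory: 2Gi        # the scheduler sets aside this much on the node
      limits:
        cpu: 500m
        memory: 5Gi        # the container is OOM-killed if memory use exceeds this

With this shape, a 10G node fits up to five such pods by request, but because each may grow toward its 5Gi limit, their combined usage can still exceed the node; that is exactly the overcommit trade-off raised in the question.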
