GKE Insufficient CPU for small Node.js app pods


Question

So on GKE I have a Node.js app which for each pod uses about: CPU(cores): 5m, MEMORY: 100Mi

However I am only able to deploy 1 pod of it per node. I am using the GKE n1-standard-1 cluster which has 1 vCPU, 3.75 GB per node.

So in order to get 2 pods of app up total = CPU(cores): 10m, MEMORY: 200Mi, it requires another entire +1 node = 2 nodes = 2 vCPU, 7.5 GB to make it work. If I try to deploy those 2 pods on the same single node, I get insufficient CPU error.

I have a feeling I should actually be able to run a handful of pod replicas (like 3 replicas and more) on 1 node of f1-micro (1 vCPU, 0.6 GB) or f1-small (1 vCPU, 1.7 GB), and that I am way overprovisioned here, and wasting my money.

But I am not sure why I seem so restricted by insufficient CPU. Is there some config I need to change? Any guidance would be appreciated.

Allocatable:
 cpu:                940m
 ephemeral-storage:  47093746742
 hugepages-2Mi:      0
 memory:             2702216Ki
 pods:               110
Non-terminated Pods:         (7 in total)
  Namespace                  Name                                                CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                                ------------  ----------  ---------------  -------------
  default                    mission-worker-5cf6654687-fwmk4                     100m (10%)    0 (0%)      0 (0%)           0 (0%)
  default                    mission-worker-5cf6654687-lnwkt                     100m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                fluentd-gcp-v3.1.1-5b6km                            100m (10%)    1 (106%)    200Mi (7%)       500Mi (18%)
  kube-system                kube-dns-76dbb796c5-jgljr                           260m (27%)    0 (0%)      110Mi (4%)       170Mi (6%)
  kube-system                kube-proxy-gke-test-cluster-pool-1-96c6d8b2-m15p    100m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                metadata-agent-nb4dp                                40m (4%)      0 (0%)      50Mi (1%)        0 (0%)
  kube-system                prometheus-to-sd-gwlkv                              1m (0%)       3m (0%)     20Mi (0%)        20Mi (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests     Limits
  --------  --------     ------
  cpu       701m (74%)   1003m (106%)
  memory    380Mi (14%)  690Mi (26%)
Events:     <none>

Answer

After the deployment, check the node capacities with kubectl describe nodes. For example, using the node description at the bottom of this answer:

Allocatable CPU: 1800m

The pods in the kube-system namespace already use: 100m + 260m + 100m + 200m + 20m = 680m

That leaves 1800m - 680m = 1120m for you to use

So if your pod or pods request more than 1120m of CPU, they will not fit on that node
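
For reference, a minimal sketch of the commands behind this check (the node name below is copied from the output pasted in the question; substitute your own):

# List the nodes, then inspect one node's capacity and current allocations
kubectl get nodes
kubectl describe node gke-test-cluster-pool-1-96c6d8b2-m15p

# In the output, compare the cpu value under "Allocatable" with the cpu
# requests under "Allocated resources"; the difference is what remains
# available for new pods to request.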

So in order to get 2 pods of app up total = CPU(cores): 10m, MEMORY: 200Mi, it requires another entire +1 node = 2 nodes = 2 vCPU, 7.5 GB to make it work. If I try to deploy those 2 pods on the same single node, I get insufficient CPU error.

If you do the exercise described above, you will find your answer. If there is enough CPU for your pods to use and you are still getting the insufficient CPU error, check whether you are setting the CPU request and limit parameters correctly. See here

If you do all of the above and it is still a problem, then I think what may be happening in your case is that you are allocating too little CPU (5m-10m) to your Node app. Try increasing it to 50m of CPU.
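
As a minimal sketch of how that could be done (the deployment name, container name, image, and limit values below are placeholders, not taken from the question; only the 50m request follows the suggestion above):

# Imperative: raise the CPU request on the app container and set explicit limits
kubectl set resources deployment my-node-app \
  --containers=app \
  --requests=cpu=50m,memory=100Mi \
  --limits=cpu=100m,memory=200Mi

# Declarative: the same values expressed in the Deployment manifest
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-node-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-node-app
  template:
    metadata:
      labels:
        app: my-node-app
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/my-node-app:latest   # placeholder image
        resources:
          requests:
            cpu: 50m          # raised from 5m-10m as suggested above
            memory: 100Mi     # matches the observed usage from the question
          limits:
            cpu: 100m         # illustrative limit
            memory: 200Mi
EOF

Three replicas at a 50m request each add up to only 150m of CPU requests, well under the 1120m left over in the example calculation above.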

I have a feeling I should actually be able to run a handful of pod replicas (like 3 replicas and more) on 1 node of f1-micro (1 vCPU, 0.6 GB) or f1-small (1 vCPU, 1.7 GB), and that I am way overprovisioned here, and wasting my money.

Again, do the exercise described above to reach that conclusion.

Name:            e2e-test-minion-group-4lw4
[ ... lines removed for clarity ...]
Capacity:
 cpu:                               2
 memory:                            7679792Ki
 pods:                              110
Allocatable:
 cpu:                               1800m
 memory:                            7474992Ki
 pods:                              110
[ ... lines removed for clarity ...]
Non-terminated Pods:        (5 in total)
  Namespace    Name                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                                  ------------  ----------  ---------------  -------------
  kube-system  fluentd-gcp-v1.38-28bv1               100m (5%)     0 (0%)      200Mi (2%)       200Mi (2%)
  kube-system  kube-dns-3297075139-61lj3             260m (13%)    0 (0%)      100Mi (1%)       170Mi (2%)
  kube-system  kube-proxy-e2e-test-...               100m (5%)     0 (0%)      0 (0%)           0 (0%)
  kube-system  monitoring-influxdb-grafana-v4-z1m12  200m (10%)    200m (10%)  600Mi (8%)       600Mi (8%)
  kube-system  node-problem-detector-v0.1-fj7m3      20m (1%)      200m (10%)  20Mi (0%)        100Mi (1%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests    CPU Limits    Memory Requests    Memory Limits
  ------------    ----------    ---------------    -------------
  680m (34%)      400m (20%)    920Mi (12%)        1070Mi (14%)
